13105 13106 13107 13108 13109 13110 13111 13112 13113 13114 13115 13116 13117 13118 13119 13120 13121 13122 13123 13124 13125 13126 13127 13128 13129 13130 13131 13132 13133 13134 13135 13136 13137 13138 13139 13140 13141 13142 13143 13144 13145 13146 13147 13148 13149 13150 13151 13152 13153 13154 13155 13156 13157 13158 13159 13160 13161 13162 13163 13164 13165 13166 13167 13168 13169 13170 13171 13172 13173 13174 13175 13176 13177 13178 13179 13180 13181 13182 13183 13184 13185 13186 13187 13188 13189 13190 13191 13192 13193 13194 13195 13196 13197 13198 13199 13200 13201 13202 13203 13204 13205 13206 13207 13208 13209 13210 13211 13212 13213 13214 13215 13216 13217 13218 13219 13220 13221 13222 13223 13224 13225 13226 13227 13228 13229 13230 13231 13232 13233 13234 13235 13236 13237 13238 13239 13240 13241 13242 13243 13244 13245 13246 13247 13248 13249 13250 13251 13252 13253 13254 13255 13256 13257 13258 13259 13260 13261 13262 13263 13264 13265 13266 13267 13268 13269 13270 13271 13272 13273 13274 13275 13276 13277 13278 13279 13280 13281 13282 13283 13284 13285 13286 13287 13288 13289 13290 13291 13292 13293 13294 13295 13296 13297 13298 13299 13300 13301 13302 13303 13304 13305 13306 13307 13308 13309 13310 13311 13312 13313 13314 13315 13316 13317 13318 13319 13320 13321 13322 13323 13324 13325 13326 13327 13328 13329 13330 13331 13332 13333 13334 13335 13336 13337 13338 13339 13340 13341 13342 13343 13344 13345 13346 13347 13348 13349 13350 13351 13352 13353 13354 13355 13356 13357 13358 13359 13360 13361 13362 13363 13364 13365 13366 13367 13368 13369 13370 13371 13372 13373 13374 13375 13376 13377 13378 13379 13380 13381 13382 13383 13384 13385 13386 13387 13388 13389 13390 13391 13392 13393 13394 13395 13396 13397 13398 13399 13400 13401 13402 13403 13404 13405 13406 13407 13408 13409 13410 13411 13412 13413 13414 13415 13416 13417 13418 13419 13420 13421 13422 13423 13424 13425 13426 13427 13428 13429 13430 13431 13432 13433 13434 13435 13436 13437 13438 13439 13440 13441 13442 13443 13444 13445 13446 13447 13448 13449 13450 13451 13452 13453 13454 13455 13456 13457 13458 13459 13460 13461 13462 13463 13464 13465 13466 13467 13468 13469 13470 13471 13472 13473 13474 13475 13476 13477 13478 13479 13480 13481 13482 13483 13484 13485 13486 13487 13488 13489 13490 13491 13492 13493 13494 13495 13496 13497 13498 13499 13500 13501 13502 13503 13504 13505 13506 13507 13508 13509 13510 13511 13512 13513 13514 13515 13516 13517 13518 13519 13520 13521 13522 13523 13524 13525 13526 13527 13528 13529 13530 13531 13532 13533 13534 13535 13536 13537 13538 13539 13540 13541 13542 13543 13544 13545 13546 13547 13548 13549 13550 13551 13552 13553 13554 13555 13556 13557 13558 13559 13560 13561 13562 13563 13564 13565 13566 13567 13568 13569 13570 13571 13572 13573 13574 13575 13576 13577 13578 13579 13580 13581 13582 13583 13584 13585 13586 13587 13588 13589 13590 13591 13592 13593 13594 13595 13596 13597 13598 13599 13600 13601 13602 13603 13604 13605 13606 13607 13608 13609 13610 13611 13612 13613 13614 13615 13616 13617 13618 13619 13620 13621 13622 13623 13624 13625 13626 13627 13628 13629 13630 13631 13632 13633 13634 13635 13636 13637 13638 13639 13640 13641 13642 13643 13644 13645 13646 13647 13648 13649 13650 13651 13652 13653 13654 13655 13656 13657 13658 13659 13660 13661 13662 13663 13664 13665 13666 13667 13668 13669 13670 13671 13672 13673 13674 13675 13676 13677 13678 13679 13680 13681 13682 13683 13684 13685 13686 13687 13688 13689 13690 13691 13692 13693 13694 13695 13696 
13697 13698 13699 13700 13701 13702 13703 13704 13705 13706 13707 13708 13709 13710 13711 13712 13713 13714 13715 13716 13717 13718 13719 13720 13721 13722 13723 13724 13725 13726 13727 13728 13729 13730 13731 13732 13733 13734 13735 13736 13737 13738 13739 13740 13741 13742 13743 13744 13745 13746 13747 13748 13749 13750 13751 13752 13753 13754 13755 13756 13757 13758 13759 13760 13761 13762 13763 13764 13765 13766 13767 13768 13769 13770 13771 13772 13773 13774 13775 13776 13777 13778 13779 13780 13781 13782 13783 13784 13785 13786 13787 13788 13789 13790 13791 13792 13793 13794 13795 13796 13797 13798 13799 13800 13801 13802 13803 13804 13805 13806 13807 13808 13809 13810 13811 13812 13813 13814 13815 13816 13817 13818 13819 13820 13821 13822 13823 13824 13825 13826 13827 13828 13829 13830 13831 13832 13833 13834 13835 13836 13837 13838 13839 13840 13841 13842 13843 13844 13845 13846 13847 13848 13849 13850 13851 13852 13853 13854 13855 13856 13857 13858 13859 13860 13861 13862 13863 13864 13865 13866 13867 13868 13869 13870 13871 13872 13873 13874 13875 13876 13877 13878 13879 13880 13881 13882 13883 13884 13885 13886 13887 13888 13889 13890 13891 13892 13893 13894 13895 13896 13897 13898 13899 13900 13901 13902 13903 13904 13905 13906 13907 13908 13909 13910 13911 13912 13913 13914 13915 13916 13917 13918 13919 13920 13921 13922 13923 13924 13925 13926 13927 13928 13929 13930 13931 13932 13933 13934 13935 13936 13937 13938 13939 13940 13941 13942 13943 13944 13945 13946 13947 13948 13949 13950 13951 13952 13953 13954 13955 13956 13957 13958 13959 13960 13961 13962 13963 13964 13965 13966 13967 13968 13969 13970 13971 13972 13973 13974 13975 13976 13977 13978 13979 13980 13981 13982 13983 13984 13985 13986 13987 13988 13989 13990 13991 13992 13993 13994 13995 13996 13997 13998 13999 14000 14001 14002 14003 14004 14005 14006 14007 14008 14009 14010 14011 14012 14013 14014 14015 14016 14017 14018 14019 14020 14021 14022 14023 14024 14025 14026 14027 14028 14029 14030 14031 14032 14033 14034 14035 14036 14037 14038 14039 14040 14041 14042 14043 14044 14045 14046 14047 14048 14049 14050 14051 14052 14053 14054 14055 14056 14057 14058 14059 14060 14061 14062 14063 14064 14065 14066 14067 14068 14069 14070 14071 14072 14073 14074 14075 14076 14077 14078 14079 14080 14081 14082 14083 14084 14085 14086 14087 14088 14089 14090 14091 14092 14093 14094 14095 14096 14097 14098 14099 14100 14101 14102 14103 14104 14105 14106 14107 14108 14109 14110 14111 14112 14113 14114 14115 14116 14117 14118 14119 14120 14121 14122 14123 14124 14125 14126 14127 14128 14129 14130 14131 14132 14133 14134 14135 14136 14137 14138 14139 14140 14141 14142 14143 14144 14145 14146 14147 14148 14149 14150 14151 14152 14153 14154 14155 14156 14157 14158 14159 14160 14161 14162 14163 14164 14165 14166 14167 14168 14169 14170 14171 14172 14173 14174 14175 14176 14177 14178 14179 14180 14181 14182 14183 14184 14185 14186 14187 14188 14189 14190 14191 14192 14193 14194 14195 14196 14197 14198 14199 14200 14201 14202 14203 14204 14205 14206 14207 14208 14209 14210 14211 14212 14213 14214 14215 14216 14217 14218 14219 14220 14221 14222 14223 14224 14225 14226 14227 14228 14229 14230 14231 14232 14233 14234 14235 14236 14237 14238 14239 14240 14241 14242 14243 14244 14245 14246 14247 14248 14249 14250 14251 14252 14253 14254 14255 14256 14257 14258 14259 14260 14261 14262 14263 14264 14265 14266 14267 14268 14269 14270 14271 14272 14273 14274 14275 14276 14277 14278 14279 14280 14281 14282 14283 14284 14285 14286 14287 14288 
14289 14290 14291 14292 14293 14294 14295 14296 14297 14298 14299 14300 14301 14302 14303 14304 14305 14306 14307 14308 14309 14310 14311 14312 14313 14314 14315 14316 14317 14318 14319 14320 14321 14322 14323 14324 14325 14326 14327 14328 14329 14330 14331 14332 14333 14334 14335 14336 14337 14338 14339 14340 14341 14342 14343 14344 14345 14346 14347 14348 14349 14350 14351 14352 14353 14354 14355 14356 14357 14358 14359 14360 14361 14362 14363 14364 14365 14366 14367 14368 14369 14370 14371 14372 14373 14374 14375 14376 14377 14378 14379 14380 14381 14382 14383 14384 14385 14386 14387 14388 14389 14390 14391 14392 14393 14394 14395 14396 14397 14398 14399 14400 14401 14402 14403 14404 14405 14406 14407 14408 14409 14410 14411 14412 14413 14414 14415 14416 14417 14418 14419 14420 14421 14422 14423 14424 14425 14426 14427 14428 14429 14430 14431 14432 14433 14434 14435 14436 14437 14438 14439 14440 14441 14442 14443 14444 14445 14446 14447 14448 14449 14450 14451 14452 14453 14454 14455 14456 14457 14458 14459 14460 14461 14462 14463 14464 14465 14466 14467 14468 14469 14470 14471 14472 14473 14474 14475 14476 14477 14478 14479 14480 14481 14482 14483 14484 14485 14486 14487 14488 14489 14490 14491 14492 14493 14494 14495 14496 14497 14498 14499 14500 14501 14502 14503 14504 14505 14506 14507 14508 14509 14510 14511 14512 14513 14514 14515 14516 14517 14518 14519 14520 14521 14522 14523 14524 14525 14526 14527 14528 14529 14530 14531 14532 14533 14534 14535 14536 14537 14538 14539 14540 14541 14542 14543 14544 14545 14546 14547 14548 14549 14550 14551 14552 14553 14554 14555 14556 14557 14558 14559 14560 14561 14562 14563 14564 14565 14566 14567 14568 14569 14570 14571 14572 14573 14574 14575 14576 14577 14578 14579 14580 14581 14582 14583 14584 14585 14586 14587 14588 14589 14590 14591 14592 14593 14594 14595 14596 14597 14598 14599 14600 14601 14602 14603 14604 14605 14606 14607 14608 14609 14610 14611 14612 14613 14614 14615 14616 14617 14618 14619 14620 14621 14622 14623 14624 14625 14626 14627 14628 14629 14630 14631 14632 14633 14634 14635 14636 14637 14638 14639 14640 14641 14642 14643 14644 14645 14646 14647 14648 14649 14650 14651 14652 14653 14654 14655 14656 14657 14658 14659 14660 14661 14662 14663 14664 14665 14666 14667 14668 14669 14670 14671 14672 14673 14674 14675 14676 14677 14678 14679 14680 14681 14682 14683 14684 14685 14686 14687 14688 14689 14690 14691 14692 14693 14694 14695 14696 14697 14698 14699 14700 14701 14702 14703 14704 14705 14706 14707 14708 14709 14710 14711 14712 14713 14714 14715 14716 14717 14718 14719 14720 14721 14722 14723 14724 14725 14726 14727 14728 14729 14730 14731 14732 14733 14734 14735 14736 14737 14738 14739 14740 14741 14742 14743 14744 14745 14746 14747 14748 14749 14750 14751 14752 14753 14754 14755 14756 14757 14758 14759 14760 14761 14762 14763 14764 14765 14766 14767 14768 14769 14770 14771 14772 14773 14774 14775 14776 14777 14778 14779 14780 14781 14782 14783 14784 14785 14786 14787 14788 14789 14790 14791 14792 14793 14794 14795 14796 14797 14798 14799 14800 14801 14802 14803 14804 14805 14806 14807 14808 14809 14810 14811 14812 14813 14814 14815 14816 14817 14818 14819 14820 14821 14822 14823 14824 14825 14826 14827 14828 14829 14830 14831 14832 14833 14834 14835 14836 14837 14838 14839 14840 14841 14842 14843 14844 14845 14846 14847 14848 14849 14850 14851 14852 14853 14854 14855 14856 14857 14858 14859 14860 14861 14862 14863 14864 14865 14866 14867 14868 14869 14870 14871 14872 14873 14874 14875 14876 14877 14878 14879 14880 
14881 14882 14883 14884 14885 14886 14887 14888 14889 14890 14891 14892 14893 14894 14895 14896 14897 14898 14899 14900 14901 14902 14903 14904 14905 14906 14907 14908 14909 14910 14911 14912 14913 14914 14915 14916 14917 14918 14919 14920 14921 14922 14923 14924 14925 14926 14927 14928 14929 14930 14931 14932 14933 14934 14935 14936 14937 14938 14939 14940 14941 14942 14943 14944 14945 14946 14947 14948 14949 14950 14951 14952 14953 14954 14955 14956 14957 14958 14959 14960 14961 14962 14963 14964 14965 14966 14967 14968 14969 14970 14971 14972 14973 14974 14975 14976 14977 14978 14979 14980 14981 14982 14983 14984 14985 14986 14987 14988 14989 14990 14991 14992 14993 14994 14995 14996 14997 14998 14999 15000 15001 15002 15003 15004 15005 15006 15007 15008 15009 15010 15011 15012 15013 15014 15015 15016 15017 15018 15019 15020 15021 15022 15023 15024 15025 15026 15027 15028 15029 15030 15031 15032 15033 15034 15035 15036 15037 15038 15039 15040 15041 15042 15043 15044 15045 15046 15047 15048 15049 15050 15051 15052 15053 15054 15055 15056 15057 15058 15059 15060 15061 15062 15063 15064 15065 15066 15067 15068 15069 15070 15071 15072 15073 15074 15075 15076 15077 15078 15079 15080 15081 15082 15083 15084 15085 15086 15087 15088 15089 15090 15091 15092 15093 15094 15095 15096 15097 15098 15099 15100 15101 15102 15103 15104 15105 15106 15107 15108 15109 15110 15111 15112 15113 15114 15115 15116 15117 15118 15119 15120 15121 15122 15123 15124 15125 15126 15127 15128 15129 15130 15131 15132 15133 15134 15135 15136 15137 15138 15139 15140 15141 15142 15143 15144 15145 15146 15147 15148 15149 15150 15151 15152 15153 15154 15155 15156 15157 15158 15159 15160 15161 15162 15163 15164 15165 15166 15167 15168 15169 15170 15171 15172 15173 15174 15175 15176 15177 15178 15179 15180 15181 15182 15183 15184 15185 15186 15187 15188 15189 15190 15191 15192 15193 15194 15195 15196 15197 15198 15199 15200 15201 15202 15203 15204 15205 15206 15207 15208 15209 15210 15211 15212 15213 15214 15215 15216 15217 15218 15219 15220 15221 15222 15223 15224 15225 15226 15227 15228 15229 15230 15231 15232 15233 15234 15235 15236 15237 15238 15239 15240 15241 15242 15243 15244 15245 15246 15247 15248 15249 15250 15251 15252 15253 15254 15255 15256 15257 15258 15259 15260 15261 15262 15263 15264 15265 15266 15267 15268 15269 15270 15271 15272 15273 15274 15275 15276 15277 15278 15279 15280 15281 15282 15283 15284 15285 15286 15287 15288 15289 15290 15291 15292 15293 15294 15295 15296 15297 15298 15299 15300 15301 15302 15303 15304 15305 15306 15307 15308 15309 15310 15311 15312 15313 15314 15315 15316 15317 15318 15319 15320 15321 15322 15323 15324 15325 15326 15327 15328 15329 15330 15331 15332 15333 15334 15335 15336 15337 15338 15339 15340 15341 15342 15343 15344 15345 15346 15347 15348 15349 15350 15351 15352 15353 15354 15355 15356 15357 15358 15359 15360 15361 15362 15363 15364 15365 15366 15367 15368 15369 15370 15371 15372 15373 15374 15375 15376 15377 15378 15379 15380 15381 15382 15383 15384 15385 15386 15387 15388 15389 15390 15391 15392 15393 15394 15395 15396 15397 15398 15399 15400 15401 15402 15403 15404 15405 15406 15407 15408 15409 15410 15411 15412 15413 15414 15415 15416 15417 15418 15419 15420 15421 15422 15423 15424 15425 15426 15427 15428 15429 15430 15431 15432 15433 15434 15435 15436 15437 15438 15439 15440 15441 15442 15443 15444 15445 15446 15447 15448 15449 15450 15451 15452 15453 15454 15455 15456 15457 15458 15459 15460 15461 15462 15463 15464 15465 15466 15467 15468 15469 15470 15471 15472 
15473 15474 15475 15476 15477 15478 15479 15480 15481 15482 15483 15484 15485 15486 15487 15488 15489 15490 15491 15492 15493 15494 15495 15496 15497 15498 15499 15500 15501 15502 15503 15504 15505 15506 15507 15508 15509 15510 15511 15512 15513 15514 15515 15516 15517 15518 15519 15520 15521 15522 15523 15524 15525 15526 15527 15528 15529 15530 15531 15532 15533 15534 15535 15536 15537 15538 15539 15540 15541 15542 15543 15544 15545 15546 15547 15548 15549 15550 15551 15552 15553 15554 15555 15556 15557 15558 15559 15560 15561 15562 15563 15564 15565 15566 15567 15568 15569 15570 15571 15572 15573 15574 15575 15576 15577 15578 15579 15580 15581 15582 15583 15584 15585 15586 15587 15588 15589 15590 15591 15592 15593 15594 15595 15596 15597 15598 15599 15600 15601 15602 15603 15604 15605 15606 15607 15608 15609 15610 15611 15612 15613 15614 15615 15616 15617 15618 15619 15620 15621 15622 15623 15624 15625 15626 15627 15628 15629 15630 15631 15632 15633 15634 15635 15636 15637 15638 15639 15640 15641 15642 15643 15644 15645 15646 15647 15648 15649 15650 15651 15652 15653 15654 15655 15656 15657 15658 15659 15660 15661 15662 15663 15664 15665 15666 15667 15668 15669 15670 15671 15672 15673 15674 15675 15676 15677 15678 15679 15680 15681 15682 15683 15684 15685 15686 15687 15688 15689 15690 15691 15692 15693 15694 15695 15696 15697 15698 15699 15700 15701 15702 15703 15704 15705 15706 15707 15708 15709 15710 15711 15712 15713 15714 15715 15716 15717 15718 15719 15720 15721 15722 15723 15724 15725 15726 15727 15728 15729 15730 15731 15732 15733 15734 15735 15736 15737 15738 15739 15740 15741 15742 15743 15744 15745 15746 15747 15748 15749 15750 15751 15752 15753 15754 15755 15756 15757 15758 15759 15760 15761 15762 15763 15764 15765 15766 15767 15768 15769 15770 15771 15772 15773 15774 15775 15776 15777 15778 15779 15780 15781 15782 15783 15784 15785 15786 15787 15788 15789 15790 15791 15792 15793 15794 15795 15796 15797 15798 15799 15800 15801 15802 15803 15804 15805 15806 15807 15808 15809 15810 15811 15812 15813 15814 15815 15816 15817 15818 15819 15820 15821 15822 15823 15824 15825 15826 15827 15828 15829 15830 15831 15832 15833 15834 15835 15836 15837 15838 15839 15840 15841 15842 15843 15844 15845 15846 15847 15848 15849 15850 15851 15852 15853 15854 15855 15856 15857 15858 15859 15860 15861 15862 15863 15864 15865 15866 15867 15868 15869 15870 15871 15872 15873 15874 15875 15876 15877 15878 15879 15880 15881 15882 15883 15884 15885 15886 15887 15888 15889 15890 15891 15892 15893 15894 15895 15896 15897 15898 15899 15900 15901 15902 15903 15904 15905 15906 15907 15908 15909 15910 15911 15912 15913 15914 15915 15916 15917 15918 15919 15920 15921 15922 15923 15924 15925 15926 15927 15928 15929 15930 15931 15932 15933 15934 15935 15936 15937 15938 15939 15940 15941 15942 15943 15944 15945 15946 15947 15948 15949 15950 15951 15952 15953 15954 15955 15956 15957 15958 15959 15960 15961 15962 15963 15964 15965 15966 15967 15968 15969 15970 15971 15972 15973 15974 15975 15976 15977 15978 15979 15980 15981 15982 15983 15984 15985 15986 15987 15988 15989 15990 15991 15992 15993 15994 15995 15996 15997 15998 15999 16000 16001 16002 16003 16004 16005 16006 16007 16008 16009 16010 16011 16012 16013 16014 16015 16016 16017 16018 16019 16020 16021 16022 16023 16024 16025 16026 16027 16028 16029 16030 16031 16032 16033 16034 16035 16036 16037 16038 16039 16040 16041 16042 16043 16044 16045 16046 16047 16048 16049 16050 16051 16052 16053 16054 16055 16056 16057 16058 16059 16060 16061 16062 16063 16064 
16065 16066 16067 16068 16069 16070 16071 16072 16073 16074 16075 16076 16077 16078 16079 16080 16081 16082 16083 16084 16085 16086 16087 16088 16089 16090 16091 16092 16093 16094 16095 16096 16097 16098 16099 16100 16101 16102 16103 16104 16105 16106 16107 16108 16109 16110 16111 16112 16113 16114 16115 16116 16117 16118 16119 16120 16121 16122 16123 16124 16125 16126 16127 16128 16129 16130 16131 16132 16133 16134 16135 16136 16137 16138 16139 16140 16141 16142 16143 16144 16145 16146 16147 16148 16149 16150 16151 16152 16153 16154 16155 16156 16157 16158 16159 16160 16161 16162 16163 16164 16165 16166 16167 16168 16169 16170 16171 16172 16173 16174 16175 16176 16177 16178 16179 16180 16181 16182 16183 16184 16185 16186 16187 16188 16189 16190 16191 16192 16193 16194 16195 16196 16197 16198 16199 16200 16201 16202 16203 16204 16205 16206 16207 16208 16209 16210 16211 16212 16213 16214 16215 16216 16217 16218 16219 16220 16221 16222 16223 16224 16225 16226 16227 16228 16229 16230 16231 16232 16233 16234 16235 16236 16237 16238 16239 16240 16241 16242 16243 16244 16245 16246 16247 16248 16249 16250 16251 16252 16253 16254 16255 16256 16257 16258 16259 16260 16261 16262 16263 16264 16265 16266 16267 16268 16269 16270 16271 16272 16273 16274 16275 16276 16277 16278 16279 16280 16281 16282 16283 16284 16285 16286 16287 16288 16289 16290 16291 16292 16293 16294 16295 16296 16297 16298 16299 16300 16301 16302 16303 16304 16305 16306 16307 16308 16309 16310 16311 16312 16313 16314 16315 16316 16317 16318 16319 16320 16321 16322 16323 16324 16325 16326 16327 16328 16329 16330 16331 16332 16333 16334 16335 16336 16337 16338 16339 16340 16341 16342 16343 16344 16345 16346 16347 16348 16349 16350 16351 16352 16353 16354 16355 16356 16357 16358 16359 16360 16361 16362 16363 16364 16365 16366 16367 16368 16369 16370 16371 16372 16373 16374 16375 16376 16377 16378 16379 16380 16381 16382 16383 16384 16385 16386 16387 16388 16389 16390 16391 16392 16393 16394 16395 16396 16397 16398 16399 16400 16401 16402 16403 16404 16405 16406 16407 16408 16409 16410 16411 16412 16413 16414 16415 16416 16417 16418 16419 16420 16421 16422 16423 16424 16425 16426 16427 16428 16429 16430 16431 16432 16433 16434 16435 16436 16437 16438 16439 16440 16441 16442 16443 16444 16445 16446 16447 16448 16449 16450 16451 16452 16453 16454 16455 16456 16457 16458 16459 16460 16461 16462 16463 16464 16465 16466 16467 16468 16469 16470 16471 16472 16473 16474 16475 16476 16477 16478 16479 16480 16481 16482 16483 16484 16485 16486 16487 16488 16489 16490 16491 16492 16493 16494 16495 16496 16497 16498 16499 16500 16501 16502 16503 16504 16505 16506 16507 16508 16509 16510 16511 16512 16513 16514 16515 16516 16517 16518 16519 16520 16521 16522 16523 16524 16525 16526 16527 16528 16529 16530 16531 16532 16533 16534 16535 16536 16537 16538 16539 16540 16541 16542 16543 16544 16545 16546 16547 16548 16549 16550 16551 16552 16553 16554 16555 16556 16557 16558 16559 16560 16561 16562 16563 16564 16565 16566 16567 16568 16569 16570 16571 16572 16573 16574 16575 16576 16577 16578 16579 16580 16581 16582 16583 16584 16585 16586 16587 16588 16589 16590 16591 16592 16593 16594 16595 16596 16597 16598 16599 16600 16601 16602 16603 16604 16605 16606 16607 16608 16609 16610 16611 16612 16613 16614 16615 16616 16617 16618 16619 16620 16621 16622 16623 16624 16625 16626 16627 16628 16629 16630 16631 16632 16633 16634 16635 16636 16637 16638 16639 16640 16641 16642 16643 16644 16645 16646 16647 16648 16649 16650 16651 16652 16653 16654 16655 16656 
16657 16658 16659 16660 16661 16662 16663 16664 16665 16666 16667 16668 16669 16670 16671 16672 16673 16674 16675 16676 16677 16678 16679 16680 16681 16682 16683 16684 16685 16686 16687 16688 16689 16690 16691 16692 16693 16694 16695 16696 16697 16698 16699 16700 16701 16702 16703 16704 16705 16706 16707 16708 16709 16710 16711 16712 16713 16714 16715 16716 16717 16718 16719 16720 16721 16722 16723 16724 16725 16726 16727 16728 16729 16730 16731 16732 16733 16734 16735 16736 16737 16738 16739 16740 16741 16742 16743 16744 16745 16746 16747 16748 16749 16750 16751 16752 16753 16754 16755 16756 16757 16758 16759 16760 16761 16762 16763 16764 16765 16766 16767 16768 16769 16770 16771 16772 16773 16774 16775 16776 16777 16778 16779 16780 16781 16782 16783 16784 16785 16786 16787 16788 16789 16790 16791 16792 16793 16794 16795 16796 16797 16798 16799 16800 16801 16802 16803 16804 16805 16806 16807 16808 16809 16810 16811 16812 16813 16814 16815 16816 16817 16818 16819 16820 16821 16822 16823 16824 16825 16826 16827 16828 16829 16830 16831 16832 16833 16834 16835 16836 16837 16838 16839 16840 16841 16842 16843 16844 16845 16846 16847 16848 16849 16850 16851 16852 16853 16854 16855 16856 16857 16858 16859 16860 16861 16862 16863 16864 16865 16866 16867 16868 16869 16870 16871 16872 16873 16874 16875 16876 16877 16878 16879 16880 16881 16882 16883 16884 16885 16886 16887 16888 16889 16890 16891 16892 16893 16894 16895 16896 16897 16898 16899 16900 16901 16902 16903 16904 16905 16906 16907 16908 16909 16910 16911 16912 16913 16914 16915 16916 16917 16918 16919 16920 16921 16922 16923 16924 16925 16926 16927 16928 16929 16930 16931 16932 16933 16934 16935 16936 16937 16938 16939 16940 16941 16942 16943 16944 16945 16946 16947 16948 16949 16950 16951 16952 16953 16954 16955 16956 16957 16958 16959 16960 16961 16962 16963 16964 16965 16966 16967 16968 16969 16970 16971 16972 16973 16974 16975 16976 16977 16978 16979 16980 16981 16982 16983 16984 16985 16986 16987 16988 16989 16990 16991 16992 16993 16994 16995 16996 16997 16998 16999 17000 17001 17002 17003 17004 17005 17006 17007 17008 17009 17010 17011 17012 17013 17014 17015 17016 17017 17018 17019 17020 17021 17022 17023 17024 17025 17026 17027 17028 17029 17030 17031 17032 17033 17034 17035 17036 17037 17038 17039 17040 17041 17042 17043 17044 17045 17046 17047 17048 17049 17050 17051 17052 17053 17054 17055 17056 17057 17058 17059 17060 17061 17062 17063 17064 17065 17066 17067 17068 17069 17070 17071 17072 17073 17074 17075 17076 17077 17078 17079 17080 17081 17082 17083 17084 17085 17086 17087 17088 17089 17090 17091 17092 17093 17094 17095 17096 17097 17098 17099 17100 17101 17102 17103 17104 17105 17106 17107 17108 17109 17110 17111 17112 17113 17114 17115 17116 17117 17118 17119 17120 17121 17122 17123 17124 17125 17126 17127 17128 17129 17130 17131 17132 17133 17134 17135 17136 17137 17138 17139 17140 17141 17142 17143 17144 17145 17146 17147 17148 17149 17150 17151 17152 17153 17154 17155 17156 17157 17158 17159 17160 17161 17162 17163 17164 17165 17166 17167 17168 17169 17170 17171 17172 17173 17174 17175 17176 17177 17178 17179 17180 17181 17182 17183 17184 17185 17186 17187 17188 17189 17190 17191 17192 17193 17194 17195 17196 17197 17198 17199 17200 17201 17202 17203 17204 17205 17206 17207 17208 17209 17210 17211 17212 17213 17214 17215 17216 17217 17218 17219 17220 17221 17222 17223 17224 17225 17226 17227 17228 17229 17230 17231 17232 17233 17234 17235 17236 17237 17238 17239 17240 17241 17242 17243 17244 17245 17246 17247 17248 
17249 17250 17251 17252 17253 17254 17255 17256 17257 17258 17259 17260 17261 17262 17263 17264 17265 17266 17267 17268 17269 17270 17271 17272 17273 17274 17275 17276 17277 17278 17279 17280 17281 17282 17283 17284 17285 17286 17287 17288 17289 17290 17291 17292 17293 17294 17295 17296 17297 17298 17299 17300 17301 17302 17303 17304 17305 17306 17307 17308 17309 17310 17311 17312 17313 17314 17315 17316 17317 17318 17319 17320 17321 17322 17323 17324 17325 17326 17327 17328 17329 17330 17331 17332 17333 17334 17335 17336 17337 17338 17339 17340 17341 17342 17343 17344 17345 17346 17347 17348 17349 17350 17351 17352 17353 17354 17355 17356 17357 17358 17359 17360 17361 17362 17363 17364 17365 17366 17367 17368 17369 17370 17371 17372 17373 17374 17375 17376 17377 17378 17379 17380 17381 17382 17383 17384 17385 17386 17387 17388 17389 17390 17391 17392 17393 17394 17395 17396 17397 17398 17399 17400 17401 17402 17403 17404 17405 17406 17407 17408 17409 17410 17411 17412 17413 17414 17415 17416 17417 17418 17419 17420 17421 17422 17423 17424 17425 17426 17427 17428 17429 17430 17431 17432 17433 17434 17435 17436 17437 17438 17439 17440 17441 17442 17443 17444 17445 17446 17447 17448 17449 17450 17451 17452 17453 17454 17455 17456 17457 17458 17459 17460 17461 17462 17463 17464 17465 17466 17467 17468 17469 17470 17471 17472 17473 17474 17475 17476 17477 17478 17479 17480 17481 17482 17483 17484 17485 17486 17487 17488 17489 17490 17491 17492 17493 17494 17495 17496 17497 17498 17499 17500 17501 17502 17503 17504 17505 17506 17507 17508 17509 17510 17511 17512 17513 17514 17515 17516 17517 17518 17519 17520 17521 17522 17523 17524 17525 17526 17527 17528 17529 17530 17531 17532 17533 17534 17535 17536 17537 17538 17539 17540 17541 17542 17543 17544 17545 17546 17547 17548 17549 17550 17551 17552 17553 17554 17555 17556 17557 17558 17559 17560 17561 17562 17563 17564 17565 17566 17567 17568 17569 17570 17571 17572 17573 17574 17575 17576 17577 17578 17579 17580 17581 17582 17583 17584 17585 17586 17587 17588 17589 17590 17591 17592 17593 17594 17595 17596 17597 17598 17599 17600 17601 17602 17603 17604 17605 17606 17607 17608 17609 17610 17611 17612 17613 17614 17615 17616 17617 17618 17619 17620 17621 17622 17623 17624 17625 17626 17627 17628 17629 17630 17631 17632 17633 17634 17635 17636 17637 17638 17639 17640 17641 17642 17643 17644 17645 17646 17647 17648 17649 17650 17651 17652 17653 17654 17655 17656 17657 17658 17659 17660 17661 17662 17663 17664 17665 17666 17667 17668 17669 17670 17671 17672 17673 17674 17675 17676 17677 17678 17679 17680 17681 17682 17683 17684 17685 17686 17687 17688 17689 17690 17691 17692 17693 17694 17695 17696 17697 17698 17699 17700 17701 17702 17703 17704 17705 17706 17707 17708 17709 17710 17711 17712 17713 17714 17715 17716 17717 17718 17719 17720 17721 17722 17723 17724 17725 17726 17727 17728 17729 17730 17731 17732 17733 17734 17735 17736 17737 17738 17739 17740 17741 17742 17743 17744 17745 17746 17747 17748 17749 17750 17751 17752 17753 17754 17755 17756 17757 17758 17759 17760 17761 17762 17763 17764 17765 17766 17767 17768 17769 17770 17771 17772 17773 17774 17775 17776 17777 17778 17779 17780 17781 17782 17783 17784 17785 17786 17787 17788 17789 17790 17791 17792 17793 17794 17795 17796 17797 17798 17799 17800 17801 17802 17803 17804 17805 17806 17807 17808 17809 17810 17811 17812 17813 17814 17815 17816 17817 17818 17819 17820 17821 17822 17823 17824 17825 17826 17827 17828 17829 17830 17831 17832 17833 17834 17835 17836 17837 17838 17839 17840 
17841 17842 17843 17844 17845 17846 17847 17848 17849 17850 17851 17852 17853 17854 17855 17856 17857 17858 17859 17860 17861 17862 17863 17864 17865 17866 17867 17868 17869 17870 17871 17872 17873 17874 17875 17876 17877 17878 17879 17880 17881 17882 17883 17884 17885 17886 17887 17888 17889 17890 17891 17892 17893 17894 17895 17896 17897 17898 17899 17900 17901 17902 17903 17904 17905 17906 17907 17908 17909 17910 17911 17912 17913 17914 17915 17916 17917 17918 17919 17920 17921 17922 17923 17924 17925 17926 17927 17928 17929 17930 17931 17932 17933 17934 17935 17936 17937 17938 17939 17940 17941 17942 17943 17944 17945 17946 17947 17948 17949 17950 17951 17952 17953 17954 17955 17956 17957 17958 17959 17960 17961 17962 17963 17964 17965 17966 17967 17968 17969 17970 17971 17972 17973 17974 17975 17976 17977 17978 17979 17980 17981 17982 17983 17984 17985 17986 17987 17988 17989 17990 17991 17992 17993 17994 17995 17996 17997 17998 17999 18000 18001 18002 18003 18004 18005 18006 18007 18008 18009 18010 18011 18012 18013 18014 18015 18016 18017 18018 18019 18020 18021 18022 18023 18024 18025 18026 18027 18028 18029 18030 18031 18032 18033 18034 18035 18036 18037 18038 18039 18040 18041 18042 18043 18044 18045 18046 18047 18048 18049 18050 18051 18052 18053 18054 18055 18056 18057 18058 18059 18060 18061 18062 18063 18064 18065 18066 18067 18068 18069 18070 18071 18072 18073 18074 18075 18076 18077 18078 18079 18080 18081 18082 18083 18084 18085 18086 18087 18088 18089 18090 18091 18092 18093 18094 18095 18096 18097 18098 18099 18100 18101 18102 18103 18104 18105 18106 18107 18108 18109 18110 18111 18112 18113 18114 18115 18116 18117 18118 18119 18120 18121 18122 18123 18124 18125 18126 18127 18128 18129 18130 18131 18132 18133 18134 18135 18136 18137 18138 18139 18140 18141 18142 18143 18144 18145 18146 18147 18148 18149 18150 18151 18152 18153 18154 18155 18156 18157 18158 18159 18160 18161 18162 18163 18164 18165 18166 18167 18168 18169 18170 18171 18172 18173 18174 18175 18176 18177 18178 18179 18180 18181 18182 18183 18184 18185 18186 18187 18188 18189 18190 18191 18192 18193 18194 18195 18196 18197 18198 18199 18200 18201 18202 18203 18204 18205 18206 18207 18208 18209 18210 18211 18212 18213 18214 18215 18216 18217 18218 18219 18220 18221 18222 18223 18224 18225 18226 18227 18228 18229 18230 18231 18232 18233 18234 18235 18236 18237 18238 18239 18240 18241 18242 18243 18244 18245 18246 18247 18248 18249 18250 18251 18252 18253 18254 18255 18256 18257 18258 18259 18260 18261 18262 18263 18264 18265 18266 18267 18268 18269 18270 18271 18272 18273 18274 18275 18276 18277 18278 18279 18280 18281 18282 18283 18284 18285 18286 18287 18288 18289 18290 18291 18292 18293 18294 18295 18296 18297 18298 18299 18300 18301 18302 18303 18304 18305 18306 18307 18308 18309 18310 18311 18312 18313 18314 18315 18316 18317 18318 18319 18320 18321 18322 18323 18324 18325 18326 18327 18328 18329 18330 18331 18332 18333 18334 18335 18336 18337 18338 18339 18340 18341 18342 18343 18344 18345 18346 18347 18348 18349 18350 18351 18352 18353 18354 18355 18356 18357 18358 18359 18360 18361 18362 18363 18364 18365 18366 18367 18368 18369 18370 18371 18372 18373 18374 18375 18376 18377 18378 18379 18380 18381 18382 18383 18384 18385 18386 18387 18388 18389 18390 18391 18392 18393 18394 18395 18396 18397 18398 18399 18400 18401 18402 18403 18404 18405 18406 18407 18408 18409 18410 18411 18412 18413 18414 18415 18416 18417 18418 18419 18420 18421 18422 18423 18424 18425 18426 18427 18428 18429 18430 18431 18432 
18433 18434 18435 18436 18437 18438 18439 18440 18441 18442 18443 18444 18445 18446 18447 18448 18449 18450 18451 18452 18453 18454 18455 18456 18457 18458 18459 18460 18461 18462 18463 18464 18465 18466 18467 18468 18469 18470 18471 18472 18473 18474 18475 18476 18477 18478 18479 18480 18481 18482 18483 18484 18485 18486 18487 18488 18489 18490 18491 18492 18493 18494 18495 18496 18497 18498 18499 18500 18501 18502 18503 18504 18505 18506 18507 18508 18509 18510 18511 18512 18513 18514 18515 18516 18517 18518 18519 18520 18521 18522 18523 18524 18525 18526 18527 18528 18529 18530 18531 18532 18533 18534 18535 18536 18537 18538 18539 18540 18541 18542 18543 18544 18545 18546 18547 18548 18549 18550 18551 18552 18553 18554 18555 18556 18557 18558 18559 18560 18561 18562 18563 18564 18565 18566 18567 18568 18569 18570 18571 18572 18573 18574 18575 18576 18577 18578 18579 18580 18581 18582 18583 18584 18585 18586 18587 18588 18589 18590 18591 18592 18593 18594 18595 18596 18597 18598 18599 18600 18601 18602 18603 18604 18605 18606 18607 18608 18609 18610 18611 18612 18613 18614 18615 18616 18617 18618 18619 18620 18621 18622 18623 18624 18625 18626 18627 18628 18629 18630 18631 18632 18633 18634 18635 18636 18637 18638 18639 18640 18641 18642 18643 18644 18645 18646 18647 18648 18649 18650 18651 18652 18653 18654 18655 18656 18657 18658 18659 18660 18661 18662 18663 18664 18665 18666 18667 18668 18669 18670 18671 18672 18673 18674 18675 18676 18677 18678 18679 18680 18681 18682 18683 18684 18685 18686 18687 18688 18689 18690 18691 18692 18693 18694 18695 18696 18697 18698 18699 18700 18701 18702 18703 18704 18705 18706 18707 18708 18709 18710 18711 18712 18713 18714 18715 18716 18717 18718 18719 18720 18721 18722 18723 18724 18725 18726 18727 18728 18729 18730 18731 18732 18733 18734 18735 18736 18737 18738 18739 18740 18741 18742 18743 18744 18745 18746 18747 18748 18749 18750 18751 18752 18753 18754 18755 18756 18757 18758 18759 18760 18761 18762 18763 18764 18765 18766 18767 18768 18769 18770 18771 18772 18773 18774 18775 18776 18777 18778 18779 18780 18781 18782 18783 18784 18785 18786 18787 18788 18789 18790 18791 18792 18793 18794 18795 18796 18797 18798 18799 18800 18801 18802 18803 18804 18805 18806 18807 18808 18809 18810 18811 18812 18813 18814 18815 18816 18817 18818 18819 18820 18821 18822 18823 18824 18825 18826 18827 18828 18829 18830 18831 18832 18833 18834 18835 18836 18837 18838 18839 18840 18841 18842 18843 18844 18845 18846 18847 18848 18849 18850 18851 18852 18853 18854 18855 18856 18857 18858 18859 18860 18861 18862 18863 18864 18865 18866 18867 18868 18869 18870 18871 18872 18873 18874 18875 18876 18877 18878 18879 18880 18881 18882 18883 18884 18885 18886 18887 18888 18889 18890 18891 18892 18893 18894 18895 18896 18897 18898 18899 18900 18901 18902 18903 18904 18905 18906 18907 18908 18909 18910 18911 18912 18913 18914 18915 18916 18917 18918 18919 18920 18921 18922 18923 18924 18925 18926 18927 18928 18929 18930 18931 18932 18933 18934 18935 18936 18937 18938 18939 18940 18941 18942 18943 18944 18945 18946 18947 18948 18949 18950 18951 18952 18953 18954 18955 18956 18957 18958 18959 18960 18961 18962 18963 18964 18965 18966 18967 18968 18969 18970 18971 18972 18973 18974 18975 18976 18977 18978 18979 18980 18981 18982 18983 18984 18985 18986 18987 18988 18989 18990 18991 18992 18993 18994 18995 18996 18997 18998 18999 19000 19001 19002 19003 19004 19005 19006 19007 19008 19009 19010 19011 19012 19013 19014 19015 19016 19017 19018 19019 19020 19021 19022 19023 19024 
19025 19026 19027 19028 19029 19030 19031 19032 19033 19034 19035 19036 19037 19038 19039 19040 19041 19042 19043 19044 19045 19046 19047 19048 19049 19050 19051 19052 19053 19054 19055 19056 19057 19058 19059 19060 19061 19062 19063 19064 19065 19066 19067 19068 19069 19070 19071 19072 19073 19074 19075 19076 19077 19078 19079 19080 19081 19082 19083 19084 19085 19086 19087 19088 19089 19090 19091 19092 19093 19094 19095 19096 19097 19098 19099 19100 19101 19102 19103 19104 19105 19106 19107 19108 19109 19110 19111 19112 19113 19114 19115 19116 19117 19118 19119 19120 19121 19122 19123 19124 19125 19126 19127 19128 19129 19130 19131 19132 19133 19134 19135 19136 19137 19138 19139 19140 19141 19142 19143 19144 19145 19146 19147 19148 19149 19150 19151 19152 19153 19154 19155 19156 19157 19158 19159 19160 19161 19162 19163 19164 19165 19166 19167 19168 19169 19170 19171 19172 19173 19174 19175 19176 19177 19178 19179 19180 19181 19182 19183 19184 19185 19186 19187 19188 19189 19190 19191 19192 19193 19194 19195 19196 19197 19198 19199 19200 19201 19202 19203 19204 19205 19206 19207 19208 19209 19210 19211 19212 19213 19214 19215 19216 19217 19218 19219 19220 19221 19222 19223 19224 19225 19226 19227 19228 19229 19230 19231 19232 19233 19234 19235 19236 19237 19238 19239 19240 19241 19242 19243 19244 19245 19246 19247 19248 19249 19250 19251 19252 19253 19254 19255 19256 19257 19258 19259 19260 19261 19262 19263 19264 19265 19266 19267 19268 19269 19270 19271 19272 19273 19274 19275 19276 19277 19278 19279 19280 19281 19282 19283 19284 19285 19286 19287 19288 19289 19290 19291 19292 19293 19294 19295 19296 19297 19298 19299 19300 19301 19302 19303 19304 19305 19306 19307 19308 19309 19310 19311 19312 19313 19314 19315 19316 19317 19318 19319 19320 19321 19322 19323 19324 19325 19326 19327 19328 19329 19330 19331 19332 19333 19334 19335 19336 19337 19338 19339 19340 19341 19342 19343 19344 19345 19346 19347 19348 19349 19350 19351 19352 19353 19354 19355 19356 19357 19358 19359 19360 19361 19362 19363 19364 19365 19366 19367 19368 19369 19370 19371 19372 19373 19374 19375 19376 19377 19378 19379 19380 19381 19382 19383 19384 19385 19386 19387 19388 19389 19390 19391 19392 19393 19394 19395 19396 19397 19398 19399 19400 19401 19402 19403 19404 19405 19406 19407 19408 19409 19410 19411 19412 19413 19414 19415 19416 19417 19418 19419 19420 19421 19422 19423 19424 19425 19426 19427 19428 19429 19430 19431 19432 19433 19434 19435 19436 19437 19438 19439 19440 19441 19442 19443 19444 19445 19446 19447 19448 19449 19450 19451 19452 19453 19454 19455 19456 19457 19458 19459 19460 19461 19462 19463 19464 19465 19466 19467 19468 19469 19470 19471 19472 19473 19474 19475 19476 19477 19478 19479 19480 19481 19482 19483 19484 19485 19486 19487 19488 19489 19490 19491 19492 19493 19494 19495 19496 19497 19498 19499 19500 19501 19502 19503 19504 19505 19506 19507 19508 19509 19510 19511 19512 19513 19514 19515 19516 19517 19518 19519 19520 19521 19522 19523 19524 19525 19526 19527 19528 19529 19530 19531 19532 19533 19534 19535 19536 19537 19538 19539 19540 19541 19542 19543 19544 19545 19546 19547 19548 19549 19550 19551 19552 19553 19554 19555 19556 19557 19558 19559 19560 19561 19562 19563 19564 19565 19566 19567 19568 19569 19570 19571 19572 19573 19574 19575 19576 19577 19578 19579 19580 19581 19582 19583 19584 19585 19586 19587 19588 19589 19590 19591 19592 19593 19594 19595 19596 19597 19598 19599 19600 19601 19602 19603 19604 19605 19606 19607 19608 19609 19610 19611 19612 19613 19614 19615 19616 
19617 19618 19619 19620 19621 19622 19623 19624 19625 19626 19627 19628 19629 19630 19631 19632 19633 19634 19635 19636 19637 19638 19639 19640 19641 19642 19643 19644 19645 19646 19647 19648 19649 19650 19651 19652 19653 19654 19655 19656 19657 19658 19659 19660 19661 19662 19663 19664 19665 19666 19667 19668 19669 19670 19671 19672 19673 19674 19675 19676 19677 19678 19679 19680 19681 19682 19683 19684 19685 19686 19687 19688 19689 19690 19691 19692 19693 19694 19695 19696 19697 19698 19699 19700 19701 19702 19703 19704 19705 19706 19707 19708 19709 19710 19711 19712 19713 19714 19715 19716 19717 19718 19719 19720 19721 19722 19723 19724 19725 19726 19727 19728 19729 19730 19731 19732 19733 19734 19735 19736 19737 19738 19739 19740 19741 19742 19743 19744 19745 19746 19747 19748 19749 19750 19751 19752 19753 19754 19755 19756 19757 19758 19759 19760 19761 19762 19763 19764 19765 19766 19767 19768 19769 19770 19771 19772 19773 19774 19775 19776 19777 19778 19779 19780 19781 19782 19783 19784 19785 19786 19787 19788 19789 19790 19791 19792 19793 19794 19795 19796 19797 19798 19799 19800 19801 19802 19803 19804 19805 19806 19807 19808 19809 19810 19811 19812 19813 19814 19815 19816 19817 19818 19819 19820 19821 19822 19823 19824 19825 19826 19827 19828 19829 19830 19831 19832 19833 19834 19835 19836 19837 19838 19839 19840 19841 19842 19843 19844 19845 19846 19847 19848 19849 19850 19851 19852 19853 19854 19855 19856 19857 19858 19859 19860 19861 19862 19863 19864 19865 19866 19867 19868 19869 19870 19871 19872 19873 19874 19875 19876 19877 19878 19879 19880 19881 19882 19883 19884 19885 19886 19887 19888 19889 19890 19891 19892 19893 19894 19895 19896 19897 19898 19899 19900 19901 19902 19903 19904 19905 19906 19907 19908 19909 19910 19911 19912 19913 19914 19915 19916 19917 19918 19919 19920 19921 19922 19923 19924 19925 19926 19927 19928 19929 19930 19931 19932 19933 19934 19935 19936 19937 19938 19939 19940 19941 19942 19943 19944 19945 19946 19947 19948 19949 19950 19951 19952 19953 19954 19955 19956 19957 19958 19959 19960 19961 19962 19963 19964 19965 19966 19967 19968 19969 19970 19971 19972 19973 19974 19975 19976 19977 19978 19979 19980 19981 19982 19983 19984 19985 19986 19987 19988 19989 19990 19991 19992 19993 19994 19995 19996 19997 19998 19999 20000 20001 20002 20003 20004 20005 20006 20007 20008 20009 20010 20011 20012 20013 20014 20015 20016 20017 20018 20019 20020 20021 20022 20023 20024 20025 20026 20027 20028 20029 20030 20031 20032 20033 20034 20035 20036 20037 20038 20039 20040 20041 20042 20043 20044 20045 20046 20047 20048 20049 20050 20051 20052 20053 20054 20055 20056 20057 20058 20059 20060 20061 20062 20063 20064 20065 20066 20067 20068 20069 20070 20071 20072 20073 20074 20075 20076 20077 20078 20079 20080 20081 20082 20083 20084 20085 20086 20087 20088 20089 20090 20091 20092 20093 20094 20095 20096 20097 20098 20099 20100 20101 20102 20103 20104 20105 20106 20107 20108 20109 20110 20111 20112 20113 20114 20115 20116 20117 20118 20119 20120 20121 20122 20123 20124 20125 20126 20127 20128 20129 20130 20131 20132 20133 20134 20135 20136 20137 20138 20139 20140 20141 20142 20143 20144 20145 20146 20147 20148 20149 20150 20151 20152 20153 20154 20155 20156 20157 20158 20159 20160 20161 20162 20163 20164 20165 20166 20167 20168 20169 20170 20171 20172 20173 20174 20175 20176 20177 20178 20179 20180 20181 20182 20183 20184 20185 20186 20187 20188 20189 20190 20191 20192 20193 20194 20195 20196 20197 20198 20199 20200 20201 20202 20203 20204 20205 20206 20207 20208 
20209 20210 20211 20212 20213 20214 20215 20216 20217 20218 20219 20220 20221 20222 20223 20224 20225 20226 20227 20228 20229 20230 20231 20232 20233 20234 20235 20236 20237 20238 20239 20240 20241 20242 20243 20244 20245 20246 20247 20248 20249 20250 20251 20252 20253 20254 20255 20256 20257 20258 20259 20260 20261 20262 20263 20264 20265 20266 20267 20268 20269 20270 20271 20272 20273 20274 20275 20276 20277 20278 20279 20280 20281 20282 20283 20284 20285 20286 20287 20288 20289 20290 20291 20292 20293 20294 20295 20296 20297 20298 20299 20300 20301 20302 20303 20304 20305 20306 20307 20308 20309 20310 20311 20312 20313 20314 20315 20316 20317 20318 20319 20320 20321 20322 20323 20324 20325 20326 20327 20328 20329 20330 20331 20332 20333 20334 20335 20336 20337 20338 20339 20340 20341 20342 20343 20344 20345 20346 20347 20348 20349 20350 20351 20352 20353 20354 20355 20356 20357 20358 20359 20360 20361 20362 20363 20364 20365 20366 20367 20368 20369 20370 20371 20372 20373 20374 20375 20376 20377 20378 20379 20380 20381 20382 20383 20384 20385 20386 20387 20388 20389 20390 20391 20392 20393 20394 20395 20396 20397 20398 20399 20400 20401 20402 20403 20404 20405 20406 20407 20408 20409 20410 20411 20412 20413 20414 20415 20416 20417 20418 20419 20420 20421 20422 20423 20424 20425 20426 20427 20428 20429 20430 20431 20432 20433 20434 20435 20436 20437 20438 20439 20440 20441 20442 20443 20444 20445 20446 20447 20448 20449 20450 20451 20452 20453 20454 20455 20456 20457 20458 20459 20460 20461 20462 20463 20464 20465 20466 20467 20468 20469 20470 20471 20472 20473 20474 20475 20476 20477 20478 20479 20480 20481 20482 20483 20484 20485 20486 20487 20488 20489 20490 20491 20492 20493 20494 20495 20496 20497 20498 20499 20500 20501 20502 20503 20504 20505 20506 20507 20508 20509 20510 20511 20512 20513 20514 20515 20516 20517 20518 20519 20520 20521 20522 20523 20524 20525 20526 20527 20528 20529 20530 20531 20532 20533 20534 20535 20536 20537 20538 20539 20540 20541 20542 20543 20544 20545 20546 20547 20548 20549 20550 20551 20552 20553 20554 20555 20556 20557 20558 20559 20560 20561 20562 20563 20564 20565 20566 20567 20568 20569 20570 20571 20572 20573 20574 20575 20576 20577 20578 20579 20580 20581 20582 20583 20584 20585 20586 20587 20588 20589 20590 20591 20592 20593 20594 20595 20596 20597 20598 20599 20600 20601 20602 20603 20604 20605 20606 20607 20608 20609 20610 20611 20612 20613 20614 20615 20616 20617 20618 20619 20620 20621 20622 20623 20624 20625 20626 20627 20628 20629 20630 20631 20632 20633 20634 20635 20636 20637 20638 20639 20640 20641 20642 20643 20644 20645 20646 20647 20648 20649 20650 20651 20652 20653 20654 20655 20656 20657 20658 20659 20660 20661 20662 20663 20664 20665 20666 20667 20668 20669 20670 20671 20672 20673 20674 20675 20676 20677 20678 20679 20680 20681 20682 20683 20684 20685 20686 20687 20688 20689 20690 20691 20692 20693 20694 20695 20696 20697 20698 20699 20700 20701 20702 20703 20704 20705 20706 20707 20708 20709 20710 20711 20712 20713 20714 20715 20716 20717 20718 20719 20720 20721 20722 20723 20724 20725 20726 20727 20728 20729 20730 20731 20732 20733 20734 20735 20736 20737 20738 20739 20740 20741 20742 20743 20744 20745 20746 20747 20748 20749 20750 20751 20752 20753 20754 20755 20756 20757 20758 20759 20760 20761 20762 20763 20764 20765 20766 20767 20768 20769 20770 20771 20772 20773 20774 20775 20776 20777 20778 20779 20780 20781 20782 20783 20784 20785 20786 20787 20788 20789 20790 20791 20792 20793 20794 20795 20796 20797 20798 20799 20800 
33233 33234 33235 33236 33237 33238 33239 33240 33241 33242 33243 33244 33245 33246 33247 33248 33249 33250 33251 33252 33253 33254 33255 33256 33257 33258 33259 33260 33261 33262 33263 33264 33265 33266 33267 33268 33269 33270 33271 33272 33273 33274 33275 33276 33277 33278 33279 33280 33281 33282 33283 33284 33285 33286 33287 33288 33289 33290 33291 33292 33293 33294 33295 33296 33297 33298 33299 33300 33301 33302 33303 33304 33305 33306 33307 33308 33309 33310 33311 33312 33313 33314 33315 33316 33317 33318 33319 33320 33321 33322 33323 33324 33325 33326 33327 33328 33329 33330 33331 33332 33333 33334 33335 33336 33337 33338 33339 33340 33341 33342 33343 33344 33345 33346 33347 33348 33349 33350 33351 33352 33353 33354 33355 33356 33357 33358 33359 33360 33361 33362 33363 33364 33365 33366 33367 33368 33369 33370 33371 33372 33373 33374 33375 33376 33377 33378 33379 33380 33381 33382 33383 33384 33385 33386 33387 33388 33389 33390 33391 33392 33393 33394 33395 33396 33397 33398 33399 33400 33401 33402 33403 33404 33405 33406 33407 33408 33409 33410 33411 33412 33413 33414 33415 33416 33417 33418 33419 33420 33421 33422 33423 33424 33425 33426 33427 33428 33429 33430 33431 33432 33433 33434 33435 33436 33437 33438 33439 33440 33441 33442 33443 33444 33445 33446 33447 33448 33449 33450 33451 33452 33453 33454 33455 33456 33457 33458 33459 33460 33461 33462 33463 33464 33465 33466 33467 33468 33469 33470 33471 33472 33473 33474 33475 33476 33477 33478 33479 33480 33481 33482 33483 33484 33485 33486 33487 33488 33489 33490 33491 33492 33493 33494 33495 33496 33497 33498 33499 33500 33501 33502 33503 33504 33505 33506 33507 33508 33509 33510 33511 33512 33513 33514 33515 33516 33517 33518 33519 33520 33521 33522 33523 33524 33525 33526 33527 33528 33529 33530 33531 33532 33533 33534 33535 33536 33537 33538 33539 33540 33541 33542 33543 33544 33545 33546 33547 33548 33549 33550 33551 33552 33553 33554 33555 33556 33557 33558 33559 33560 33561 33562 33563 33564 33565 33566 33567 33568 33569 33570 33571 33572 33573 33574 33575 33576 33577 33578 33579 33580 33581 33582 33583 33584 33585 33586 33587 33588 33589 33590 33591 33592 33593 33594 33595 33596 33597 33598 33599 33600 33601 33602 33603 33604 33605 33606 33607 33608 33609 33610 33611 33612 33613 33614 33615 33616 33617 33618 33619 33620 33621 33622 33623 33624 33625 33626 33627 33628 33629 33630 33631 33632 33633 33634 33635 33636 33637 33638 33639 33640 33641 33642 33643 33644 33645 33646 33647 33648 33649 33650 33651 33652 33653 33654 33655 33656 33657 33658 33659 33660 33661 33662 33663 33664 33665 33666 33667 33668 33669 33670 33671 33672 33673 33674 33675 33676 33677 33678 33679 33680 33681 33682 33683 33684 33685 33686 33687 33688 33689 33690 33691 33692 33693 33694 33695 33696 33697 33698 33699 33700 33701 33702 33703 33704 33705 33706 33707 33708 33709 33710 33711 33712 33713 33714 33715 33716 33717 33718 33719 33720 33721 33722 33723 33724 33725 33726 33727 33728 33729 33730 33731 33732 33733 33734 33735 33736 33737 33738 33739 33740 33741 33742 33743 33744 33745 33746 33747 33748 33749 33750 33751 33752 33753 33754 33755 33756 33757 33758 33759 33760 33761 33762 33763 33764 33765 33766 33767 33768 33769 33770 33771 33772 33773 33774 33775 33776 33777 33778 33779 33780 33781 33782 33783 33784 33785 33786 33787 33788 33789 33790 33791 33792 33793 33794 33795 33796 33797 33798 33799 33800 33801 33802 33803 33804 33805 33806 33807 33808 33809 33810 33811 33812 33813 33814 33815 33816 33817 33818 33819 33820 33821 33822 33823 33824 
33825 33826 33827 33828 33829 33830 33831 33832 33833 33834 33835 33836 33837 33838 33839 33840 33841 33842 33843 33844 33845 33846 33847 33848 33849 33850 33851 33852 33853 33854 33855 33856 33857 33858 33859 33860 33861 33862 33863 33864 33865 33866 33867 33868 33869 33870 33871 33872 33873 33874 33875 33876 33877 33878 33879 33880 33881 33882 33883 33884 33885 33886 33887 33888 33889 33890 33891 33892 33893 33894 33895 33896 33897 33898 33899 33900 33901 33902 33903 33904 33905 33906 33907 33908 33909 33910 33911 33912 33913 33914 33915 33916 33917 33918 33919 33920 33921 33922 33923 33924 33925 33926 33927 33928 33929 33930 33931 33932 33933 33934 33935 33936 33937 33938 33939 33940 33941 33942 33943 33944 33945 33946 33947 33948 33949 33950 33951 33952 33953 33954 33955 33956 33957 33958 33959 33960 33961 33962 33963 33964 33965 33966 33967 33968 33969 33970 33971 33972 33973 33974 33975 33976 33977 33978 33979 33980 33981 33982 33983 33984 33985 33986 33987 33988 33989 33990 33991 33992 33993 33994 33995 33996 33997 33998 33999 34000 34001 34002 34003 34004 34005 34006 34007 34008 34009 34010 34011 34012 34013 34014 34015 34016 34017 34018 34019 34020 34021 34022 34023 34024 34025 34026 34027 34028 34029 34030 34031 34032 34033 34034 34035 34036 34037 34038 34039 34040 34041 34042 34043 34044 34045 34046 34047 34048 34049 34050 34051 34052 34053 34054 34055 34056 34057 34058 34059 34060 34061 34062 34063 34064 34065 34066 34067 34068 34069 34070 34071 34072 34073 34074 34075 34076 34077 34078 34079 34080 34081 34082 34083 34084 34085 34086 34087 34088 34089 34090 34091 34092 34093 34094 34095 34096 34097 34098 34099 34100 34101 34102 34103 34104 34105 34106 34107 34108 34109 34110 34111 34112 34113 34114 34115 34116 34117 34118 34119 34120 34121 34122 34123 34124 34125 34126 34127 34128 34129 34130 34131 34132 34133 34134 34135 34136 34137 34138 34139 34140 34141 34142 34143 34144 34145 34146 34147 34148 34149 34150 34151 34152 34153 34154 34155 34156 34157 34158 34159 34160 34161 34162 34163 34164 34165 34166 34167 34168 34169 34170 34171 34172 34173 34174 34175 34176 34177 34178 34179 34180 34181 34182 34183 34184 34185 34186 34187 34188 34189 34190 34191 34192 34193 34194 34195 34196 34197 34198 34199 34200 34201 34202 34203 34204 34205 34206 34207 34208 34209 34210 34211 34212 34213 34214 34215 34216 34217 34218 34219 34220 34221 34222 34223 34224 34225 34226 34227 34228 34229 34230 34231 34232 34233 34234 34235 34236 34237 34238 34239 34240 34241 34242 34243 34244 34245 34246 34247 34248 34249 34250 34251 34252 34253 34254 34255 34256 34257 34258 34259 34260 34261 34262 34263 34264 34265 34266 34267 34268 34269 34270 34271 34272 34273 34274 34275 34276 34277 34278 34279 34280 34281 34282 34283 34284 34285 34286 34287 34288 34289 34290 34291 34292 34293 34294 34295 34296 34297 34298 34299 34300 34301 34302 34303 34304 34305 34306 34307 34308 34309 34310 34311 34312 34313 34314 34315 34316 34317 34318 34319 34320 34321 34322 34323 34324 34325 34326 34327 34328 34329 34330 34331 34332 34333 34334 34335 34336 34337 34338 34339 34340 34341 34342 34343 34344 34345 34346 34347 34348 34349 34350 34351 34352 34353 34354 34355 34356 34357 34358 34359 34360 34361 34362 34363 34364 34365 34366 34367 34368 34369 34370 34371 34372 34373 34374 34375 34376 34377 34378 34379 34380 34381 34382 34383 34384 34385 34386 34387 34388 34389 34390 34391 34392 34393 34394 34395 34396 34397 34398 34399 34400 34401 34402 34403 34404 34405 34406 34407 34408 34409 34410 34411 34412 34413 34414 34415 34416 
34417 34418 34419 34420 34421 34422 34423 34424 34425 34426 34427 34428 34429 34430 34431 34432 34433 34434 34435 34436 34437 34438 34439 34440 34441 34442 34443 34444 34445 34446 34447 34448 34449 34450 34451 34452 34453 34454 34455 34456 34457 34458 34459 34460 34461 34462 34463 34464 34465 34466 34467 34468 34469 34470 34471 34472 34473 34474 34475 34476 34477 34478 34479 34480 34481 34482 34483 34484 34485 34486 34487 34488 34489 34490 34491 34492 34493 34494 34495 34496 34497 34498 34499 34500 34501 34502 34503 34504 34505 34506 34507 34508 34509 34510 34511 34512 34513 34514 34515 34516 34517 34518 34519 34520 34521 34522 34523 34524 34525 34526 34527 34528 34529 34530 34531 34532 34533 34534 34535 34536 34537 34538 34539 34540 34541 34542 34543 34544 34545 34546 34547 34548 34549 34550 34551 34552 34553 34554 34555 34556 34557 34558 34559 34560 34561 34562 34563 34564 34565 34566 34567 34568 34569 34570 34571 34572 34573 34574 34575 34576 34577 34578 34579 34580 34581 34582 34583 34584 34585 34586 34587 34588 34589 34590 34591 34592 34593 34594 34595 34596 34597 34598 34599 34600 34601 34602 34603 34604 34605 34606 34607 34608 34609 34610 34611 34612 34613 34614 34615 34616 34617 34618 34619 34620 34621 34622 34623 34624 34625 34626 34627 34628 34629 34630 34631 34632 34633 34634 34635 34636 34637 34638 34639 34640 34641 34642 34643 34644 34645 34646 34647 34648 34649 34650 34651 34652 34653 34654 34655 34656 34657 34658 34659 34660 34661 34662 34663 34664 34665 34666 34667 34668 34669 34670 34671 34672 34673 34674 34675 34676 34677 34678 34679 34680 34681 34682 34683 34684 34685 34686 34687 34688 34689 34690 34691 34692 34693 34694 34695 34696 34697 34698 34699 34700 34701 34702 34703 34704 34705 34706 34707 34708 34709 34710 34711 34712 34713 34714 34715 34716 34717 34718 34719 34720 34721 34722 34723 34724 34725 34726 34727 34728 34729 34730 34731 34732 34733 34734 34735 34736 34737 34738 34739 34740 34741 34742 34743 34744 34745 34746 34747 34748 34749 34750 34751 34752 34753 34754 34755 34756 34757 34758 34759 34760 34761 34762 34763 34764 34765 34766 34767 34768 34769 34770 34771 34772 34773 34774 34775 34776 34777 34778 34779 34780 34781 34782 34783 34784 34785 34786 34787 34788 34789 34790 34791 34792 34793 34794 34795 34796 34797 34798 34799 34800 34801 34802 34803 34804 34805 34806 34807 34808 34809 34810 34811 34812 34813 34814 34815 34816 34817 34818 34819 34820 34821 34822 34823 34824 34825 34826 34827 34828 34829 34830 34831 34832 34833 34834 34835 34836 34837 34838 34839 34840 34841 34842 34843 34844 34845 34846 34847 34848 34849 34850 34851 34852 34853 34854 34855 34856 34857 34858 34859 34860 34861 34862 34863 34864 34865 34866 34867 34868 34869 34870 34871 34872 34873 34874 34875 34876 34877 34878 34879 34880 34881 34882 34883 34884 34885 34886 34887 34888 34889 34890 34891 34892 34893 34894 34895 34896 34897 34898 34899 34900 34901 34902 34903 34904 34905 34906 34907 34908 34909 34910 34911 34912 34913 34914 34915 34916 34917 34918 34919 34920 34921 34922 34923 34924 34925 34926 34927 34928 34929 34930 34931 34932 34933 34934 34935 34936 34937 34938 34939 34940 34941 34942 34943 34944 34945 34946 34947 34948 34949 34950 34951 34952 34953 34954 34955 34956 34957 34958 34959 34960 34961 34962 34963 34964 34965 34966 34967 34968 34969 34970 34971 34972 34973 34974 34975 34976 34977 34978 34979 34980 34981 34982 34983 34984 34985 34986 34987 34988 34989 34990 34991 34992 34993 34994 34995 34996 34997 34998 34999 35000 35001 35002 35003 35004 35005 35006 35007 35008 
35009 35010 35011 35012 35013 35014 35015 35016 35017 35018 35019 35020 35021 35022 35023 35024 35025 35026 35027 35028 35029 35030 35031 35032 35033 35034 35035 35036 35037 35038 35039 35040 35041 35042 35043 35044 35045 35046 35047 35048 35049 35050 35051 35052 35053 35054 35055 35056 35057 35058 35059 35060 35061 35062 35063 35064 35065 35066 35067 35068 35069 35070 35071 35072 35073 35074 35075 35076 35077 35078 35079 35080 35081 35082 35083 35084 35085 35086 35087 35088 35089 35090 35091 35092 35093 35094 35095 35096 35097 35098 35099 35100 35101 35102 35103 35104 35105 35106 35107 35108 35109 35110 35111 35112 35113 35114 35115 35116 35117 35118 35119 35120 35121 35122 35123 35124 35125 35126 35127 35128 35129 35130 35131 35132 35133 35134 35135 35136 35137 35138 35139 35140 35141 35142 35143 35144 35145 35146 35147 35148 35149 35150 35151 35152 35153 35154 35155 35156 35157 35158 35159 35160 35161 35162 35163 35164 35165 35166 35167 35168 35169 35170 35171 35172 35173 35174 35175 35176 35177 35178 35179 35180 35181 35182 35183 35184 35185 35186 35187 35188 35189 35190 35191 35192 35193 35194 35195 35196 35197 35198 35199 35200 35201 35202 35203 35204 35205 35206 35207 35208 35209 35210 35211 35212 35213 35214 35215 35216 35217 35218 35219 35220 35221 35222 35223 35224 35225 35226 35227 35228 35229 35230 35231 35232 35233 35234 35235 35236 35237 35238 35239 35240 35241 35242 35243 35244 35245 35246 35247 35248 35249 35250 35251 35252 35253 35254 35255 35256 35257 35258 35259 35260 35261 35262 35263 35264 35265 35266 35267 35268 35269 35270 35271 35272 35273 35274 35275 35276 35277 35278 35279 35280 35281 35282 35283 35284 35285 35286 35287 35288 35289 35290 35291 35292 35293 35294 35295 35296 35297 35298 35299 35300 35301 35302 35303 35304 35305 35306 35307 35308 35309 35310 35311 35312 35313 35314 35315 35316 35317 35318 35319 35320 35321 35322 35323 35324 35325 35326 35327 35328 35329 35330 35331 35332 35333 35334 35335 35336 35337 35338 35339 35340 35341 35342 35343 35344 35345 35346 35347 35348 35349 35350 35351 35352 35353 35354 35355 35356 35357 35358 35359 35360 35361 35362 35363 35364 35365 35366 35367 35368 35369 35370 35371 35372 35373 35374 35375 35376 35377 35378 35379 35380 35381 35382 35383 35384 35385 35386 35387 35388 35389 35390 35391 35392 35393 35394 35395 35396 35397 35398 35399 35400 35401 35402 35403 35404 35405 35406 35407 35408 35409 35410 35411 35412 35413 35414 35415 35416 35417 35418 35419 35420 35421 35422 35423 35424 35425 35426 35427 35428 35429 35430 35431 35432 35433 35434 35435 35436 35437 35438 35439 35440 35441 35442 35443 35444 35445 35446 35447 35448 35449 35450 35451 35452 35453 35454 35455 35456 35457 35458 35459 35460 35461 35462 35463 35464 35465 35466 35467 35468 35469 35470 35471 35472 35473 35474 35475 35476 35477 35478 35479 35480 35481 35482 35483 35484 35485 35486 35487 35488 35489 35490 35491 35492 35493 35494 35495 35496 35497 35498 35499 35500 35501 35502 35503 35504 35505 35506 35507 35508 35509 35510 35511 35512 35513 35514 35515 35516 35517 35518 35519 35520 35521 35522 35523 35524 35525 35526 35527 35528 35529 35530 35531 35532 35533 35534 35535 35536 35537 35538 35539 35540 35541 35542 35543 35544 35545 35546 35547 35548 35549 35550 35551 35552 35553 35554 35555 35556 35557 35558 35559 35560 35561 35562 35563 35564 35565 35566 35567 35568 35569 35570 35571 35572 35573 35574 35575 35576 35577 35578 35579 35580 35581 35582 35583 35584 35585 35586 35587 35588 35589 35590 35591 35592 35593 35594 35595 35596 35597 35598 35599 35600 
35601 35602 35603 35604 35605 35606 35607 35608 35609 35610 35611 35612 35613 35614 35615 35616 35617 35618 35619 35620 35621 35622 35623 35624 35625 35626 35627 35628 35629 35630 35631 35632 35633 35634 35635 35636 35637 35638 35639 35640 35641 35642 35643 35644 35645 35646 35647 35648 35649 35650 35651 35652 35653 35654 35655 35656 35657 35658 35659 35660 35661 35662 35663 35664 35665 35666 35667 35668 35669 35670 35671 35672 35673 35674 35675 35676 35677 35678 35679 35680 35681 35682 35683 35684 35685 35686 35687 35688 35689 35690 35691 35692 35693 35694 35695 35696 35697 35698 35699 35700 35701 35702 35703 35704 35705 35706 35707 35708 35709 35710 35711 35712 35713 35714 35715 35716 35717 35718 35719 35720 35721 35722 35723 35724 35725 35726 35727 35728 35729 35730 35731 35732 35733 35734 35735 35736 35737 35738 35739 35740 35741 35742 35743 35744 35745 35746 35747 35748 35749 35750 35751 35752 35753 35754 35755 35756 35757 35758 35759 35760 35761 35762 35763 35764 35765 35766 35767 35768 35769 35770 35771 35772 35773 35774 35775 35776 35777 35778 35779 35780 35781 35782 35783 35784 35785 35786 35787 35788 35789 35790 35791 35792 35793 35794 35795 35796 35797 35798 35799 35800 35801 35802 35803 35804 35805 35806 35807 35808 35809 35810 35811 35812 35813 35814 35815 35816 35817 35818 35819 35820 35821 35822 35823 35824 35825 35826 35827 35828 35829 35830 35831 35832 35833 35834 35835 35836 35837 35838 35839 35840 35841 35842 35843 35844 35845 35846 35847 35848 35849 35850 35851 35852 35853 35854 35855 35856 35857 35858 35859 35860 35861 35862 35863 35864 35865 35866 35867 35868 35869 35870 35871 35872 35873 35874 35875 35876 35877 35878 35879 35880 35881 35882 35883 35884 35885 35886 35887 35888 35889 35890 35891 35892 35893 35894 35895 35896 35897 35898 35899 35900 35901 35902 35903 35904 35905 35906 35907 35908 35909 35910 35911 35912 35913 35914 35915 35916 35917 35918 35919 35920 35921 35922 35923 35924 35925 35926 35927 35928 35929 35930 35931 35932 35933 35934 35935 35936 35937 35938 35939 35940 35941 35942 35943 35944 35945 35946 35947 35948 35949 35950 35951 35952 35953 35954 35955 35956 35957 35958 35959 35960 35961 35962 35963 35964 35965 35966 35967 35968 35969 35970 35971 35972 35973 35974 35975 35976 35977 35978 35979 35980 35981 35982 35983 35984 35985 35986 35987 35988 35989 35990 35991 35992 35993 35994 35995 35996 35997 35998 35999 36000 36001 36002 36003 36004 36005 36006 36007 36008 36009 36010 36011 36012 36013 36014 36015 36016 36017 36018 36019 36020 36021 36022 36023 36024 36025 36026 36027 36028 36029 36030 36031 36032 36033 36034 36035 36036 36037 36038 36039 36040 36041 36042 36043 36044 36045 36046 36047 36048 36049 36050 36051 36052 36053 36054 36055 36056 36057 36058 36059 36060 36061 36062 36063 36064 36065 36066 36067 36068 36069 36070 36071 36072 36073 36074 36075 36076 36077 36078 36079 36080 36081 36082 36083 36084 36085 36086 36087 36088 36089 36090 36091 36092 36093 36094 36095 36096 36097 36098 36099 36100 36101 36102 36103 36104 36105 36106 36107 36108 36109 36110 36111 36112 36113 36114 36115 36116 36117 36118 36119 36120 36121 36122 36123 36124 36125 36126 36127 36128 36129 36130 36131 36132 36133 36134 36135 36136 36137 36138 36139 36140 36141 36142 36143 36144 36145 36146 36147 36148 36149 36150 36151 36152 36153 36154 36155 36156 36157 36158 36159 36160 36161 36162 36163 36164 36165 36166 36167 36168 36169 36170 36171 36172 36173 36174 36175 36176 36177 36178 36179 36180 36181 36182 36183 36184 36185 36186 36187 36188 36189 36190 36191 36192 
36193 36194 36195 36196 36197 36198 36199 36200 36201 36202 36203 36204 36205 36206 36207 36208 36209 36210 36211 36212 36213 36214 36215 36216 36217 36218 36219 36220 36221 36222 36223 36224 36225 36226 36227 36228 36229 36230 36231 36232 36233 36234 36235 36236 36237 36238 36239 36240 36241 36242 36243 36244 36245 36246 36247 36248 36249 36250 36251 36252 36253 36254 36255 36256 36257 36258 36259 36260 36261 36262 36263 36264 36265 36266 36267 36268 36269 36270 36271 36272 36273 36274 36275 36276 36277 36278 36279 36280 36281 36282 36283 36284 36285 36286 36287 36288 36289 36290 36291 36292 36293 36294 36295 36296 36297 36298 36299 36300 36301 36302 36303 36304 36305 36306 36307 36308 36309 36310 36311 36312 36313 36314 36315 36316 36317 36318 36319 36320 36321 36322 36323 36324 36325 36326 36327 36328 36329 36330 36331 36332 36333 36334 36335 36336 36337 36338 36339 36340 36341 36342 36343 36344 36345 36346 36347 36348 36349 36350 36351 36352 36353 36354 36355 36356 36357 36358 36359 36360 36361 36362 36363 36364 36365 36366 36367 36368 36369 36370 36371 36372 36373 36374 36375 36376 36377 36378 36379 36380 36381 36382 36383 36384 36385 36386 36387 36388 36389 36390 36391 36392 36393 36394 36395 36396 36397 36398 36399 36400 36401 36402 36403 36404 36405 36406 36407 36408 36409 36410 36411 36412 36413 36414 36415 36416 36417 36418 36419 36420 36421 36422 36423 36424 36425 36426 36427 36428 36429 36430 36431 36432 36433 36434 36435 36436 36437 36438 36439 36440 36441 36442 36443 36444 36445 36446 36447 36448 36449 36450 36451 36452 36453 36454 36455 36456 36457 36458 36459 36460 36461 36462 36463 36464 36465 36466 36467 36468 36469 36470 36471 36472 36473 36474 36475 36476 36477 36478 36479 36480 36481 36482 36483 36484 36485 36486 36487 36488 36489 36490 36491 36492 36493 36494 36495 36496 36497 36498 36499 36500 36501 36502 36503 36504 36505 36506 36507 36508 36509 36510 36511 36512 36513 36514 36515 36516 36517 36518 36519 36520 36521 36522 36523 36524 36525 36526 36527 36528 36529 36530 36531 36532 36533 36534 36535 36536 36537 36538 36539 36540 36541 36542 36543 36544 36545 36546 36547 36548 36549 36550 36551 36552 36553 36554 36555 36556 36557 36558 36559 36560 36561 36562 36563 36564 36565 36566 36567 36568 36569 36570 36571 36572 36573 36574 36575 36576 36577 36578 36579 36580 36581 36582 36583 36584 36585 36586 36587 36588 36589 36590 36591 36592 36593 36594 36595 36596 36597 36598 36599 36600 36601 36602 36603 36604 36605 36606 36607 36608 36609 36610 36611 36612 36613 36614 36615 36616 36617 36618 36619 36620 36621 36622 36623 36624 36625 36626 36627 36628 36629 36630 36631 36632 36633 36634 36635 36636 36637 36638 36639 36640 36641 36642 36643 36644 36645 36646 36647 36648 36649 36650 36651 36652 36653 36654 36655 36656 36657 36658 36659 36660 36661 36662 36663 36664 36665 36666 36667 36668 36669 36670 36671 36672 36673 36674 36675 36676 36677 36678 36679 36680 36681 36682 36683 36684 36685 36686 36687 36688 36689 36690 36691 36692 36693 36694 36695 36696 36697 36698 36699 36700 36701 36702 36703 36704 36705 36706 36707 36708 36709 36710 36711 36712 36713 36714 36715 36716 36717 36718 36719 36720 36721 36722 36723 36724 36725 36726 36727 36728 36729 36730 36731 36732 36733 36734 36735 36736 36737 36738 36739 36740 36741 36742 36743 36744 36745 36746 36747 36748 36749 36750 36751 36752 36753 36754 36755 36756 36757 36758 36759 36760 36761 36762 36763 36764 36765 36766 36767 36768 36769 36770 36771 36772 36773 36774 36775 36776 36777 36778 36779 36780 36781 36782 36783 36784 
36785 36786 36787 36788 36789 36790 36791 36792 36793 36794 36795 36796 36797 36798 36799 36800 36801 36802 36803 36804 36805 36806 36807 36808 36809 36810 36811 36812 36813 36814 36815 36816 36817 36818 36819 36820 36821 36822 36823 36824 36825 36826 36827 36828 36829 36830 36831 36832 36833 36834 36835 36836 36837 36838 36839 36840 36841 36842 36843 36844 36845 36846 36847 36848 36849 36850 36851 36852 36853 36854 36855 36856 36857 36858 36859 36860 36861 36862 36863 36864 36865 36866 36867 36868 36869 36870 36871 36872 36873 36874 36875 36876 36877 36878 36879 36880 36881 36882 36883 36884 36885 36886 36887 36888 36889 36890 36891 36892 36893 36894 36895 36896 36897 36898 36899 36900 36901 36902 36903 36904 36905 36906 36907 36908 36909 36910 36911 36912 36913 36914 36915 36916 36917 36918 36919 36920 36921 36922 36923 36924 36925 36926 36927 36928 36929 36930 36931 36932 36933 36934 36935 36936 36937 36938 36939 36940 36941 36942 36943 36944 36945 36946 36947 36948 36949 36950 36951 36952 36953 36954 36955 36956 36957 36958 36959 36960 36961 36962 36963 36964 36965 36966 36967 36968 36969 36970 36971 36972 36973 36974 36975 36976 36977 36978 36979 36980 36981 36982 36983 36984 36985 36986 36987 36988 36989 36990 36991 36992 36993 36994 36995 36996 36997 36998 36999 37000 37001 37002 37003 37004 37005 37006 37007 37008 37009 37010 37011 37012 37013 37014 37015 37016 37017 37018 37019 37020 37021 37022 37023 37024 37025 37026 37027 37028 37029 37030 37031 37032 37033 37034 37035 37036 37037 37038 37039 37040 37041 37042 37043 37044 37045 37046 37047 37048 37049 37050 37051 37052 37053 37054 37055 37056 37057 37058 37059 37060 37061 37062 37063 37064 37065 37066 37067 37068 37069 37070 37071 37072 37073 37074 37075 37076 37077 37078 37079 37080 37081 37082 37083 37084 37085 37086 37087 37088 37089 37090 37091 37092 37093 37094 37095 37096 37097 37098 37099 37100 37101 37102 37103 37104 37105 37106 37107 37108 37109 37110 37111 37112 37113 37114 37115 37116 37117 37118 37119 37120 37121 37122 37123 37124 37125 37126 37127 37128 37129 37130 37131 37132 37133 37134 37135 37136 37137 37138 37139 37140 37141 37142 37143 37144 37145 37146 37147 37148 37149 37150 37151 37152 37153 37154 37155 37156 37157 37158 37159 37160 37161 37162 37163 37164 37165 37166 37167 37168 37169 37170 37171 37172 37173 37174 37175 37176 37177 37178 37179 37180 37181 37182 37183 37184 37185 37186 37187 37188 37189 37190 37191 37192 37193 37194 37195 37196 37197 37198 37199 37200 37201 37202 37203 37204 37205 37206 37207 37208 37209 37210 37211 37212 37213 37214 37215 37216 37217 37218 37219 37220 37221 37222 37223 37224 37225 37226 37227 37228 37229 37230 37231 37232 37233 37234 37235 37236 37237 37238 37239 37240 37241 37242 37243 37244 37245 37246 37247 37248 37249 37250 37251 37252 37253 37254 37255 37256 37257 37258 37259 37260 37261 37262 37263 37264 37265 37266 37267 37268 37269 37270 37271 37272 37273 37274 37275 37276 37277 37278 37279 37280 37281 37282 37283 37284 37285 37286 37287 37288 37289 37290 37291 37292 37293 37294 37295 37296 37297 37298 37299 37300 37301 37302 37303 37304 37305 37306 37307 37308 37309 37310 37311 37312 37313 37314 37315 37316 37317 37318 37319 37320 37321 37322 37323 37324 37325 37326 37327 37328 37329 37330 37331 37332 37333 37334 37335 37336 37337 37338 37339 37340 37341 37342 37343 37344 37345 37346 37347 37348 37349 37350 37351 37352 37353 37354 37355 37356 37357 37358 37359 37360 37361 37362 37363 37364 37365 37366 37367 37368 37369 37370 37371 37372 37373 37374 37375 37376 
37377 37378 37379 37380 37381 37382 37383 37384 37385 37386 37387 37388 37389 37390 37391 37392 37393 37394 37395 37396 37397 37398 37399 37400 37401 37402 37403 37404 37405 37406 37407 37408 37409 37410 37411 37412 37413 37414 37415 37416 37417 37418 37419 37420 37421 37422 37423 37424 37425 37426 37427 37428 37429 37430 37431 37432 37433 37434 37435 37436 37437 37438 37439 37440 37441 37442 37443 37444 37445 37446 37447 37448 37449 37450 37451 37452 37453 37454 37455 37456 37457 37458 37459 37460 37461 37462 37463 37464 37465 37466 37467 37468 37469 37470 37471 37472 37473 37474 37475 37476 37477 37478 37479 37480 37481 37482 37483 37484 37485 37486 37487 37488 37489 37490 37491 37492 37493 37494 37495 37496 37497 37498 37499 37500 37501 37502 37503 37504 37505 37506 37507 37508 37509 37510 37511 37512 37513 37514 37515 37516 37517 37518 37519 37520 37521 37522 37523 37524 37525 37526 37527 37528 37529 37530 37531 37532 37533 37534 37535 37536 37537 37538 37539 37540 37541 37542 37543 37544 37545 37546 37547 37548 37549 37550 37551 37552 37553 37554 37555 37556 37557 37558 37559 37560 37561 37562 37563 37564 37565 37566 37567 37568 37569 37570 37571 37572 37573 37574 37575 37576 37577 37578 37579 37580 37581 37582 37583 37584 37585 37586 37587 37588 37589 37590 37591 37592 37593 37594 37595 37596 37597 37598 37599 37600 37601 37602 37603 37604 37605 37606 37607 37608 37609 37610 37611 37612 37613 37614 37615 37616 37617 37618 37619 37620 37621 37622 37623 37624 37625 37626 37627 37628 37629 37630 37631 37632 37633 37634 37635 37636 37637 37638 37639 37640 37641 37642 37643 37644 37645 37646 37647 37648 37649 37650 37651 37652 37653 37654 37655 37656 37657 37658 37659 37660 37661 37662 37663 37664 37665 37666 37667 37668 37669 37670 37671 37672 37673 37674 37675 37676 37677 37678 37679 37680 37681 37682 37683 37684 37685 37686 37687 37688 37689 37690 37691 37692 37693 37694 37695 37696 37697 37698 37699 37700 37701 37702 37703 37704 37705 37706 37707 37708 37709 37710 37711 37712 37713 37714 37715 37716 37717 37718 37719 37720 37721 37722 37723 37724 37725 37726 37727 37728 37729 37730 37731 37732 37733 37734 37735 37736 37737 37738 37739 37740 37741 37742 37743 37744 37745 37746 37747 37748 37749 37750 37751 37752 37753 37754 37755 37756 37757 37758 37759 37760 37761 37762 37763 37764 37765 37766 37767 37768 37769 37770 37771 37772 37773 37774 37775 37776 37777 37778 37779 37780 37781 37782 37783 37784 37785 37786 37787 37788 37789 37790 37791 37792 37793 37794 37795 37796 37797 37798 37799 37800 37801 37802 37803 37804 37805 37806 37807 37808 37809 37810 37811 37812 37813 37814 37815 37816 37817 37818 37819 37820 37821 37822 37823 37824 37825 37826 37827 37828 37829 37830 37831 37832 37833 37834 37835 37836 37837 37838 37839 37840 37841 37842 37843 37844 37845 37846 37847 37848 37849 37850 37851 37852 37853 37854 37855 37856 37857 37858 37859 37860 37861 37862 37863 37864 37865 37866 37867 37868 37869 37870 37871 37872 37873 37874 37875 37876 37877 37878 37879 37880 37881 37882 37883 37884 37885 37886 37887 37888 37889 37890 37891 37892 37893 37894 37895 37896 37897 37898 37899 37900 37901 37902 37903 37904 37905 37906 37907 37908 37909 37910 37911 37912 37913 37914 37915 37916 37917 37918 37919 37920 37921 37922 37923 37924 37925 37926 37927 37928 37929 37930 37931 37932 37933 37934 37935 37936 37937 37938 37939 37940 37941 37942 37943 37944 37945 37946 37947 37948 37949 37950 37951 37952 37953 37954 37955 37956 37957 37958 37959 37960 37961 37962 37963 37964 37965 37966 37967 37968 
37969 37970 37971 37972 37973 37974 37975 37976 37977 37978 37979 37980 37981 37982 37983 37984 37985 37986 37987 37988 37989 37990 37991 37992 37993 37994 37995 37996 37997 37998 37999 38000 38001 38002 38003 38004 38005 38006 38007 38008 38009 38010 38011 38012 38013 38014 38015 38016 38017 38018 38019 38020 38021 38022 38023 38024 38025 38026 38027 38028 38029 38030 38031 38032 38033 38034 38035 38036 38037 38038 38039 38040 38041 38042 38043 38044 38045 38046 38047 38048 38049 38050 38051 38052 38053 38054 38055 38056 38057 38058 38059 38060 38061 38062 38063 38064 38065 38066 38067 38068 38069 38070 38071 38072 38073 38074 38075 38076 38077 38078 38079 38080 38081 38082 38083 38084 38085 38086 38087 38088 38089 38090 38091 38092 38093 38094 38095 38096 38097 38098 38099 38100 38101 38102 38103 38104 38105 38106 38107 38108 38109 38110 38111 38112 38113 38114 38115 38116 38117 38118 38119 38120 38121 38122 38123 38124 38125 38126 38127 38128 38129 38130 38131 38132 38133 38134 38135 38136 38137 38138 38139 38140 38141 38142 38143 38144 38145 38146 38147 38148 38149 38150 38151 38152 38153 38154 38155 38156 38157 38158 38159 38160 38161 38162 38163 38164 38165 38166 38167 38168 38169 38170 38171 38172 38173 38174 38175 38176 38177 38178 38179 38180 38181 38182 38183 38184 38185 38186 38187 38188 38189 38190 38191 38192 38193 38194 38195 38196 38197 38198 38199 38200 38201 38202 38203 38204 38205 38206 38207 38208 38209 38210 38211 38212 38213 38214 38215 38216 38217 38218 38219 38220 38221 38222 38223 38224 38225 38226 38227 38228 38229 38230 38231 38232 38233 38234 38235 38236 38237 38238 38239 38240 38241 38242 38243 38244 38245 38246 38247 38248 38249 38250 38251 38252 38253 38254 38255 38256 38257 38258 38259 38260 38261 38262 38263 38264 38265 38266 38267 38268 38269 38270 38271 38272 38273 38274 38275 38276 38277 38278 38279 38280 38281 38282 38283 38284 38285 38286 38287 38288 38289 38290 38291 38292 38293 38294 38295 38296 38297 38298 38299 38300 38301 38302 38303 38304 38305 38306 38307 38308 38309 38310 38311 38312 38313 38314 38315 38316 38317 38318 38319 38320 38321 38322 38323 38324 38325 38326 38327 38328 38329 38330 38331 38332 38333 38334 38335 38336 38337 38338 38339 38340 38341 38342 38343 38344 38345 38346 38347 38348 38349 38350 38351 38352 38353 38354 38355 38356 38357 38358 38359 38360 38361 38362 38363 38364 38365 38366 38367 38368 38369 38370 38371 38372 38373 38374 38375 38376 38377 38378 38379 38380 38381 38382 38383 38384 38385 38386 38387 38388 38389 38390 38391 38392 38393 38394 38395 38396 38397 38398 38399 38400 38401 38402 38403 38404 38405 38406 38407 38408 38409 38410 38411 38412 38413 38414 38415 38416 38417 38418 38419 38420 38421 38422 38423 38424 38425 38426 38427 38428 38429 38430 38431 38432 38433 38434 38435 38436 38437 38438 38439 38440 38441 38442 38443 38444 38445 38446 38447 38448 38449 38450 38451 38452 38453 38454 38455 38456 38457 38458 38459 38460 38461 38462 38463 38464 38465 38466 38467 38468 38469 38470 38471 38472 38473 38474 38475 38476 38477 38478 38479 38480 38481 38482 38483 38484 38485 38486 38487 38488 38489 38490 38491 38492 38493 38494 38495 38496 38497 38498 38499 38500 38501 38502 38503 38504 38505 38506 38507 38508 38509 38510 38511 38512 38513 38514 38515 38516 38517 38518 38519 38520 38521 38522 38523 38524 38525 38526 38527 38528 38529 38530 38531 38532 38533 38534 38535 38536 38537 38538 38539 38540 38541 38542 38543 38544 38545 38546 38547 38548 38549 38550 38551 38552 38553 38554 38555 38556 38557 38558 38559 38560 
38561 38562 38563 38564 38565 38566 38567 38568 38569 38570 38571 38572 38573 38574 38575 38576 38577 38578 38579 38580 38581 38582 38583 38584 38585 38586 38587 38588 38589 38590 38591 38592 38593 38594 38595 38596 38597 38598 38599 38600 38601 38602 38603 38604 38605 38606 38607 38608 38609 38610 38611 38612 38613 38614 38615 38616 38617 38618 38619 38620 38621 38622 38623 38624 38625 38626 38627 38628 38629 38630 38631 38632 38633 38634 38635 38636 38637 38638 38639 38640 38641 38642 38643 38644 38645 38646 38647 38648 38649 38650 38651 38652 38653 38654 38655 38656 38657 38658 38659 38660 38661 38662 38663 38664 38665 38666 38667 38668 38669 38670 38671 38672 38673 38674 38675 38676 38677 38678 38679 38680 38681 38682 38683 38684 38685 38686 38687 38688 38689 38690 38691 38692 38693 38694 38695 38696 38697 38698 38699 38700 38701 38702 38703 38704 38705 38706 38707 38708 38709 38710 38711 38712 38713 38714 38715 38716 38717 38718 38719 38720 38721 38722 38723 38724 38725 38726 38727 38728 38729 38730 38731 38732 38733 38734 38735 38736 38737 38738 38739 38740 38741 38742 38743 38744 38745 38746 38747 38748 38749 38750 38751 38752 38753 38754 38755 38756 38757 38758 38759 38760 38761 38762 38763 38764 38765 38766 38767 38768 38769 38770 38771 38772 38773 38774 38775 38776 38777 38778 38779 38780 38781 38782 38783 38784 38785 38786 38787 38788 38789 38790 38791 38792 38793 38794 38795 38796 38797 38798 38799 38800 38801 38802 38803 38804 38805 38806 38807 38808 38809 38810 38811 38812 38813 38814 38815 38816 38817 38818 38819 38820 38821 38822 38823 38824 38825 38826 38827 38828 38829 38830 38831 38832 38833 38834 38835 38836 38837 38838 38839 38840 38841 38842 38843 38844 38845 38846 38847 38848 38849 38850 38851 38852 38853 38854 38855 38856 38857 38858 38859 38860 38861 38862 38863 38864 38865 38866 38867 38868 38869 38870 38871 38872 38873 38874 38875 38876 38877 38878 38879 38880 38881 38882 38883 38884 38885 38886 38887 38888 38889 38890 38891 38892 38893 38894 38895 38896 38897 38898 38899 38900 38901 38902 38903 38904 38905 38906 38907 38908 38909 38910 38911 38912 38913 38914 38915 38916 38917 38918 38919 38920 38921 38922 38923 38924 38925 38926 38927 38928 38929 38930 38931 38932 38933 38934 38935 38936 38937 38938 38939 38940 38941 38942 38943 38944 38945 38946 38947 38948 38949 38950 38951 38952 38953 38954 38955 38956 38957 38958 38959 38960 38961 38962 38963 38964 38965 38966 38967 38968 38969 38970 38971 38972 38973 38974 38975 38976 38977 38978 38979 38980 38981 38982 38983 38984 38985 38986 38987 38988 38989 38990 38991 38992 38993 38994 38995 38996 38997 38998 38999 39000 39001 39002 39003 39004 39005 39006 39007 39008 39009 39010 39011 39012 39013 39014 39015 39016 39017 39018 39019 39020 39021 39022 39023 39024 39025 39026 39027 39028 39029 39030 39031 39032 39033 39034 39035 39036 39037 39038 39039 39040 39041 39042 39043 39044 39045 39046 39047 39048 39049 39050 39051 39052 39053 39054 39055 39056 39057 39058 39059 39060 39061 39062 39063 39064 39065 39066 39067 39068 39069 39070 39071 39072 39073 39074 39075 39076 39077 39078 39079 39080 39081 39082 39083 39084 39085 39086 39087 39088 39089 39090 39091 39092 39093 39094 39095 39096 39097 39098 39099 39100 39101 39102 39103 39104 39105 39106 39107 39108 39109 39110 39111 39112 39113 39114 39115 39116 39117 39118 39119 39120 39121 39122 39123 39124 39125 39126 39127 39128 39129 39130 39131 39132 39133 39134 39135 39136 39137 39138 39139 39140 39141 39142 39143 39144 39145 39146 39147 39148 39149 39150 39151 39152 
39153 39154 39155 39156 39157 39158 39159 39160 39161 39162 39163 39164 39165 39166 39167 39168 39169 39170 39171 39172 39173 39174 39175 39176 39177 39178 39179 39180 39181 39182 39183 39184 39185 39186 39187 39188 39189 39190 39191 39192 39193 39194 39195 39196 39197 39198 39199 39200 39201 39202 39203 39204 39205 39206 39207 39208 39209 39210 39211 39212 39213 39214 39215 39216 39217 39218 39219 39220 39221 39222 39223 39224 39225 39226 39227 39228 39229 39230 39231 39232 39233 39234 39235 39236 39237 39238 39239 39240 39241 39242 39243 39244 39245 39246 39247 39248 39249 39250 39251 39252 39253 39254 39255 39256 39257 39258 39259 39260 39261 39262 39263 39264 39265 39266 39267 39268 39269 39270 39271 39272 39273 39274 39275 39276 39277 39278 39279 39280 39281 39282 39283 39284 39285 39286 39287 39288 39289 39290 39291 39292 39293 39294 39295 39296 39297 39298 39299 39300 39301 39302 39303 39304 39305 39306 39307 39308 39309 39310 39311 39312 39313 39314 39315 39316 39317 39318 39319 39320 39321 39322 39323 39324 39325 39326 39327 39328 39329 39330 39331 39332 39333 39334 39335 39336 39337 39338 39339 39340 39341 39342 39343 39344 39345 39346 39347 39348 39349 39350 39351 39352 39353 39354 39355 39356 39357 39358 39359 39360 39361 39362 39363 39364 39365 39366 39367 39368 39369 39370 39371 39372 39373 39374 39375 39376 39377 39378 39379 39380 39381 39382 39383 39384 39385 39386 39387 39388 39389 39390 39391 39392 39393 39394 39395 39396 39397 39398 39399 39400 39401 39402 39403 39404 39405 39406 39407 39408 39409 39410 39411 39412 39413 39414 39415 39416 39417 39418 39419 39420 39421 39422 39423 39424 39425 39426 39427 39428 39429 39430 39431 39432 39433 39434 39435 39436 39437 39438 39439 39440 39441 39442 39443 39444 39445 39446 39447 39448 39449 39450 39451 39452 39453 39454 39455 39456 39457 39458 39459 39460 39461 39462 39463 39464 39465 39466 39467 39468 39469 39470 39471 39472 39473 39474 39475 39476 39477 39478 39479 39480 39481 39482 39483 39484 39485 39486 39487 39488 39489 39490 39491 39492 39493 39494 39495 39496 39497 39498 39499 39500 39501 39502 39503 39504 39505 39506 39507 39508 39509 39510 39511 39512 39513 39514 39515 39516 39517 39518 39519 39520 39521 39522 39523 39524 39525 39526 39527 39528 39529 39530 39531 39532 39533 39534 39535 39536 39537 39538 39539 39540 39541 39542 39543 39544 39545 39546 39547 39548 39549 39550 39551 39552 39553 39554 39555 39556 39557 39558 39559 39560 39561 39562 39563 39564 39565 39566 39567 39568 39569 39570 39571 39572 39573 39574 39575 39576 39577 39578 39579 39580 39581 39582 39583 39584 39585 39586 39587 39588 39589 39590 39591 39592 39593 39594 39595 39596 39597 39598 39599 39600 39601 39602 39603 39604 39605 39606 39607 39608 39609 39610 39611 39612 39613 39614 39615 39616 39617 39618 39619 39620 39621 39622 39623 39624 39625 39626 39627 39628 39629 39630 39631 39632 39633 39634 39635 39636 39637 39638 39639 39640 39641 39642 39643 39644 39645 39646 39647 39648 39649 39650 39651 39652 39653 39654 39655 39656 39657 39658 39659 39660 39661 39662 39663 39664 39665 39666 39667 39668 39669 39670 39671 39672 39673 39674 39675 39676 39677 39678 39679 39680 39681 39682 39683 39684 39685 39686 39687 39688 39689 39690 39691 39692 39693 39694 39695 39696 39697 39698 39699 39700 39701 39702 39703 39704 39705 39706 39707 39708 39709 39710 39711 39712 39713 39714 39715 39716 39717 39718 39719 39720 39721 39722 39723 39724 39725 39726 39727 39728 39729 39730 39731 39732 39733 39734 39735 39736 39737 39738 39739 39740 39741 39742 39743 39744 
39745 39746 39747 39748 39749 39750 39751 39752 39753 39754 39755 39756 39757 39758 39759 39760 39761 39762 39763 39764 39765 39766 39767 39768 39769 39770 39771 39772 39773 39774 39775 39776 39777 39778 39779 39780 39781 39782 39783 39784 39785 39786 39787 39788 39789 39790 39791 39792 39793 39794 39795 39796 39797 39798 39799 39800 39801 39802 39803 39804 39805 39806 39807 39808 39809 39810 39811 39812 39813 39814 39815 39816 39817 39818 39819 39820 39821 39822 39823 39824 39825 39826 39827 39828 39829 39830 39831 39832 39833 39834 39835 39836 39837 39838 39839 39840 39841 39842 39843 39844 39845 39846 39847 39848 39849 39850 39851 39852 39853 39854 39855 39856 39857 39858 39859 39860 39861 39862 39863 39864 39865 39866 39867 39868 39869 39870 39871 39872 39873 39874 39875 39876 39877 39878 39879 39880 39881 39882 39883 39884 39885 39886 39887 39888 39889 39890 39891 39892 39893 39894 39895 39896 39897 39898 39899 39900 39901 39902 39903 39904 39905 39906 39907 39908 39909 39910 39911 39912 39913 39914 39915 39916 39917 39918 39919 39920 39921 39922 39923 39924 39925 39926 39927 39928 39929 39930 39931 39932 39933 39934 39935 39936 39937 39938 39939 39940 39941 39942 39943 39944 39945 39946 39947 39948 39949 39950 39951 39952 39953 39954 39955 39956 39957 39958 39959 39960 39961 39962 39963 39964 39965 39966 39967 39968 39969 39970 39971 39972 39973 39974 39975 39976 39977 39978 39979 39980 39981 39982 39983 39984 39985 39986 39987 39988 39989 39990 39991 39992 39993 39994 39995 39996 39997 39998 39999 40000 40001 40002 40003 40004 40005 40006 40007 40008 40009 40010 40011 40012 40013 40014 40015 40016 40017 40018 40019 40020 40021 40022 40023 40024 40025 40026 40027 40028 40029 40030 40031 40032 40033 40034 40035 40036 40037 40038 40039 40040 40041 40042 40043 40044 40045 40046 40047 40048 40049 40050 40051 40052 40053 40054 40055 40056 40057 40058 40059 40060 40061 40062 40063 40064 40065 40066 40067 40068 40069 40070 40071 40072 40073 40074 40075 40076 40077 40078 40079 40080 40081 40082 40083 40084 40085 40086 40087 40088 40089 40090 40091 40092 40093 40094 40095 40096 40097 40098 40099 40100 40101 40102 40103 40104 40105 40106 40107 40108 40109 40110 40111 40112 40113 40114 40115 40116 40117 40118 40119 40120 40121 40122 40123 40124 40125 40126 40127 40128 40129 40130 40131 40132 40133 40134 40135 40136 40137 40138 40139 40140 40141 40142 40143 40144 40145 40146 40147 40148 40149 40150 40151 40152 40153 40154 40155 40156 40157 40158 40159 40160 40161 40162 40163 40164 40165 40166 40167 40168 40169 40170 40171 40172 40173 40174 40175 40176 40177 40178 40179 40180 40181 40182 40183 40184 40185 40186 40187 40188 40189 40190 40191 40192 40193 40194 40195 40196 40197 40198 40199 40200 40201 40202 40203 40204 40205 40206 40207 40208 40209 40210 40211 40212 40213 40214 40215 40216 40217 40218 40219 40220 40221 40222 40223 40224 40225 40226 40227 40228 40229 40230 40231 40232 40233 40234 40235 40236 40237 40238 40239 40240 40241 40242 40243 40244 40245 40246 40247 40248 40249 40250 40251 40252 40253 40254 40255 40256 40257 40258 40259 40260 40261 40262 40263 40264 40265 40266 40267 40268 40269 40270 40271 40272 40273 40274 40275 40276 40277 40278 40279 40280 40281 40282 40283 40284 40285 40286 40287 40288 40289 40290 40291 40292 40293 40294 40295 40296 40297 40298 40299 40300 40301 40302 40303 40304 40305 40306 40307 40308 40309 40310 40311 40312 40313 40314 40315 40316 40317 40318 40319 40320 40321 40322 40323 40324 40325 40326 40327 40328 40329 40330 40331 40332 40333 40334 40335 40336 
40337 40338 40339 40340 40341 40342 40343 40344 40345 40346 40347 40348 40349 40350 40351 40352 40353 40354 40355 40356 40357 40358 40359 40360 40361 40362 40363 40364 40365 40366 40367 40368 40369 40370 40371 40372 40373 40374 40375 40376 40377 40378 40379 40380 40381 40382 40383 40384 40385 40386 40387 40388 40389 40390 40391 40392 40393 40394 40395 40396 40397 40398 40399 40400 40401 40402 40403 40404 40405 40406 40407 40408 40409 40410 40411 40412 40413 40414 40415 40416 40417 40418 40419 40420 40421 40422 40423 40424 40425 40426 40427 40428 40429 40430 40431 40432 40433 40434 40435 40436 40437 40438 40439 40440 40441 40442 40443 40444 40445 40446 40447 40448 40449 40450 40451 40452 40453 40454 40455 40456 40457 40458 40459 40460 40461 40462 40463 40464 40465 40466 40467 40468 40469 40470 40471 40472 40473 40474 40475 40476 40477 40478 40479 40480 40481 40482 40483 40484 40485 40486 40487 40488 40489 40490 40491 40492 40493 40494 40495 40496 40497 40498 40499 40500 40501 40502 40503 40504 40505 40506 40507 40508 40509 40510 40511 40512 40513 40514 40515 40516 40517 40518 40519 40520 40521 40522 40523 40524 40525 40526 40527 40528 40529 40530 40531 40532 40533 40534 40535 40536 40537 40538 40539 40540 40541 40542 40543 40544 40545 40546 40547 40548 40549 40550 40551 40552 40553 40554 40555 40556 40557 40558 40559 40560 40561 40562 40563 40564 40565 40566 40567 40568 40569 40570 40571 40572 40573 40574 40575 40576 40577 40578 40579 40580 40581 40582 40583 40584 40585 40586 40587 40588 40589 40590 40591 40592 40593 40594 40595 40596 40597 40598 40599 40600 40601 40602 40603 40604 40605 40606 40607 40608 40609 40610 40611 40612 40613 40614 40615 40616 40617 40618 40619 40620 40621 40622 40623 40624 40625 40626 40627 40628 40629 40630 40631 40632 40633 40634 40635 40636 40637 40638 40639 40640 40641 40642 40643 40644 40645 40646 40647 40648 40649 40650 40651 40652 40653 40654 40655 40656 40657 40658 40659 40660 40661 40662 40663 40664 40665 40666 40667 40668 40669 40670 40671 40672 40673 40674 40675 40676 40677 40678 40679 40680 40681 40682 40683 40684 40685 40686 40687 40688 40689 40690 40691 40692 40693 40694 40695 40696 40697 40698 40699 40700 40701 40702 40703 40704 40705 40706 40707 40708 40709 40710 40711 40712 40713 40714 40715 40716 40717 40718 40719 40720 40721 40722 40723 40724 40725 40726 40727 40728 40729 40730 40731 40732 40733 40734 40735 40736 40737 40738 40739 40740 40741 40742 40743 40744 40745 40746 40747 40748 40749 40750 40751 40752 40753 40754 40755 40756 40757 40758 40759 40760 40761 40762 40763 40764 40765 40766 40767 40768 40769 40770 40771 40772 40773 40774 40775 40776 40777 40778 40779 40780 40781 40782 40783 40784 40785 40786 40787 40788 40789 40790 40791 40792 40793 40794 40795 40796 40797 40798 40799 40800 40801 40802 40803 40804 40805 40806 40807 40808 40809 40810 40811 40812 40813 40814 40815 40816 40817 40818 40819 40820 40821 40822 40823 40824 40825 40826 40827 40828 40829 40830 40831 40832 40833 40834 40835 40836 40837 40838 40839 40840 40841 40842 40843 40844 40845 40846 40847 40848 40849 40850 40851 40852 40853 40854 40855 40856 40857 40858 40859 40860 40861 40862 40863 40864 40865 40866 40867 40868 40869 40870 40871 40872 40873 40874 40875 40876 40877 40878 40879 40880 40881 40882 40883 40884 40885 40886 40887 40888 40889 40890 40891 40892 40893 40894 40895 40896 40897 40898 40899 40900 40901 40902 40903 40904 40905 40906 40907 40908 40909 40910 40911 40912 40913 40914 40915 40916 40917 40918 40919 40920 40921 40922 40923 40924 40925 40926 40927 40928 
\input texinfo @c -*-texinfo-*-
@c ONE SENTENCE PER LINE
@c ---------------------
@c For the main printed text in this file, we follow a one-sentence-per-line
@c convention to allow easy tracking of its history with Git.
@c
@c Since the manual is long, this is being done gradually from the start.
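@c
@c As a minimal sketch of the convention (the two sentences below are only an
@c illustration), a paragraph is written with each sentence on its own line:
@c
@c     NoiseChisel detects signal in noise.
@c     It can also detect very diffuse and extended targets.
@c
@c Texinfo fills consecutive lines into a single output paragraph, so readers
@c never see these line breaks; but since Git tracks changes per line, editing
@c one sentence appears in 'git diff' and 'git blame' as exactly one changed
@c line, rather than a fully rewrapped paragraph.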
@c %**start of header
@setfilename gnuastro.info
@settitle GNU Astronomy Utilities
@documentencoding UTF-8
@allowcodebreaks false
@c@afourpaper
@c %**end of header
@include version.texi
@include formath.texi
@c So dashes and underscores can be used in HTML output
@allowcodebreaks true
@c So function output type is printed on first line
@deftypefnnewline on
@c Enable single quotes
@codequoteundirected on
@c Use section titles in cross references, not node titles.
@xrefautomaticsectiontitle on
@c For the indexes:
@syncodeindex vr cp
@syncodeindex pg cp
@c Copyright information:
@copying
This book documents version @value{VERSION} of the GNU Astronomy Utilities (Gnuastro).
Gnuastro provides various programs and libraries for astronomical data manipulation and analysis.
Copyright @copyright{} 2015-2025 Free Software Foundation, Inc.
@quotation
Permission is granted to copy, distribute and/or modify this document under the terms of the GNU Free Documentation License, Version 1.3 or any later version published by the Free Software Foundation; with no Invariant Sections, no Front-Cover Texts, and no Back-Cover Texts.
A copy of the license is included in the section entitled ``GNU Free Documentation License''.
@end quotation
@end copying
@c To include in the info directory.
@dircategory Astronomy
@direntry
* Gnuastro: (gnuastro). GNU Astronomy Utilities.
* libgnuastro: (gnuastro)Gnuastro library. Full Gnuastro library doc.
* help-gnuastro: (gnuastro)help-gnuastro mailing list. Getting help.
* bug-gnuastro: (gnuastro)Report a bug. How to report bugs.
* Arithmetic: (gnuastro)Arithmetic. Arithmetic operations on pixels.
* astarithmetic: (gnuastro)Invoking astarithmetic. Options to Arithmetic.
* BuildProgram: (gnuastro)BuildProgram. Compile and run programs using Gnuastro's library.
* astbuildprog: (gnuastro)Invoking astbuildprog. Options to BuildProgram.
* ConvertType: (gnuastro)ConvertType. Convert different file types.
* astconvertt: (gnuastro)Invoking astconvertt. Options to ConvertType.
* Convolve: (gnuastro)Convolve. Convolve an input file with kernel.
* astconvolve: (gnuastro)Invoking astconvolve. Options to Convolve.
* CosmicCalculator: (gnuastro)CosmicCalculator. For cosmological params.
* astcosmiccal: (gnuastro)Invoking astcosmiccal. Options to CosmicCalculator.
* Crop: (gnuastro)Crop. Crop region(s) from image(s).
* astcrop: (gnuastro)Invoking astcrop. Options to Crop.
* Fits: (gnuastro)Fits. View and manipulate FITS extensions and keywords.
* astfits: (gnuastro)Invoking astfits. Options to Fits.
* MakeCatalog: (gnuastro)MakeCatalog. Make a catalog from labeled image.
* astmkcatalog: (gnuastro)Invoking astmkcatalog. Options to MakeCatalog.
* MakeProfiles: (gnuastro)MakeProfiles. Make mock profiles.
* astmkprof: (gnuastro)Invoking astmkprof. Options to MakeProfiles.
* Match: (gnuastro)Match. Match two separate catalogs.
* astmatch: (gnuastro)Invoking astmatch. Options to Match.
* NoiseChisel: (gnuastro)NoiseChisel. Detect signal in noise.
* astnoisechisel: (gnuastro)Invoking astnoisechisel. Options to NoiseChisel.
* Segment: (gnuastro)Segment. Segment detections based on signal structure.
* astsegment: (gnuastro)Invoking astsegment. Options to Segment.
* Query: (gnuastro)Query. Access remote databases for downloading data.
* astquery: (gnuastro)Invoking astquery. Options to Query.
* Statistics: (gnuastro)Statistics. Get image Statistics.
* aststatistics: (gnuastro)Invoking aststatistics. Options to Statistics.
* Table: (gnuastro)Table. Read and write FITS binary or ASCII tables.
* asttable: (gnuastro)Invoking asttable. Options to Table.
* Warp: (gnuastro)Warp. Warp a dataset to a new grid.
* astwarp: (gnuastro)Invoking astwarp. Options to Warp.
* astscript: (gnuastro)Installed scripts. Gnuastro's installed scripts.
* astscript-ds9-region: (gnuastro)Invoking astscript-ds9-region. Options to this script.
* astscript-fits-view: (gnuastro)Invoking astscript-fits-view. Options to this script.
* astscript-pointing-simulate: (gnuastro)Invoking astscript-pointing-simulate. Options to this script.
* astscript-psf-scale-factor: (gnuastro)Invoking astscript-psf-scale-factor. Options to this script.
* astscript-psf-select-stars: (gnuastro)Invoking astscript-psf-select-stars. Options to this script.
* astscript-psf-stamp: (gnuastro)Invoking astscript-psf-stamp. Options to this script.
* astscript-psf-subtract: (gnuastro)Invoking astscript-psf-subtract. Options to this script.
* astscript-psf-unite: (gnuastro)Invoking astscript-psf-unite. Options to this script.
* astscript-radial-profile: (gnuastro)Invoking astscript-radial-profile. Options to this script.
* astscript-sort-by-night: (gnuastro)Invoking astscript-sort-by-night. Options to this script.
* astscript-zeropoint: (gnuastro)Invoking astscript-zeropoint. Options to this script.
@end direntry
@c Print title information:
@titlepage
@title GNU Astronomy Utilities
@subtitle Astronomical data manipulation and analysis programs and libraries
@subtitle for version @value{VERSION}, @value{UPDATED}
@iftex
@subtitle
@subtitle
@subtitle
@subtitle @image{gnuastro-figures/gnuastro, 6cm}
@subtitle
@subtitle
@end iftex
@c @subtitle @strong{Important note:}
@c @subtitle This is an @strong{under-development} Gnuastro release (bleeding-edge!).
@c @subtitle It is not yet officially released.
@c @subtitle The source tarball corresponding to this version is (temporarily) available at this URL:
@c @subtitle @url{http://akhlaghi.org/src/gnuastro-@value{VERSION}.tar.lz}
@c @subtitle (the tarball link above will not be available after the next official release)
@c @subtitle The most recent under-development source and its corresponding book are available at:
@c @subtitle @url{http://akhlaghi.org/gnuastro.pdf}
@c @subtitle @url{http://akhlaghi.org/gnuastro-latest.tar.lz}
@c @subtitle To stay up to date with Gnuastro's official releases, please subscribe to this mailing list:
@c @subtitle @url{https://lists.gnu.org/mailman/listinfo/info-gnuastro}
@author Mohammad Akhlaghi
@page
@vskip 0pt plus 1filll
@insertcopying
@page
Gnuastro (source code, book and web page) authors (sorted by number of
commits):
@quotation
@include authors.texi
@end quotation
@page
@quotation
@*
@*
@*
@*
@*
@*
@*
@*
@*
For myself, I am interested in science and in philosophy only because I want to learn something about the riddle of the world in which we live, and the riddle of man's knowledge of that world.
And I believe that only a revival of interest in these riddles can save the sciences and philosophy from narrow specialization and from an obscurantist faith in the expert's special skill, and in his personal knowledge and authority; a faith that so well fits our `post-rationalist' and `post-critical' age, proudly dedicated to the destruction of the tradition of rational philosophy, and of rational thought itself.
@author Karl Popper. The logic of scientific discovery. 1959.
@end quotation
@end titlepage
@shortcontents
@contents
@c Online version top information.
@ifnottex
@node Top, Introduction, (dir), (dir)
@top GNU Astronomy Utilities
@insertcopying
@ifhtml
To navigate easily in this web page, you can use the @code{Next}, @code{Previous}, @code{Up} and @code{Contents} links in the top and bottom of each page.
@code{Next} and @code{Previous} will take you to the next or previous topic in the same level, for example, from chapter 1 to chapter 2 or vice versa.
To go to a section or subsection, click on the menu entries that are shown whenever a title has sub-components.
@end ifhtml
@end ifnottex
@menu
* Introduction:: General introduction.
* Tutorials:: Tutorials or Cookbooks.
* Installation:: Requirements and installation.
* Common program behavior:: Common behavior in all programs.
* Data containers:: Tools to operate on extensions and tables.
* Data manipulation:: Tools for basic image manipulation.
* Data analysis:: Analyze images.
* Data modeling:: Modeling observed data.
* High-level calculations:: Physical calculations.
* Installed scripts:: Installed scripts that operate like programs.
* Makefile extensions:: Use Gnuastro's features as GNU Make functions.
* Library:: Gnuastro's library of useful functions.
* Developing:: The development environment.
* Other useful software:: Installing other useful software.
* GNU Free Doc License:: Full text of the GNU Free Documentation License.
* GNU General Public License:: Full text of the GNU General Public License.
* Index:: Index of terms.
@detailmenu
--- The Detailed Node Listing ---
Introduction
* Quick start:: A quick start to installation.
* Gnuastro programs list:: List of command-line programs.
* Science and its tools:: Some philosophy and history.
* Your rights:: User rights.
* Logo of Gnuastro:: Meaning behind Gnuastro's logo.
* Naming convention:: About names of programs in Gnuastro.
* Version numbering:: Understanding version numbers.
* New to GNU/Linux?:: Suggested GNU/Linux distribution.
* Report a bug:: Search and report the bug you found.
* Suggest new feature:: How to suggest a new feature.
* Announcements:: How to stay up to date with Gnuastro.
* Conventions:: Conventions used in this book.
* Acknowledgments and short history::
Version numbering
* GNU Astronomy Utilities 1.0:: Plans for version 1.0 release.
New to GNU/Linux?
* Command-line interface:: Introduction to the command-line.
Tutorials
* General program usage tutorial:: Tutorial on many programs in generic scenario.
* Detecting large extended targets:: NoiseChisel for huge extended targets.
* Building the extended PSF:: How to extract an extended PSF from science data.
* Sufi simulates a detection:: Simulating a detection.
* Detecting lines and extracting spectra in 3D data:: Extracting spectra and emission line properties.
* Color images with full dynamic range:: Bright pixels with color, faint pixels in grayscale.
* Zero point of an image:: Estimate the zero point of an image.
* Pointing pattern design:: Optimizing the pointings of your observations.
* Moire pattern in coadding and its correction:: How to avoid this grid-based artifact.
* Clipping outliers:: How to avoid outliers in your measurements.
General program usage tutorial
* Calling Gnuastro's programs:: Easy way to find Gnuastro's programs.
* Accessing documentation:: Access to manual of programs you are running.
* Setup and data download:: Setup this template and download datasets.
* Dataset inspection and cropping:: Crop the flat region to use in next steps.
* Angular coverage on the sky:: Measure the field size on the sky.
* Cosmological coverage and visualizing tables:: Size in Mpc^2, and plotting its change.
* Building custom programs with the library:: Easy way to build new programs.
* Option management and configuration files:: Dealing with options and configuring them.
* Warping to a new pixel grid:: Transforming/warping the dataset.
* NoiseChisel and Multi-Extension FITS files:: Running NoiseChisel and having multiple HDUs.
* NoiseChisel optimization for detection:: Check NoiseChisel's operation and improve it.
* NoiseChisel optimization for storage:: Dramatically decrease output's volume.
* Segmentation and making a catalog:: Finding true peaks and creating a catalog.
* Measuring the dataset limits:: One way to measure the ``depth'' of your data.
* Working with catalogs estimating colors:: Estimating colors using the catalogs.
* Column statistics color-magnitude diagram:: Visualizing column correlations.
* Aperture photometry:: Doing photometry on a fixed aperture.
* Matching catalogs:: Easily find corresponding rows from two catalogs.
* Reddest clumps cutouts and parallelization:: Parallelization and selecting a subset of the data.
* FITS images in a publication:: How to display FITS images in a PDF.
* Marking objects for publication:: How to mark some objects over the image in a PDF.
* Writing scripts to automate the steps:: Scripts greatly help in quickly re-doing the steps.
* Citing Gnuastro:: How to cite and acknowledge Gnuastro in your papers.
Detecting large extended targets
* Downloading and validating input data:: How to get and check the input data.
* NoiseChisel optimization:: Detect the extended and diffuse wings.
* Skewness caused by signal and its measurement:: How signal changes the distribution.
* Image surface brightness limit:: Standards to quantify the noise level.
* Achieved surface brightness level:: Calculate the outer surface brightness.
* Extract clumps and objects:: Find sub-structure over the detections.
Building the extended PSF
* Preparing input for extended PSF:: Which stars should be used?
* Saturated pixels and Segment's clumps:: Saturation is a major hurdle!
* One object for the whole detection:: Avoiding over-segmentation in objects.
* Building outer part of PSF:: Building the outermost PSF wings.
* Inner part of the PSF:: Going towards the PSF center.
* Uniting the different PSF components:: Merging all the components into one PSF.
* Subtracting the PSF:: Having the PSF, we now want to subtract it.
Detecting lines and extracting spectra in 3D data
* Viewing spectra and redshifted lines:: Interactively see the spectra of an object.
* Sky lines in optical IFUs:: How to see sky lines in a cube.
* Continuum subtraction:: Removing the continuum from a data cube.
* 3D detection with NoiseChisel:: Finding emission-lines and their spectra.
* 3D measurements and spectra:: Measuring 3D properties including spectra.
* Extracting a single spectrum and plotting it:: Extracting a single vector row.
* Cubes with logarithmic third dimension:: When the wavelength/frequency is logarithmic.
* Synthetic narrow-band images:: Collapsing the third dimension into a 2D image.
Color images with full dynamic range
* Color channels in same pixel grid:: Warping all inputs to the same pixel grid.
* Color image using linear transformation:: A linear color mapping won't show much!
* Color for bright regions and grayscale for faint:: Show the full dynamic range.
* Manually setting color-black-gray regions:: Physically motivated regions.
* Weights contrast markers and other customizations:: Nice ways to enhance visual appearance.
Zero point of an image
* Zero point tutorial with reference image:: Using a reference image.
* Zero point tutorial with reference catalog:: Using a reference catalog.
Pointing pattern design
* Preparing input and generating exposure map:: Download an image and build the exposure map.
* Area of non-blank pixels on sky:: Account for the curved area on the sky.
* Script with pointing simulation steps so far:: Summary of steps for easy testing.
* Larger steps sizes for better calibration:: The initial small dither is not enough.
* Pointings that account for sky curvature:: Sky curvature will cause problems!
* Accounting for non-exposed pixels:: Parts of the detector do not get exposed to light.
Clipping outliers
* Building inputs and analysis without clipping:: Building a dataset for demonstration below.
* Sigma clipping:: Standard deviation (STD) clipping.
* MAD clipping:: Median Absolute Deviation (MAD) clipping.
* Contiguous outliers:: Two clips with holes filled in the middle.
Installation
* Dependencies:: Necessary packages for Gnuastro.
* Downloading the source:: Ways to download the source code.
* Build and install:: Configure, build and install Gnuastro.
Dependencies
* Mandatory dependencies:: Gnuastro will not install without these.
* Optional dependencies:: Adding more functionality.
* Bootstrapping dependencies:: If you have the version controlled source.
* Dependencies from package managers:: Installing from OS package managers.
Mandatory dependencies
* GNU Scientific Library:: Installing GSL.
* CFITSIO:: C interface to the FITS standard.
* WCSLIB:: C interface to the WCS standard of FITS.
Downloading the source
* Release tarball:: Download a stable official release.
* Version controlled source:: Get and use the version controlled source.
Version controlled source
* Bootstrapping:: Adding all the automatically generated files.
* Synchronizing:: Keep your local clone up to date.
Build and install
* Configuring:: Configure Gnuastro
* Separate build and source directories:: Keeping derived/build files separate.
* Tests:: Run tests to see if it is working.
* A4 print book:: Customize the print book.
* Known issues:: Issues you might encounter.
Configuring
* Gnuastro configure options:: Configure options particular to Gnuastro.
* Installation directory:: Specify the directory to install.
* Executable names:: Changing executable names.
* Configure and build in RAM:: For minimal use of HDD or SSD, and clean source.
Common program behavior
* Command-line:: How to use the command-line.
* Configuration files:: Values for unspecified variables.
* Getting help:: Getting more information on the go.
* Multi-threaded operations:: How threads are managed in Gnuastro.
* Numeric data types:: Different types and how to specify them.
* Memory management:: How memory is allocated (in RAM or HDD/SSD).
* Tables:: Recognized table formats.
* Tessellation:: Tile the dataset into non-overlapping bins.
* Automatic output:: About automatic output names.
* Output FITS files:: Common properties when outputs are FITS.
* Numeric locale:: Decimal point printed like 0.5 instead of 0,5.
Command-line
* Arguments and options:: Different ways to specify inputs and configuration.
* Common options:: Options that are shared between all programs.
* Shell TAB completion:: Customized TAB completion in Gnuastro.
* Standard input:: Using output of another program as input.
* Shell tips:: Useful tips and tricks for program usage.
Arguments and options
* Arguments:: For specifying the main input files/operations.
* Options:: For configuring the behavior of the program.
Common options
* Input output options:: Common input/output options.
* Processing options:: Options for common processing steps.
* Operating mode options:: Common operating mode options.
Shell tips
* Separate shell variables for multiple outputs:: When you get values from one command.
* Truncating start of long string FITS keyword values:: When the end of the string matters.
Configuration files
* Configuration file format:: ASCII format of configuration file.
* Configuration file precedence:: Precedence of configuration files.
* Current directory and User wide:: Local and user configuration files.
* System wide:: System wide configuration files.
Getting help
* --usage:: View option names and value formats.
* --help:: List all options with description.
* Man pages:: Man pages generated from --help.
* Info:: View complete book in terminal.
* help-gnuastro mailing list:: Contacting experienced users.
Multi-threaded operations
* A note on threads:: Caution and suggestion on using threads.
* How to run simultaneous operations:: How to run things simultaneously.
Tables
* Recognized table formats:: Table formats that are recognized in Gnuastro.
* Gnuastro text table format:: Gnuastro's convention for plain text tables.
* Selecting table columns:: Identify/select certain columns from a table
Data containers
* Fits:: View and manipulate extensions and keywords.
* ConvertType:: Convert data to various formats.
* Table:: Read and Write FITS tables to plain text.
* Query:: Import data from external databases.
Fits
* Invoking astfits:: Arguments and options to Fits.
Invoking Fits
* HDU information and manipulation:: Learn about the HDUs and move them.
* Keyword inspection and manipulation:: Manipulate metadata keywords in an HDU.
* Pixel information images:: Pixel values contain information on the pixels.
ConvertType
* Raster and Vector graphics:: Images coming from nature, and the abstract.
* Recognized file formats:: The file formats that ConvertType recognizes.
* Color:: Some explanations on color.
* Annotations for figure in paper:: Adding coordinates or physical scale.
* Invoking astconvertt:: Options and arguments to ConvertType.
Color
* Pixel colors:: Multiple filters in each pixel.
* Colormaps for single-channel pixels:: Better display of single-filter images.
* Vector graphics colors::
Annotations for figure in paper
* Full script of annotations on figure:: All the steps in one script
Invoking ConvertType
* ConvertType input and output:: Input/output file names and formats.
* Pixel visualization:: Visualizing the pixels in the output.
* Drawing with vector graphics:: Adding marks in many shapes and colors over the pixels.
Table
* Printing floating point numbers:: Optimal storage of floating point types.
* Vector columns:: How to keep more than one value in each column.
* Column arithmetic:: How to do operations on table columns.
* Operation precedence in Table:: Order of running options in Table.
* Invoking asttable:: Options and arguments to Table.
Query
* Available databases:: List of available databases to Query.
* Invoking astquery:: Inputs, outputs and configuration of Query.
Data manipulation
* Crop:: Crop region(s) from a dataset.
* Arithmetic:: Arithmetic on input data.
* Convolve:: Convolve an image with a kernel.
* Warp:: Warp/Transform an image to a different grid.
Crop
* Crop modes:: Basic modes to define crop region.
* Crop section syntax:: How to define a section to crop.
* Blank pixels:: Pixels with no value.
* Invoking astcrop:: Calling Crop on the command-line
Invoking Crop
* Crop options:: A list of all the options with explanation.
* Crop output:: The outputs of Crop.
* Crop known issues:: Known issues in running Crop.
Arithmetic
* Reverse polish notation:: The current notation style for Arithmetic.
* Integer benefits and pitfalls:: Integers have benefits, but require care.
* Noise basics:: Introduction to various noise models.
* Arithmetic operators:: List of operators known to Arithmetic.
* Invoking astarithmetic:: How to run Arithmetic: options and output.
Noise basics
* Photon counting noise:: Poisson noise
* Instrumental noise:: Readout, dark current and other sources.
* Final noised pixel value:: How the final noised value is calculated.
* Generating random numbers:: How random numbers are generated.
Arithmetic operators
* Basic mathematical operators:: For example, +, -, /, log, and pow.
* Trigonometric and hyperbolic operators:: sin, cos, atan, asinh, etc.
* Constants:: Physical and Mathematical constants.
* Coordinate conversion operators:: For example equatorial J2000 to Galactic.
* Unit conversion operators:: Various necessary unit conversions.
* Statistical operators:: Statistics of a single dataset (for example, mean).
* Coadding operators:: Coadding or combining multiple datasets into one.
* Filtering operators:: Smoothing a dataset through mixing pixel with neighbors.
* Pooling operators:: Reducing size through statistics of pixels in window.
* Interpolation operators:: Giving blank pixels a value.
* Dimensionality changing operators:: Collapse or expand a dataset.
* Conditional operators:: Select certain pixels within the dataset.
* Mathematical morphology operators:: Work on binary images, for example, erode.
* Bitwise operators:: Work on bits within one pixel.
* Numerical type conversion operators:: Convert the numeric datatype of a dataset.
* Random number generators:: Random numbers can be used to add noise for example.
* Coordinate and border operators:: Return edges of 2D boxes.
* Loading external columns:: Read a column from a table into the stack.
* Size and position operators:: Extracting image size and pixel positions.
* New operands:: How to construct an empty dataset from scratch.
* Operand storage in memory or a file:: Tools for complex operations in one command.
Convolve
* Spatial domain convolution:: Only using the input image values.
* Frequency domain and Fourier operations:: Using frequencies in input.
* Spatial vs. Frequency domain:: When to use which?
* Convolution kernel:: How to specify the convolution kernel.
* Invoking astconvolve:: Options and argument to Convolve.
Spatial domain convolution
* Convolution process:: More basic explanations.
* Edges in the spatial domain:: Dealing with the edges of an image.
Frequency domain and Fourier operations
* Fourier series historical background:: Historical background.
* Circles and the complex plane:: Interpreting complex numbers.
* Fourier series:: Fourier Series definition.
* Fourier transform:: Fourier Transform definition.
* Dirac delta and comb:: Dirac delta and Dirac comb.
* Convolution theorem:: Derivation of Convolution theorem.
* Sampling theorem:: Sampling theorem (Nyquist frequency).
* Discrete Fourier transform:: Derivation and explanation of DFT.
* Fourier operations in two dimensions:: Extend to 2D images.
* Edges in the frequency domain:: Interpretation of edge effects.
Warp
* Linear warping basics:: Basics of coordinate transformation.
* Merging multiple warpings:: How to merge multiple matrices.
* Resampling:: Warping an image is re-sampling it.
* Invoking astwarp:: Arguments and options for Warp.
Invoking Warp
* Align pixels with WCS considering distortions:: Default operation.
* Linear warps to be called explicitly:: Other warps.
Data analysis
* Statistics:: Calculate dataset statistics.
* NoiseChisel:: Detect objects in an image.
* Segment:: Segment detections based on signal structure.
* MakeCatalog:: Catalog from input and labeled images.
* Match:: Match two datasets.
Statistics
* Histogram and Cumulative Frequency Plot:: Basic definitions.
* 2D Histograms:: Plotting the distribution of two variables.
* Least squares fitting:: Fitting with various parametric functions.
* Sky value:: Definition and derivation of the Sky value.
* Invoking aststatistics:: Arguments and options to Statistics.
2D Histograms
* 2D histogram as a table for plotting:: Format and usage in table format.
* 2D histogram as an image:: Format and usage in image format
Least squares fitting
* One dimensional polynomial fitting:: Independent variable is a column.
* Two dimensional polynomial fitting:: Independent variable is an image or two columns.
Sky value
* Sky value definition:: Definition of the Sky/reference value.
* Sky value misconceptions:: Wrong methods to estimate the Sky value.
* Quantifying signal in a tile:: Method to estimate the presence of signal.
Invoking Statistics
* Input to Statistics:: How to specify the inputs to Statistics.
* Single value measurements:: Can be used together (like --mean, or --maximum).
* Generating histograms and cumulative frequency plots:: Histogram and CFP tables.
* Fitting options:: Least squares fitting.
* Contour options:: Table of contours.
* Statistics on tiles:: Possible to do single-valued measurements on tiles.
NoiseChisel
* NoiseChisel changes after publication:: Updates since published papers.
* Invoking astnoisechisel:: Options and arguments for NoiseChisel.
Invoking NoiseChisel
* NoiseChisel input:: NoiseChisel's input options.
* Detection options:: Configure detection in NoiseChisel.
* NoiseChisel output:: NoiseChisel's output options and format.
Segment
* Invoking astsegment:: Inputs, outputs and options to Segment
Invoking Segment
* Segment input:: Input files and options.
* Segmentation options:: Parameters of the segmentation process.
* Segment output:: Outputs of Segment
MakeCatalog
* Detection and catalog production:: Discussing why/how to treat these separately.
* Brightness flux magnitude:: More on Magnitudes, surface brightness, etc.
* Standard deviation vs Standard error:: To avoid confusions with error measurements.
* MakeCatalog measurements on each label:: The columns that you can request in output catalog.
* Metameasurements on full input:: Measurements on/about measurements.
* Manual metameasurements:: These need custom runs of MakeCatalog and/or other programs.
* Adding new columns to MakeCatalog:: How to add new columns.
* Invoking astmkcatalog:: Options and arguments to MakeCatalog.
MakeCatalog measurements on each label
* Identifier columns:: Identifying labels of each row (objects/clumps).
* Position measurements in pixels:: Containing image/pixel (X/Y) measurements.
* Position measurements in WCS:: Containing WCS (for example RA/Dec) measurements.
* Brightness measurements:: Using pixel values of each label.
* Surface brightness measurements:: Various ways to measure surface brightness.
* Upper limit measurements::
* Morphology measurements nonparametric:: Non-parametric morphology.
* Morphology measurements elliptical:: Elliptical morphology measurements.
* Measurements per slice spectra:: Measurements on each slice (like spectra).
Metameasurements on full input
* Surface brightness limit of image:: A portable measure of the noise level.
* Noise based magnitude limit of image:: Magnitude of objects above a certain threshold of the noise.
* Confusion limit of image:: A measure of density in the image.
Manual metameasurements
* Expected surface brightness limit::
* Upper limit surface brightness of image:: Necessary for diffuse regions of image.
* Magnitude limit for certain objects::
* Completeness limit for certain objects::
Invoking MakeCatalog
* MakeCatalog inputs and basic settings:: Input files and basic settings.
* Upper-limit settings:: Settings for upper-limit measurements.
* MakeCatalog output HDUs:: Options that specify the output HDUs.
* MakeCatalog output keywords:: Meta data and metameasures and related options.
Match
* Arranging match output:: Various meanings of matching.
* Unambiguous matching:: How we avoid multiple matches.
* Matching algorithms:: Different ways to find the match.
* Invoking astmatch:: Inputs, outputs and options of Match.
Data modeling
* MakeProfiles:: Making mock galaxies and stars.
MakeProfiles
* Modeling basics:: Astronomical modeling basics.
* If convolving afterwards:: Considerations for convolving later.
* Profile magnitude:: Definition of total profile magnitude.
* Invoking astmkprof:: Inputs and Options for MakeProfiles.
Modeling basics
* Defining an ellipse and ellipsoid:: Definition of these important shapes.
* PSF:: Radial profiles for the PSF.
* Stars:: Making mock star profiles.
* Galaxies:: Radial profiles for galaxies.
* Sampling from a function:: Sample a function on a pixelated canvas.
* Oversampling:: Oversampling the model.
Invoking MakeProfiles
* MakeProfiles catalog:: Required catalog properties.
* MakeProfiles profile settings:: Configuration parameters for all profiles.
* MakeProfiles output dataset:: The canvas/dataset to build profiles over.
* MakeProfiles log file:: A description of the optional log file.
High-level calculations
* CosmicCalculator:: Calculate cosmological variables
CosmicCalculator
* Distance on a 2D curved space:: Distances in 2D for simplicity.
* Extending distance concepts to 3D:: Going to 3D (our real universe).
* Invoking astcosmiccal:: How to run CosmicCalculator.
Invoking CosmicCalculator
* CosmicCalculator input options:: Options to specify input conditions.
* CosmicCalculator basic cosmology calculations:: Such as distance modulus and distances.
* CosmicCalculator spectral line calculations:: How they get affected by redshift.
Installed scripts
* Sort FITS files by night:: Sort many files by date.
* Generate radial profile:: Radial profile of an object in an image.
* SAO DS9 region files from table:: Create ds9 region file from a table.
* Viewing FITS file contents with DS9 or TOPCAT:: Open DS9 (images/cubes) or TOPCAT (tables).
* Zero point estimation:: Zero point of an image from reference catalog or image(s).
* Pointing pattern simulation:: Simulate a coadd with a given series of pointings.
* Color images with gray faint regions:: Color for bright pixels and grayscale for faint.
* PSF construction and subtraction:: Set of scripts to create extended PSF of an image.
Sort FITS files by night
* Invoking astscript-sort-by-night:: Inputs and outputs to this script.
Generate radial profile
* Invoking astscript-radial-profile:: How to call astscript-radial-profile
SAO DS9 region files from table
* Invoking astscript-ds9-region:: How to call astscript-ds9-region
Viewing FITS file contents with DS9 or TOPCAT
* Invoking astscript-fits-view:: How to call this script
Zero point estimation
* Invoking astscript-zeropoint:: How to call the script
Invoking astscript-zeropoint
* zero point output:: Format of the output.
* zero point options:: List and details of options.
Pointing pattern simulation
* Invoking astscript-pointing-simulate:: Options and running mode.
Color images with gray faint regions
* Invoking astscript-color-faint-gray:: Details of options and arguments.
PSF construction and subtraction
* Overview of the PSF scripts:: Summary of concepts and methods
* Invoking astscript-psf-select-stars:: Select good stars within an image.
* Invoking astscript-psf-stamp:: Make a stamp of each star to coadd.
* Invoking astscript-psf-unite:: Merge coadds of different regions of PSF.
* Invoking astscript-psf-scale-factor:: Calculate factor to scale PSF to star.
* Invoking astscript-psf-subtract:: Put the PSF in the image to subtract.
Makefile extensions (for GNU Make)
* Loading the Gnuastro Make functions:: How to find and load Gnuastro's Make library.
* Makefile functions of Gnuastro:: The available functions.
Makefile functions of Gnuastro
* Text functions for Makefiles:: Basic text operations to supplement Make.
* Astronomy functions for Makefiles:: Astronomy/FITS related functions.
Library
* Review of library fundamentals:: Guide on libraries and linking.
* BuildProgram:: Link and run source files with this library.
* Gnuastro library:: Description of all library functions.
* Library demo programs:: Demonstration for using libraries.
Review of library fundamentals
* Headers:: Header files included in source.
* Linking:: Linking the compiled source files into one.
* Summary and example on libraries:: A summary and example on using libraries.
BuildProgram
* Invoking astbuildprog:: Options and examples for using this program.
Gnuastro library
* Configuration information:: General information about library config.
* Multithreaded programming:: Tools for easy multi-threaded operations.
* Library data types:: Definitions and functions for types.
* Pointers:: Wrappers for easily working with pointers.
* Library blank values:: Blank values and functions to deal with them.
* Library data container:: General data container in Gnuastro.
* Dimensions:: Dealing with coordinates and dimensions.
* Linked lists:: Various types of linked lists.
* Array input output:: Reading and writing images or cubes.
* Table input output:: Reading and writing table columns.
* FITS files:: Working with FITS data.
* File input output:: Reading and writing to various file formats.
* World Coordinate System:: Dealing with the world coordinate system.
* Arithmetic on datasets:: Arithmetic operations on a dataset.
* Tessellation library:: Functions for working on tiles.
* Bounding box:: Finding the bounding box.
* Polygons:: Working with the vertices of a polygon.
* Qsort functions:: Helper functions for Qsort.
* K-d tree:: Space partitioning in K dimensions.
* Permutations:: Re-order (or permute) the values in a dataset.
* Matching:: Matching catalogs based on position.
* Statistical operations:: Functions for basic statistics.
* Fitting functions:: Fit independent and measured variables.
* Binary datasets:: Datasets that can only have values of 0 or 1.
* Labeled datasets:: Working with Segmented/labeled datasets.
* Convolution functions:: Library functions to do convolution.
* Pooling functions:: Reduce size of input by statistical methods.
* Interpolation:: Interpolate (over blank values possibly).
* Warp library:: Warp pixel grid to a new one.
* Color functions:: Definitions and operations related to colors.
* Git wrappers:: Wrappers for functions in libgit2.
* Python interface:: Functions to help in writing Python wrappers.
* Unit conversion library:: Converting between recognized units.
* Spectral lines library:: Functions for operating on Spectral lines.
* Cosmology library:: Cosmological calculations.
* SAO DS9 library:: Take inputs from files generated by SAO DS9.
Multithreaded programming (@file{threads.h})
* Implementation of pthread_barrier:: Some systems do not have pthread_barrier
* Gnuastro's thread related functions:: Functions for managing threads.
Data container (@file{data.h})
* Generic data container:: Definition of Gnuastro's generic container.
* Dataset allocation:: Allocate, initialize and free a dataset.
* Arrays of datasets:: Functions to help with array of datasets.
* Copying datasets:: Functions to copy a dataset to a new one.
Linked lists (@file{list.h})
* List of strings:: Simply linked list of strings.
* List of int32_t:: Simply linked list of int32_ts.
* List of size_t:: Simply linked list of size_ts.
* List of float:: Simply linked list of floats.
* List of double:: Simply linked list of doubles
* List of size_t and double:: Simply linked list of a size_t and double.
* List of two size_ts and a double:: Simply linked list of two size_ts and a double.
* List of void:: Simply linked list of void * pointers.
* Ordered list of size_t:: Simply linked, ordered list of size_t.
* Doubly linked ordered list of size_t:: Definition and functions.
* List of gal_data_t:: Simply linked list of Gnuastro's generic datatype.
FITS files (@file{fits.h})
* FITS macros errors filenames:: General macros, errors and checking names.
* CFITSIO and Gnuastro types:: Conversion between FITS and Gnuastro types.
* FITS HDUs:: Opening and getting information about HDUs.
* FITS header keywords:: Reading and writing FITS header keywords.
* FITS arrays - images or cubes:: Reading and writing FITS images or cubes.
* FITS tables:: Reading and writing FITS tables.
File input output
* Text files:: Reading and writing from/to plain text files.
* TIFF files:: Reading and writing from/to TIFF files.
* JPEG files:: Reading and writing from/to JPEG files.
* EPS files:: Writing to EPS files.
* PDF files:: Writing to PDF files.
Tessellation library (@file{tile.h})
* Independent tiles:: Work on or check independent tiles.
* Tile grid:: Cover a full dataset with non-overlapping tiles.
Library demo programs
* Library demo - reading a FITS image:: Read a FITS image into memory.
* Library demo - inspecting neighbors:: Inspect the neighbors of a pixel.
* Library demo - multi-threaded operation:: Doing an operation on threads.
* Library demo - reading and writing table columns:: Simple Column I/O.
* Library demo - Warp to another image:: Output pixel grid and WCS from another image.
* Library demo - Warp to new grid:: Define a new pixel grid and WCS to resample the input.
Developing
* Why C:: Why Gnuastro is designed in C.
* Program design philosophy:: General ideas behind the package structure.
* Coding conventions:: Gnuastro coding conventions.
* Program source:: Conventions for the code.
* Documentation:: Documentation is an integral part of Gnuastro.
* Building and debugging:: Build and possibly debug during development.
* Test scripts:: Understanding the test scripts.
* Bash programmable completion:: Auto-completions for better user experience.
* Developer's checklist:: Checklist to finalize your changes.
* Gnuastro project webpage:: Central hub for Gnuastro activities.
* Developing mailing lists:: Stay up to date with Gnuastro's development.
* Contributing to Gnuastro:: Share your changes with all users.
Program source
* Mandatory source code files:: Description of files common to all programs.
* The TEMPLATE program:: Template for easy creation of a new program.
Bash programmable completion
* Bash TAB completion tutorial:: Fast tutorial to get you started on concepts.
* Implementing TAB completion in Gnuastro:: How Gnuastro uses Bash auto-completion features.
Contributing to Gnuastro
* Copyright assignment:: Copyright has to be assigned to the FSF.
* Commit guidelines:: Guidelines for commit messages.
* Production workflow:: Submitting your commits (work) for inclusion.
* Forking tutorial:: Tutorial on workflow steps with Git.
Other useful software
* SAO DS9:: Viewing FITS images.
* TOPCAT:: Plotting tables of data.
* PGPLOT:: Plotting directly in C.
@end detailmenu
@end menu
@node Introduction, Tutorials, Top, Top
@chapter Introduction
@cindex GNU coding standards
@cindex GNU Astronomy Utilities (Gnuastro)
GNU Astronomy Utilities (Gnuastro) is an official GNU package consisting of separate programs and libraries for the manipulation and analysis of astronomical data.
All the programs share the same basic command-line user interface for the comfort of both the users and developers.
Gnuastro is written to comply fully with the GNU coding standards, so it integrates smoothly with the GNU/Linux operating system.
This also enables astronomers to expect a fully familiar experience in the source code, building, installing, and command-line interaction, just as they have seen in all the other GNU software that they use.
The official and always up to date version of this book (or manual) is freely available under @ref{GNU Free Doc License} in various formats (PDF, HTML, plain text, info, and as its Texinfo source) at @url{http://www.gnu.org/software/gnuastro/manual/}.
For users who are new to the GNU/Linux environment: unless otherwise specified, most of the topics in @ref{Installation} and @ref{Common program behavior} are common to all GNU software, for example, installation, managing command-line options, or getting help (also see @ref{New to GNU/Linux?}).
So if you are new to this empowering environment, we encourage you to go through these chapters carefully.
They can be a starting point from which you can continue to learn more from each program's own manual and fully benefit from and enjoy this wonderful environment.
Gnuastro also comes with a large set of libraries, so you can write your own programs using Gnuastro's building blocks, see @ref{Review of library fundamentals} for an introduction.
In Gnuastro, no change to any program or library is committed to its history before it has been fully documented here.
As discussed in @ref{Science and its tools}, this is a founding principle of Gnuastro.
@menu
* Quick start:: A quick start to installation.
* Gnuastro programs list:: List of command-line programs.
* Science and its tools:: Some philosophy and history.
* Your rights:: User rights.
* Logo of Gnuastro:: Meaning behind Gnuastro's logo.
* Naming convention:: About names of programs in Gnuastro.
* Version numbering:: Understanding version numbers.
* New to GNU/Linux?:: Suggested GNU/Linux distribution.
* Report a bug:: Search and report the bug you found.
* Suggest new feature:: How to suggest a new feature.
* Announcements:: How to stay up to date with Gnuastro.
* Conventions:: Conventions used in this book.
* Acknowledgments and short history::
@end menu
@node Quick start, Gnuastro programs list, Introduction, Introduction
@section Quick start
@cindex Test
@cindex Gzip
@cindex Lzip
@cindex Check
@cindex Build
@cindex Compile
@cindex GNU Tar
@cindex Uncompress source
@cindex Source, uncompress
The latest official release tarball is always available as @url{http://ftp.gnu.org/gnu/gnuastro/gnuastro-latest.tar.lz, @file{gnuastro-latest.tar.lz}}.
The @url{http://www.nongnu.org/lzip/lzip.html, Lzip} format is used for better compression (smaller output size, thus faster download) and for its robust archival features and standards.
For historical reasons (users who do not yet have Lzip), the Gzip'd tarball@footnote{The Gzip library and program are commonly available on most systems.
However, Gnuastro recommends Lzip as described above, and the beta-releases are also only distributed in @file{tar.lz}.} is available at the same URL (just change the @file{.lz} suffix above to @file{.gz}; however, the Lzip'd file is recommended).
See @ref{Release tarball} for more details on the tarball release.
Let's assume the downloaded tarball is in the @file{TOPGNUASTRO} directory.
You can follow the commands below to download and uncompress the Gnuastro source.
You need to have the @command{lzip} program for the decompression (see @ref{Dependencies from package managers}).
If your Tar implementation does not recognize Lzip (the third command fails), run the fourth command.
Note that lines starting with @code{##} do not need to be typed (they are only a description of the following command):
@example
## Go into the download directory.
$ cd TOPGNUASTRO
## If you do not already have the tarball, you can download it:
$ wget http://ftp.gnu.org/gnu/gnuastro/gnuastro-latest.tar.lz
## If this fails, run the next command.
$ tar -xf gnuastro-latest.tar.lz
## Only when the previous command fails.
$ lzip -cd gnuastro-latest.tar.lz | tar -xf -
@end example
Gnuastro has three mandatory dependencies and some optional dependencies for extra functionality, see @ref{Dependencies} for the full list.
In @ref{Dependencies from package managers} we have prepared the command to easily install Gnuastro's dependencies using the package manager of some operating systems.
When the mandatory dependencies are ready, you can configure, compile, check and install Gnuastro on your system with the following commands.
See @ref{Known issues} if you encounter any complications, and if you plan to install without root permissions (such that you will not need @code{sudo} in the last command below), see @ref{Installation directory}.
@example
$ cd gnuastro-X.X # Replace X.X with version number.
$ ./configure
$ make -j8 # Replace 8 with no. CPU threads.
$ make check -j8 # Replace 8 with no. CPU threads.
$ sudo make install
@c $ echo "source /usr/local/share/gnuastro/completion.bash" >> ~/.bashrc
@end example
@c @noindent
@c The last command is to enable Gnuastro's custom TAB completion in Bash.
@c For more on this useful feature, see @ref{Shell TAB completion}).
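After the installation, a quick sanity check (assuming the installation directory is in your @env{PATH}) is to ask one of the installed programs for its version; all Gnuastro programs accept the standard GNU @option{--version} and @option{--help} options.
For example, with the Fits program:
@example
$ astfits --version
@end example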
For each program there is an `Invoking ProgramName' sub-section in this book which explains how the programs should be run on the command-line (for example, see @ref{Invoking asttable}).
In @ref{Tutorials}, we have prepared some complete tutorials with common Gnuastro usage scenarios in astronomical research.
They even contain links to download the necessary data, and thoroughly describe every step of the process (the science, statistics and optimal usage of the command-line).
We therefore recommend reading (and running the commands in) the tutorials before starting to use Gnuastro.
@node Gnuastro programs list, Science and its tools, Quick start, Introduction
@section Gnuastro programs list
One of the most common ways to operate Gnuastro is through its command-line programs.
For some tutorials on several real-world usage scenarios, see @ref{Tutorials}.
The list here is just provided as a general summary for those who are new to Gnuastro.
GNU Astronomy Utilities @value{VERSION} contains the following programs.
They are sorted in alphabetical order and a short description is provided for each program.
The description starts with the executable name in @file{thisfont}, followed by a pointer to the respective section in parentheses.
Throughout this book, however, they are ordered by context; please see the top-level contents for this contextual ordering (based on what they do).
A few example calls are also shown just after this list.
@table @asis
@item Arithmetic
(@file{astarithmetic}, see @ref{Arithmetic}) For arithmetic operations on an arbitrary (theoretically unlimited) number of datasets (images).
It has a large and growing set of arithmetic, mathematical, and even statistical operators (for example, @command{+}, @command{-}, @command{*}, @command{/}, @command{sqrt}, @command{log}, @command{min}, @command{average}, @command{median}, see @ref{Arithmetic operators}).
@item BuildProgram
(@file{astbuildprog}, see @ref{BuildProgram}) Compile, link and run custom C programs that depend on the Gnuastro library (see @ref{Gnuastro library}).
This program will automatically link with the libraries that Gnuastro depends on, so there is no need to explicitly mention them every time you are compiling a Gnuastro library dependent program.
@item ConvertType
(@file{astconvertt}, see @ref{ConvertType}) Convert astronomical data files (FITS or IMH) to and from several other standard image and data formats, for example, TXT, JPEG, EPS or PDF.
Optionally, it is also possible to add vector graphics markers over the output image (for example, circles from catalogs containing RA or Dec).
@item Convolve
(@file{astconvolve}, see @ref{Convolve}) Convolve (blur or smooth) data with a given kernel in spatial and frequency domain on multiple threads.
Convolve can also do deconvolution to find the appropriate kernel to PSF-match two images.
@item CosmicCalculator
(@file{astcosmiccal}, see @ref{CosmicCalculator}) Do cosmological calculations, for example, the luminosity distance, distance modulus, comoving volume and many more.
@item Crop
(@file{astcrop}, see @ref{Crop}) Crop region(s) from one or many image(s) and stitch several images if necessary.
Input coordinates can be in pixel coordinates or world coordinates.
@item Fits
(@file{astfits}, see @ref{Fits}) View and manipulate FITS file extensions and header keywords.
@item MakeCatalog
(@file{astmkcatalog}, see @ref{MakeCatalog}) Make a catalog from a labeled image (for example, the output of NoiseChisel).
The catalogs are highly customizable and adding new calculations/columns is very straightforward, see Akhlaghi @url{https://arxiv.org/abs/1611.06387,2019}.
@item MakeProfiles
(@file{astmkprof}, see @ref{MakeProfiles}) Make mock 2D profiles in an image.
The central regions of radial profiles are made with a configurable 2D Monte Carlo integration.
It can also build the profiles on an over-sampled image.
@item Match
(@file{astmatch}, see @ref{Match}) Given two input catalogs, find the rows that match with each other within a given aperture (may be an ellipse).
@item NoiseChisel
(@file{astnoisechisel}, see @ref{NoiseChisel}) Detect signal in noise.
It uses a technique to detect very faint and diffuse, irregularly shaped signal in noise (galaxies in the sky), using thresholds that are below the Sky value, see Akhlaghi and Ichikawa @url{http://arxiv.org/abs/1505.01664,2015} and Akhlaghi @url{https://arxiv.org/abs/1909.11230,2019}.
@item Query
(@file{astquery}, see @ref{Query}) High-level interface to query pre-defined remote or external databases, and directly download the required sub-tables on the command-line.
@item Segment
(@file{astsegment}, see @ref{Segment}) Segment detected regions based on the structure of signal and the input dataset's noise properties.
@item Statistics
(@file{aststatistics}, see @ref{Statistics}) Statistical calculations on the input dataset (column in a table, image or data cube).
This includes many operations such as generating histograms, sigma clipping, and least squares fitting.
@item Table
(@file{asttable}, @ref{Table}) Convert FITS binary and ASCII tables into other such tables, print them on the command-line, save them in a plain text file, do arithmetic on the columns or get the FITS table information.
For a full list of operations, see @ref{Operation precedence in Table}.
@item Warp
(@file{astwarp}, see @ref{Warp}) Warp image to new pixel grid.
By default it will align the pixel and WCS coordinates, removing any non-linear WCS distortions.
Any linear warp (projective transformation or Homography) can also be applied to the input images by explicitly calling the respective operation.
@end table
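As a taste of this shared command-line interface, the commands below show a few simple calls (the file and column names are hypothetical place-holders; see each program's `Invoking' section for the full set of options):
@example
## List the HDUs (extensions) within a FITS file.
$ astfits image.fits

## Single-valued statistics (here: mean and STD) of an image.
$ aststatistics image.fits --mean --std

## Add 10 to all pixels (in reverse polish notation).
$ astarithmetic image.fits 10 + --output=added.fits

## Print two columns (selected by name) of a table.
$ asttable catalog.fits -cRA,DEC
@end example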
The programs listed above are designed to be highly modular and generic.
Hence, they are naturally suited to lower-level operations.
In Gnuastro, higher-level operations (combining multiple programs, or running a program in a special way), are done with installed Bash scripts (all prefixed with @code{astscript-}).
They can be run just like a program and behave very similarly (with minor differences, see @ref{Installed scripts}); example calls are shown just after this list.
@table @code
@item astscript-ds9-region
(See @ref{SAO DS9 region files from table}) Given a table (either as a file or from standard input), create an SAO DS9 region file from the requested positional columns (WCS or image coordinates).
@item astscript-fits-view
(see @ref{Viewing FITS file contents with DS9 or TOPCAT}) Given any number of FITS files, this script will either open SAO DS9 (for images or cubes) or TOPCAT (for tables) to view them in a graphical user interface (GUI).
@item astscript-pointing-simulate
(See @ref{Pointing pattern simulation}) Given a table of pointings on the sky, and a reference image that contains your camera's distortions and properties, generate a coadded exposure map.
This is very useful in testing the coverage of dither patterns when designing your observing strategy and it is highly customizable.
See Akhlaghi @url{https://arxiv.org/abs/2310.15006,2023}, or the dedicated tutorial in @ref{Pointing pattern design}.
@item astscript-radial-profile
(See @ref{Generate radial profile}) Calculate the 1D radial profile or 2D polar plot of an object within an image.
The object can be at any location in the image, the profile can use various measures (median, sigma-clipped mean, etc.), and the radial distance can also be measured on any general ellipse.
See Infante-Sainz et al. @url{https://arxiv.org/abs/2401.05303,2024} for a general description and Eskandarlou @& Akhlaghi @url{https://arxiv.org/abs/2406.14619,2024} for the radial profile component.
@item astscript-color-faint-gray
(see @ref{Color images with gray faint regions}) Given three images for the Red-Green-Blue (RGB) channels, this script will use the bright pixels for color and will show the faint/diffuse regions in grayscale.
This greatly helps in visualizing the full dynamic range of astronomical data.
See Infante-Sainz et al. @url{https://arxiv.org/abs/2401.03814,2024} or a dedicated tutorial in @ref{Color images with full dynamic range}.
@item astscript-sort-by-night
(See @ref{Sort FITS files by night}) Given a list of FITS files, and a HDU and keyword name (for a date), this script separates the files in the same night (possibly over two calendar days).
@item astscript-zeropoint
(see @ref{Zero point estimation}) Estimate the zero point (to calibrate pixel values) of an input image using a reference image or a reference catalog.
This is necessary to produce measurements with physical units from new images.
See Eskandarlou et al. @url{https://arxiv.org/abs/2312.04263,2023}, or a dedicated tutorial in @ref{Zero point of an image}.
@item astscript-psf-*
The following scripts are used for the extended PSF construction and subtraction described in the tutorial @ref{Building the extended PSF}:
@table @code
@item astscript-psf-select-stars
(see @ref{Invoking astscript-psf-select-stars}) Find all the stars within an image that are suitable for constructing an extended PSF.
If the image has WCS, this script can automatically query Gaia to find the good stars.
@item astscript-psf-stamp
(see @ref{Invoking astscript-psf-stamp}) Build a crop (stamp) of a certain width around a star at a certain coordinate in a larger image.
This script will do sub-pixel re-positioning to make sure the star is centered, and can optionally mask all other background sources.
@item astscript-psf-scale-factor
(see @ref{Invoking astscript-psf-scale-factor}) Given a PSF model, and the central coordinates of a star in an image, find the scale factor that has to be multiplied by the PSF to scale it to that star.
@item astscript-psf-unite
(see @ref{Invoking astscript-psf-unite}) Unite the various components of a PSF into one.
Because of saturation and non-linearity, to get a good estimate of the extended PSF, it is necessary to construct various parts from different magnitude ranges.
@item astscript-psf-subtract
(see @ref{Invoking astscript-psf-subtract}) Given the model of a PSF and the central coordinates of a star in the image, do sub-pixel re-positioning of the PSF, scale it to the star and subtract it from the image.
@end table
@end table
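For example, the commands below show two typical calls (with hypothetical file and column names); as with the programs, the full options of each script are given in its `Invoking' section:
@example
## Open an image in SAO DS9 (or a table in TOPCAT).
$ astscript-fits-view image.fits

## Build a DS9 region file from a table's RA and Dec columns.
$ astscript-ds9-region catalog.fits --column=RA,DEC \
                       --output=targets.reg
@end example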
@node Science and its tools, Your rights, Gnuastro programs list, Introduction
@section Gnuastro manifesto: Science and its tools
History of science indicates that there are always inevitably unseen faults, hidden assumptions, simplifications and approximations in all our theoretical models, data acquisition and analysis techniques.
It is precisely these that will ultimately allow future generations to advance the existing experimental and theoretical knowledge through their new solutions and corrections.
In the past, scientists would gather data and process them individually to achieve an analysis, thus having a much more intricate knowledge of the data and the analysis.
The theoretical models also required little (if any) simulations to compare with the data.
Today both methods are becoming increasingly more dependent on pre-written software.
Scientists are dissociating themselves from the intricacies of reducing raw observational data in experimentation or from bringing the theoretical models to life in simulations.
These `intricacies' are precisely those unseen faults, hidden assumptions, simplifications and approximations that define scientific progress.
@quotation
@cindex Anscombe F. J.
Unfortunately, most persons who have recourse to a computer for statistical analysis of data are not much interested either in computer programming or in statistical method, being primarily concerned with their own proper business.
Hence the common use of library programs and various statistical packages. ... It's time that was changed.
@author F.J. Anscombe. The American Statistician, Vol. 27, No. 1. 1973
@end quotation
@cindex Anscombe's quartet
@cindex Statistical analysis
@url{http://en.wikipedia.org/wiki/Anscombe%27s_quartet,Anscombe's quartet} demonstrates how four data sets with widely different shapes (when plotted) give nearly identical output from standard regression techniques.
Anscombe uses this (now famous) quartet, which was introduced in the paper quoted above, to argue that ``@emph{Good statistical analysis is not a purely routine matter, and generally calls for more than one pass through the computer}''.
Echoing Anscombe's concern after 44 years, some of the highly recognized statisticians of our time (Leek, McShane, Gelman, Colquhoun, Nuijten and Goodman), wrote in Nature that:
@quotation
We need to appreciate that data analysis is not purely computational and algorithmic -- it is a human behavior.... Researchers who hunt hard enough will turn up a result that fits statistical criteria -- but their discovery will probably be a false positive.
@author Five ways to fix statistics, Nature, 551, Nov 2017.
@end quotation
Users of statistical (scientific) methods (software) are therefore not passive (objective) agents in their results.
It is necessary to actually understand the method, not just use it as a black box.
The subjective experience gained by frequently using a method/software is not sufficient to claim an understanding of how the tool/method works and how relevant it is to the data and analysis.
This kind of subjective experience is prone to serious misunderstandings about the data, what the software/statistical-method really does (especially as it gets more complicated), and thus the scientific interpretation of the result.
This attitude is further encouraged through non-free software@footnote{@url{https://www.gnu.org/philosophy/free-sw.html}}, poorly written (or non-existent) scientific software manuals, and non-reproducible papers@footnote{Where the authors omit many of the analysis/processing ``details'' from the paper by arguing that they would make the paper too long/unreadable.
However, software engineers have been dealing with such issues for a long time.
There are thus software management solutions that allow us to supplement papers with all the details necessary to exactly reproduce the result.
For example, see Akhlaghi et al. @url{https://arxiv.org/abs/2006.03018,2021}.}.
This approach to scientific software and methods only helps in producing dogmas and an ``@emph{obscurantist faith in the expert's special skill, and in his personal knowledge and authority}''@footnote{Karl Popper. The logic of scientific discovery. 1959.
Larger quote is given at the start of the PDF (for print) version of this book.}.
@quotation
@cindex Douglas Rushkoff
Program or be programmed.
Choose the former, and you gain access to the control panel of civilization.
Choose the latter, and it could be the last real choice you get to make.
@author Douglas Rushkoff. Program or be programmed, O/R Books (2010).
@end quotation
It is obviously impractical for any one human being to gain the intricate knowledge explained above for every step of an analysis.
On the other hand, scientific data can be large and numerous, for example, images produced by telescopes in astronomy.
This requires efficient algorithms.
To make things worse, natural scientists have generally not been trained in the advanced software techniques, paradigms and architecture that are taught in computer science or engineering courses and thus used in most software.
The GNU Astronomy Utilities are an effort to tackle this issue.
Gnuastro is not just software; this book is as important to the idea behind Gnuastro as the source code (software).
This book has tried to learn from the success of the ``Numerical Recipes'' book in educating those who are not software engineers and computer scientists but still heavy users of computational algorithms, like astronomers.
There are two major differences.
The first difference is that Gnuastro's code and the background information are segregated: the code is kept within the actual Gnuastro software source code, and the underlying explanations are given here in this book.
In the source code, every non-trivial step is heavily commented and correlated with this book; it follows the same logic as this book, and all the programs follow a similar internal data, function and file structure, see @ref{Program source}.
Complementing the code, this book focuses on thoroughly explaining the concepts behind those codes (history, mathematics, science, software and usage advice when necessary) along with detailed instructions on how to run the programs.
At the expense of frustrating ``professionals'' or ``experts'', this book and the comments in the code also intentionally avoid jargon and abbreviations.
The source code and this book are thus intimately linked, and when considered as a single entity can be thought of as a real (an actual software accompanying the algorithms) ``Numerical Recipes'' for astronomy.
@cindex GNU free documentation license
@cindex GNU General Public License (GPL)
The second major, and arguably more important, difference is that ``Numerical Recipes'' does not allow you to distribute any code that you have learned from it.
In other words, it does not allow you to release your software's source code if you have used their codes; you can only publicly release binaries (a black box) to the community.
Therefore, while it empowers the privileged individual who has access to it, it exacerbates social ignorance.
Exactly at the opposite end of the spectrum, Gnuastro's source code is released under the GNU general public license (GPL) and this book is released under the GNU free documentation license.
You are therefore free to distribute any software you create using parts of Gnuastro's source code or text, or figures from this book, see @ref{Your rights}.
With these principles in mind, Gnuastro's developers aim to impose the minimum requirements on you (in computer science, engineering and even the mathematics behind the tools) to understand and modify any step of Gnuastro if you feel the need to do so, see @ref{Why C} and @ref{Program design philosophy}.
@cindex Brahe, Tycho
@cindex Galileo, Galilei
Without prior familiarity and experience with optics, it is hard to imagine how Galileo could have come up with the idea of modifying the Dutch military telescope optics for use in astronomy.
Astronomical objects could not be seen with the Dutch military design of the telescope.
In other words, it is unlikely that Galileo could have asked a random optician to make modifications (not understood by Galileo) to the Dutch design, to do something no astronomer of the time took seriously.
In the paradigm of the day, what could be the purpose of enlarging geometric spheres (planets) or points (stars)? In that paradigm, only the position and movement of the heavenly bodies were important, and that had already been accurately studied (most recently by Tycho Brahe).
At the beginning of his ``The Sidereal Messenger'' (published in 1610), he cautions the readers on this issue and, @emph{before} describing his results/observations, instructs us on how to build a suitable instrument.
Without a detailed description of @emph{how} he made his tools and did his observations, no reasonable person would believe his results.
Before he actually saw the moons of Jupiter, the mountains on the Moon or the crescent of Venus, Galileo was ``evasive''@footnote{Galileo G. (Translated by Maurice A. Finocchiaro). @emph{The essential Galileo}. Hackett publishing company, first edition, 2008.} to Kepler.
Science is defined by its tools/methods, @emph{not} its raw results@footnote{For example, take the following two results on the age of the universe: roughly 14 billion years (suggested by the current consensus of the standard model of cosmology) and less than 10,000 years (suggested from some interpretations of the Bible).
Both these numbers are @emph{results}.
What distinguishes these two results, is the tools/methods that were used to derive them.
Therefore, as the term ``scientific method'' also signifies, a scientific statement is defined by its @emph{method}, not its result.}.
The same is true today: science cannot progress with a black box, or poorly released code.
The source code of a research project is the new (abstractified) communication language in science, understandable by humans @emph{and} computers.
Source code (in any programming language) is a language/notation designed to express all the details that would be too tedious/long/frustrating to report in spoken languages like English, similar to mathematical notation.
@quotation
An article about computational science [almost all sciences today] ... is not the scholarship itself, it is merely advertising of the scholarship.
The Actual Scholarship is the complete software development environment and the complete set of instructions which generated the figures.
@author Buckheit & Donoho, Lecture Notes in Statistics, Vol 103, 1996
@end quotation
Today, the quality of the source code that goes into a scientific result (and the distribution of that code) is as critical to scientific vitality and integrity as the quality of the written language/English used in publishing/distributing its paper.
A scientific paper will not even be reviewed by any respectable journal if it is written in poor language/English.
A similar level of quality assessment is thus increasingly becoming necessary regarding the codes/methods used to derive the results of a scientific paper.
For more on this, please see Akhlaghi et al. @url{https://arxiv.org/abs/2006.03018,2021}.
@cindex Ken Thompson
@cindex Stroustrup, Bjarne
Bjarne Stroustrup (creator of the C++ language) says: ``@emph{Without understanding software, you are reduced to believing in magic}''.
Ken Thompson (the designer of the Unix operating system) says ``@emph{I abhor a system designed for the `user' if that word is a coded pejorative meaning `stupid and unsophisticated'}.'' Certainly no scientist (user of scientific software) would want to be considered a believer in magic, or stupid and unsophisticated.
This can happen when scientists get too distant from the raw data and methods, and are mainly discussing results.
In other words, when they feel they have tamed Nature into their own high-level (abstract) models (creations), and are mainly concerned with scaling up, or industrializing those results.
Roughly five years before special relativity, and about two decades before quantum mechanics fundamentally changed Physics, Lord Kelvin is quoted as saying:
@quotation
@cindex Lord Kelvin
@cindex William Thomson
There is nothing new to be discovered in physics now.
All that remains is more and more precise measurement.
@author William Thomson (Lord Kelvin), 1900
@end quotation
@noindent
A few years earlier, Albert A. Michelson had made the following statement:
@quotation
@cindex Albert A. Michelson
@cindex Michelson, Albert A.
The more important fundamental laws and facts of physical science have all been discovered, and these are now so firmly established that the possibility of their ever being supplanted in consequence of new discoveries is exceedingly remote....
Our future discoveries must be looked for in the sixth place of decimals.
@author Albert A. Michelson, dedication of Ryerson Physics Lab, U. Chicago, 1894
@end quotation
@cindex Puzzle solving scientist
@cindex Scientist, puzzle solver
If scientists are considered to be more than mere puzzle solvers@footnote{Thomas S. Kuhn. @emph{The Structure of Scientific Revolutions}, University of Chicago Press, 1962.} (simply adding to the decimals of existing values or observing a feature in 10, 100, or 100000 more galaxies or stars, as Kelvin and Michelson clearly believed), they cannot just passively sit back and uncritically repeat the previous (observational or theoretical) methods/tools on new data.
Today there is a wealth of raw telescope images, ready (mostly for free) at the fingertips of anyone who is interested and has a fast enough internet connection to download them.
The only thing lacking is new ways to analyze this data and dig out the treasure that is lying hidden in them, invisible to existing methods and techniques.
@quotation
@cindex Jaynes E. T.
New data that we insist on analyzing in terms of old ideas (that is, old models which are not questioned) cannot lead us out of the old ideas.
However many data we record and analyze, we may just keep repeating the same old errors, missing the same crucially important things that the experiment was competent to find.
@author Jaynes, Probability theory, the logic of science. Cambridge U. Press (2003).
@end quotation
@node Your rights, Logo of Gnuastro, Science and its tools, Introduction
@section Your rights
@cindex GNU Texinfo
The paragraphs below, in this section, belong to the GNU Texinfo@footnote{Texinfo is the GNU documentation system.
It is used to create this book in all the various formats.} manual and are not written by us! The name ``Texinfo'' is just changed to ``GNU Astronomy Utilities'' or ``Gnuastro'' because both are released under the same licenses, and the original text beautifully informs you of your rights.
@cindex Free software
@cindex Copyright
@cindex Public domain
GNU Astronomy Utilities is ``free software''; this means that everyone is free to use it and free to redistribute it on certain conditions.
Gnuastro is not in the public domain; it is copyrighted and there are restrictions on its distribution, but these restrictions are designed to permit everything that a good cooperating citizen would want to do.
What is not allowed is to try to prevent others from further sharing any version of Gnuastro that they might get from you.
Specifically, we want to make sure that you have the right to give away copies of the programs that relate to Gnuastro, that you receive the source code or else can get it if you want it, that you can change these programs or use pieces of them in new free programs, and that you know you can do these things.
To make sure that everyone has such rights, we have to forbid you to deprive anyone else of these rights.
For example, if you distribute copies of the Gnuastro related programs, you must give the recipients all the rights that you have.
You must make sure that they, too, receive or can get the source code.
And you must tell them their rights.
Also, for our own protection, we must make certain that everyone finds out that there is no warranty for the programs that relate to Gnuastro.
If these programs are modified by someone else and passed on, we want their recipients to know that what they have is not what we distributed, so that any problems introduced by others will not reflect on our reputation.
@cindex GNU General Public License (GPL)
@cindex GNU Free Documentation License
The full text of the licenses for the Gnuastro book and software can be respectively found in @ref{GNU General Public License}@footnote{Also available in @url{http://www.gnu.org/copyleft/gpl.html}} and @ref{GNU Free Doc License}@footnote{Also available in @url{http://www.gnu.org/copyleft/fdl.html}}.
@node Logo of Gnuastro, Naming convention, Your rights, Introduction
@section Logo of Gnuastro
Gnuastro's logo is an abstract image of a @url{https://en.wikipedia.org/wiki/Barred_spiral_galaxy,barred spiral galaxy}.
The galaxy is vertically cut in half: on the left side, the beauty of a contiguous galaxy image is visible.
But on the right, the image gets pixelated, and we only see the parts that are within the pixels.
The pixels nearer to the center of the galaxy (which is brighter) are also larger.
But as we follow the spiral arms (and get more distant from the center), the pixels get smaller (signifying less signal).
This sharp distinction between the contiguous and pixelated view of the galaxy signifies the main struggle in science: in the ``real'' world, objects are not pixelated or discrete and have no noise.
However, when we observe nature, we are confined and constrained by the resolution of our data collection (CCD imager in this case).
On the other hand, we read English text from the left and progress towards the right.
This defines the positioning of the ``real'' and observed halves of the galaxy: the noiseless and contiguous half (on the left) passes through our observing tools and becomes the pixelated and noisy half (on the right).
It is the job of scientific software like Gnuastro to help interpret the underlying mechanisms of the ``real'' universe from the pixelated and noisy data.
Gnuastro's logo was designed by Marjan Akbari.
The concept behind it was created after several design iterations with Mohammad Akhlaghi.
@node Naming convention, Version numbering, Logo of Gnuastro, Introduction
@section Naming convention
@cindex Names, programs
@cindex Program names
Gnuastro is a package of independent programs and a collection of libraries; here we are mainly concerned with the programs.
Each program has an official name, consisting of one or two words that describe its function.
Two-word names are written with no space between the words, for example, NoiseChisel or Crop.
On the command-line, you can run a program with its executable name, which starts with @file{ast} and may be an abbreviation of the official name, for example, @file{astnoisechisel} or @file{astcrop}, see @ref{Executable names}.
@pindex ProgramName
@pindex @file{astprogname}
We will use ``ProgramName'' for a generic official program name and @file{astprogname} for a generic executable name.
In this book, the programs are classified based on what they do and thoroughly explained.
An alphabetical list of the programs installed on your system is given in @ref{Gnuastro programs list}.
That list also contains the executable names and version numbers, along with a one-line description.
@node Version numbering, New to GNU/Linux?, Naming convention, Introduction
@section Version numbering
@cindex Version number
@cindex Number, version
@cindex Major version number
@cindex Minor version number
@cindex Mailing list: info-gnuastro
Gnuastro has two formats of version numbers: one for official and one for unofficial releases.
Official Gnuastro releases are announced on the @command{info-gnuastro} mailing list, they have a version control tag in Gnuastro's development history, and their version numbers are formatted like ``@file{A.B}''.
@file{A} is a major version number, marking a significant planned achievement (for example, see @ref{GNU Astronomy Utilities 1.0}), while @file{B} is a minor version number, see below for more on the distinction.
Note that the numbers are not decimals, so version 2.34 is much more recent than version 2.5, which is not equal to 2.50.
Gnuastro also allows a unique version number for unofficial releases.
Unofficial releases can mark any point in Gnuastro's development history.
This is done to allow astronomers to easily use any point in the version controlled history for their data-analysis and research publication.
See @ref{Version controlled source} for a complete introduction.
That section is not just for developers; it is intended to be straightforward and easy to read, so please have a look if you are interested in the cutting edge.
This unofficial version number is a meaningful and easy to read string of characters, unique to that particular point of history.
With this feature, users can easily stay up to date with the most recent bug fixes and additions that are committed between official releases.
The unofficial version number is formatted like: @file{A.B.C-D}.
@file{A} and @file{B} are the most recent official version number.
@file{C} is the number of commits that have been made after version @file{A.B}.
@file{D} is the first 4 or 5 characters of the commit hash@footnote{Each point in Gnuastro's history is uniquely identified by a 40-character hash, which is created from its contents and previous history, for example: @code{5b17501d8f29ba3cd610673261e6e2229c846d35}.
So the string @file{D} in the version for this commit could be @file{5b17}, or @file{5b175}.}.
Therefore, the unofficial version number `@code{3.92.8-29c8}' corresponds to the 8th commit after official version @code{3.92}, with a commit hash beginning with @code{29c8}.
The unofficial version number is sortable (unlike the raw hash) and, as shown above, is descriptive of the state of the unofficial release.
Of course an official release is preferred for publication (since its tarballs are easily available and it has gone through more tests, making it more stable), so if an official release is announced prior to your publication's final review, please consider updating to the official release.
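As a hypothetical demonstration (the version string below is made up; yours will differ), you can always confirm the exact version of an installed program with the @option{--version} option:

@example
$ astcrop --version | head -1
astcrop (GNU Astronomy Utilities) 3.92.8-29c8
@end example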
The major version number is set by a major goal which is defined by the developers and user community beforehand, for example, see @ref{GNU Astronomy Utilities 1.0}.
The incremental work done in minor releases is commonly a small step toward the major goal.
Therefore there is no limit on the number of minor releases, and the difference between (hypothetical) versions 2.927 and 3.0 may only be a small (negligible to the user) improvement that finalizes the defined goals.
@menu
* GNU Astronomy Utilities 1.0:: Plans for version 1.0 release
@end menu
@node GNU Astronomy Utilities 1.0, , Version numbering, Version numbering
@subsection GNU Astronomy Utilities 1.0
@cindex Gnuastro major version number
As in all software, version 1.0 is a unique milestone: a point where the developers feel the package is complete to a minimal level.
In Gnuastro, the goal to achieve for version 1.0 is to have all the necessary tools for optical imaging data reduction: starting from raw images of individual exposures to the final deep image ready for high-level science.
Various software packages already existed and were commonly used when Gnuastro was first released in 2016.
However, they were mostly written without following any robust (or even common) coding and usage standards, or up-to-date and well-maintained documentation.
This makes it very hard to reduce astronomical data without learning each package's peculiarities through trial and error.
@node New to GNU/Linux?, Report a bug, Version numbering, Introduction
@section New to GNU/Linux?
Some astronomers initially install and use a GNU/Linux operating system because their necessary tools can only be installed in this environment.
However, the transition is not necessarily easy.
To encourage you to invest the patience and time to make this transition, and to actually enjoy it, we will first start with a basic introduction to GNU/Linux operating systems.
Afterwards, in @ref{Command-line interface} we will discuss the wonderful benefits of the command-line interface, how it beautifully complements the graphic user interface, and why it is worth the (apparently steep) learning curve.
Finally a complete chapter (@ref{Tutorials}) is devoted to real world scenarios of using Gnuastro (on the command-line).
Therefore, if you do not yet feel comfortable with the command-line, we strongly recommend going through that chapter after finishing this section.
You might have already noticed that we are not using the name ``Linux'', but ``GNU/Linux''.
Please take the time to have a look at the following essays and FAQs for a complete understanding of this very important distinction.
@itemize
@item
@url{https://gnu.org/philosophy}
@item
@url{https://www.gnu.org/gnu/the-gnu-project.html}
@item
@url{https://www.gnu.org/gnu/gnu-users-never-heard-of-gnu.html}
@item
@url{https://www.gnu.org/gnu/linux-and-gnu.html}
@item
@url{https://www.gnu.org/gnu/why-gnu-linux.html}
@item
@url{https://www.gnu.org/gnu/gnu-linux-faq.html}
@item
Recorded talk: @url{https://peertube.stream/w/ddeSSm33R1eFWKJVqpcthN} (first 20 min is about the history of Unix-like operating systems).
@end itemize
@cindex Linux
@cindex GNU/Linux
@cindex GNU C library
@cindex GCC: GNU Compiler Collection
@cindex GNU Compiler Collection (GCC)
In short, the Linux kernel@footnote{In Unix-like operating systems, the kernel connects software and hardware worlds.} is built using the GNU C Library (glibc) and the GNU Compiler Collection (GCC).
The Linux kernel alone is just a means for other software to access the hardware resources; by itself, it is useless!
A normal astronomer (or scientist) will never interact with the kernel directly!
For example, the command-line environment that you interact with is usually GNU Bash.
It is GNU Bash that then talks to the kernel.
To clarify, let's use an analogy inspired by one of the links above@footnote{@url{https://www.gnu.org/gnu/gnu-users-never-heard-of-gnu.html}}: saying that you are ``running Linux'' is like saying you are ``driving your engine''.
The car's engine is the main source of power in the car, no one doubts that.
But you do not ``drive'' the engine, you drive the ``car''.
The engine alone is useless for transportation without the radiator, battery, transmission, wheels, chassis, seats, windshield, etc.
@cindex Window Subsystem for Linux
To have an operating system, you need both lower-level tools (to build the kernel) and higher-level software packages (to use it).
For the Linux kernel, both the lower-level and higher-level tools are GNU.
In other words, ``the whole system is basically GNU with Linux loaded''.
You can replace the Linux kernel and still have the GNU shell and higher-level utilities.
For example, using the ``Windows Subsystem for Linux'', you can use almost all GNU tools without the original Linux kernel, but using the host Windows operating system, as in @url{https://ubuntu.com/wsl}.
Alternatively, you can build a fully functional GNU-based working environment on a macOS or BSD-based operating system (using the host's kernel and C compiler), for example, through projects like Maneage; see Akhlaghi et al. (@url{https://arxiv.org/abs/2006.03018,2021}), in particular Appendix C, which lists all the GNU software tools in an environment that is exactly reproducible on macOS as well.
Therefore to acknowledge GNU's instrumental role in the creation and usage of the Linux kernel and the operating systems that use it, we should call these operating systems ``GNU/Linux''.
@menu
* Command-line interface:: Introduction to the command-line
@end menu
@node Command-line interface, , New to GNU/Linux?, New to GNU/Linux?
@subsection Command-line interface
@cindex Shell
@cindex Graphic user interface
@cindex Command-line user interface
@cindex GUI: graphic user interface
@cindex CLI: command-line user interface
One aspect of Gnuastro that might be a little troubling to new GNU/Linux users is that (at least for the time being) it only has a command-line user interface (CLI).
This might be contrary to the mostly graphical user interface (GUI) experience with proprietary operating systems.
Since the various actions available are not always on the screen, the command-line interface can be complicated, intimidating, and frustrating for a first-time user.
This is understandable and also experienced by anyone who started using the computer (from childhood) in a graphical user interface (this includes most of Gnuastro's authors).
Here we hope to convince you of the unique benefits of this interface which can greatly enhance your productivity while complementing your GUI experience.
@cindex GNOME 3
Through GNOME 3@footnote{@url{http://www.gnome.org/}}, most GNU/Linux based operating systems now have an advanced and useful GUI.
Since the GUI was created long after the command-line, some wrongly consider the command-line to be obsolete.
Both interfaces are useful for different tasks.
For example, you cannot view an image, video, PDF document or web page on the command-line.
On the other hand you cannot reproduce your results easily in the GUI.
Therefore they should not be regarded as rivals, but as complementary user interfaces; here we will outline how the CLI can be useful in scientific programs.
You can think of the GUI as a veneer over the CLI to facilitate a small subset of all the possible CLI operations.
Each click you do on the GUI, can be thought of as internally running a different CLI command.
So asymptotically (if a good designer can design a GUI which is able to show you all the possibilities to click on) the GUI is only as powerful as the command-line.
In practice, such graphical designers are very hard to find for every program, so the GUI operations are always a subset of the internal CLI commands.
For programs that are designed only for the GUI, this means that many potentially useful operations are simply not included.
It also makes `interface design' a crucially important part of any GUI program.
Scientists do not usually have the resources to hire a graphical designer; furthermore, GUI code is far more complex than CLI code, which is harmful for scientific software, see @ref{Science and its tools}.
@cindex GUI: repeating operations
For programs that have a GUI, one action on the GUI (moving and clicking a mouse, or tapping a touchscreen) might be more efficient and easier than its CLI counterpart (typing the program name and your desired configuration).
However, if you have to repeat that same action more than once, the GUI will soon become frustrating and prone to errors.
Unless the designers of a particular program decided to design such a system for a particular GUI action, there is no general way to run any possible series of actions automatically on the GUI.
@cindex GNU Bash
@cindex Reproducible results
@cindex CLI: repeating operations
On the command-line, you can run any series of actions (coming from various CLI-capable programs of your own choice) in any possible permutation with one command@footnote{By writing a shell script and running it; for example, see the tutorials in @ref{Tutorials}.}.
This allows for much more creativity and exactness than is possible for a GUI user.
For technical and scientific operations, where the same operation (using various programs) has to be done on a large set of data files, this is crucially important.
It also allows exact reproducibility, which is a foundational principle of scientific results.
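For example, here is a minimal sketch of repeating one operation over many files with a single shell command (the file names, coordinates and option values are purely illustrative):

@example
## Crop the same sky region from every FITS image here.
$ for f in *.fits; do \
    astcrop $f --mode=wcs --center=53.16,-27.78 \
            --width=0.2 --output=crop-$f; \
  done
@end example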
The most common CLI (also known as a shell) in GNU/Linux is GNU Bash; we strongly encourage you to put aside several hours and go through this beautifully explained web page: @url{https://flossmanuals.net/command-line/}.
You do not need to read or even fully understand the whole thing; a general knowledge of the first few chapters is enough to get you going.
Since the operations in a GUI are limited and visible on the screen, reading a manual is not that important for GUI programs (most do not even have one!).
However, to give you the creative power explained above, with a CLI program, it is best if you first read the manual of any program you are using.
You do not need to memorize any details, only an understanding of the generalities is needed.
Once you start working, there are easier ways to remember a particular option or operation detail, see @ref{Getting help}.
@cindex GNU Emacs
@cindex Virtual console
To experience the command-line in its full glory and not in the GUI terminal emulator, press the following keys together: @key{CTRL+ALT+F4}@footnote{Instead of @key{F4}, you can use any of the keys from @key{F1} to @key{F6} for different virtual consoles depending on your GNU/Linux distribution, try them all out.
You can also run a separate GUI from within this console if you want to.} to access the virtual console.
To return to your GUI, press the same keys as above, replacing @key{F4} with @key{F7} (or @key{F1}, or @key{F2}, depending on your GNU/Linux distribution).
In the virtual console, the GUI, with all its distracting colors and information, is gone, enabling you to focus entirely on your actual work.
@cindex Resource heavy operations
For operations that use a lot of your system's resources (processing a large number of large astronomical images for example), the virtual console is the place to run them.
This is because the GUI is not competing with your research work for your system's RAM and CPU.
Since the virtual consoles are completely independent, you can even log out of your GUI environment, giving more of your hardware resources to the programs you are running and thus reducing the running time.
@cindex Secure shell
@cindex SSH
@cindex Remote operation
Since it uses far fewer system resources, the CLI is also convenient for remote access to your computer.
Using secure shell (SSH), you can log in securely to your system (similar to the virtual console) from anywhere, even if the connection speed is low.
There are even apps for smartphones and tablets that allow you to do this.
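For example, with a hypothetical user and host name, a remote session starts with a single command:

@example
$ ssh yourname@@example.university.edu
@end example

@noindent
Once logged in, you can run any command-line program (including all of Gnuastro's programs) exactly as you would on your own computer.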
@node Report a bug, Suggest new feature, New to GNU/Linux?, Introduction
@section Report a bug
@cindex Bug
@cindex Wrong output
@cindex Software bug
@cindex Output, wrong
@cindex Wrong results
@cindex Results, wrong
@cindex Halted program
@cindex Program crashing
@cindex Inconsistent results
According to Wikipedia ``a software bug is an error, flaw, failure, or fault in a computer program or system that causes it to produce an incorrect or unexpected result, or to behave in unintended ways''.
So when you see that a program is crashing, not reading your input correctly, giving the wrong results, or not writing your output correctly, you have found a bug.
In such cases, it is best if you report the bug to the developers.
The programs will also inform you when known impossible situations occur (caused by something unexpected) and will ask you to report the issue as a bug.
@cindex Bug reporting
Prior to actually filing a bug report, it is best to search previous reports.
The issue might have already been found and even solved.
The best place to check if your bug has already been discussed is the bugs tracker on @ref{Gnuastro project webpage} at @url{https://savannah.gnu.org/bugs/?group=gnuastro}.
In the top search fields (under ``Display Criteria'') set the ``Open/Closed'' drop-down menu to ``Any'' and choose the respective program or general category of the bug in ``Category'' and click the ``Apply'' button.
Results colored green have already been solved; the status of those colored red is shown in the table.
@cindex Version control
Recently corrected bugs are probably not yet publicly released because they are scheduled for the next Gnuastro stable release.
If the bug is solved but not yet released and it is an urgent issue for you, you can get the version controlled source and compile that, see @ref{Version controlled source}.
To help us solve the issue as readily as possible, please follow the guidelines below in your bug report.
The @url{http://www.chiark.greenend.org.uk/~sgtatham/bugs.html, How to Report Bugs Effectively} and @url{http://catb.org/~esr/faqs/smart-questions.html, How To Ask Questions The Smart Way} essays also provide good generic advice for all software (but do not contact their authors about Gnuastro's problems).
Mastering the art of giving good bug reports (like asking good questions) can greatly enhance your experience with any free and open source software.
So investing the time to read through these essays will greatly reduce your frustration with a large range of software (not just Gnuastro) when something does not work the way you feel it should.
@table @strong
@item Be descriptive
Please provide as many details as possible and be very descriptive.
Explain what you expected and what the output was: it might be that your expectation was wrong.
Also please clearly state which sections of the Gnuastro book (this book), or other references you have studied to understand the problem.
This can be useful in correcting the book (adding links to likely places where users will check).
But more importantly, it will be encouraging for the developers, since you are showing how serious you are about the problem and that you have actually put some thought into it.
``To be able to ask a question clearly is two-thirds of the way to getting it answered.'' -- John Ruskin (1819-1900).
@item Individual and independent bug reports
If you have found multiple bugs, please send them as separate (and independent) bugs (as much as possible).
This will significantly help us in managing and resolving them sooner.
@cindex Reproducible bug reports
@item Reproducible bug reports
If we cannot exactly reproduce your bug, then it is very hard to resolve it.
So please send us a minimal working example@footnote{@url{http://en.wikipedia.org/wiki/Minimal_Working_Example}} along with the description (see the example after this list).
For instance, when a program misbehaves, please send us the full command you ran and its output with the @option{-P} option, see @ref{Operating mode options}.
If the bug occurs only for a certain input, also send us that input file.
If the input FITS file is large, please use Crop to cut out only the problematic section and make it as small as possible, so it can easily be uploaded and downloaded without wasting the archive's storage, see @ref{Crop}.
@end table
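@noindent
For example, a reproducible report on a hypothetical problem in NoiseChisel might contain commands like the following (the file name is only illustrative; the second command prints the full list of option values that were used):

@example
$ astnoisechisel image.fits --output=checked.fits
$ astnoisechisel image.fits -P
@end example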
@noindent
There are generally two ways to inform us of bugs:
@itemize
@cindex Mailing list archives
@cindex Mailing list: bug-gnuastro
@cindex @code{bug-gnuastro@@gnu.org}
@item
Send a mail to @code{bug-gnuastro@@gnu.org}.
Any mail you send to this address will be distributed through the bug-gnuastro mailing list@footnote{@url{https://lists.gnu.org/mailman/listinfo/bug-gnuastro}}.
This is the simplest way to send us bug reports.
The developers will then register the bug on the project web page (the next choice) for you.
@cindex Gnuastro project page
@cindex Support request manager
@cindex Submit new tracker item
@cindex Anonymous bug submission
@item
Use the Gnuastro project web page at @url{https://savannah.gnu.org/projects/gnuastro/}: there are two ways to reach the submission page, as listed below.
Fill in the form and submit it (see @ref{Gnuastro project webpage} for more on the project web page).
@itemize
@item
Using the top horizontal menu items, immediately under the top page title.
Hovering your mouse on ``Support'' will open a drop-down list.
Select ``Submit new''.
Alternatively, if you have an account on Savannah, you can choose ``Bugs'' in the menu items and then select ``Submit new''.
@item
In the main body of the page, under the ``Communication tools'' section, click on ``Submit new item''.
@end itemize
@end itemize
@cindex Tracker
@cindex Bug tracker
@cindex Task tracker
@cindex Viewing trackers
Once an item has been registered via the mailing list or web page, the developers will add it to either the ``Bug Tracker'' or the ``Task Manager'' of the Gnuastro project web page.
These two trackers can only be edited by the Gnuastro project developers, but they can be browsed by anyone, so you can follow the progress on your bug.
You are most welcome to join us in developing Gnuastro; fixing the bug you have found may be a good starting point.
Gnuastro is designed to be easy for anyone to develop (see @ref{Science and its tools}) and there is a full chapter devoted to developing it: @ref{Developing}.
@cartouche
@noindent
@strong{Savannah's Markup:} When posting to Savannah, it helps to have code displayed in a mono-space font with a different background; you may also want to make a list of items, or make some words bold.
For features like these, you should use Savannah's ``Markup'' guide at @url{https://savannah.gnu.org/markup-test.php}.
You can access this page by clicking on the ``Full Markup'' link that is just beside the ``Preview'' button, near the box that you write your comments.
As you see there, for example, when you want to highlight code, you should put it within ``+verbatim+'' and ``-verbatim-'' markers like below:
@example
+verbatim+
astarithmetic image.fits image_arith.fits -h1 isblank nan where
-verbatim-
@end example
@noindent
Unfortunately, Savannah does not have a way to edit submitted comments.
Therefore be sure to press the ``Preview'' button and check your report's final format before the final submission.
@end cartouche
@node Suggest new feature, Announcements, Report a bug, Introduction
@section Suggest new feature
@cindex Feature requests
@cindex Additions to Gnuastro
We would always be happy to hear of suggested new features.
For every program, there are already lists of features that we are planning to add.
You can see the current list of plans from the Gnuastro project web page at @url{https://savannah.gnu.org/projects/gnuastro/} and following @clicksequence{``Tasks''@click{}``Browse''} on the horizontal menu at the top of the page immediately under the title, see @ref{Gnuastro project webpage}.
If you want to request a feature for an existing program, click on ``Display Criteria'' above the list and under ``Category'', choose that particular program.
Under ``Category'' you can also see the existing suggestions for new programs or other cases like installation, documentation or libraries.
Also, be sure to set the ``Open/Closed'' value to ``Any''.
If the feature you want to suggest is not already listed in the task manager, then follow the steps that are fully described in @ref{Report a bug}.
Please bear in mind that the developers are all busy with their own astronomical research and with implementing existing ``tasks'' or resolving bugs.
Gnuastro is a volunteer effort and none of the developers are paid for their hard work.
So, although we will try our best, please do not expect your suggested feature to be immediately included (in the next release of Gnuastro).
The best person to implement the exciting new feature you have in mind is you, since you have the motivation and the need.
In fact, Gnuastro is designed to make it as easy as possible for you to hack into it (add new features, change existing ones and so on), see @ref{Science and its tools}.
Please have a look at the chapter devoted to developing (@ref{Developing}) and start implementing your desired feature.
Once you have added it, you can use it for your own work and if you feel you want others to benefit from your work, you can request for it to become part of Gnuastro.
You can then join the developers and start maintaining your own part of Gnuastro.
If you choose to take this path, please contact us beforehand (@ref{Report a bug}), so we can avoid possible duplicate activities and put interested people in contact.
@cartouche
@noindent
@strong{Gnuastro is a collection of low level programs:} As described in @ref{Program design philosophy}, a founding principle of Gnuastro is that each library or program should be basic and low-level.
High-level jobs should be done by running the separate programs (or using the separate library functions) in succession, through a shell script or higher-level functions that call the libraries; see the examples in @ref{Tutorials}.
So when making the suggestions please consider how your desired job can best be broken into separate steps and modularized.
@end cartouche
@node Announcements, Conventions, Suggest new feature, Introduction
@section Announcements
@cindex Announcements
@cindex Mailing list: info-gnuastro
Gnuastro has a dedicated mailing list for making announcements (@code{info-gnuastro}).
Anyone can subscribe to this mailing list.
Anytime there is a new stable or test release, an email will be circulated there.
The email contains a summary of the overall changes along with a detailed list (from the @file{NEWS} file).
This mailing list is thus the best way to stay up to date with new releases, easily learn about the updated/new features, or dependencies (see @ref{Dependencies}).
To subscribe to this list, please visit @url{https://lists.gnu.org/mailman/listinfo/info-gnuastro}.
Traffic (number of mails per unit time) in this list is designed to be low: only a handful of mails per year.
Previous announcements are available on @url{http://lists.gnu.org/archive/html/info-gnuastro/, its archive}.
@node Conventions, Acknowledgments and short history, Announcements, Introduction
@section Conventions
In this book we have the following conventions:
@itemize
@item
All commands that are to be run on the shell (command-line) prompt as the user start with a @command{$}.
In case they must be run as a superuser or system administrator, they will start with a single @command{#}.
If a command is on one line, and the next line @code{is also in the code type face} but does not start with a @command{$} or @command{#} sign, then that next line is the output of the command after it is run.
As a user, you do not need to type those lines.
A line that starts with @command{##} is just a comment for explaining the command to a human reader and must not be typed.
@item
If a command becomes longer than the page width, a @key{\} is inserted in the code (as demonstrated in the example after this list).
If you are typing the code by hand on the command-line, you do not need to use multiple lines or add the extra space characters, so you can omit them.
If you want to copy and paste these examples (highly discouraged!), then the @key{\} should stay.
The @key{\} character is a shell escape character, commonly used to make characters that have a special meaning for the shell lose that special meaning (the shell will not treat them specially if there is a @key{\} before them).
When @key{\} is the last visible character on a line (the next character is a new-line character), the new-line character loses its meaning.
Therefore, the shell sees it as a simple white-space character, not the end of a command!
This enables you to use multiple lines to write your commands.
@end itemize
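@noindent
For example, the following hypothetical snippet demonstrates most of these conventions at once (the printed number is made up):

@example
## Count the number of FITS files in this directory.
$ ls *.fits \
     | wc -l
4
@end example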
This is not a convention, but a by-product of the process of building the PDF version of this manual:
In the PDF version of this manual, a single quote (or apostrophe) character in the commands or codes is shown like this: @code{'}.
Single quotes are sometimes necessary in combination with commands like @code{awk} or @code{sed}, or when using Column arithmetic in Gnuastro's own Table (see @ref{Column arithmetic}).
Therefore when typing (recommended) or copy-pasting (not recommended) the commands that have a @code{'}, please correct it to the single-quote (or apostrophe) character, otherwise the command will fail.
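For example, in a command like the following (the file name is hypothetical), the quotes must be typed as plain apostrophe characters; if they are pasted from the PDF as the glyph shown there, the shell or @code{awk} will fail:

@example
$ awk '@{print $1@}' catalog.txt
@end example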
@node Acknowledgments and short history, , Conventions, Introduction
@section Acknowledgments and short history
Gnuastro would not have been possible without scholarships and grants from several funding institutions.
To acknowledge/cite Gnuastro in your publications, see @ref{Citing Gnuastro}.
Here, you will find Gnuastro's own acknowledgments to all the grants, institutions and of course people who helped make Gnuastro what it is today.
The Japanese Ministry of Education, Culture, Sports, Science, and Technology (MEXT) scholarships for Mohammad Akhlaghi's Masters (2010 to 2012) and PhD (2012 to 2015) degrees in Tohoku University Astronomical Institute had an instrumental role in the long term learning and planning that made the idea of Gnuastro possible.
The very critical viewpoints of Professor Takashi Ichikawa (Mohammad's PhD thesis director) were also instrumental in the initial ideas and foundational philosophy/design of Gnuastro.
Afterwards, the European Research Council (ERC) advanced grant 339659-MUSICOS (Principal investigator, or PI: Roland Bacon) was vital in the growth and expansion of Gnuastro since it supported Mohammad's first postdoc position from 2016 to 2018.
Working with Roland at the Centre de Recherche Astrophysique de Lyon (CRAL, Saint-Genis-Laval, France), enabled a thorough re-write of the core functionality of all libraries and programs (to generalize them for usage on the wonderful MUSE 3D cubes).
This maturation phase made it possible for Gnuastro to grow into the large collection of generic programs and libraries it is today.
In this period, David Valls-Gabaud (of Paris Observatory) was also instrumental to Gnuastro, sponsoring it in several EU-wide meetings and helping to inform the community of this new approach.
In late 2018, Mohammad moved to the Instituto de Astrofisica de Canarias (IAC, Tenerife, Spain) to work with Johan Knapen and Ignacio Trujillo as part of IAC's project P/300724, financed by the SMSI, through the Canary Islands Department of Economy, Knowledge and Employment.
With the support and help of Ignacio and Johan, Gnuastro's user base significantly grew in this phase.
The useful feedback from the many new users resulted in new/improved functionality/features in all the programs.
Since the summer of 2021, work on Gnuastro is continuing primarily in the Centro de Estudios de F@'isica del Cosmos de Arag@'on (CEFCA), located in Teruel, Spain.
Mohammad is a staff researcher at CEFCA and the PI (or collaborator) of grants directly or indirectly (through higher-level projects that use Gnuastro) related to the development of Gnuastro.
As of this release, the main grant organizations (and their grants) that have supported Gnuastro (from 2020 with Mohammad as the PI or collaborator) have been:
@table @asis
@item Spanish and Arag@'on governments
In particular, the Fondo de Inversiones de Teruel (FITE, or Teruel investment fund), which is shared funding between the national and Aragonese governments, but also the dedicated Aragonese government research-group funding 16_23R to CEFCA.
@item Agencia Estatal de Investigación (AEI, or Spanish state research agency)
In particular Grants PID2021-124918NA-C43 (PI: Mohammad from 2022-09 for three years as part of J-PAS@footnote{@url{https://j-pas.org}} and J-PLUS@footnote{@url{https://j-plus.es}}), PID2022-138896NA-C54 (PI: Helena Dominguez Sánchez as part of ARRAKIHS@footnote{@url{https://arrakihs-mission.eu}} from 2023-09 for three years) and PID2024-162229NB-I00 (PI: Mohammad from 2025 for four years; for Gnuastro, Maneage@footnote{Maneage (a portmanteau from ``Managing data lineage'') is also maintained by Mohammad and is a higher-level framework for calling atomic Gnuastro operations in a large pipeline; see @url{https://maneage.org}} to be used in J-PAS, ARRAKIHS and Euclid@footnote{@url{https://www.esa.int/Science_Exploration/Space_Science/Euclid}}) which are all funded by MICIU/AEI/10.13039/501100011033 and ``ERDF A way of making Europe'', by ``ERDF/EU''.
@item Google Summer of Code (GSoC)
We applied for GSoC in the following years, and all applications have been successful so far: 2020, 2021, 2022, 2023 and 2024.
Special thanks go to the GNU (for 2020) and OpenAstronomy@footnote{@url{https://openastronomy.org}} (for the other years) GSoC organizations for enabling this.
@end table
Rafael Guzm@'an (the PI of ARRAKIHS) and ARRAKIHS's high-level management deserve a special mention regarding the two ARRAKIHS-related grants above.
Their trust and support for Gnuastro directly led to Gnuastro becoming the core tool of the ARRAKIHS data reduction pipeline (known as the ARRAKIHS Harvester): it is fully written with Gnuastro and Maneage.
Thanks to all the direct support from ARRAKIHS, J-PAS and J-PLUS, they are now the primary drivers of Gnuastro's development, helping us reach the vision of @ref{GNU Astronomy Utilities 1.0}.
Being the primary driver does not imply ignoring general user support or bug fixes found by others.
As always, we will address bug reports or simple feature addition requests with high priority; it just refers to long-term strategic decisions.
However, Gnuastro is free and open source software (see @ref{Your rights}) and every contributor has assigned the copyright of all their contributions to the Free Software Foundation to ensure its freedom in the future (see @ref{Copyright assignment}).
Therefore, all the added components and bugs fixed for the driving projects, can benefit the whole community.
We welcome collaborations with any other mission/survey (to focus on their particular needs) if they can provide a dedicated personnel to help in those developments.
The full list of Gnuastro authors is available at the start of this book (on the third page of its PDF) and the @file{AUTHORS} file in the source code (both are generated automatically from the version controlled history).
The plain-text file @file{THANKS} (also distributed along with the source code) contains the list of people who played an indirect role in Gnuastro: they did not commit any code to Gnuastro's version controlled history, but provided useful and constructive comments, suggestions and bug reports.
Such contributions have also been critical to Gnuastro's current state, so we would like to gratefully acknowledge their role here as well (in alphabetical order by family name):
@c To the developers: please keep this in the same order as the THANKS file
@c (alphabetical, except for the names in the paragraph above).
Valentina Abril-melgarejo,
Marjan Akbari,
Carlos Allende Prieto,
Hamed Altafi,
Roland Bacon,
Roberto Baena Gall@'e,
Zahra Bagheri,
Karl Berry,
Faezeh Bidjarchian,
Leindert Boogaard,
Nicolas Bouch@'e,
Stefan Br@"uns,
Fernando Buitrago Alonso,
Adrian Bunk,
Rosa Calvi,
Mark Calabretta,
Juan Castillo Ramírez,
Alejandro Camazón Pinilla,
Nushkia Chamba,
Sergio Chueca Urzay,
Tamara Civera Lorenzo,
Benjamin Clement,
Nima Dehdilani,
Andr@'es Del Pino Molina,
Antonio Diaz Diaz,
Paola Dimauro,
Alexey Dokuchaev,
Pierre-Alain Duc,
Alessandro Ederoclite,
Elham Eftekhari,
Paul Eggert,
Sepideh Eskandarlou,
S@'ilvia Farras,
Juan Antonio Fernández Ontiveros,
Gaspar Galaz,
Andr@'es García-Serra Romero,
Zohre Ghaffari,
Th@'er@`ese Godefroy,
Giulia Golini,
Craig Gordon,
Martin Guerrero Roncel,
Madusha Gunawardhana,
Rafael G@'uzman,
Bruno Haible,
Stephen Hamer,
Siyang He,
Zahra Hosseini,
Leslie Hunt,
Takashi Ichikawa,
Ra@'ul Infante Sainz,
Brandon Invergo,
Oryna Ivashtenko,
Aur@'elien Jarno,
Ooldooz Kabood,
Lee Kelvin,
Brandon Kelly,
Mohammad-Reza Khellat,
Johan Knapen,
Geoffry Krouchi,
Martin Kuemmel,
Teet Kuutma,
Clotilde Laigle,
Floriane Leclercq,
Alan Lefor,
Javier Licandro,
Jeremy Lim,
Giacomo Lorenzetti,
Alejandro Lumbreras Calle,
Sebasti@'an Luna Valero,
Alberto Madrigal,
Guillaume Mahler,
Carlos Marrero de La Rosa,
Juan Miro,
Alireza Molaeinezhad,
Javier Moldon,
Juan Molina Tobar,
Francesco Montanari,
Raphael Morales,
Carlos Morales Socorro,
Sylvain Mottet,
Benson Muite,
Dmitrii Oparin,
Fran@,{c}ois Ochsenbein,
Bertrand Pain,
Rahna Payyasseri Thanduparackal,
William Pence,
Irene Pintos Castro,
Mamta Pommier,
Marcel Popescu,
Bob Proulx,
Joseph Putko,
Samane Raji,
Ignacio Ruiz Cejudo,
Teymoor Saifollahi,
Joanna Sakowska,
Elham Saremi,
Nafise Sedighi,
Markus Schaney,
Yahya Sefidbakht,
Alejandro Serrano Borlaff,
Zahra Sharbaf,
David Shupe,
Leigh Smith,
Elizabeth Sola,
Jenny Sorce,
Manuel S@'anchez-Benavente,
Lee Spitler,
Richard Stallman,
Michael Stein,
Ole Streicher,
Alfred M. Szmidt,
Michel Tallon,
Juan C. Tello,
Vincenzo Testa,
Peter Teuben,
@'Eric Thi@'ebaut,
Ignacio Trujillo,
Hok Kan (Ronald) Tsang,
Mathias Urbano,
David Valls-Gabaud,
Jes@'us Varela,
Jesús Vega,
Aaron Watkins,
Richard Wilbur,
Phil Wyett,
Dennis Williamson,
Michael H.F. Wilkinson,
Christopher Willmer,
Greg Wooledge,
Xiuqin Wu,
Sara Yousefi Taemeh,
Johannes Zabl.
The GNU French Translation Team is also managing the French version of the top Gnuastro web page which we highly appreciate.
Finally, we should thank all the (sometimes anonymous) people in various online forums who patiently answered all our small (but important) technical questions.
@node Tutorials, Installation, Introduction, Top
@chapter Tutorials
@cindex Tutorial
@cindex Cookbook
To help new users have a smooth and easy start with Gnuastro, in this chapter several thoroughly elaborated tutorials, or cookbooks, are provided.
These tutorials demonstrate the capabilities of different Gnuastro programs and libraries, along with tips and guidelines for the best practices of using them in various realistic situations.
We strongly recommend going through these tutorials to get a good feeling of how the programs are related (built in a modular design to be used together in a pipeline), very similar to the core Unix-based programs that they were modeled on.
Therefore these tutorials will help you use Gnuastro's programs (and, more generally, the Unix-like command-line environment) effectively in your research.
The first three tutorials (@ref{General program usage tutorial}, @ref{Detecting large extended targets} and @ref{Building the extended PSF}) use real input datasets: some of the deep Hubble Space Telescope (HST) images, the Sloan Digital Sky Survey (SDSS) and the Javalambre Photometric Local Universe Survey (J-PLUS), respectively.
Their aim is to demonstrate some real-world problems that many astronomers often face and how they can be solved with Gnuastro's programs.
The fourth tutorial (@ref{Sufi simulates a detection}) focuses on simulating astronomical images, which is another critical aspect of any analysis!
The ultimate aim of @ref{General program usage tutorial} is to detect galaxies in a deep HST image, measure their positions and magnitudes, and select those with the strongest colors.
In the process, it takes many detours to introduce you to the useful capabilities of many of the programs.
So please be patient in reading it.
If you do not have much time and can only try one of the tutorials, we recommend this one.
@cindex PSF
@cindex Point spread function
@ref{Detecting large extended targets} deals with a major problem in astronomy: effectively detecting the faint outer wings of bright (and large) nearby galaxies to extremely low surface brightness levels (roughly one quarter of the local noise level in the example discussed).
Besides the interesting scientific questions in these low-surface brightness features, failure to properly detect them will bias the measurements of the background objects and the survey's noise estimates.
This is an important issue, especially in wide surveys, because bright/large galaxies and stars@footnote{Stars also have similarly large and extended wings due to the point spread function, see @ref{PSF}.} cover a significant fraction of the survey area.
@ref{Building the extended PSF} tackles an important problem in astronomy: how to extract the PSF of an image, to the largest possible extent, without assuming any functional form.
Gnuastro has multiple installed scripts for this job.
Their usage, and the logic behind optimally tuning them for each particular step, are fully described in this tutorial on a real dataset.
The tutorial concludes with subtracting that extended PSF from the science image; thus giving you a cleaner image (with no scattered light of the brighter stars) for your higher-level analysis.
@ref{Sufi simulates a detection} has a fictional@footnote{This tutorial is not intended to be a historical reference (the historical facts of this fictional tutorial used Wikipedia as a reference).
This form of presenting a tutorial was influenced by the PGF/TikZ and Beamer manuals.
They are both packages in @TeX{} and @LaTeX{}, the first is a high-level vector graphic programming environment, while with the second you can make presentation slides.
On a similar topic, there are also some nice words of wisdom for Unix-like systems called @url{http://catb.org/esr/writings/unix-koans, Rootless Root}.
These also have a similar style but they use a mythical figure named Master Foo.
If you already have some experience in Unix-like systems, you will definitely find these Unix Koans entertaining/educative.} setting!
It shows how Abd al-rahman Sufi (903 -- 986 A.D.; the first recorded description of ``nebulous'' objects in the heavens is attributed to him) could have used some of Gnuastro's programs for a realistic simulation of his observations, to check whether his detections of nebulous objects were trustworthy.
Because all conditions are under control in a simulated/mock environment, mock datasets can be a valuable tool for inspecting the limitations of your data analysis and processing.
But they need to be as realistic as possible, so this tutorial is dedicated to this important step of an analysis (simulations).
There are other tutorials also, on things that are commonly necessary in astronomical research:
In @ref{Detecting lines and extracting spectra in 3D data}, we use MUSE cubes (an IFU dataset) to show how you can subtract the continuum, detect emission-line features, extract spectra and build synthetic narrow-band images.
In @ref{Color channels in same pixel grid} we demonstrate how you can warp multiple images into a single pixel grid (often necessary with multi-wavelength data), and build a single color image.
In @ref{Moire pattern in coadding and its correction} we show how you can avoid the unwanted Moir@'e pattern which happens when warping separate exposures to build a coadded deeper image.
In @ref{Zero point of an image} we review the process of estimating the zero point of an image using a reference image or catalog.
Finally, in @ref{Pointing pattern design} we show the process by which you can simulate a dither pattern to find the best observing strategy for your next exciting scientific project.
In these tutorials, we have intentionally avoided too many cross-references, to make them easier to read.
For more information about a particular program, you can visit the section with the same name as the program in this book.
Each program section in the subsequent chapters starts by explaining the general concepts behind what it does, for example, see @ref{Convolve}.
If you only want practical information on running a program, for example, its options/configuration, input(s) and output(s), please consult the subsection titled ``Invoking ProgramName'', for example, see @ref{Invoking astnoisechisel}.
For an explanation of the conventions we use in the example codes through the book, please see @ref{Conventions}.
@menu
* General program usage tutorial:: Tutorial on many programs in generic scenario.
* Detecting large extended targets:: NoiseChisel for huge extended targets.
* Building the extended PSF:: How to extract an extended PSF from science data.
* Sufi simulates a detection:: Simulating a detection.
* Detecting lines and extracting spectra in 3D data:: Extracting spectra and emission line properties.
* Color images with full dynamic range:: Bright pixels with color, faint pixels in grayscale.
* Zero point of an image:: Estimate the zero point of an image.
* Pointing pattern design:: Optimizing the pointings of your observations.
* Moire pattern in coadding and its correction:: How to avoid this grid-based artifact.
* Clipping outliers:: How to avoid outliers in your measurements.
@end menu
@node General program usage tutorial, Detecting large extended targets, Tutorials, Tutorials
@section General program usage tutorial
@cindex Hubble Space Telescope (HST)
@cindex Colors, broad-band photometry
Measuring colors of astronomical objects in broad-band or narrow-band images is one of the most basic and common steps in astronomical analysis.
Here, we will use Gnuastro's programs to get a physical scale (area at certain redshifts) of the field we are studying, detect objects in a Hubble Space Telescope (HST) image, measure their colors, identify the ones with the strongest colors, visually inspect those objects and inspect their spatial positions in the image.
After this tutorial, you can also try the @ref{Detecting large extended targets} tutorial which goes into a little more detail on detecting very low surface brightness signal.
During the tutorial, we will take many detours to explain, and practically demonstrate, the many capabilities of Gnuastro's programs.
In the end you will see that the things you learned during this tutorial are much more generic than this particular problem and can be used in solving a wide variety of problems involving the analysis of data (images or tables).
So please do not rush, and go through the steps patiently to optimally master Gnuastro.
@cindex XDF survey
@cindex eXtreme Deep Field (XDF) survey
In this tutorial, we will use the HST @url{https://archive.stsci.edu/prepds/xdf, eXtreme Deep Field} dataset.
Like almost all astronomical surveys, this dataset is free for download and usable by the public.
You will need the following tools in this tutorial: Gnuastro, SAO DS9@footnote{See @ref{SAO DS9}, available at @url{http://ds9.si.edu/site/Home.html}.}, GNU Wget@footnote{@url{https://www.gnu.org/software/wget}}, and AWK (the most common implementation is GNU AWK@footnote{@url{https://www.gnu.org/software/gawk}}).
This tutorial was first prepared for the ``Exploring the Ultra-Low Surface Brightness Universe'' workshop (November 2017) at the ISSI in Bern, Switzerland.
It was further extended in the ``4th Indo-French Astronomy School'' (July 2018) organized by LIO, CRAL CNRS UMR5574, UCBL, and IUCAA in Lyon, France.
We are very grateful to the organizers of these workshops and the attendees for the very fruitful discussions and suggestions that made this tutorial possible.
@cartouche
@noindent
@strong{Write the example commands manually:} Try to type the example commands on your terminal manually and use the history feature of your command-line (by pressing the ``up'' button to retrieve previous commands).
Do not simply copy and paste the commands shown here.
This will help simulate future situations when you are processing your own datasets.
@end cartouche
@menu
* Calling Gnuastro's programs:: Easy way to find Gnuastro's programs.
* Accessing documentation:: Access to manual of programs you are running.
* Setup and data download:: Setup this template and download datasets.
* Dataset inspection and cropping:: Crop the flat region to use in next steps.
* Angular coverage on the sky:: Measure the field size on the sky.
* Cosmological coverage and visualizing tables:: Size in Mpc2, and plotting its change.
* Building custom programs with the library:: Easy way to build new programs.
* Option management and configuration files:: Dealing with options and configuring them.
* Warping to a new pixel grid:: Transforming/warping the dataset.
* NoiseChisel and Multi-Extension FITS files:: Running NoiseChisel and having multiple HDUs.
* NoiseChisel optimization for detection:: Check NoiseChisel's operation and improve it.
* NoiseChisel optimization for storage:: Dramatically decrease output's volume.
* Segmentation and making a catalog:: Finding true peaks and creating a catalog.
* Measuring the dataset limits:: One way to measure the ``depth'' of your data.
* Working with catalogs estimating colors:: Estimating colors using the catalogs.
* Column statistics color-magnitude diagram:: Visualizing column correlations.
* Aperture photometry:: Doing photometry on a fixed aperture.
* Matching catalogs:: Easily find corresponding rows from two catalogs.
* Reddest clumps cutouts and parallelization:: Parallelization and selecting a subset of the data.
* FITS images in a publication:: How to display FITS images in a PDF.
* Marking objects for publication:: How to mark some objects over the image in a PDF.
* Writing scripts to automate the steps:: Scripts will greatly help in re-doing things fast.
* Citing Gnuastro:: How to cite and acknowledge Gnuastro in your papers.
@end menu
@node Calling Gnuastro's programs, Accessing documentation, General program usage tutorial, General program usage tutorial
@subsection Calling Gnuastro's programs
A handy feature of Gnuastro is that all program names start with @code{ast}.
This will allow your command-line processor to easily list and auto-complete Gnuastro's programs for you.
Try typing the following command (press @key{TAB} key when you see @code{<TAB>}) to see the list:
@example
$ ast<TAB><TAB>
@end example
@noindent
Any program that starts with @code{ast} (including all Gnuastro programs) will be shown.
By typing more characters of your desired program's name and pressing @key{<TAB><TAB>} again, the list will narrow down, and the program name will auto-complete once your input is unambiguous.
In short, you often do not need to type the full name of the program you want to run.
@node Accessing documentation, Setup and data download, Calling Gnuastro's programs, General program usage tutorial
@subsection Accessing documentation
Gnuastro contains a large number of programs and it is natural to forget the details of each program's options or inputs and outputs.
Therefore, before starting the analysis steps of this tutorial, let's review how you can access this book to refresh your memory any time you want, without having to take your hands off the keyboard.
When you install Gnuastro, this book is also installed on your system along with all the programs and libraries, so you do not need an internet connection to access/read it.
Also, by accessing this book as described below, you can be sure that it corresponds to your installed version of Gnuastro.
@cindex GNU Info
GNU Info@footnote{GNU Info is already available on almost all Unix-like operating systems.} is the program in charge of displaying the manual on the command-line (for more, see @ref{Info}).
To see this whole book on your command-line, please run the following command and press subsequent keys.
Info has its own mini-environment; therefore we will show the keys that must be pressed in that mini-environment after a @code{->} sign.
You can also ignore anything after the @code{#} sign in the middle of a line; it is only for your information.
@example
$ info gnuastro # Open the top of the manual.
-> <SPACE> # All the book chapters.
-> <SPACE> # Continue down: show sections.
-> <SPACE> ... # Keep pressing space to go down.
-> q # Quit Info, return to the command-line.
@end example
The thing that greatly simplifies navigation in Info is the links (regions with an underline).
You can immediately go to the next link in the page with the @key{<TAB>} key and press @key{<ENTER>} on it to go into that part of the manual.
Try the commands above again, but this time also use @key{<TAB>} to go to the links and press @key{<ENTER>} on them to go to the respective section of the book.
Then follow a few more links and go deeper into the book.
To return to the previous page, press @key{l} (small L).
If you are searching for a specific phrase in the whole book (for example, an option name), press @key{s} and type your search phrase and end it with an @key{<ENTER>}.
Finally, you can return to the command line and quit Info by pressing the @key{q} key.
You do not need to start from the top of the manual every time.
For example, to get to @ref{Invoking astnoisechisel}, run the following command.
In general, all programs have such an ``Invoking ProgramName'' section in this book.
These sections are specifically for the description of inputs, outputs and configuration options of each program.
You can access them directly for each program by giving its executable name to Info.
@example
$ info astnoisechisel
@end example
The other sections do not have such shortcuts.
To directly access them from the command-line, you need to tell Info to look into Gnuastro's manual, then look for the specific section (an unambiguous title is necessary).
For example, if you only want to review/remember NoiseChisel's @ref{Detection options}, just run the following command.
Note how case is irrelevant for Info when calling a title in this manner.
@example
$ info gnuastro "Detection options"
@end example
In general, Info is a powerful and convenient way to access this whole book with detailed information about the programs you are running.
If you are not already familiar with it, please run the following command and just read along and do what it says to learn it.
Do not stop until you feel sufficiently fluent in it.
Please invest the half hour or so necessary to start using Info comfortably.
It will greatly improve your productivity and you will start reaping the rewards of this investment very soon.
@example
$ info info
@end example
As a good scientist, you need to feel comfortable playing with the features/options, and should be critical of default values rather than accepting them blindly.
On the other hand, human memory is limited, so it is important to be able to quickly access any part of this book and recall the option names, what they do, and their acceptable values.
If you just want the option names and a short description, calling the program with the @option{--help} option might also be a good solution like the first example below.
If you know a few characters of the option name, you can feed the printed output to @command{grep} like the second or third example commands.
@example
$ astnoisechisel --help
$ astnoisechisel --help | grep quant
$ astnoisechisel --help | grep check
@end example
@node Setup and data download, Dataset inspection and cropping, Accessing documentation, General program usage tutorial
@subsection Setup and data download
The first step in the analysis of the tutorial is to download the necessary input datasets.
First, to keep things clean, let's create a @file{gnuastro-tutorial} directory and continue all future steps in it:
@example
$ mkdir gnuastro-tutorial
$ cd gnuastro-tutorial
@end example
We will be using the near infra-red @url{http://www.stsci.edu/hst/wfc3, Wide Field Camera 3} (WFC3) images of the eXtreme Deep Field (XDF) survey.
If you already have these images in another directory (for example, @file{XDFDIR}, with the same FITS file names), you can set the @file{download} directory to be a symbolic link to @file{XDFDIR} with a command like this:
@example
$ ln -s XDFDIR download
@end example
@noindent
Otherwise, when the following images are not already present on your system, you can make a @file{download} directory and download them there.
@example
$ mkdir download
$ cd download
$ xdfurl=http://archive.stsci.edu/pub/hlsp/xdf
$ wget $xdfurl/hlsp_xdf_hst_wfc3ir-60mas_hudf_f105w_v1_sci.fits
$ wget $xdfurl/hlsp_xdf_hst_wfc3ir-60mas_hudf_f125w_v1_sci.fits
$ wget $xdfurl/hlsp_xdf_hst_wfc3ir-60mas_hudf_f160w_v1_sci.fits
$ cd ..
@end example
@noindent
In this tutorial, we will just use these three filters.
Later, you may need to download more filters.
To do that, you can use the shell's @code{for} loop to download them all in series (one after the other@footnote{Note that you only have one connection to the internet, so downloading in parallel will actually be slower than downloading in series.}) with one command like the one below for the WFC3 filters.
Put this command instead of the three @code{wget} commands above.
Recall that the extra spaces, backslashes (@code{\}), and new lines can be removed if you are typing the command on a single line in the terminal.
@example
$ for f in f105w f125w f140w f160w; do \
wget $xdfurl/hlsp_xdf_hst_wfc3ir-60mas_hudf_"$f"_v1_sci.fits; \
done
@end example
@node Dataset inspection and cropping, Angular coverage on the sky, Setup and data download, General program usage tutorial
@subsection Dataset inspection and cropping
First, let's visually inspect the datasets we downloaded in @ref{Setup and data download}.
Let's take the F160W image as an example.
One of the most common programs for viewing FITS images is SAO DS9, which is usually called through the @command{ds9} command-line program, like the command below.
If you do not already have DS9 on your computer and the command below fails, please see @ref{SAO DS9}.
@example
$ ds9 download/hlsp_xdf_hst_wfc3ir-60mas_hudf_f160w_v1_sci.fits
@end example
By default, DS9 opens a relatively small window (for modern monitors) and its default scale and color bar make it very hard to see any structure in the image: everything will look black.
Also, by default, it zooms into the center of the image, and you need to scroll to zoom out and see the whole thing.
To avoid these problems, Gnuastro has the @command{astscript-fits-view} script:
@example
$ astscript-fits-view \
download/hlsp_xdf_hst_wfc3ir-60mas_hudf_f160w_v1_sci.fits
@end example
After running this command, you will see that the DS9 window fully covers the height of your monitor, shows the whole image, and uses a clearer color map, among other useful settings.
In fact, the DS9 command that is used is printed in your terminal@footnote{When comparing DS9's command-line options to Gnuastro's, you will notice how SAO DS9 does not follow the GNU style of options, where ``long'' and ``short'' options are preceded by @option{--} and @option{-} respectively (for example, @option{--width} and @option{-w}, see @ref{Options}).}.
On GNU/Linux operating systems (like Ubuntu and Fedora), you can also set your graphical user interface to use this script for opening FITS files when you click on them.
For more, see the instructions in the checklist at the start of @ref{Invoking astscript-fits-view}.
As you hover your mouse over the image, notice how the ``Value'' and positional fields on the top of the DS9 window get updated.
The first thing you might notice is that when you hover the mouse over the regions with no data, they have a value of zero.
The next thing you might notice is that the dataset has a shallower and a deeper component (see @ref{Metameasurements on full input}).
Recall that this is a combined/reduced image of many exposures, and the parts that have more exposures are deeper.
In particular, the exposure time of the deep inner region is more than 4 times the exposure time of the outer (shallower) parts.
To simplify the analysis in this tutorial, we will only be working on the deep field, so let's crop it out of the full dataset.
Fortunately the XDF survey web page (above) contains the vertices of the deep flat WFC3-IR field@footnote{@url{https://archive.stsci.edu/prepds/xdf/#dataproducts}}.
With Gnuastro's Crop program, you can use those vertices to cutout this deep region from the larger image (to learn more about the Crop program see @ref{Crop}).
But before that, to keep things organized, let's make a directory called @file{flat-ir} and keep the flat (single-depth) regions in that directory (with a `@file{xdf-}' prefix for a shorter and easier filename).
@example
$ mkdir flat-ir
$ astcrop --mode=wcs -h0 --output=flat-ir/xdf-f105w.fits \
--polygon="53.187414,-27.779152 : 53.159507,-27.759633 : \
53.134517,-27.787144 : 53.161906,-27.807208" \
download/hlsp_xdf_hst_wfc3ir-60mas_hudf_f105w_v1_sci.fits
$ astcrop --mode=wcs -h0 --output=flat-ir/xdf-f125w.fits \
--polygon="53.187414,-27.779152 : 53.159507,-27.759633 : \
53.134517,-27.787144 : 53.161906,-27.807208" \
download/hlsp_xdf_hst_wfc3ir-60mas_hudf_f125w_v1_sci.fits
$ astcrop --mode=wcs -h0 --output=flat-ir/xdf-f160w.fits \
--polygon="53.187414,-27.779152 : 53.159507,-27.759633 : \
53.134517,-27.787144 : 53.161906,-27.807208" \
download/hlsp_xdf_hst_wfc3ir-60mas_hudf_f160w_v1_sci.fits
@end example
@noindent
Run the command below to have a look at the cropped images:
@example
$ astscript-fits-view flat-ir/*.fits
@end example
You only see the deep region now; does the noise not look much cleaner?
An important result of this crop is that regions with no data now have a NaN (Not-a-Number, or a blank value) value.
Any self-respecting statistical program will ignore NaN values, so they will not affect your outputs.
For example, notice how changing the DS9 color bar will not affect the NaN pixels (their colors will not change).
However, do you remember that in the downloaded files, such regions had a value of zero?
That is a big problem!
Because zero is a number, and is thus meaningful, especially when you later want NoiseChisel to detect@footnote{As you will see below, unlike most other detection algorithms, NoiseChisel detects the objects from their faintest parts; it does not start with their high signal-to-noise ratio peaks.
Since the Sky is already subtracted in many images and noise fluctuates around zero, zero is commonly higher than the initial threshold applied.
Therefore, keeping zero-valued pixels in this image will cause them to be identified as part of the detections!} all the signal from the deep universe in this image.
Generally, when you want to ignore some pixels in a dataset, and avoid higher-level ambiguities or complications, it is always best to give them blank values (not zero, or some other absurdly large or small number).
Gnuastro has the Arithmetic program for such cases, and we will introduce it later in this tutorial.
In the example above, the polygon vertices are in degrees, but you can also replace them with sexagesimal@footnote{@url{https://en.wikipedia.org/wiki/Sexagesimal}} coordinates (for example, using @code{03h32m44.9794} or @code{03:32:44.9794} instead of @code{53.187414}, the first RA, and @code{-27d46m44.9472} or @code{-27:46:44.9472} instead of @code{-27.779152}, the first Dec).
To further simplify things, you can even define your polygon visually as a DS9 ``region'', save it as a ``region file'' and give that file to Crop.
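For example, assuming you have saved your polygon from DS9 into a region file called @file{polygon.reg} (a hypothetical name), a sketch like the one below may do the same crop (the exact behavior of @option{--polygon} with region files may depend on your installed version, see @ref{Crop}):
@example
$ astcrop --mode=wcs -h0 --polygon=polygon.reg \
          download/hlsp_xdf_hst_wfc3ir-60mas_hudf_f160w_v1_sci.fits
@end example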
But we need to continue, so if you are interested to learn more, see @ref{Crop}.
Before closing this section, let's just take a look at the three cropping commands we ran above.
The only thing varying in the three commands is the filter name; everything else is identical!
In such cases, you should generally avoid repeating a command manually: it is prone to @emph{many} bugs, and as you see, it is very hard to read (did you accidentally type a @code{7} instead of an @code{8} somewhere?).
To simplify the command, and allow you to work on more filters, we can use the shell's @code{for} loop as shown below.
Notice how the places where the filter names (@file{f105w}, @file{f125w} and @file{f160w}) were used above have been replaced with @file{$f} (the shell variable that @code{for} updates on every iteration of the loop) below.
@example
$ rm flat-ir/*.fits
$ for f in f105w f125w f160w; do \
astcrop --mode=wcs -h0 --output=flat-ir/xdf-$f.fits \
--polygon="53.187414,-27.779152 : 53.159507,-27.759633 : \
53.134517,-27.787144 : 53.161906,-27.807208" \
download/hlsp_xdf_hst_wfc3ir-60mas_hudf_"$f"_v1_sci.fits; \
done
@end example
@node Angular coverage on the sky, Cosmological coverage and visualizing tables, Dataset inspection and cropping, General program usage tutorial
@subsection Angular coverage on the sky
@cindex @code{CDELT}
@cindex Coordinate scales
@cindex Scales, coordinate
The cropped images in @ref{Dataset inspection and cropping} are the deepest images we currently have of the sky.
The first thing that comes to mind may be this: ``How large is this field on the sky?''.
@cartouche
@noindent
@strong{More accurate method:} the steps mentioned in this section are primarily designed to help you get familiar with the FITS WCS standard and some shell scripting.
The accuracy of this method will decrease as your image becomes larger (on the scale of degrees).
For an accurate method, see @ref{Area of non-blank pixels on sky}.
@end cartouche
@noindent
You can get a fast and crude answer with Gnuastro's Fits program, using this command:
@example
$ astfits flat-ir/xdf-f160w.fits --skycoverage
@end example
It will print the sky coverage in two formats (all numbers are in units of degrees for this image): 1) the image's central RA and Dec and full width around that center, 2) the range of RA and Dec covered by this image.
You can use these values in various online query systems.
You can also use this option to automatically calculate the area covered by this image.
With the @option{--quiet} option, the printed output of @option{--skycoverage} will not contain human-readable text, making it easier for automatic (computer) processing:
@example
$ astfits flat-ir/xdf-f160w.fits --skycoverage --quiet
@end example
The second row is the coverage range along RA and Dec (compare with the outputs before using @option{--quiet}).
We can thus simply subtract the first column from the second, and multiply the result with the difference of the fourth and third columns, to calculate the image's area.
We will also multiply each by 60 to have the area in arc-minutes squared.
@example
$ astfits flat-ir/xdf-f160w.fits --skycoverage --quiet \
| awk 'NR==2@{print ($2-$1)*60*($4-$3)*60@}'
@end example
The returned value is @mymath{9.06711} arcmin@mymath{^2}.
@strong{However, this method ignores the fact that many of the image pixels are blank!}
In other words, the image does cover this area, but there is no data in more than half of the pixels.
So let's calculate the area coverage over-which we actually have data.
The FITS world coordinate system (WCS) metadata standard contains the key to answering this question.
Run the following command to see all the FITS keywords (metadata) of one of the images (almost identical to the other images, because they all cover the same region of sky):
@example
$ astfits flat-ir/xdf-f160w.fits -h1
@end example
Look into the keywords grouped under the `@code{World Coordinate System (WCS)}' title.
These keywords define how the image relates to the outside world.
In particular, the @code{CDELT*} keywords (or @code{CDELT1} and @code{CDELT2} in this 2D image) contain the ``Coordinate DELTa'': the change in world coordinate value corresponding to a change of one pixel.
But what are the units of each ``world'' coordinate?
The @code{CUNIT*} keywords (for ``Coordinate UNIT'') have the answer.
In this case, both @code{CUNIT1} and @code{CUNIT2} have a value of @code{deg}, so both ``world'' coordinates are in units of degrees.
We can thus conclude that the value of @code{CDELT*} is in units of degrees-per-pixel@footnote{With the FITS @code{CDELT} convention, rotation (@code{PC} or @code{CD} keywords) and scales (@code{CDELT}) are separated.
In the FITS standard the @code{CDELT} keywords are optional.
When @code{CDELT} keywords are not present, the @code{PC} matrix is assumed to contain @emph{both} the coordinate rotation and scales.
Note that not all FITS writers use the @code{CDELT} convention.
So you might not find the @code{CDELT} keywords in the WCS metadata of some FITS files.
However, all Gnuastro programs (which use the default FITS keyword writing format of WCSLIB) write their output WCS with the @code{CDELT} convention, even if the input does not have it.
If your dataset does not use the @code{CDELT} convention, you can feed it to any (simple) Gnuastro program (for example, Arithmetic) and the output will have the @code{CDELT} keyword.
See Section 8 of the @url{https://fits.gsfc.nasa.gov/standard40/fits_standard40aa-le.pdf, FITS standard} for more}.
With the commands below, we will use @code{CDELT} (along with the number of non-blank pixels) to find the answer of our initial question: ``how much of the sky does this image cover?''.
The lines starting with @code{##} are just comments for you to read and understand each command.
Do not type them on the terminal (no problem if you do, they will just not have any effect).
The commands are intentionally repetitive in some places to better understand each step and also to demonstrate the beauty of command-line features like history, variables, pipes and loops (which you will commonly use as you become more proficient on the command-line).
@cartouche
@noindent
@strong{Use shell history:} Do not forget to make effective use of your shell's history: you do not have to re-type previous command to add something to them (like the examples below).
This is especially convenient when you just want to make a small change to your previous command.
Press the ``up'' key on your keyboard (possibly multiple times) to see your previous command(s) and modify them accordingly.
@end cartouche
@cartouche
@noindent
@strong{Your locale does not use `.' as decimal separator:} on systems that do not use an English language environment, the dates, numbers, etc., can be printed in different formats (for example, `0.5' can be written as `0,5': with a comma).
With the @code{LC_NUMERIC} line at the start of the script below, we are ensuring a unified format in the output of @command{seq}.
For more, please see @ref{Numeric locale}.
@end cartouche
@example
## Make sure that the decimal separator is a point in any environment.
$ export LC_NUMERIC=C
## See the general statistics of non-blank pixel values.
$ aststatistics flat-ir/xdf-f160w.fits
## We only want the number of non-blank pixels (add '--number').
$ aststatistics flat-ir/xdf-f160w.fits --number
## Keep the result of the command above in the shell variable `n'.
$ n=$(aststatistics flat-ir/xdf-f160w.fits --number)
## See what is stored in the shell variable `n'.
$ echo $n
## Show all the FITS keywords of this image.
$ astfits flat-ir/xdf-f160w.fits -h1
## The resolution (in degrees/pixel) is in the `CDELT' keywords.
## Only show lines that contain these characters, by feeding
## the output of the previous command to the `grep' program.
$ astfits flat-ir/xdf-f160w.fits -h1 | grep CDELT
## Since the resolution of both dimensions is (approximately) equal,
## we will only read the value of one (CDELT1) with '--keyvalue'.
$ astfits flat-ir/xdf-f160w.fits -h1 --keyvalue=CDELT1
## We do not need the file name in the output (add '--quiet').
$ astfits flat-ir/xdf-f160w.fits -h1 --keyvalue=CDELT1 --quiet
## Save it as the shell variable `r'.
$ r=$(astfits flat-ir/xdf-f160w.fits -h1 --keyvalue=CDELT1 --quiet)
## Print the values of `n' and `r'.
$ echo $n $r
## Use the number of pixels (first number passed to AWK) and
## length of each pixel's edge (second number passed to AWK)
## to estimate the area of the field in arc-minutes squared.
$ echo $n $r | awk '@{print $1 * ($2*60)^2@}'
@end example
The output of the last command (area of this field) is 4.03817 (or approximately 4.04) arc-minutes squared.
Just for comparison, this is roughly 175 times smaller than the angular area of the full moon (which has a diameter of about 30 arc-minutes, or half a degree).
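This comparison is easy to verify yourself: the Moon's angular area is @mymath{\pi r^2} with @mymath{r=15} arc-minutes.
The command below (using AWK's @code{atan2} to produce @mymath{\pi}) divides that by the field area measured above:
@example
$ echo 4.03817 | awk '@{pi=atan2(0,-1); print (pi*15^2)/$1@}'
@end example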
Some FITS writers do not use the @code{CDELT} convention, making it hard to use the steps above.
In such cases, you can extract the pixel scale with the @option{--pixelscale} option of Gnuastro's Fits program like the command below.
Similar to the @option{--skycoverage} option above, you can also use the @option{--quiet} option to allow easy usage of the values in scripts.
@example
$ astfits flat-ir/xdf-f160w.fits --pixelscale
@end example
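As with @option{--skycoverage}, adding @option{--quiet} leaves only the numbers, making the values easy to use in scripts.
For example, the small sketch below stores the first printed value in a shell variable (run the command without @option{--quiet} first, to confirm which printed value is which on your version):
@example
$ pscale=$(astfits flat-ir/xdf-f160w.fits --pixelscale --quiet \
                   | awk 'NR==1@{print $1@}')
$ echo $pscale
@end example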
@cindex GNU AWK
@cartouche
@noindent
@strong{AWK for table/value processing:} As you saw above, AWK is a powerful and simple tool for text processing.
You will see it often in shell scripts.
GNU AWK (the most common implementation) comes with a free and wonderful @url{https://www.gnu.org/software/gawk/manual/, book} in the same format as this book, which will help you master it nicely.
Just like this manual, you can also access GNU AWK's manual on the command-line whenever necessary without taking your hands off the keyboard.
Just run @code{info awk}.
@end cartouche
@node Cosmological coverage and visualizing tables, Building custom programs with the library, Angular coverage on the sky, General program usage tutorial
@subsection Cosmological coverage and visualizing tables
Having found the angular coverage of the dataset in @ref{Angular coverage on the sky}, we can now use Gnuastro to answer a more physically motivated question: ``How large is this area at different redshifts?''.
To get a feeling of the tangential area that this field covers at redshift 2, you can use Gnuastro's CosmicCalculator program (@ref{CosmicCalculator}).
In particular, you need the tangential distance covered by 1 arc-second as raw output.
Combined with the field's area that was measured before, we can calculate the tangential area covered in megaparsecs squared (@mymath{Mpc^2}).
@example
## If your system language uses ',' (not '.') as decimal separator.
$ export LC_NUMERIC=C
## Print general cosmological properties at redshift 2 (for example).
$ astcosmiccal -z2
## When given a "Specific calculation" option, CosmicCalculator
## will just print that particular calculation. To see all such
## calculations, add a `--help' token to the previous command
## (under the same title). Note that with `--help', no processing
## is done, so you can always simply append it to remember
## something without modifying the command you want to run.
$ astcosmiccal -z2 --help
## Only print the "Tangential dist. for 1arcsec at z (physical kpc)".
## in units of kpc/arc-seconds.
$ astcosmiccal -z2 --arcsectandist
## It is easier to use the short (single character) version of
## this option when typing (but this is hard to read, so use
## the long version in scripts or notes you plan to archive).
$ astcosmiccal -z2 -s
## Short options can be merged (they are only a single character!)
$ astcosmiccal -sz2
## Convert this distance to kpc^2/arcmin^2 and save in `k'.
$ k=$(astcosmiccal -sz2 | awk '@{print ($1*60)^2@}')
## Calculate the area of the dataset in arcmin^2.
$ n=$(aststatistics flat-ir/xdf-f160w.fits --number)
$ r=$(astfits flat-ir/xdf-f160w.fits -h1 --keyvalue=CDELT1 -q)
$ a=$(echo $n $r | awk '@{print $1 * ($2*60)^2 @}')
## Multiply `k' and `a' and divide by 10^6 for value in Mpc^2.
$ echo $k $a | awk '@{print $1 * $2 / 1e6@}'
@end example
@noindent
At redshift 2, this field therefore covers approximately 1.07 @mymath{Mpc^2}.
If you would like to see how this tangential area changes with redshift, you can use a shell loop like below.
@example
$ for z in 0.5 1.0 1.5 2.0 2.5 3.0 3.5 4.0 4.5 5.0; do \
k=$(astcosmiccal -sz$z); \
echo $z $k $a | awk '@{print $1, ($2*60)^2 * $3 / 1e6@}'; \
done
@end example
@noindent
Fortunately, the shell has a useful program for printing a sequence of numbers, aptly named @code{seq} (short for ``sequence'').
You can use it instead of typing all the different redshifts in the loop above.
For example, the loop below will calculate and print the tangential coverage of this field across a larger range of redshifts (0.1 to 5) and with finer increments of 0.1.
For more on the @code{LC_NUMERIC} command, see @ref{Numeric locale}.
@example
## If your system language uses ',' (not '.') as decimal separator.
$ export LC_NUMERIC=C
## The loop over the redshifts
$ for z in $(seq 0.1 0.1 5); do \
k=$(astcosmiccal -z$z --arcsectandist); \
echo $z $k $a | awk '@{print $1, ($2*60)^2 * $3 / 1e6@}'; \
done
@end example
Have a look at the two printed columns.
The first is the redshift, and the second is the area of this image at that redshift (in mega-parsecs squared).
@url{https://en.wikipedia.org/wiki/Redshift, Redshift} (@mymath{z}) is often used as a proxy for distance in galaxy evolution and cosmology: a higher redshift corresponds to larger line-of-sight comoving distance.
@cindex Turn over point (angular diameter distance)
Now, have a look at the first few values.
At @mymath{z=0.1} and @mymath{z=0.5}, this image covers @mymath{0.05 Mpc^2} and @mymath{0.57 Mpc^2} respectively.
This increase of coverage with redshift is expected because a fixed angle will cover a larger tangential area at larger distances.
However, as you come down the list (to higher redshifts) you will notice that this relation does not hold!
The largest coverage is at @mymath{z=1.6}: at higher redshifts the area decreases, and it keeps decreasing!
In a flat FLRW cosmology (including @mymath{\Lambda{}}CDM) there is no curvature effect; this turnover comes from the @mymath{(1+z)} factor in the angular diameter distance, due to the expansion of the universe, see @url{https://en.wikipedia.org/wiki/Angular_diameter_distance#Angular_diameter_turnover_point, the Wikipedia page}.
In case you have TOPCAT, you can visualize this as a plot (if you do not have TOPCAT, see @ref{TOPCAT}).
To do so, first you need to save the output of the loop above into a FITS table by piping the output to Gnuastro's Table program and giving an output name:
@example
$ for z in $(seq 0.1 0.1 5); do \
k=$(astcosmiccal -z$z --arcsectandist); \
echo $z $k $a | awk '@{print $1, ($2*60)^2 * $3 / 1e6@}'; \
done | asttable --output=z-vs-tandist.fits
@end example
You can now use Gnuastro's @command{astscript-fits-view} to open this table in TOPCAT with the command below.
Do you remember this script from @ref{Dataset inspection and cropping}?
There, we used it to view a FITS image with DS9!
This script checks whether the first dataset in the file is a table or an image, and calls TOPCAT or DS9 accordingly: making it a very convenient tool to inspect the contents of all types of FITS data.
@example
$ astscript-fits-view z-vs-tandist.fits
@end example
After TOPCAT opens, you will see the name of the table @file{z-vs-tandist.fits} in the left panel.
On the top menu bar, select the ``Graphics'' menu, then select ``Plane plot'' to visualize the two columns printed above as a plot and get a better impression of the turnover point of the image's cosmological coverage.
@node Building custom programs with the library, Option management and configuration files, Cosmological coverage and visualizing tables, General program usage tutorial
@subsection Building custom programs with the library
In @ref{Cosmological coverage and visualizing tables}, we repeated a certain calculation/output of a program multiple times using the shell's @code{for} loop.
This simple way of repeating a calculation is great when it is only necessary a few times.
However, if you commonly need this calculation and possibly for a larger number of redshifts at higher precision, the command above can be slow.
Please try it out by changing the sequence command in the previous section to `@code{seq 0.1 0.01 10}'.
It will take about 11 seconds@footnote{To measure how much time the loop of @ref{Cosmological coverage and visualizing tables} takes on your system, you can use the @command{time} command.
First put the whole loop (and pipe) into a plain-text file (to be loaded as a shell script) called @file{z-vs-tandist.sh}.
Then run this command: @command{time -p bash z-vs-tandist.sh}.
The relevant time (in seconds) is shown after @code{real}.}!
This can be improved by @emph{hundreds} of times!
This section will show you how.
Generally, repeated calls to a generic program (like CosmicCalculator) are slow, because a generic program can have a lot of overhead on each call.
To be generic and easy to operate, CosmicCalculator has to parse the command-line and all configuration files (see @ref{Option management and configuration files}) which contain human-readable characters and need a lot of pre-processing to be ready for processing by the computer.
Afterwards, CosmicCalculator has to check the sanity of its inputs and check which of its many options you have asked for.
All this pre-processing can take as much time as the high-level calculation you are requesting, and it has to be re-done for every redshift in your loop.
To greatly speed up the processing, you can directly access the core work-horse of CosmicCalculator without all that overhead by designing your custom program for this job.
Using Gnuastro's library, you can write your own tiny program particularly designed for this exact calculation (and nothing else!).
To do that, copy and paste the following C program in a file called @file{myprogram.c}.
@example
#include <math.h>
#include <stdio.h>
#include <stdlib.h>
#include <gnuastro/cosmology.h>

int
main(void)
@{
  double area=4.03817;      /* Area of field (arcmin^2). */
  double z, adist, tandist; /* Temporary variables.      */

  /* Constants from Planck 2018 (arXiv:1807.06209, Table 2) */
  double H0=67.66, olambda=0.6889, omatter=0.3111, oradiation=0;

  /* Do the same thing for all redshifts (z) between 0.1 and 10. */
  for(z=0.1; z<10; z+=0.01)
    @{
      /* Calculate the angular diameter distance. */
      adist=gal_cosmology_angular_distance(z, H0, olambda,
                                           omatter, oradiation);

      /* Calculate the tangential distance of one arcsecond. */
      tandist = adist * 1000 * M_PI / 3600 / 180;

      /* Print the redshift and area. */
      printf("%-5.2f %g\n", z, pow(tandist * 60, 2) * area / 1e6);
    @}

  /* Tell the system that everything finished successfully. */
  return EXIT_SUCCESS;
@}
@end example
@noindent
Then run the following command to compile your program and run it.
@example
$ astbuildprog myprogram.c
@end example
@noindent
In the command above, you used Gnuastro's BuildProgram program.
Its job is to simplify the compilation, linking and running of simple C programs that use Gnuastro's library (like this one).
BuildProgram is designed to manage Gnuastro's dependencies, compile and link your custom program and then run it.
Did you notice how your custom program created the table almost instantaneously?
Technically, it only took about 0.03 seconds!
Recall that the @code{for} loop of @ref{Cosmological coverage and visualizing tables} took more than 11 seconds (or @mymath{\sim367} times slower!).
Please run the @command{ls} command to see a listing of the files in the current directory.
You will notice that a new file called @file{myprogram} has been created.
This is the compiled program that was created and run by the command above (it is in binary machine code, no longer human-readable).
You can run it again to get the same results by executing it:
@example
$ ./myprogram
@end example
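Like the loop of the previous section, you can pipe your custom program's output into Gnuastro's Table program to save it as a FITS table:
@example
$ ./myprogram | asttable --output=z-vs-tandist.fits
@end example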
The efficiency of your custom @file{myprogram} compared to repeated calls to CosmicCalculator is because in the latter, the requested processing is comparable to the necessary overheads.
For other programs that take large input datasets and do complicated processing on them, the overhead is usually negligible compared to the processing.
In such cases, the libraries are only useful if you want a different/new processing compared to the functionalities in Gnuastro's existing programs.
Gnuastro has a large library which is used extensively by all the programs.
In other words, the library is like the skeleton of Gnuastro.
For the full list of available functions classified by context, please see @ref{Gnuastro library}.
Gnuastro's library and BuildProgram are created to make it easy for you to use these powerful features as you like.
This gives you a high level of creativity, while also providing efficiency and robustness.
Several other complete working examples (involving images and tables) of Gnuastro's libraries can be seen in @ref{Library demo programs}.
But for this tutorial, let's stop discussing the libraries here and get back to Gnuastro's already built programs (which do not need C programming).
Before continuing, let's clean up the files we do not need any more:
@example
$ rm myprogram* z-vs-tandist*
@end example
@node Option management and configuration files, Warping to a new pixel grid, Building custom programs with the library, General program usage tutorial
@subsection Option management and configuration files
In the previous section (@ref{Cosmological coverage and visualizing tables}), when you ran CosmicCalculator, you only specified the redshift with the @option{-z2} option.
You did not specify the cosmological parameters that are necessary for the calculations, like the Hubble constant (@mymath{H_0}) and the matter density!
In spite of this, CosmicCalculator did its processing and printed results.
None of Gnuastro's programs keep a default value internally within their code (they are all set by the user)!
So where did the necessary cosmological parameters come from?
What values were used for those parameters?
In short, they come from a configuration file (see @ref{Configuration file precedence}), and the final used values can be checked/edited on the command-line.
In this section we will review this important aspect of all the programs in Gnuastro.
Configuration files are an important part of all Gnuastro's programs, especially the ones with a large number of options, so it is worth understanding them well.
Once you get comfortable with configuration files, you can make good use of them in all Gnuastro programs (for example, NoiseChisel).
For example, to do optimal detection on various datasets, you can have configuration files for different noise properties.
The configuration of each program (besides its version) is vital for the reproducibility of your results, so it is important to manage them properly.
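For example (a purely hypothetical sketch: the file name is just for illustration), you could keep a NoiseChisel configuration tuned for one instrument's noise, and load it only when processing images from that instrument (using the @option{--config} option that we will see below):
@example
$ astnoisechisel image.fits --config=wfc3ir-noise.conf
@end example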
As we saw above, the full list of the options in all Gnuastro programs can be seen with the @option{--help} option.
Try calling it with CosmicCalculator as shown below.
Note how options are grouped by context to make it easier to find your desired option.
However, in each group, options are ordered alphabetically.
@example
$ astcosmiccal --help
@end example
After running the command above, please scroll up to the line where you ran this command and read through the output (it is the same format for all the programs).
All options have a long format (starting with @code{--} and a multi-character name) and some have a short format (starting with @code{-} and a single character), for more see @ref{Options}.
The options that expect a value have an @key{=} sign after their long version.
The format of their expected value is also shown as @code{FLT}, @code{INT} or @code{STR} for floating point numbers, integer numbers, and strings (filenames for example) respectively.
You can see the values of all options that need one with the @option{--printparams} option (or its short format: @code{-P}).
@option{--printparams} is common to all programs (see @ref{Common options}).
You can see the default cosmological parameters, from the Planck collaboration (@url{https://arxiv.org/abs/1807.06209, 2020}), under the @code{# Input:} title:
@example
$ astcosmiccal -P
# Input:
H0 67.66 # Current expansion rate (Hubble constant).
olambda 0.6889 # Current cosmological cst. dens. per crit. dens.
omatter 0.3111 # Current matter density per critical density.
oradiation 0 # Current radiation density per critical density.
@end example
Let's say you want to do the calculation in the previous section using @mymath{H_0=70} km/s/Mpc.
To do this, just add @option{--H0=70} after the command above (while keeping the @option{-P}).
In the output, you can see that the used Hubble constant has also changed.
@example
$ astcosmiccal -P --H0=70
@end example
@noindent
Afterwards, delete the @option{-P} and add a @option{-z2} to see the
calculations with the new cosmology (or configuration).
@example
$ astcosmiccal --H0=70 -z2
@end example
From the output of the @code{--help} option, note how the option for Hubble constant has both short (@code{-H}) and long (@code{--H0}) formats.
One final note is that the equal (@key{=}) sign is not mandatory.
In the short format, the value can stick to the actual option (the short option name is just one character after-all, thus easily identifiable) and in the long format, a white-space character is also enough.
@example
$ astcosmiccal -H70 -z2
$ astcosmiccal --H0 70 -z2 --arcsectandist
@end example
@noindent
When an option does not need a value and has a short format (like @option{-s}, the short format of @option{--arcsectandist}), you can easily append it @emph{before} other short options.
So the last command above can also be written as:
@example
$ astcosmiccal --H0 70 -sz2
@end example
Let's assume that in one project, you want to only use rounded cosmological parameters (@mymath{H_0} of 70km/s/Mpc and matter density of 0.3).
You should therefore run CosmicCalculator like this:
@example
$ astcosmiccal --H0=70 --olambda=0.7 --omatter=0.3 -z2
@end example
But having to type these extra options every time you run CosmicCalculator will be prone to errors (typos in particular), frustrating and slow.
Therefore in Gnuastro, you can put all the options and their values in a ``Configuration file'' and tell the programs to read the option values from there.
Let's create a configuration file.
With your favorite text editor, make a file named @file{my-cosmology.conf} (or @file{my-cosmology.txt}; the suffix does not matter for Gnuastro, but a more descriptive suffix like @file{.conf} is recommended for humans reading your code and seeing your files: this includes you, looking into your own project in a couple of months, when you have forgotten the details!).
Then put the following lines inside of the plain-text file.
One space between the option name and value is enough; the values are simply placed under each other here to help readability.
Also note that you should only use @emph{long option names} in configuration files.
@example
H0 70
olambda 0.7
omatter 0.3
@end example
@noindent
You can now tell CosmicCalculator to read this file for option values immediately using the @option{--config} option as shown below.
Do you see how the output of the following command corresponds to the option values in @file{my-cosmology.conf}, and is therefore identical to the previous command?
@example
$ astcosmiccal --config=my-cosmology.conf -z2
@end example
But still, having to type @option{--config=my-cosmology.conf} every time is annoying, is it not?
If you need this cosmology every time you are working in a specific directory, you can use Gnuastro's default configuration file names and avoid having to type it manually.
The default configuration files (that are checked if they exist) must be placed in the hidden @file{.gnuastro} sub-directory (in the same directory you are running the program).
Their file name (within @file{.gnuastro}) must also be the same as the program's executable name.
So in the case of CosmicCalculator, the default configuration file in a given directory is @file{.gnuastro/astcosmiccal.conf}.
Let's do this.
We will first make a directory for our custom cosmology, then build a @file{.gnuastro} within it.
Finally, we will copy the custom configuration file there:
@example
$ mkdir my-cosmology
$ mkdir my-cosmology/.gnuastro
$ mv my-cosmology.conf my-cosmology/.gnuastro/astcosmiccal.conf
@end example
Once you run CosmicCalculator within @file{my-cosmology} (as shown below), you will see how your custom cosmology has been implemented without having to type anything extra on the command-line.
@example
$ cd my-cosmology
$ astcosmiccal -P # Your custom cosmology is printed.
$ cd ..
$ astcosmiccal -P # The default cosmology is printed.
@end example
To further simplify the process, you can use the @option{--setdirconf} option.
If you are already in your desired working directory, calling this option with the others will automatically write the final values (along with descriptions) in @file{.gnuastro/astcosmiccal.conf}.
For example, try the commands below:
@example
$ mkdir my-cosmology2
$ cd my-cosmology2
$ astcosmiccal -P
$ astcosmiccal --H0 70 --olambda=0.7 --omatter=0.3 --setdirconf
$ astcosmiccal -P
$ cd ..
@end example
Gnuastro's programs also have default configuration files for a specific user (when run in any directory).
This allows you to set a special behavior every time a program is run by a specific user.
Only the directory and filename differ from the above; the rest of the process is the same.
Finally, there are also system-wide configuration files that can be used to define the option values for all users on a system.
See @ref{Configuration file precedence} for a more detailed discussion.
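For example (a sketch; see @ref{Configuration file precedence} for the exact file name and location on your system), the @option{--setusrconf} option does for your user-wide configuration file what @option{--setdirconf} did for the current directory:
@example
$ astcosmiccal --H0=70 --setusrconf
@end example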
We will stop the discussion on configuration files here, but you can always read about them in @ref{Configuration files}.
Before continuing the tutorial, let's delete the two extra directories that we do not need any more:
@example
$ rm -rf my-cosmology*
@end example
@node Warping to a new pixel grid, NoiseChisel and Multi-Extension FITS files, Option management and configuration files, General program usage tutorial
@subsection Warping to a new pixel grid
We are now ready to start processing the deep HST images that were prepared in @ref{Dataset inspection and cropping}.
One of the most important points while using several images for data processing is that those images must have the same pixel grid.
The process of changing the pixel grid is called `warping'.
Fortunately, Gnuastro has the Warp program for this job (see @ref{Warp}).
Warping to a different/matched pixel grid is commonly needed before higher-level analysis, especially when you are using datasets from different instruments.
The XDF datasets we are using here are already aligned to the same pixel grid.
But let's have a look at some of Gnuastro's linear warping features here.
For example, try rotating one of the images by 20 degrees with the first command below.
With the second command, open the output and input to see how it is rotated.
@example
$ astwarp flat-ir/xdf-f160w.fits --rotate=20
$ astscript-fits-view flat-ir/xdf-f160w.fits xdf-f160w_rotated.fits
@end example
Warp can generally be used for many kinds of pixel grid manipulation (warping), not just rotations.
For example, the outputs of the commands below will respectively have larger pixels (the new resolution being one quarter of the original), be shifted by 2.8 pixels (a sub-pixel translation), have a shear of 0.2, and be tilted (projected).
Run each of them and open the output file to see the effect; they will come in handy in the future.
@example
$ astwarp flat-ir/xdf-f160w.fits --scale=0.25
$ astwarp flat-ir/xdf-f160w.fits --translate=2.8
$ astwarp flat-ir/xdf-f160w.fits --shear=0.2
$ astwarp flat-ir/xdf-f160w.fits --project=0.001,0.0005
$ astscript-fits-view flat-ir/xdf-f160w.fits *.fits
@end example
@noindent
If you need to do multiple warps, you can combine them in one call to Warp.
For example, to first rotate the image, then scale it, run this command:
@example
$ astwarp flat-ir/xdf-f160w.fits --rotate=20 --scale=0.25
@end example
If you have multiple warps, do them all in one command.
Do not warp them in separate commands because the correlated noise will become too strong.
As you see in the matrix that is printed when you run Warp, it merges all the warps into a single warping matrix (see @ref{Merging multiple warpings}) and simply applies that (mixes the pixel values) just once.
However, if you run Warp multiple times, the pixels will be mixed multiple times, creating a strong artificial blur/smoothing, or stronger correlated noise.
Recall that the merging of multiple warps is done through matrix multiplication, therefore order matters in the separate operations.
At a lower level, through Warp's @option{--matrix} option, you can directly request your desired final warp and do not have to break it up into different warps like above (see @ref{Invoking astwarp}).
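For example (a small sketch; see @ref{Invoking astwarp} for the exact format of the option's value), the 20 degree rotation used before could also be requested as a raw warp matrix, with elements @mymath{\cos20^\circ}, @mymath{-\sin20^\circ}, @mymath{\sin20^\circ} and @mymath{\cos20^\circ}:
@example
$ astwarp flat-ir/xdf-f160w.fits \
          --matrix=0.9396926,-0.3420201,0.3420201,0.9396926
@end example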
Fortunately these datasets are already aligned to the same pixel grid, so you do not actually need the files that were just generated.
You can safely delete them all with the following command.
Here, you see why we put the processed outputs that we need later into a separate directory.
In this way, the top directory can be used for temporary files for testing that you can simply delete with a generic command like below.
@example
$ rm *.fits
@end example
@node NoiseChisel and Multi-Extension FITS files, NoiseChisel optimization for detection, Warping to a new pixel grid, General program usage tutorial
@subsection NoiseChisel and Multi-Extension FITS files
In the previous sections, we completed a review of the basics of Gnuastro's programs.
We are now ready to do some more serious analysis on the downloaded images: extract the pixels containing signal from the image, find sub-structure of the extracted signal, do measurements over the extracted objects and analyze them (finding certain objects of interest in the image).
The first step is to separate the signal (galaxies or stars) from the background noise in the image.
We will be using the results of @ref{Dataset inspection and cropping}, so be sure you already have them.
Gnuastro has NoiseChisel for this job.
But NoiseChisel's output is a multi-extension FITS file, therefore to better understand how to use NoiseChisel, let's take a look at multi-extension FITS files and how you can interact with them.
In the FITS format, each extension contains a separate dataset (image in this case).
You can get basic information about the extensions in a FITS file with Gnuastro's Fits program (see @ref{Fits}).
To start with, let's run NoiseChisel without any options, then use Gnuastro's Fits program to inspect the number of extensions in this file.
@example
$ astnoisechisel flat-ir/xdf-f160w.fits
$ astfits xdf-f160w_detected.fits
@end example
From the output list, we see that NoiseChisel's output contains 5 extensions.
The zero-th (counting from zero, with name @code{NOISECHISEL-CONFIG}) is empty: it has a value of @code{0} in the fourth column (which shows its size in pixels).
Like NoiseChisel, in all of Gnuastro's programs, the first (or zero-th) extension of the output only contains meta-data: data about/describing the datasets within (all) the output's extensions.
This is recommended by the FITS standard, see @ref{Fits} for more.
In the case of Gnuastro's programs, this generic zero-th/meta-data extension (for the whole file) contains all the configuration options of the program that created the file.
Metadata regarding how the analysis was done (or a dataset was created) is very important for higher-level analysis and reproducibility.
Therefore, let's first take a closer look at the @code{NOISECHISEL-CONFIG} extension.
If you specify a particular extension/HDU of the FITS file, Gnuastro's Fits program will print the header keywords (metadata) of that extension.
You can either specify the HDU/extension counter (starting from 0), or name.
Therefore, the two commands below are identical for this file.
We are usually tempted to use the first (shorter format), but when putting your commands into a script, please use the second format which is more human-friendly and understandable for readers of your code who may not know what is in the 0-th extension (this includes yourself in a few months!):
@example
$ astfits xdf-f160w_detected.fits -h0
$ astfits xdf-f160w_detected.fits -hNOISECHISEL-CONFIG
@end example
The first group of FITS header keywords you see (containing the @code{SIMPLE} and @code{BITPIX} keywords; before the first empty line) are standard keywords.
They are required by the FITS standard and must be present in any FITS extension.
The second group starts with the input file name (value to the @code{INPUT} keyword).
The rest of the keywords you see afterwards have the same name as NoiseChisel's options, and the value used by NoiseChisel in this run is shown after the @code{=} sign.
Finally, the last group (starting with @code{DATE}) contains the date and version information of Gnuastro and its dependencies that were used to generate this file.
Besides the option values, these are also critical for future reproducibility of the result (you may update Gnuastro or its dependencies, and they may behave differently afterwards).
The ``versions and date'' group of keywords is present in all Gnuastro's FITS extension outputs; for more, see @ref{Output FITS files}.
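If you want to quickly view this last group without scrolling through the whole header, you can filter the printed keywords (a small sketch; it relies on the group starting with the @code{DATE} keyword, as described above):
@example
$ astfits xdf-f160w_detected.fits -h0 | grep -A7 '^DATE'
@end example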
Note that if a keyword name is longer than 8 characters, it is preceded by a @code{HIERARCH} keyword, and that all keyword names are in capital letters.
These are all part of the FITS standard and originate from its history.
But in short, both can be ignored!
For example, with the commands below, let's first see what the default values are, and then just check the value of the @option{--detgrowquant} option (using the @option{-P} option described in @ref{Option management and configuration files}).
@example
$ astnoisechisel -P
$ astnoisechisel -P | grep detgrowquant
@end example
To confirm that NoiseChisel used this value when we ran it above, let's use @code{grep} to extract the keyword line with @code{detgrowquant} from the metadata extension.
However, as you saw above, keyword names in the header are in all caps.
So we need to ask @code{grep} to ignore case with the @option{-i} option.
@example
$ astfits xdf-f160w_detected.fits -h0 | grep -i detgrowquant
@end example
In the output of the above command, you see @code{HIERARCH} at the start of the line.
According to the FITS standard, @code{HIERARCH} is placed at the start of all keywords that have a name that is more than 8 characters long.
Both the all-caps and the @code{HIERARCH} keyword can be annoying when you want to read/check the value.
Therefore, the best solution is to use the @option{--keyvalue} option of Gnuastro's @command{astfits} program as shown below.
With it, you do not have to worry about @code{HIERARCH} or the case of the name (FITS keyword names are not case-sensitive).
@example
$ astfits xdf-f160w_detected.fits -h0 --keyvalue=detgrowquant -q
@end example
@noindent
The metadata (that is stored in the output) can later be used to exactly reproduce/understand your result, even if you have lost/forgot the command you used to create the file.
This feature is present in all of Gnuastro's programs, not just NoiseChisel.
The rest of the HDUs in NoiseChisel have data.
So let's open them in a DS9 window and then describe each:
@example
$ astscript-fits-view xdf-f160w_detected.fits
@end example
A ``cube'' window opens along with DS9's main window.
The buttons and horizontal scroll bar in this small new window can be used to navigate between the extensions.
In this mode, all DS9's settings (for example, zoom or color-bar) will be identical between the extensions.
Try zooming into one part and flipping through the extensions to see how the galaxies were detected along with the Sky and Sky standard deviation values for that region.
Just keep in mind that NoiseChisel's job is @emph{only} detection (separating signal from noise).
We will do segmentation on this result later to find the individual galaxies/peaks over the detected pixels.
The second extension of NoiseChisel's output (numbered 1, named @code{INPUT-NO-SKY}) is the Sky-subtracted input that you provided.
The third (@code{DETECTIONS}) is NoiseChisel's main output which is a binary image with only two possible values for all pixels: 0 for noise and 1 for signal.
Since it only has two values, to avoid taking too much space on your computer, its numeric datatype is an unsigned 8-bit integer (or @code{uint8})@footnote{To learn more about numeric data types see @ref{Numeric data types}.}.
The fourth and fifth (@code{SKY} and @code{SKY_STD}) extensions, have the Sky and its standard deviation values for the input on a tile grid and were calculated over the undetected regions (for more on the importance of the Sky value, see @ref{Sky value}).
Each HDU/extension in a FITS file is an independent dataset (image or table) which you can delete from the FITS file, or copy/cut to another file.
For example, with the command below, you can copy NoiseChisel's @code{DETECTIONS} HDU/extension to another file:
@example
$ astfits xdf-f160w_detected.fits --copy=DETECTIONS -odetections.fits
@end example
There are also similar options to conveniently cut (@option{--cut}: copy, then remove from the input) or delete (@option{--remove}) HDUs from a FITS file.
See @ref{HDU information and manipulation} for more.
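For example, the sketch below would delete the @code{DETECTIONS} HDU from the @file{detections.fits} file we just created.
Note that @option{--remove} edits the file in place, so only try it on files you can afford to change:
@example
$ astfits detections.fits --remove=DETECTIONS
@end example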
@node NoiseChisel optimization for detection, NoiseChisel optimization for storage, NoiseChisel and Multi-Extension FITS files, General program usage tutorial
@subsection NoiseChisel optimization for detection
In @ref{NoiseChisel and Multi-Extension FITS files}, we ran NoiseChisel and reviewed NoiseChisel's output format.
Now that you have a better feeling for multi-extension FITS files, let's optimize NoiseChisel for this particular dataset.
One good way to see if you have missed any signal (small galaxies, or the wings of brighter galaxies) is to mask all the detected pixels and inspect the noise pixels.
For this, you can use Gnuastro's Arithmetic program (in particular its @code{where} operator, see @ref{Arithmetic operators}).
The command below will produce @file{mask-det.fits}.
In it, all the pixels in the @code{INPUT-NO-SKY} extension that are flagged 1 in the @code{DETECTIONS} extension (dominated by signal, not noise) will be set to NaN.
Since the various extensions are in the same file, for each dataset we need the file and extension name.
To make the command easier to read/write/understand, let's use shell variables: `@code{in}' will be used for the Sky-subtracted input image and `@code{det}' will be used for the detection map.
Recall that a shell variable's value can be retrieved by adding a @code{$} before its name; also note that the double quotes are necessary when a variable's value contains white-space characters (like in this case).
@example
$ in="xdf-f160w_detected.fits -hINPUT-NO-SKY"
$ det="xdf-f160w_detected.fits -hDETECTIONS"
$ astarithmetic $in $det nan where --output=mask-det.fits
@end example
@noindent
To invert the result (only keep the detected pixels), you can flip the detection map (from 0 to 1 and vice-versa) by adding a `@code{not}' after the second @code{$det}:
@example
$ astarithmetic $in $det not nan where --output=mask-sky.fits
@end example
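To visually compare the two masked images, you can open both of them with the same viewer script we have been using:
@example
$ astscript-fits-view mask-det.fits mask-sky.fits
@end example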
@cindex Correlated noise
@cindex Noise, correlated
Look again at the @code{DETECTIONS} extension, in particular the long worm-like structure around pixel 1650 (X) and 1470 (Y)@footnote{To find a particular coordinate easily in DS9, you can do this: click on the ``Edit'' menu, and select ``Region''.
Then click on any random part of the image to see a circle show up at that location (this is the ``region'').
Double-click on the region and a ``Circle'' window will open.
If you have celestial coordinates, keep the default ``icrs'' or ``fk5'' in the scroll-down menu after the ``Center''.
But if you have pixel/image coordinates, click on the ``icrs'' or ``fk5'' and select ``Image''.
Now you can set the ``Center'' coordinates of the region (@code{1650} and @code{1470} in this case) by manually typing them in the two boxes in front of ``Center''.
Finally, when everything is ready, click on the ``Apply'' button and your region will go over your requested coordinates.
You can zoom out (to see the whole image) and visually find it.}.
These types of long wiggly structures show that we have dug too deep into the noise, and are a signature of correlated noise.
Correlated noise is created when we warp (for example, rotate) individual exposures (that are each slightly offset compared to each other) into the same pixel grid before adding them into one deeper image.
During the warping, nearby pixels are mixed and the effect of this mixing on the noise (which is in every pixel) is called ``correlated noise''.
Correlated noise is a form of convolution and it slightly smooths the image.
In terms of the number of exposures (and thus correlated noise), the XDF dataset is by no means an ordinary dataset: it is the result of warping and adding roughly 80 separate exposures, which can create strong correlated noise/smoothing.
In common surveys the number of exposures is usually 10 or less.
Therefore the default parameters need to be slightly customized for this dataset.
See Figure 2 of Akhlaghi @url{https://arxiv.org/abs/1909.11230, 2019} and the discussion on @option{--detgrowquant} there for more on how NoiseChisel ``grow''s the detected objects and the patterns caused by correlated noise.
Let's tweak NoiseChisel's configuration a little to get a better result on this dataset.
Do not forget that ``@emph{Good statistical analysis is not a purely routine matter, and generally calls for more than one pass through the computer}'' (Anscombe 1973, see @ref{Science and its tools}).
A good scientist must have a good understanding of her tools to make a meaningful analysis.
So do not hesitate in playing with the default configuration and reviewing the manual when you have a new dataset (from a new instrument) in front of you.
Robust data analysis is an art, therefore a good scientist must first be a good artist.
Once you have found the good configuration for that particular noise pattern (instrument) you can safely use it for all new data that have a similar noise pattern.
NoiseChisel can produce ``Check images'' to help you visualize and inspect how each step is done.
You can see all the check images it can produce with this command.
@example
$ astnoisechisel --help | grep check
@end example
Let's check the overall detection process to get a better feeling of what NoiseChisel is doing with the following command.
For more details on NoiseChisel, please see @ref{NoiseChisel}, Akhlaghi and Ichikawa @url{https://arxiv.org/abs/1505.01664,2015} and Akhlaghi @url{https://arxiv.org/abs/1909.11230,2019}.
@example
$ astnoisechisel flat-ir/xdf-f160w.fits --checkdetection
@end example
The check images/tables are also multi-extension FITS files.
As you saw from the command above, when check datasets are requested, NoiseChisel will not go to the end.
It will abort as soon as all the extensions of the check image are ready.
Please list the extensions of the output with @command{astfits} and then open it with @command{ds9} as we did above.
If you have read the paper, you will see why there are so many extensions in the check image.
@example
$ astfits xdf-f160w_detcheck.fits
$ astscript-fits-view xdf-f160w_detcheck.fits
@end example
In order to understand the parameters and their biases (especially as you are starting to use Gnuastro, or running it on a new dataset), it is @emph{strongly} encouraged to play with the different parameters and use the respective check images to see which step is affected by your changes and how; for example, see @ref{Detecting large extended targets}.
@cindex FWHM
Let's focus on one step: the @code{OPENED_AND_LABELED} extension shows the initial detection step of NoiseChisel.
We see the seeds of that correlated noise structure with many small detections (a relatively early stage in the processing).
Such connections at the lowest surface brightness limits usually occur when the dataset is too smoothed, the threshold is too low, or the final ``growth'' is too much.
As you see from the 2nd (@code{CONVOLVED}) extension, the first operation that NoiseChisel does on the data is to slightly smooth it.
However, the natural correlated noise of this dataset is already one level of artificial smoothing, so further smoothing it with the default kernel may be the culprit.
To see the effect, let's use a sharper kernel as a first step to convolve/smooth the input.
By default NoiseChisel uses a Gaussian with full-width-half-maximum (FWHM) of 2 pixels.
We can use Gnuastro's MakeProfiles to build a kernel with FWHM of 1.5 pixel (truncated at 5 times the FWHM, like the default) using the following command.
MakeProfiles is a powerful tool to build any number of mock profiles on one image or independently; to learn more about its features and capabilities, see @ref{MakeProfiles}.
@example
$ astmkprof --kernel=gaussian,1.5,5 --oversample=1
@end example
@noindent
Please open the output @file{kernel.fits} and have a look (it is very small and sharp).
We can now tell NoiseChisel to use this instead of the default kernel with the following command (we will keep @option{--checkdetection} to continue checking the detection steps):
@example
$ astnoisechisel flat-ir/xdf-f160w.fits --kernel=kernel.fits \
--checkdetection
@end example
Open the output @file{xdf-f160w_detcheck.fits} as a multi-extension FITS file and go to the last extension (@code{DETECTIONS-FINAL}; it contains the same pixels as the final NoiseChisel output without @option{--checkdetection}).
Look again at the position mentioned above (1650, 1470): the long wiggly structure is gone.
This shows we are making progress :-).
Looking at the new @code{OPENED_AND_LABELED} extension, we see that the thin connections between smaller peaks have now significantly decreased.
Going two extensions/steps ahead (in the first @code{HOLES-FILLED}), you can see that during the process of finding false pseudo-detections, too many holes have been filled: do you see how many of the brighter galaxies are connected? At this stage all holes are filled, irrespective of their size.
Now look two extensions ahead (in the first @code{PSEUDOS-FOR-SN}): there are not that many pseudo-detections because of all those extended filled holes.
If you look closely, you can see the number of pseudo-detections in the printed outputs of NoiseChisel (around 6400).
This is another side-effect of correlated noise.
To address it, we should slightly increase the pseudo-detection threshold (before changing @option{--dthresh}, run with @option{-P} to see the default value):
@example
$ astnoisechisel flat-ir/xdf-f160w.fits --kernel=kernel.fits \
--dthresh=0.1 --checkdetection
@end example
Before visually inspecting the check image, you can already see the effect of this small change in NoiseChisel's command-line output: notice how the number of pseudo-detections has increased to more than 7100!
Open the check image now and have a look, you can see how the pseudo-detections are distributed much more evenly in the blank sky regions of the @code{PSEUDOS-FOR-SN} extension.
@cartouche
@noindent
@strong{Maximize the number of pseudo-detections:} When using NoiseChisel on datasets with a new noise-pattern (for example, going to a radio astronomy image, or a shallow ground-based image), play with @option{--dthresh} until you get a maximal number of pseudo-detections: the total number of pseudo-detections is printed on the command-line when you run NoiseChisel, so you do not even need to open a FITS viewer.
In this particular case, try @option{--dthresh=0.2} and you will see that the total printed number decreases to around 6700 (recall that with @option{--dthresh=0.1}, it was roughly 7100).
So for this type of very deep HST images, we should set @option{--dthresh=0.1}.
@end cartouche
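If you want to scan a few values of @option{--dthresh} quickly, you can wrap NoiseChisel in a small shell loop and filter its printed output, as in the sketch below (the exact wording of the line reporting the pseudo-detections may differ between Gnuastro versions, so adjust the @command{grep} pattern to what you see on your terminal):
@example
$ for d in 0.0 0.1 0.2 0.3; do \
      echo "dthresh=$d:"; \
      astnoisechisel flat-ir/xdf-f160w.fits --kernel=kernel.fits \
                     --dthresh=$d --output=dthresh-test.fits \
          | grep -i pseudo; \
  done
$ rm dthresh-test.fits
@end example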
As discussed in Section 3.1.5 of Akhlaghi and Ichikawa @url{https://arxiv.org/abs/1505.01664,2015}, the signal-to-noise ratio of pseudo-detections is critical to identifying/removing false detections.
Getting it right is very important for an optimal detection (where you want to successfully detect the faintest and smallest objects in the image).
Let's have a look at their signal-to-noise distribution with @option{--checksn}.
@example
$ astnoisechisel flat-ir/xdf-f160w.fits --kernel=kernel.fits \
--dthresh=0.1 --checkdetection --checksn
@end example
The output (@file{xdf-f160w_detsn.fits}) contains two extensions with two-column tables of the pseudo-detections: those over the undetected regions (@code{SKY_PSEUDODET_SN}) and those over the detections (@code{DET_PSEUDODET_SN}).
With the first command below you can see the HDUs of this file, and with the second you can see the information of the table in the first HDU (which is the default when you do not use @option{--hdu}):
@example
$ astfits xdf-f160w_detsn.fits
$ asttable xdf-f160w_detsn.fits -i
@end example
@noindent
You can see the table columns with the first command below and get a feeling of the signal-to-noise value distribution with the second command (the two Table and Statistics programs will be discussed later in the tutorial):
@example
$ asttable xdf-f160w_detsn.fits -hSKY_PSEUDODET_SN
$ aststatistics xdf-f160w_detsn.fits -hSKY_PSEUDODET_SN -c2
... [output truncated] ...
Histogram:
| *
| ***
| ******
| *********
| **********
| *************
| *****************
| ********************
| **************************
| ********************************
|******************************************************* * ** *
|----------------------------------------------------------------------
@end example
The correlated noise is again visible in the signal-to-noise distribution of sky pseudo-detections!
Do you see how skewed this distribution is?
In an image with less correlated noise, this distribution would be much more symmetric.
A small change in the quantile will translate into a big change in the S/N value.
For example, see the difference between the 0.99, 0.95 and 0.90 quantiles with this command:
@example
$ aststatistics xdf-f160w_detsn.fits -hSKY_PSEUDODET_SN -c2 \
--quantile=0.99 --quantile=0.95 --quantile=0.90
@end example
We get a change of almost 2 units (which is very significant).
If you run NoiseChisel with @option{-P}, you'll see the default signal-to-noise quantile @option{--snquant} is 0.99.
In effect with this option you specify the purity level you want (contamination by false detections).
With the @command{aststatistics} command above, you see that a small number of extra false detections (impurity) in the final result causes a big change in completeness (you can detect more lower signal-to-noise true detections).
So let's loosen-up our desired purity level, remove the check-image options, and then mask the detected pixels like before to see if we have missed anything.
@example
$ astnoisechisel flat-ir/xdf-f160w.fits --kernel=kernel.fits \
--dthresh=0.1 --snquant=0.95
$ in="xdf-f160w_detected.fits -hINPUT-NO-SKY"
$ det="xdf-f160w_detected.fits -hDETECTIONS"
$ astarithmetic $in $det nan where --output=mask-det.fits
@end example
Overall it seems good, but if you play a little with the color-bar and look closer in the noise, you'll see a few very sharp, but faint, objects that have not been detected.
For example, the object around pixel (456, 1662).
Despite its high valued pixels, this object was lost because erosion ignores the precise pixel values.
Losing small/sharp objects like this only happens for under-sampled datasets like HST (where the pixel size is larger than the point spread function FWHM).
So this will not happen on ground-based images.
To address this problem of sharp objects, we can use NoiseChisel's @option{--noerodequant} option.
All pixels above this quantile will not be eroded, thus allowing us to preserve small/sharp objects (that cover a small area, but have a lot of signal in them).
Check its default value, then run NoiseChisel like below and make the mask again.
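For example, the default value can be checked by filtering the printed configuration (with @option{-P}, as mentioned for @option{--dthresh} above):
@example
$ astnoisechisel -P | grep noerodequant
@end example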
@example
$ astnoisechisel flat-ir/xdf-f160w.fits --kernel=kernel.fits \
--noerodequant=0.95 --dthresh=0.1 --snquant=0.95
@end example
This seems to be fine and the object above is now detected.
We will stop editing the configuration of NoiseChisel here, but please feel free to keep looking into the data to see if you can improve it even more.
Once you have found the proper configuration for the type of images you will be using you do not need to change them any more.
The same configuration can be used for any dataset that has been similarly produced (and has a similar noise pattern).
But entering all these options on every call to NoiseChisel is annoying and prone to bugs (for example, mistakenly typing the wrong value).
To simplify things, we will make a configuration file in a visible @file{config} directory.
Then we will define the hidden @file{.gnuastro} directory (that all Gnuastro's programs will look into for configuration files) as a symbolic link to the @file{config} directory.
Finally, we will write the finalized values of the options into NoiseChisel's standard configuration file within that directory.
We will also put the kernel in a separate directory to keep the top directory clean.
@example
$ mkdir kernel config
$ ln -s config/ .gnuastro
$ mv kernel.fits kernel/noisechisel.fits
$ echo "kernel kernel/noisechisel.fits" > config/astnoisechisel.conf
$ echo "noerodequant 0.95" >> config/astnoisechisel.conf
$ echo "dthresh 0.1" >> config/astnoisechisel.conf
$ echo "snquant 0.95" >> config/astnoisechisel.conf
@end example
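To confirm that NoiseChisel will actually pick these values up, you can print its final parsed configuration and filter the options we just set (run this from the top project directory, so the @file{.gnuastro} link is found):
@example
$ astnoisechisel -P | grep -e kernel -e noerodequant \
                           -e dthresh -e snquant
@end example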
@noindent
We are now ready to finally run NoiseChisel on the three filters and keep the output in a dedicated directory (which we will call @file{nc} for simplicity).
@example
$ rm *.fits
$ mkdir nc
$ for f in f105w f125w f160w; do \
astnoisechisel flat-ir/xdf-$f.fits --output=nc/xdf-$f.fits; \
done
@end example
@node NoiseChisel optimization for storage, Segmentation and making a catalog, NoiseChisel optimization for detection, General program usage tutorial
@subsection NoiseChisel optimization for storage
As we showed before (in @ref{NoiseChisel and Multi-Extension FITS files}), NoiseChisel's output is a multi-extension FITS file with several images the same size as the input.
As the input datasets get larger this output can become hard to manage and waste a lot of storage space.
Fortunately there is a solution to this problem (which is also useful for Segment's outputs).
In this small section we will take a short detour to show this feature.
Please note that the outputs generated here are not needed for the rest of the tutorial.
But first, let's have a look at the contents/HDUs and volume of NoiseChisel's output from @ref{NoiseChisel optimization for detection} (quick answer: it is larger than 100 megabytes):
@example
$ astfits nc/xdf-f160w.fits
$ ls -lh nc/xdf-f160w.fits
@end example
Two options can drastically decrease NoiseChisel's output file size: 1) With the @option{--rawoutput} option, NoiseChisel will not create a Sky-subtracted output.
After all, it is redundant: you can always generate it by subtracting the @code{SKY} extension from the input image (which you have in your database) using the Arithmetic program, as sketched below.
2) With the @option{--oneelempertile}, you can tell NoiseChisel to store its Sky and Sky standard deviation results with one pixel per tile (instead of many pixels per tile).
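For example, assuming the @code{SKY} extension has the full image size (as in the default run kept in @file{nc/}, in other words, without @option{--oneelempertile}), a minimal Arithmetic sketch to rebuild the Sky-subtracted image would be the following (the output name is arbitrary):
@example
$ astarithmetic flat-ir/xdf-f160w.fits nc/xdf-f160w.fits - \
                -h1 -hSKY --output=input-no-sky.fits
@end example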
So let's run NoiseChisel with these options, then have another look at the HDUs and the over-all file size:
@example
$ astnoisechisel flat-ir/xdf-f160w.fits --oneelempertile --rawoutput \
--output=nc-for-storage.fits
$ astfits nc-for-storage.fits
$ ls -lh nc-for-storage.fits
@end example
@noindent
See how @file{nc-for-storage.fits} has four HDUs, while @file{nc/xdf-f160w.fits} had five HDUs?
As explained above, the missing extension is @code{INPUT-NO-SKY}.
Also, look at the sizes of the @code{SKY} and @code{SKY_STD} HDUs: unlike before, they are not the same size as @code{DETECTIONS}; they only have one pixel for each tile (a group of pixels in the raw input).
Finally, you see that @file{nc-for-storage.fits} is just under 8 megabytes (while @file{nc/xdf-f160w.fits} was 100 megabytes)!
But we are not yet finished!
You can be even more efficient in storing, archiving or transferring NoiseChisel's output by compressing this file.
Try the command below to see how NoiseChisel's output has now shrunk to about 250 kilobytes, while keeping all the necessary information of the original 100 megabyte output.
@example
$ gzip --best nc-for-storage.fits
$ ls -lh nc-for-storage.fits.gz
@end example
We can get this wonderful level of compression because NoiseChisel's output is binary with only two values: 0 and 1.
Compression algorithms are highly optimized in such scenarios.
You can open @file{nc-for-storage.fits.gz} directly in SAO DS9 or feed it to any of Gnuastro's programs without having to decompress it.
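For example, listing the HDUs works directly on the compressed file:
@example
$ astfits nc-for-storage.fits.gz
@end example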
Higher-level programs that take NoiseChisel's output (for example, Segment or MakeCatalog) can also deal with this compressed image where the Sky and its Standard deviation are one pixel-per-tile.
You just have to give the ``values'' image as a separate option; for more, see @ref{Segment} and @ref{MakeCatalog}.
Segment (the program we will introduce in the next section for identifying sub-structure) also has similar features to optimize its output for storage.
Since this file was only created for a fast detour demonstration, let's keep our top directory clean and move to the next step:
@example
rm nc-for-storage.fits.gz
@end example
@node Segmentation and making a catalog, Measuring the dataset limits, NoiseChisel optimization for storage, General program usage tutorial
@subsection Segmentation and making a catalog
The main output of NoiseChisel is the binary detection map (@code{DETECTIONS} extension, see @ref{NoiseChisel optimization for detection}).
It only has two values: 1 or 0.
This is useful when studying the noise or background properties, but hardly of any use when you actually want to study the targets/galaxies in the image, especially in such a deep field where almost everything is connected.
To find the galaxies over the detections, we will use Gnuastro's @ref{Segment} program:
@example
$ mkdir seg
$ astsegment nc/xdf-f160w.fits -oseg/xdf-f160w.fits
$ astsegment nc/xdf-f125w.fits -oseg/xdf-f125w.fits
$ astsegment nc/xdf-f105w.fits -oseg/xdf-f105w.fits
@end example
Segment's operation is very much like NoiseChisel (in fact, prior to version 0.6, it was part of NoiseChisel).
For example, the output is a multi-extension FITS file (previously discussed in @ref{NoiseChisel and Multi-Extension FITS files}), it has check images and uses the undetected regions as a reference (previously discussed in @ref{NoiseChisel optimization for detection}).
Please have a look at Segment's multi-extension output to get a good feeling of what it has done.
Do not forget to flip through the extensions in the ``Cube'' window.
@example
$ astscript-fits-view seg/xdf-f160w.fits
@end example
Like NoiseChisel, the first extension is the input.
The @code{CLUMPS} extension shows the true ``clumps'' with values that are @mymath{\ge1}, and the diffuse regions labeled as @mymath{-1}.
Please flip between the first extension and the clumps extension and zoom-in on some of the clumps to get a feeling of what they are.
In the @code{OBJECTS} extension, we see that the large detections of NoiseChisel (that may have contained many galaxies) are now broken up into separate labels.
Play with the color-bar and hover your mouse over the various detections to see their different labels.
The clumps are not affected by the hard-to-deblend, low signal-to-noise diffuse regions, so they are more robust than objects for calculating the colors.
From this step onward, we will continue with clumps.
Having localized the regions of interest in the dataset, we are ready to do measurements on them with @ref{MakeCatalog}.
MakeCatalog is specialized and optimized for doing measurements over labeled regions of an image.
In other words, through MakeCatalog, you can ``reduce'' an image to a table (catalog of certain properties of objects in the image).
Each requested measurement (over each label) will be given a column in the output table.
To see the full set of available measurements, run it with @option{--help} like below (and scroll up); note that measurements are classified by context.
@example
$ astmkcatalog --help
@end example
So let's select the properties we want to measure in this tutorial.
First of all, we need to know which measurement belongs to which object or clump, so we will start with the @option{--ids} (read as: IDs@footnote{This option is plural because we need two ID columns for identifying ``clumps'' in the clumps catalog/table: the first column will be the ID of the host ``object'', and the second one will be the ID of the clump within that object. In the ``objects'' catalog/table, only a single column will be returned for this option.}).
We also want to measure (in this order) the Right Ascension (with @option{--ra}), Declination (@option{--dec}), magnitude (@option{--magnitude}), and signal-to-noise ratio (@option{--sn}) of the objects and clumps.
Furthermore, as mentioned above, we also want measurements on clumps, so we also need to call @option{--clumpscat}.
The following command will make these measurements on Segment's F160W output and write them in a catalog for each object and clump in a FITS table.
For more on the zero point, see @ref{Brightness flux magnitude}.
@example
$ mkdir cat
$ astmkcatalog seg/xdf-f160w.fits --ids --ra --dec --magnitude --sn \
--zeropoint=25.94 --clumpscat --output=cat/xdf-f160w.fits
@end example
@noindent
From the printed statements on the command-line, you see that MakeCatalog read all the extensions in Segment's output for the various measurements it needed.
Let's look at the output of the command above:
@example
$ astfits cat/xdf-f160w.fits
@end example
You will see that the output of MakeCatalog has two extensions.
The first extension shows the measurements over the objects (@code{OBJECTS}), and the second extension shows the measurements over the clumps (@code{CLUMPS}).
To calculate colors, we also need magnitude measurements on the other filters.
So let's repeat the command above on them, just changing the file names and zero point (which we got from the XDF survey web page):
@example
$ astmkcatalog seg/xdf-f125w.fits --ids --ra --dec --magnitude --sn \
--zeropoint=26.23 --clumpscat --output=cat/xdf-f125w.fits
$ astmkcatalog seg/xdf-f105w.fits --ids --ra --dec --magnitude --sn \
--zeropoint=26.27 --clumpscat --output=cat/xdf-f105w.fits
@end example
However, the galaxy properties might differ between the filters (which is the whole purpose behind observing in different filters!).
Also, the noise properties and depth of the datasets differ.
You can see the effect of these factors in the resulting clump catalogs, with Gnuastro's Table program.
We will go deep into working with tables in the next section, but in summary: the @option{-i} option will print information about the columns and number of rows.
To see the column values, just remove the @option{-i} option.
In the output of each command below, look at the @code{Number of rows:}, and note that they are different.
@example
$ asttable cat/xdf-f105w.fits -hCLUMPS -i
$ asttable cat/xdf-f125w.fits -hCLUMPS -i
$ asttable cat/xdf-f160w.fits -hCLUMPS -i
@end example
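If you just want to compare the row counts side by side, a small shell loop over the same commands does the job:
@example
$ for f in f105w f125w f160w; do \
      echo "$f:"; \
      asttable cat/xdf-$f.fits -hCLUMPS -i | grep rows; \
  done
@end example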
Matching the catalogs is possible (for example, with @ref{Match}).
However, the measurements of each column are also done on different pixels: the clump labels can/will differ from one filter to another for one object.
Please open them and focus on one object to see for yourself.
This can bias the result if you match catalogs.
An accurate color calculation can only be done when magnitudes are measured from the same pixels on all images and this can be done easily with MakeCatalog.
In fact this is one of the reasons that NoiseChisel or Segment do not generate a catalog like most other detection/segmentation software.
This gives you the freedom of selecting the pixels for measurement in any way you like (from other filters, other software, manually, etc.).
Fortunately in these images, the Point spread function (PSF) is very similar, allowing us to use a single labeled image output for all filters@footnote{When the PSFs between two images differ largely, you would have to PSF-match the images before using the same pixels for measurements.}.
The F160W image is deeper, thus providing better detection/segmentation, and redder, thus observing smaller/older stars and representing more of the mass in the galaxies.
We will thus use the F160W filter as a reference and use its segment labels to identify which pixels to use for which objects/clumps.
But we will do the measurements on the sky-subtracted F105W and F125W images (using MakeCatalog's @option{--valuesfile} option) as shown below.
Notice that the only difference between these calls and the call that generated the raw F160W catalog (excluding the zero point and the output name) is the @option{--valuesfile} option:
@example
$ astmkcatalog seg/xdf-f160w.fits --ids --ra --dec --magnitude --sn \
--valuesfile=nc/xdf-f125w.fits --zeropoint=26.23 \
--clumpscat --output=cat/xdf-f125w-on-f160w-lab.fits
$ astmkcatalog seg/xdf-f160w.fits --ids --ra --dec --magnitude --sn \
--valuesfile=nc/xdf-f105w.fits --zeropoint=26.27 \
--clumpscat --output=cat/xdf-f105w-on-f160w-lab.fits
@end example
After running the commands above, look into what MakeCatalog printed on the command-line.
You can see that (as requested) the object and clump pixel labels in both were taken from the respective extensions in @file{seg/xdf-f160w.fits}.
However, the pixel values and pixel Sky standard deviation were taken from @file{nc/xdf-f125w.fits} and @file{nc/xdf-f105w.fits} respectively.
Since we used the same labeled image on all filters, the number of rows in both catalogs are now identical.
Let's have a look:
@example
$ asttable cat/xdf-f105w-on-f160w-lab.fits -hCLUMPS -i
$ asttable cat/xdf-f125w-on-f160w-lab.fits -hCLUMPS -i
$ asttable cat/xdf-f160w.fits -hCLUMPS -i
@end example
Finally, MakeCatalog also does basic calculations on the full dataset (independent of each labeled region, but related to the whole dataset), for example, pixel area or per-pixel surface brightness limit.
They are stored as keywords in the FITS headers (or lines starting with @code{#} in plain text).
This, and other ways to measure the limits of your dataset, are discussed in the next section: @ref{Measuring the dataset limits}.
@node Measuring the dataset limits, Working with catalogs estimating colors, Segmentation and making a catalog, General program usage tutorial
@subsection Measuring the dataset limits
In @ref{Segmentation and making a catalog}, we created a catalog of the different objects within the image.
Before measuring colors, or doing any other kind of analysis on the catalogs (and detected objects), it is very important to understand the limitations of the dataset.
Without understanding the limitations of your dataset, you cannot make any physical interpretation of your results.
The theory behind the calculations discussed here is thoroughly introduced in @ref{Metameasurements on full input}.
For example, with the command below, let's sort all the detected clumps in the image by magnitude (with @option{--sort=magnitude}) and print the magnitude and signal-to-noise ratio (S/N; with @option{-cmagnitude,sn}):
@example
$ asttable cat/xdf-f160w.fits -hclumps -cmagnitude,sn \
--sort=magnitude --noblank=magnitude
@end example
As you see, we have clumps with a total magnitude of almost 32!
This is @emph{extremely faint}!
Can these be trusted?
Let's have a look at all of those with a magnitude between 31 and 32 with the command below.
We first use Table to only keep the relevant columns and rows, then use Gnuastro's DS9 region file creation script (@code{astscript-ds9-region}) to generate DS9 region files and open DS9:
@example
$ asttable cat/xdf-f160w.fits -hclumps -cra,dec \
--range=magnitude,31:32 \
| astscript-ds9-region -c1,2 --radius=0.5 \
--command="ds9 -mecube seg/xdf-f160w.fits -zscale"
@end example
Zoom-out a little and you will see some green circles (DS9 region files) in some regions of the image.
There actually does seem to be a true peak under the selected regions, but as you see, they are very small, diffuse and noisy.
How reliable are the measured magnitudes?
Using the S/N column from the first command above, you can see that such objects only have a signal-to-noise ratio of about 2.6 (which is indeed too low for most analysis purposes):
@example
$ asttable cat/xdf-f160w.fits -hclumps -csn \
--range=magnitude,31:32 | aststatistics
@end example
This brings us to the first method of quantifying your dataset's @emph{magnitude limit}, which is also sometimes called @emph{detection limit} (see @ref{Noise based magnitude limit of image}).
To estimate the @mymath{5\sigma} detection limit of your dataset, you simply report the median magnitude of the objects that have a signal-to-noise ratio of (approximately) five.
This is very easy to calculate with the command below:
@example
$ asttable cat/xdf-f160w.fits -hclumps --range=sn,4.8:5.2 -cmagnitude \
| aststatistics --median
29.9949
@end example
Let's have a look at these objects, to get a feeling of what these clumps look like:
@example
$ asttable cat/xdf-f160w.fits -hclumps --range=sn,4.8:5.2 \
-cra,dec,magnitude \
| astscript-ds9-region -c1,2 --namecol=3 \
--width=2 --radius=0.5 \
--command="ds9 -mecube seg/xdf-f160w.fits -zscale"
@end example
The number you see on top of each region is the clump's magnitude.
Please go over the objects and have a close look at them!
It is very important to have a feeling of what your dataset looks like, and how to interpret the numbers to associate an image with them.
@cindex Correlated noise
@cindex Noise (correlated)
Generally, they look very small, with different levels of diffuseness!
Those that are sharper make more visual sense (to be @mymath{5\sigma} detections), but the more diffuse ones extend over a larger area.
Furthermore, the noise is estimated on individual pixels.
However, during the reduction many exposures are co-added, mixing the pixels like a small convolution (creating ``correlated noise'').
Therefore you clearly see two main issues with the detection limit as defined above: it depends on the morphology, and it does not take into account the correlated noise.
@cindex Upper-limit
A more realistic way to estimate the significance of the detection is to take its footprint, randomly place it in thousands of undetected regions of the image and use that distribution as a reference.
This is technically known as upper-limit measurements.
For a full discussion, see @ref{Upper limit measurements}.
Since it is done for each object separately, the upper-limit measurements should be requested as extra columns in MakeCatalog's output.
For example, with the command below, let's generate a new catalog of the F160W filter, but with two extra columns compared to the one in @file{cat/}: the upper-limit magnitude and the upper-limit multiple of sigma.
@example
$ astmkcatalog seg/xdf-f160w.fits --ids --ra --dec --magnitude --sn \
--zeropoint=25.94 --clumpscat --upnsigma=3 \
--upperlimit-mag --upperlimit-sigma \
--output=xdf-f160w.fits
@end example
@noindent
Let's compare the upper-limit magnitude with the measured magnitude of each clump:
@example
$ asttable xdf-f160w.fits -hclumps -cmagnitude,upperlimit_mag
@end example
As you see, in almost all of the cases, the measured magnitude is sufficiently higher than the upper-limit magnitude.
Let's subtract the latter from the former to better see this difference in a third column:
@example
$ asttable xdf-f160w.fits -hclumps -cmagnitude,upperlimit_mag \
-c'arith upperlimit_mag magnitude -'
@end example
The ones with a positive third column (difference) show that the clump has sufficiently higher brightness than the noisy background to be usable.
Let's use Table's @ref{Column arithmetic} to find only those that have a negative difference:
@example
$ asttable xdf-f160w.fits -hclumps -cra,dec --noblankend=3 \
-c'arith upperlimit_mag magnitude - set-d d d 0 gt nan where'
@end example
@noindent
From more than 3500 clumps, this command only gave @mymath{\sim150} rows (this number may slightly change on different runs due to the random nature of the upper-limit sampling@footnote{You can fix the random number generator seed, so you always get the same sampling, see @ref{Generating random numbers}.})!
Let's have a look at them:
@example
$ asttable xdf-f160w.fits -hclumps -cra,dec --noblankend=3 \
-c'arith upperlimit_mag magnitude - set-d d d 0 gt nan where' \
| astscript-ds9-region -c1,2 --namecol=3 --width=2 \
--radius=0.5 \
--command="ds9 -mecube seg/xdf-f160w.fits -zscale"
@end example
You see that they are all extremely faint and diffuse/small peaks.
Therefore, if an object's magnitude is fainter than its upper-limit magnitude, you should not use the magnitude: it is not accurate!
You should use the upper-limit magnitude instead (with an arrow in your plots to mark which ones are upper-limits).
But the main point (in relation to the magnitude limit) with the upper-limit is the @code{UPPERLIMIT_SIGMA} column.
You can think of this as a @emph{realistic} S/N for extremely faint/diffuse/small objects.
The raw S/N column is simply calculated on a pixel-by-pixel basis; the upper-limit sigma, however, is produced by actually taking the label's footprint, randomly placing it thousands of times over undetected parts of the image, and measuring the sky brightness in each placement.
The clump's brightness is then divided by the standard deviation of the resulting distribution to give you exactly how significant it is (accounting for inter-pixel issues like correlated noise, which are strong in this dataset).
You can actually compare the two values with the command below:
@example
$ asttable xdf-f160w.fits -hclumps -csn,upperlimit_sigma
@end example
As you see, the second column (upper-limit sigma) is almost always less than the S/N.
This clearly shows the effect of correlated noise!
If you now use this column as the reference for deriving the magnitude limit, you will see that it shifts almost 0.5 magnitudes brighter and is now more reasonable:
@example
$ asttable xdf-f160w.fits -hclumps --range=upperlimit_sigma,4.8:5.2 \
-cmagnitude | aststatistics --median
29.6257
@end example
We see that the @mymath{5\sigma} detection limit is @mymath{\sim29.6}!
This is extremely deep!
For example, in the Legacy Survey@footnote{@url{https://www.legacysurvey.org/dr9/description}}, the @mymath{5\sigma} detection limit for @emph{point sources} is approximately 24.5 (5 magnitudes, or 100 times, shallower than this image).
As mentioned above, an important caveat in this simple calculation is that we should only be looking at point-like objects, not simply everything.
This is because the shape or radial slope of the profile has an important effect on this measurement: at the same total magnitude, a sharper object will have a higher S/N.
To be more precise, we should first perform star-galaxy separation, then do this only for the objects that are classified as stars.
A crude, first-order method is to use the @option{--axis-ratio} option so MakeCatalog also measures the axis ratio, then call Table with @option{--range=upperlimit_sigma,4.8:5.2} and @option{--range=axis_ratio,0.95:1} (in one command).
Please do this for yourself as an exercise to see the difference with the result above.
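In case you want a starting point, the sketch below shows the logic of that exercise; it assumes the respective output columns will be called @code{UPPERLIMIT_SIGMA} and @code{AXIS_RATIO} (confirm the names with @option{-i} before piping to Statistics):
@example
$ astmkcatalog seg/xdf-f160w.fits --ids --magnitude --axis-ratio \
               --zeropoint=25.94 --clumpscat --upnsigma=3 \
               --upperlimit-sigma --output=xdf-f160w.fits
$ asttable xdf-f160w.fits -hclumps -cmagnitude \
           --range=upperlimit_sigma,4.8:5.2 \
           --range=axis_ratio,0.95:1 | aststatistics --median
@end example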
@noindent
Before continuing, let's remove this temporarily produced catalog:
@example
$ rm xdf-f160w.fits
@end example
Another measure of the dataset's limit is the completeness limit.
This is necessary when you are looking at populations of objects over the image.
You want to know until what magnitude you can be sure that you have detected an object (if it was present).
Due to the different morphologies of the sources, the completeness limit differs from one type of object to another (for example, sharper objects are easier to detect/segment).
Therefore, the best way to measure the completeness limit is with mock images: to artificially insert your desired object in random undetected areas of the image many times and see how many times they are detected (for more, see @ref{Completeness limit for certain objects}).
But a crude, zeroth-order result (for the average object in the image) can be obtained from the actual image, by simply plotting the histogram of the magnitudes:
@example
$ aststatistics cat/xdf-f160w.fits -hclumps -cmagnitude
...
Histogram:
| *
| ** ****
| ***********
| *************
| ****************
| *******************
| **********************
| **************************
| *********************************
| *********************************************
|* * ** ** **********************************************************
|----------------------------------------------------------------------
@end example
@cindex Number count
This plot (the histogram of magnitudes; where fainter magnitudes are towards the right) is technically called the dataset's @emph{number count} plot.
You see that the number of objects increases as the magnitudes get fainter (towards the right).
However, beyond a certain magnitude, you see it becomes flat, and soon afterwards, the numbers suddenly drop.
Once you have your catalog, you can easily find this point with the two commands below.
First we generate a histogram with fewer bins (to have more numbers in each bin).
We then use AWK to find the magnitude bin where the number of points decreases compared to the previous bin.
But we only do this for bins that have more than 50 items (to avoid scatter in the bright end).
Finally, in Statistics, we have manually set the magnitude range and number of bins so each bin is roughly 0.5 magnitudes thick (with @option{--greaterequal=20}, @option{--lessthan=32} and @option{--numbins=24}).
@example
$ aststatistics cat/xdf-f160w.fits -hclumps -cmagnitude --histogram \
--greaterequal=20 --lessthan=32 --numbins=24 \
--output=f160w-hist.txt
$ asttable f160w-hist.txt \
| awk '$2>50 && $2<prev@{print prevbin; exit@} \
@{prev=$2; prevbin=$1@}'
28.932122667631
@end example
Therefore, to first order (and very crudely!) we can say that if an object is in our field of view and has a magnitude of @mymath{\sim29} or brighter, we can be highly confident that we have detected it.
But before continuing, let's clean up behind ourselves:
@example
$ rm f160w-hist.txt
@end example
Another important limiting parameter in a processed dataset is the surface brightness limit (@ref{Surface brightness limit of image}).
The surface brightness limit of a dataset is an important measure for extended structures (for example, when you want to look at the outskirts of galaxies).
In the next tutorial, we have thoroughly described the derivation of the surface brightness limit of a dataset.
So we will just show the final result here, and encourage you to follow up with that tutorial after finishing this one (see @ref{Image surface brightness limit}).
By default, MakeCatalog will estimate the surface brightness limit of a given dataset, and put it in the keywords of the output (all keywords starting with @code{SBL}, which is short for surface brightness limit):
@example
$ astfits cat/xdf-f160w.fits -h1 | grep SBL
@end example
As you see, the only one with a unit of @code{mag/arcsec^2} is @code{SBL}.
It contains the surface brightness limit of the input dataset over @code{SBLAREA} arcsec@mymath{^2} with @code{SBLNSIG} multiples of @mymath{\sigma}.
In the current version of Gnuastro, @code{SBLAREA=100} and @code{SBLNSIG=3}, so the surface brightness limit of this image is 32.66 mag/arcsec@mymath{^2} (@mymath{3\sigma}, over 100 arcsec@mymath{^2}).
Therefore, if this default area and multiple of sigma are fine for you@footnote{You can change these values with the @option{--sbl-area} and @option{--sbl-sigma} options.} (these are the most commonly used values), you can simply read the image surface brightness limit from the catalogs produced by MakeCatalog with this command:
@example
$ astfits cat/*.fits -h1 --keyvalue=SBL
@end example
@node Working with catalogs estimating colors, Column statistics color-magnitude diagram, Measuring the dataset limits, General program usage tutorial
@subsection Working with catalogs (estimating colors)
In the previous step we generated catalogs of objects and clumps over our dataset (see @ref{Segmentation and making a catalog}).
The catalogs are available in the two extensions of the single FITS file@footnote{MakeCatalog can also output plain text tables.
However, in the plain text format you can only have one table per file.
Therefore, if you also request measurements on clumps, two plain text tables will be created (suffixed with @file{_o.txt} and @file{_c.txt}).}.
Let's see the extensions and their basic properties with the Fits program:
@example
$ astfits cat/xdf-f160w.fits # Extension information
@end example
Let's inspect the table in each extension with Gnuastro's Table program (see @ref{Table}).
Note that we could also have used @option{-hOBJECTS} and @option{-hCLUMPS} instead of @option{-h1} and @option{-h2} respectively.
The numbers are just used here to show that both names and numbers are possible; in the next commands, we will just use names.
@example
$ asttable cat/xdf-f160w.fits -h1 --info # Objects catalog info.
$ asttable cat/xdf-f160w.fits -h1 # Objects catalog columns.
$ asttable cat/xdf-f160w.fits -h2 -i # Clumps catalog info.
$ asttable cat/xdf-f160w.fits -h2 # Clumps catalog columns.
@end example
@noindent
As you see above, when given a specific table (file name and extension), Table will print the full contents of all the columns.
To see the basic metadata about each column (for example, name, units and comments), simply append a @option{--info} (or @option{-i}) to the command.
To print the contents of specific column(s), just give the column number(s) (counting from @code{1}) or the column name(s) (if they have one) to the @option{--column} (or @option{-c}) option.
For example, if you just want the magnitude and signal-to-noise ratio of the clumps (in the clumps catalog), you can get them with any of the following commands:
@example
$ asttable cat/xdf-f160w.fits -hCLUMPS --column=5,6
$ asttable cat/xdf-f160w.fits -hCLUMPS -c5,SN
$ asttable cat/xdf-f160w.fits -hCLUMPS -c5 -c6
$ asttable cat/xdf-f160w.fits -hCLUMPS -cMAGNITUDE -cSN
@end example
@noindent
Similar to HDUs, when the columns have names, always use the name: it is so common to mis-write numbers or forget the order later!
Using column names instead of numbers has many advantages:
@enumerate
@item
You do not have to worry about the order of columns in the table.
@item
It acts as a documentation in the script.
@item
Column meta-data (including a name) are not just limited to FITS tables and can also be used in plain text tables, see @ref{Gnuastro text table format}.
@end enumerate
@noindent
Table also has tools to limit the displayed rows.
For example, with the first command below only rows with a magnitude in the range of 28 to 29 will be shown.
With the second command, you can further limit the displayed rows to rows with an S/N larger than 10 (a range between 10 to infinity).
You can further sort the output rows, only show the top (or bottom) N rows, etc., see @ref{Table} for more.
@example
$ asttable cat/xdf-f160w.fits -hCLUMPS --range=MAGNITUDE,28:29
$ asttable cat/xdf-f160w.fits -hCLUMPS \
--range=MAGNITUDE,28:29 --range=SN,10:inf
@end example
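For example, the command below sorts the clumps by their S/N and only shows the five highest (a quick sketch with Table's @option{--sort}, @option{--descending} and @option{--head} options):
@example
$ asttable cat/xdf-f160w.fits -hCLUMPS -cMAGNITUDE,SN \
           --sort=SN --descending --head=5
@end example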
Now that you are comfortable in viewing table columns and rows, let's look into merging columns of multiple tables into one table (which is necessary for measuring the color of the clumps).
Since @file{cat/xdf-f160w.fits} and @file{cat/xdf-f105w-on-f160w-lab.fits} have exactly the same number of rows and the rows correspond to the same clump, let's merge them to have one table with magnitudes in both filters.
We can merge columns with the @option{--catcolumnfile} option like below.
You give this option a file name (which is assumed to be a table that has the same number of rows as the main input), and all the table's columns will be concatenated/appended to the main table.
Now, try it out with the commands below.
We will first look at the metadata of the first table (only the @code{CLUMPS} extension).
With the second command, we will concatenate the two tables and write them into @file{two-in-one.fits}, and finally, we will check the new catalog's metadata.
@example
$ asttable cat/xdf-f160w.fits -i -hCLUMPS
$ asttable cat/xdf-f160w.fits -hCLUMPS --output=two-in-one.fits \
--catcolumnfile=cat/xdf-f125w-on-f160w-lab.fits \
--catcolumnhdu=CLUMPS
$ asttable two-in-one.fits -i
@end example
By comparing the two metadata, we see that both tables have the same number of rows.
But what might have attracted your attention more, is that @file{two-in-one.fits} has double the number of columns (as expected, after all, you merged both tables into one file, and did not ask for any specific column).
In fact you can concatenate any number of other tables in one command, for example:
@example
$ asttable cat/xdf-f160w.fits -hCLUMPS --output=three-in-one.fits \
--catcolumnfile=cat/xdf-f125w-on-f160w-lab.fits \
--catcolumnfile=cat/xdf-f105w-on-f160w-lab.fits \
--catcolumnhdu=CLUMPS --catcolumnhdu=CLUMPS
$ asttable three-in-one.fits -i
@end example
As you see, to avoid confusion in column names, Table has intentionally appended a @code{-1} to the column names of the first concatenated table if the column names are already present in the original table.
For example, we have the original @code{RA} column, and another one called @code{RA-1}.
Similarly a @code{-2} has been added for the columns of the second concatenated table.
However, this example clearly shows a problem with this full concatenation: some columns are identical (for example, @code{HOST_OBJ_ID} and @code{HOST_OBJ_ID-1}), or not needed (for example, @code{RA-1} and @code{DEC-1} which are not necessary here).
In such cases, you can use @option{--catcolumns} to only concatenate certain columns, not the whole table.
For example, this command:
@example
$ asttable cat/xdf-f160w.fits -hCLUMPS --output=two-in-one-2.fits \
--catcolumnfile=cat/xdf-f125w-on-f160w-lab.fits \
--catcolumnhdu=CLUMPS --catcolumns=MAGNITUDE
$ asttable two-in-one-2.fits -i
@end example
You see that we have now only appended the @code{MAGNITUDE} column of @file{cat/xdf-f125w-on-f160w-lab.fits}.
This is what we needed to be able to later subtract the magnitudes.
Let's go ahead and add the F105W magnitudes also with the command below.
Note how we need to call @option{--catcolumnhdu} once for every table that should be appended, but we only call @option{--catcolumns} once (assuming all the tables that should be appended have this column).
@example
$ asttable cat/xdf-f160w.fits -hCLUMPS --output=three-in-one-2.fits \
--catcolumnfile=cat/xdf-f125w-on-f160w-lab.fits \
--catcolumnfile=cat/xdf-f105w-on-f160w-lab.fits \
--catcolumnhdu=CLUMPS --catcolumnhdu=CLUMPS \
--catcolumns=MAGNITUDE
$ asttable three-in-one-2.fits -i
@end example
But we are not finished yet!
There is a very big problem: it is not immediately clear which one of @code{MAGNITUDE}, @code{MAGNITUDE-1} or @code{MAGNITUDE-2} columns belong to which filter!
Right now, you know this because you just ran this command.
But in one hour, you'll start doubting yourself and will be forced to go through your command history, trying to figure out if you added F105W first, or F125W.
You should never torture your future-self (or your colleagues) like this!
So, let's rename these confusing columns in the matched catalog.
Fortunately, with the @option{--colmetadata} option, you can correct the column metadata of the final table (just before it is written).
It takes four values:
1) the original column name or number,
2) the new column name,
3) the column unit and
4) the column comments.
Since the comments are usually human-friendly sentences and contain space characters, you should put them in double quotations like below.
For example, by adding three calls of this option to the previous command, we write the filter name in the magnitude column name and description.
@example
$ asttable cat/xdf-f160w.fits -hCLUMPS --output=three-in-one-3.fits \
--catcolumnfile=cat/xdf-f125w-on-f160w-lab.fits \
--catcolumnfile=cat/xdf-f105w-on-f160w-lab.fits \
--catcolumnhdu=CLUMPS --catcolumnhdu=CLUMPS \
--catcolumns=MAGNITUDE \
--colmetadata=MAGNITUDE,MAG-F160W,log,"Magnitude in F160W." \
--colmetadata=MAGNITUDE-1,MAG-F125W,log,"Magnitude in F125W." \
--colmetadata=MAGNITUDE-2,MAG-F105W,log,"Magnitude in F105W."
$ asttable three-in-one-3.fits -i
@end example
We now have all three magnitudes in one table and can start doing arithmetic on them (to estimate colors, which are just a subtraction of magnitudes).
To use column arithmetic, simply call the column selection option (@option{--column} or @option{-c}), put the value in single quotations and start the value with @code{arith} (followed by a space) like the example below.
Column arithmetic uses the same ``reverse polish notation'' as the Arithmetic program (see @ref{Reverse polish notation}), with almost all the same operators (see @ref{Arithmetic operators}), and some column-specific operators (that are not available for images).
In column-arithmetic, you can identify columns by number (prefixed with a @code{$}) or name, for more see @ref{Column arithmetic}.
So let's estimate one color from @file{three-in-one-3.fits} using column arithmetic.
All the commands below will produce the same output, try them each and focus on the differences.
Note that column arithmetic can be mixed with other ways to choose output columns (the @code{-c} option).
@example
$ asttable three-in-one-3.fits -ocolor-cat.fits \
-c1,2,3,4,'arith $5 $7 -'
$ asttable three-in-one-3.fits -ocolor-cat.fits \
-c1,2,RA,DEC,'arith MAG-F125W MAG-F160W -'
$ asttable three-in-one-3.fits -ocolor-cat.fits -c1,2 \
-cRA,DEC --column='arith MAG-F105W MAG-F160W -'
@end example
This example again highlights the important point on using column names: if you did not know the commands beforehand, you would have no way of making sense of the first command: what is in columns 5 and 7? Why not subtract columns 3 and 4 from each other?
Do you see how cryptic the first one is?
Then look at the last one: even if you have no idea how this table was created, you immediately understand the desired operation.
@strong{When you have column names, please use them.}
If your table does not have column names, give them names with the @option{--colmetadata} (described above) as you are creating them.
But how about the metadata for the column you just created with column arithmetic?
Have a look at the column metadata of the table produced above:
@example
$ asttable color-cat.fits -i
@end example
The name of the column produced by column arithmetic is @code{ARITH_1}!
This is natural: Arithmetic has no idea what the new column represents!
You could have multiplied two columns, or done much more complex transformations with many columns.
@emph{Metadata cannot be set automatically, your (the human) input is necessary.}
To add metadata, you can use @option{--colmetadata} like before:
@example
$ asttable three-in-one-3.fits -ocolor-cat.fits -c1,2,RA,DEC \
--column='arith MAG-F105W MAG-F160W -' \
--colmetadata=ARITH_1,F105W-F160W,log,"Magnitude difference"
$ asttable color-cat.fits -i
@end example
Sometimes, because of a particular way of storing data, you might need to take all input columns.
If there are many columns (for example, hundreds!), listing them (like above) will become annoying, error-prone and time-consuming.
In such cases, you can give @option{-c_all}.
Upon execution, @code{_all} will be replaced with a comma-separated list of all the input columns.
This allows you to add new columns easily, without having to worry about the number of input columns that you want anyway.
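For example, with the table from above, the sketch below keeps every existing column and appends one new color in a single command:
@example
$ asttable three-in-one-3.fits -ocolor-all.fits \
           -c_all -c'arith MAG-F105W MAG-F160W -'
@end example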
A lower-level but more customizable method is to use the @command{seq} (sequence) command with the @code{-s} (separator) option set to @code{','}.
For example, if you have 216 columns and only want to return columns 1 and 2 as well as all the columns between 12 to 58 (inclusive), you can use the command below:
@example
$ asttable table.fits -c1,2,$(seq -s',' 12 58)
@end example
We are now ready to make our final table.
We want it to have the magnitudes in all three filters, as well as the three possible colors.
Recall that by convention in astronomy colors are defined by subtracting the bluer magnitude from the redder magnitude.
In this way a larger color value corresponds to a redder object.
So from the three magnitudes, we can produce three colors (as shown below).
Also, because this is the final table we are creating here and want to use later, we will store it in @file{cat/}, give it a clear name, and use the @option{--range} option to only keep rows with a signal-to-noise ratio (@code{SN} column, from the F160W filter) above 5.
@example
$ asttable three-in-one-3.fits --range=SN,5,inf -c1,2,RA,DEC,SN \
-cMAG-F160W,MAG-F125W,MAG-F105W \
-c'arith MAG-F125W MAG-F160W -' \
-c'arith MAG-F105W MAG-F125W -' \
-c'arith MAG-F105W MAG-F160W -' \
--colmetadata=SN,SN-F160W,ratio,"F160W signal to noise ratio" \
--colmetadata=ARITH_1,F125W-F160W,log,"Color F125W-F160W." \
--colmetadata=ARITH_2,F105W-F125W,log,"Color F105W-F125W." \
--colmetadata=ARITH_3,F105W-F160W,log,"Color F105W-F160W." \
--output=cat/mags-with-color.fits
$ asttable cat/mags-with-color.fits -i
@end example
The table now has all the columns we need and it has the proper metadata to let us safely use it later (without frustrating over column orders!) or passing it to colleagues.
Let's finish this section of the tutorial with a useful tip on modifying column metadata.
Above, updating/changing column metadata was done with the @option{--colmetadata} in the same command that produced the newly created Table file.
But in many situations, the table is already made and you just want to update the metadata of one column.
In such cases using @option{--colmetadata} is overkill (wasting CPU, RAM and time if the table is large) because it will load the full table data and metadata into memory, just to change the metadata and write it back into a file.
In scenarios when the table's data does not need to be changed and you just want to set or update the metadata, it is much more efficient to use basic FITS keyword editing.
For example, in the FITS standard, column names are stored in the @code{TTYPE} header keywords, so let's have a look:
@example
$ asttable two-in-one.fits -i
$ astfits two-in-one.fits -h1 | grep TTYPE
@end example
Changing/updating the column names is as easy as updating the values to these keywords.
You do not need to touch the actual data!
With the command below, we will just update the @code{MAGNITUDE} and @code{MAGNITUDE-1} columns (which are respectively stored in the @code{TTYPE5} and @code{TTYPE11} keywords) by modifying the keyword values and checking the effect by listing the column metadata again:
@example
$ astfits two-in-one.fits -h1 \
--update=TTYPE5,MAG-F160W \
--update=TTYPE11,MAG-F125W
$ asttable two-in-one.fits -i
@end example
You can see that the column names have indeed been changed without touching any of the data.
You can do the same for the column units or comments by modifying the keywords starting with @code{TUNIT} or @code{TCOMM}.
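For example, assuming the two magnitude columns are still the fifth and eleventh (as with the @code{TTYPE} keywords above), their units can be set in exactly the same way:
@example
$ astfits two-in-one.fits -h1 --update=TUNIT5,log \
                              --update=TUNIT11,log
$ asttable two-in-one.fits -i
@end example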
Generally, Gnuastro's Table is a very useful program for data analysis and what you have seen so far is just the tip of the iceberg.
But to avoid making the tutorial even longer, we will stop reviewing the features here, for more, please see @ref{Table}.
Before continuing, let's just delete all the temporary FITS tables we placed in the top project directory:
@example
rm *.fits
@end example
@node Column statistics color-magnitude diagram, Aperture photometry, Working with catalogs estimating colors, General program usage tutorial
@subsection Column statistics (color-magnitude diagram)
In @ref{Working with catalogs estimating colors} we created a single catalog containing the magnitudes of our desired clumps in all three filters, and their colors.
To start with, let's inspect the distribution of the three colors with the Statistics program.
@example
$ aststatistics cat/mags-with-color.fits -cF105W-F125W
$ aststatistics cat/mags-with-color.fits -cF105W-F160W
$ aststatistics cat/mags-with-color.fits -cF125W-F160W
@end example
This tiny and cute ASCII histogram (and the general information printed above it) gives you a crude (but very useful and fast) feeling on the distribution.
You can later use Gnuastro's Statistics program with the @option{--histogram} option to build a much more fine-grained histogram as a table to feed into your favorite plotting program for a much more accurate/appealing plot (for example, with PGFPlots in @LaTeX{}).
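For example, a fine-grained histogram table of one color (ready to be loaded into your plotting program) can be built like this; the number of bins and output name are arbitrary choices:
@example
$ aststatistics cat/mags-with-color.fits -cF105W-F160W \
                --histogram --numbins=50 --output=color-hist.txt
@end example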
If you just want a specific measure, for example, the mean, median and standard deviation, you can ask for them specifically, like below:
@example
$ aststatistics cat/mags-with-color.fits -cF105W-F160W \
--mean --median --std
@end example
@cindex Color-magnitude diagram
The basic statistics we measured above were just on one column.
In many scenarios this is fine, but things get much more exciting if you look at the correlation of two columns with each other.
For example, let's create the color-magnitude diagram for our measured targets.
@cindex Scatter plot
@cindex 2D histogram
@cindex Plot, scatter
@cindex Histogram, 2D
In many papers, the color-magnitude diagram is usually plotted as a scatter plot.
However, scatter plots have a major limitation when there are a lot of points and they cluster together in one region of the plot: the possible correlation in that dense region is lost (because the points fall over each other).
In such cases, it is much better to use a 2D histogram.
In a 2D histogram, the full range in both columns is divided into discrete 2D bins (or pixels!) and we count how many objects fall in that 2D bin.
Since a 2D histogram is a pixelated space, we can simply save it as a FITS image and view it in a FITS viewer.
Let's do this in the command below.
As is common with color-magnitude plots, we will put the redder magnitude on the horizontal axis and the color on the vertical axis.
We will set both dimensions to have 100 bins (with @option{--numbins} for the horizontal and @option{--numbins2} for the vertical).
Also, to avoid strong outliers in any of the dimensions, we will manually set the range of each dimension with the @option{--greaterequal}, @option{--greaterequal2}, @option{--lessthan} and @option{--lessthan2} options.
@example
$ aststatistics cat/mags-with-color.fits -cMAG-F160W,F105W-F160W \
--histogram2d=image --manualbinrange \
--numbins=100 --greaterequal=22 --lessthan=30 \
--numbins2=100 --greaterequal2=-1 --lessthan2=3 \
--output=cmd.fits
@end example
@noindent
You can now open this FITS file as a normal FITS image, for example, with the command below.
Try hovering/zooming over the pixels: not only will you see the number of catalog objects that fall in each bin/pixel, but you will also see the @code{F160W} magnitude and color of that pixel (in the same place you usually see RA and Dec when hovering over an astronomical image).
@example
$ astscript-fits-view cmd.fits --ds9scale=minmax
@end example
Having a 2D histogram as a FITS image with WCS has many great advantages.
For example, just like FITS images of the night sky, you can ``match'' many 2D histograms that were created independently.
You can add two histograms with each other, or you can use advanced features of FITS viewers to find structure in the correlation of your columns.
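For example, if you had built a second 2D histogram with exactly the same binning (say, a hypothetical @file{cmd2.fits} from another catalog), a command like this should stack the two histograms pixel-by-pixel with Gnuastro's Arithmetic program:

@example
$ astarithmetic cmd.fits cmd2.fits + -g1 --output=cmd-sum.fits
@end example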
@noindent
With the first command below, you can activate the grid feature of DS9 to actually see the coordinate grid, as well as values on each line.
With the second command, DS9 will even read the labels of the axes and use them to generate an almost publication-ready plot.
@example
$ astscript-fits-view cmd.fits --ds9scale=minmax --ds9extra="-grid yes"
$ astscript-fits-view cmd.fits --ds9scale=minmax \
--ds9extra="-grid yes -grid type publication"
@end example
If you are happy with the grid, coloring and the rest, you can also use DS9 to save this as a JPEG image to directly use in your documents/slides with these extra DS9 options (DS9 will write the image to @file{cmd-2d.jpeg} and quit immediately afterwards):
@example
$ astscript-fits-view cmd.fits --ds9scale=minmax \
--ds9extra="-grid yes -grid type publication" \
--ds9extra="-saveimage cmd-2d.jpeg -quit"
@end example
@cindex PGFPlots (@LaTeX{} package)
This is good for a fast progress update.
But for your paper or a more official report, you want to show something with higher quality.
For that, you can use the PGFPlots package in @LaTeX{} to add axes in the same font as your text, sharp grids and many other elegant/powerful features (like over-plotting interesting points and lines).
But to load the 2D histogram into PGFPlots first you need to convert the FITS image into a more standard format, for example, PDF.
We will use Gnuastro's @ref{ConvertType} for this, and use the @code{sls-inverse} color map (which will map the pixels with a value of zero to white):
@example
$ astconvertt cmd.fits --colormap=sls-inverse --borderwidth=0 -ocmd.pdf
@end example
Open the resulting @file{cmd.pdf} to have a look.
Below you can see a minimally working example of how to add axis numbers, labels and a grid to the PDF generated above.
First, let's create a new @file{report-cmd} directory to keep the @LaTeX{} outputs, then put the minimal report's source in a file called @file{report.tex}.
Notice the @code{xmin}, @code{xmax}, @code{ymin}, @code{ymax} values and how they are the same as the range specified above.
@example
$ mkdir report-cmd
$ mv cmd.pdf report-cmd/
$ cat report-cmd/report.tex
\documentclass@{article@}
\usepackage@{pgfplots@}
\dimendef\prevdepth=0
\begin@{document@}
You can write all you want here...
\begin@{tikzpicture@}
\begin@{axis@}[
enlargelimits=false,
grid,
axis on top,
width=\linewidth,
height=\linewidth,
xlabel=@{Magnitude (F160W)@},
ylabel=@{Color (F105W-F160W)@}]
\addplot graphics[xmin=22, xmax=30, ymin=-1, ymax=3] @{cmd.pdf@};
\end@{axis@}
\end@{tikzpicture@}
\end@{document@}
@end example
@noindent
Run the commands below to build your PDF (assuming you have @LaTeX{} and PGFPlots).
@example
$ cd report-cmd
$ pdflatex report.tex
@end example
Open the newly created @file{report.pdf} and enjoy the exquisite quality.
The improved quality, blending in with the text, vector-graphics resolution and other features make this plot pleasing to the eye, and let your readers focus on the main point of your scientific argument.
PGFPlots can also build the PDF of the plot separately from the rest of the paper/report, see @ref{2D histogram as a table for plotting} for the necessary changes in the preamble.
We will not go much deeper into the Statistics program here, but there is so much more you can do with it.
After finishing the tutorial, see @ref{Statistics}.
@node Aperture photometry, Matching catalogs, Column statistics color-magnitude diagram, General program usage tutorial
@subsection Aperture photometry
The colors we calculated in @ref{Working with catalogs estimating colors} used a different segmentation map for each object.
This might not satisfy some science cases that need the flux within a fixed area/aperture.
Fortunately Gnuastro's modular programs make it very easy to do this type of measurement (photometry).
To do this, we can ignore the labeled images of NoiseChisel or Segment and simply build our own labeled image!
That labeled image can then be given to MakeCatalog.
@cindex GNU AWK
To generate the apertures catalog we will use Gnuastro's MakeProfiles (see @ref{MakeProfiles}).
But first we need a list of positions (aperture photometry needs a-priori knowledge of your target positions).
So we will first read the clump positions from the F160W catalog, then use AWK to set the other parameters of each profile to be a fixed circle of radius 5 pixels (recall that we want all apertures to have an identical size/area in this scenario).
@example
$ rm *.fits *.txt
$ asttable cat/xdf-f160w.fits -hCLUMPS -cRA,DEC \
| awk '!/^#/@{print NR, $1, $2, 5, 5, 0, 0, 1, NR, 1@}' \
> apertures.txt
$ cat apertures.txt
@end example
We can now feed this catalog into MakeProfiles using the command below to build the apertures over the image.
The most important option for this particular job is @option{--mforflatpix}: it tells MakeProfiles that the values in the magnitude column should be used for each pixel of a flat profile.
Without it, MakeProfiles would build the profiles such that the @emph{sum} of the pixels of each profile would have a @emph{magnitude} (in log-scale) of the value given in that column (what you would expect when simulating a galaxy for example).
See @ref{Invoking astmkprof} for details on the options.
@example
$ astmkprof apertures.txt --background=flat-ir/xdf-f160w.fits \
--clearcanvas --replace --type=int16 --mforflatpix \
--mode=wcs --output=apertures.fits
@end example
Open @file{apertures.fits} with a FITS image viewer (like SAO DS9) and look around at the circles placed over the targets.
Also open the input image and Segment's clumps image and compare them with the positions of these circles.
Where the apertures overlap, you will notice that one label has replaced the other (because of the @option{--replace} option).
In the future, MakeCatalog will be able to work with overlapping labels, but currently it does not.
If you are interested, please join us in completing Gnuastro with added improvements like this (see task 14750 @footnote{@url{https://savannah.gnu.org/task/index.php?14750}}).
We can now feed the @file{apertures.fits} labeled image into MakeCatalog instead of Segment's output as shown below.
In comparison with the previous MakeCatalog call, you will notice that there is no @option{--clumpscat} option any more: since there is no separate ``clump'' image now, each aperture is treated as a separate ``object''.
@example
$ astmkcatalog apertures.fits -h1 --zeropoint=26.27 \
--valuesfile=nc/xdf-f105w.fits \
--ids --ra --dec --magnitude --sn \
--output=cat/xdf-f105w-aper.fits
@end example
This catalog has the same number of rows as the catalog produced from clumps in @ref{Working with catalogs estimating colors}.
Therefore, similar to how we found colors, you can compare the aperture and clump magnitudes, for example.
You can also change the filter name and zero point magnitudes and run this command again to have the fixed aperture magnitude in the F160W filter and measure colors on apertures.
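For example, the F160W version of the command above should look like this (its zero point is 25.94, which we will also use later in this tutorial):

@example
$ astmkcatalog apertures.fits -h1 --zeropoint=25.94 \
               --valuesfile=nc/xdf-f160w.fits \
               --ids --ra --dec --magnitude --sn \
               --output=cat/xdf-f160w-aper.fits
@end example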
@node Matching catalogs, Reddest clumps cutouts and parallelization, Aperture photometry, General program usage tutorial
@subsection Matching catalogs
In the example above, we had the luxury to generate the catalogs ourselves, and were thus able to generate them in a way that the rows match.
But this is not generally the case.
In many situations, you need to use catalogs from many different telescopes, or catalogs with high-level calculations that you cannot simply regenerate with the same pixels without spending a lot of time or using heavy computation.
In such cases, when each catalog has the coordinates of its own objects, you can use the coordinates to match the rows with Gnuastro's Match program (see @ref{Match}).
As the name suggests, Gnuastro's Match program will match rows based on distance (or aperture in 2D) in one, two, or three columns.
For this tutorial, let's try matching the two catalogs that were not created from the same labeled images, recall how each has a different number of rows:
@example
$ asttable cat/xdf-f105w.fits -hCLUMPS -i
$ asttable cat/xdf-f160w.fits -hCLUMPS -i
@end example
You give Match two catalogs (from the two different filters we derived above) as arguments, and the HDUs containing them (if they are FITS files) with the @option{--hdu} and @option{--hdu2} options.
The @option{--ccol1} and @option{--ccol2} options specify the coordinate-columns which should be matched with which in the two catalogs.
With @option{--aperture} you specify the acceptable error (radius in 2D), in the same units as the columns.
@example
$ astmatch cat/xdf-f160w.fits cat/xdf-f105w.fits \
--hdu=CLUMPS --hdu2=CLUMPS \
--ccol1=RA,DEC --ccol2=RA,DEC \
--aperture=0.5/3600 \
--output=matched.fits
$ astfits matched.fits
@end example
From the second command, you see that the output has two extensions and that both have the same number of rows.
The rows in each extension are the matched rows of the respective input table: those in the first HDU come from the first input and those in the second HDU come from the second.
However, their order may be different from the input tables because the rows are matched: the first row in the first HDU corresponds to the first row in the second HDU, and so on.
You can also see which objects did not match with the @option{--notmatched} option, like below.
Note how each extension now has a different number of rows.
@example
$ astmatch cat/xdf-f160w.fits cat/xdf-f105w.fits \
--hdu=CLUMPS --hdu2=CLUMPS \
--ccol1=RA,DEC --ccol2=RA,DEC \
--aperture=0.5/3600 \
--output=not-matched.fits --notmatched
$ astfits not-matched.fits
@end example
The @option{--outcols} of Match is a very convenient feature: you can use it to specify which columns from the two catalogs you want in the output (merge two input catalogs into one).
If the first character is an `@key{a}', the respective matched column (number or name, similar to Table above) in the first catalog will be written in the output table.
When the first character is a `@key{b}', the respective column from the second catalog will be written in the output.
Also, if the first character is followed by @code{_all}, then all the columns from the respective catalog will be put in the output.
@example
$ astmatch cat/xdf-f160w.fits cat/xdf-f105w.fits \
--hdu=CLUMPS --hdu2=CLUMPS \
--ccol1=RA,DEC --ccol2=RA,DEC \
--aperture=0.35/3600 \
--outcols=a_all,bMAGNITUDE,bSN \
--output=matched.fits
$ astfits matched.fits
@end example
@node Reddest clumps cutouts and parallelization, FITS images in a publication, Matching catalogs, General program usage tutorial
@subsection Reddest clumps, cutouts and parallelization
@cindex GNU AWK
As a final step, let's go back to the original clumps-based color measurement we generated in @ref{Working with catalogs estimating colors}.
We will find the objects with the strongest color, make cutouts to inspect them visually, and finally see how they are positioned on the image.
With the command below, we will select the reddest objects (those with a color larger than 1.5):
@example
$ asttable cat/mags-with-color.fits --range=F105W-F160W,1.5,inf
@end example
@noindent
You can see how many there are by piping the output to @command{wc -l}:
@example
$ asttable cat/mags-with-color.fits --range=F105W-F160W,1.5,inf | wc -l
@end example
Let's crop the F160W image around each of these objects, but we first need a unique identifier for them.
We will define this identifier using the object and clump labels (with an underscore between them) and feed the output of the command above to AWK to generate a catalog.
Note that since we are making a plain text table, we will define the necessary (for the string-type first column) metadata manually (see @ref{Gnuastro text table format}).
@example
$ echo "# Column 1: ID [name, str10] Object ID" > cat/reddest.txt
$ asttable cat/mags-with-color.fits --range=F105W-F160W,1.5,inf \
| awk '@{printf("%d_%-10d %f %f\n", $1, $2, $3, $4)@}' \
>> cat/reddest.txt
@end example
@cindex DS9
@cindex SAO DS9
Let's see how these objects are positioned over the dataset.
DS9 has the ``region'' concept for exactly this purpose, and you can easily build such regions from a table with Gnuastro's @command{astscript-ds9-region} installed script, using the command below:
@example
$ astscript-ds9-region cat/reddest.txt -c2,3 --mode=wcs \
--command="ds9 flat-ir/xdf-f160w.fits -zscale"
@end example
We can now feed @file{cat/reddest.txt} into Gnuastro's Crop program to get separate postage stamps for each object.
To keep things clean, we will make a directory called @file{crop-red} and ask Crop to save the crops in this directory.
We will also add a @file{-f160w.fits} suffix to the crops (to remind us which filter they came from).
The width of the crops will be 15 arc-seconds (or 15/3600 degrees, which are the units of the WCS).
@example
$ mkdir crop-red
$ astcrop flat-ir/xdf-f160w.fits --mode=wcs --namecol=ID \
--catalog=cat/reddest.txt --width=15/3600,15/3600 \
--suffix=-f160w.fits --output=crop-red
@end example
Like the MakeProfiles command in @ref{Aperture photometry}, if you look at the order of the crops, you will notice that the crops are not made in order!
This is because each crop is independent of the rest, therefore crops are done in parallel, and parallel operations are asynchronous.
So the order can differ in each run, but the final output is the same!
In the command above, you can change @file{f160w} to @file{f105w} to make the crops in both filters.
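For example, the F105W version of the same command should be:

@example
$ astcrop flat-ir/xdf-f105w.fits --mode=wcs --namecol=ID \
          --catalog=cat/reddest.txt --width=15/3600,15/3600 \
          --suffix=-f105w.fits --output=crop-red
@end example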
You can see all the cropped FITS files in the @file{crop-red} directory with this command:
@example
$ astscript-fits-view crop-red/*.fits
@end example
To view the crops more easily (not having to open ds9 for each image), you can convert the FITS crops into the JPEG format with a shell loop like below.
@example
$ cd crop-red
$ for f in *.fits; do \
astconvertt $f --fluxlow=-0.001 --fluxhigh=0.005 --invert -ojpg; \
done
$ cd ..
$ ls crop-red/
@end example
You can now use your general graphic user interface image viewer to flip through the images more easily, or import them into your papers/reports.
@cindex GNU Make
@cindex @file{Makefile}
The @code{for} loop above to convert the images will do the job in series: each file is converted only after the previous one is complete.
But like the crops, each JPEG image is independent, so let's parallelize it.
In other words, we want to run more than one instance of the command at any moment.
To do that, we will use @url{https://en.wikipedia.org/wiki/Make_(software), Make}.
Make is a very wonderful pipeline management system, and the most common and powerful implementation is @url{https://www.gnu.org/software/make, GNU Make}, which has a complete manual just like this one.
We cannot go into the details of Make here; for a hands-on video tutorial, see this @url{https://peertube.stream/w/iJitjS3r232Z8UPMxKo6jq, video introduction}.
To do the process above in Make, please copy the contents below into a plain-text file called @file{Makefile}.
Just replace the @code{__[TAB]__} part at the start of the line with a single press of the `@key{TAB}' key on your keyboard.
@example
jpgs=$(subst .fits,.jpg,$(wildcard *.fits))
all: $(jpgs)
$(jpgs): %.jpg: %.fits
__[TAB]__astconvertt $< --fluxlow=-0.001 --fluxhigh=0.005 \
__[TAB]__ --invert -o$@
@end example
Now that the @file{Makefile} is ready, you can run Make on 12 threads using the command below.
Feel free to replace the 12 with any number of threads you have on your system (you can find out by running the @command{nproc} command on GNU/Linux operating systems):
@example
$ make -j12
@end example
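For example, to automatically use all the threads that are available on your system, you can let the shell substitute the output of @command{nproc} into the option value:

@example
$ make -j$(nproc)
@end example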
@noindent
Did you notice how much faster this one was?
When possible, it is always very helpful to do your analysis in parallel.
You can build very complex workflows with Make (for example, see Akhlaghi @url{https://arxiv.org/abs/2006.03018, 2021}), so it is worth spending some time to master it.
@node FITS images in a publication, Marking objects for publication, Reddest clumps cutouts and parallelization, General program usage tutorial
@subsection FITS images in a publication
In the previous section (@ref{Reddest clumps cutouts and parallelization}), we visually inspected the positions of the reddest objects using DS9.
That is very good for an interactive inspection of the objects: you can zoom in and out, you can do measurements, etc.
Once the experimentation phase of your project is complete, you want to show these objects over the whole image in a report, paper or slides.
One solution is to use DS9 itself!
For example, run the @command{astscript-fits-view} command of the previous section to open DS9 with the regions over-plotted.
Click on the ``File'' menu and select ``Save Image''.
In the side-menu that opens, you have multiple formats to select from.
Usually for publications, we want to show the regions and text (in the colorbar) in vector graphics, so it is best to export to EPS.
Once you have made the EPS, you can then convert it to PDF with the @command{epspdf} command.
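For example, assuming you saved DS9's output as @file{xdf.eps} (a hypothetical file name), the conversion would simply be:

@example
$ epspdf xdf.eps xdf.pdf
@end example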
Another solution is to use Gnuastro's ConvertType program.
The main difference is that DS9 is a Graphical User Interface (GUI) program, so it takes relatively long (about a second) to load, and it requires many dependencies.
This will slow-down automatic conversion of many files, and will make your code hard to move to another operating system.
DS9 does have a command-line interface that you can use to automate the creation of each file; however, it has a very peculiar command-line interface and formats (like the ``region'' files).
ConvertType, on the other hand, has no graphic interface, so it has very few dependencies, it is fast, and it takes normal tables (in plain-text or FITS) as input.
So in this concluding step of the analysis, let's build a nice publication-ready plot, showing the positions of the reddest objects in the image for our paper.
In @ref{Reddest clumps cutouts and parallelization}, we already used ConvertType to make JPEG postage stamps.
Here, we will use it to make a PDF image of the whole deep region.
To start, let's simply run ConvertType on the F160W image:
@example
$ astconvertt flat-ir/xdf-f160w.fits -oxdf.pdf
@end example
Open the output in a PDF viewer. You see that it is almost fully black!
Let's see why this happens!
First, with the two commands below, let's calculate the maximum value, and the standard deviation of the sky in this image (using NoiseChisel's output, which we found at the end of @ref{NoiseChisel optimization for detection}).
Note that NoiseChisel writes the median sky standard deviation @emph{before} interpolation in the @code{MEDSTD} keyword of the @code{SKY_STD} HDU.
This is more robust than the median of the Sky standard deviation image (which has gone through interpolation).
@example
$ max=$(aststatistics nc/xdf-f160w.fits -hINPUT-NO-SKY --maximum)
$ skystd=$(astfits nc/xdf-f160w.fits -hSKY_STD --keyvalue=MEDSTD -q)
$ echo $max $skystd
58.8292 0.000410282
$ echo $max $skystd | awk '@{print $1/$2@}'
143387
@end example
@noindent
In the last command above, we divided the maximum by the sky standard deviation.
You see that the maximum value is more than @mymath{140000} times larger than the noise level!
On the other hand, common monitors or printers usually have a maximum dynamic range of 8 bits, only allowing for @mymath{2^8=256} layers.
This is therefore the maximum number of ``layers'' you can have in common display formats like JPEG, PDF or PNG!
Dividing the result above by 256, we get a layer spacing of
@example
$ echo $max $skystd | awk '@{print $1/$2/256@}'
560.106
@end example
In other words, the first layer (which is black) will contain all the pixel values below @mymath{\sim560}!
So all pixels with a signal-to-noise ratio lower than @mymath{\sim560} will have a black color since they fall in the first layer of an 8-bit PDF (or JPEG) image.
This happens because by default we are assuming a linear mapping from floating point to 8-bit integers.
@cindex Surface Brightness
To fix this, we should move to a different mapping.
A good, physically motivated, mapping is Surface Brightness (which is in log-scale, see @ref{Brightness flux magnitude}).
Fortunately this is very easy to do with Gnuastro's Arithmetic program, as shown in the commands below (using the known zero point@footnote{@url{https://archive.stsci.edu/prepds/xdf/#science-images}}, and after calculating the pixel area in units of arcsec@mymath{^2}):
@example
$ zeropoint=25.94
$ pixarcsec2=$(astfits nc/xdf-f160w.fits --pixelareaarcsec2)
$ astarithmetic nc/xdf-f160w.fits $zeropoint $pixarcsec2 counts-to-sb \
--output=xdf-f160w-sb.fits
@end example
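Conceptually, the @code{counts-to-sb} conversion above is just the standard surface brightness definition: for a pixel value @mymath{p} (in counts), a pixel area @mymath{A} (in arcsec@mymath{^2}) and a zero point @mymath{z}, the surface brightness is @mymath{SB=-2.5\log_{10}(p/A)+z} (in units of mag/arcsec@mymath{^2}).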
@noindent
With the two commands below, first, let's look at the dynamic range of the image now (dividing the maximum by the minimum), and then let's open the image and have a look at it:
@example
$ aststatistics xdf-f160w-sb.fits --minimum --maximum
$ astscript-fits-view xdf-f160w-sb.fits
@end example
@noindent
The good news is that the dynamic range has now decreased to about 2!
In other words, we can distribute the 256 layers of an 8-bit display over a much smaller range of values, and therefore better visualize the data.
However, there are two important points to consider from the output of the first command and a visual inspection of the second.
@itemize
@item
The largest pixel value (faintest surface brightness level) in the image is @mymath{\sim43}!
This is far too low to be realistic, and is just due to noise.
As discussed in @ref{Measuring the dataset limits}, the @mymath{3\sigma} surface brightness limit of this image, over 100 arcsec@mymath{^2} is roughly 32.66 mag/arcsec@mymath{^2}.
@item
You see many NaN pixels in between the galaxies!
These are due to the fact that the magnitude is defined on a logarithmic scale and the logarithm of a negative number is not defined.
@end itemize
In other words, we should replace all NaN pixels, and pixels with a surface brightness fainter than the image surface brightness limit, with this limit.
With the first command below, we will first extract the surface brightness limit from the catalog headers that we calculated before, and then call Arithmetic to use this limit.
@example
$ sblimit=$(astfits cat/xdf-f160w.fits --keyvalue=SBL -q)
$ astarithmetic nc/xdf-f160w.fits $zeropoint $pixarcsec2 \
counts-to-sb set-sb \
sb sb $sblimit gt sb isblank or $sblimit where \
--output=xdf-f160w-sb.fits
@end example
@noindent
Let's convert this image into a PDF with the command below:
@example
$ astconvertt xdf-f160w-sb.fits --output=xdf-f160w-sb.pdf
@end example
It is much better now and we can visualize many features of the FITS file (from the central structures of the galaxies and stars, to a little into the noise and their low surface brightness features).
However, the image generally looks a little too gray!
This is because of that bright star in the bottom half of the image!
Stars are very sharp!
So let's manually tell ConvertType to set any pixel with a value less than (brighter than) 20 to black (and not use the minimum).
We do this with the @option{--fluxlow} option:
@example
$ astconvertt xdf-f160w-sb.fits --output=xdf-f160w-sb.pdf --fluxlow=20
@end example
We are still missing some of the diffuse flux in this PDF.
This is because of those negative pixels that were set to NaN.
To better show these structures, we should warp the image to larger pixels.
So let's warp it to a pixel grid where the new pixels are @mymath{4\times4} larger than the original pixels.
But be careful: warping should be done on the original image, not on the surface brightness image.
We should re-calculate the surface brightness image after the warping is done.
This is because @mymath{\log(a+b)\ne \log(a)+\log(b)}: the surface brightness calculation involves a logarithm, while warping involves addition of pixel values.
@example
$ astwarp nc/xdf-f160w.fits --scale=1/4 --centeroncorner \
--output=xdf-f160w-warped.fits
$ pixarcsec2=$(astfits xdf-f160w-warped.fits --pixelareaarcsec2)
$ astarithmetic xdf-f160w-warped.fits $zeropoint $pixarcsec2 \
counts-to-sb set-sb \
sb sb $sblimit gt sb isblank or $sblimit where \
--output=xdf-f160w-sb.fits
$ astconvertt xdf-f160w-sb.fits --output=xdf-f160w-sb.pdf --fluxlow=20
@end example
Above, we needed to re-calculate the pixel area of the warped image, but we did not need to re-calculate the surface brightness limit!
The reason is that the surface brightness limit is independent of the pixel area (in its derivation, the pixel area has been accounted for).
As a side-effect of the warping, the number of pixels in the image also dramatically decreased, therefore the volume of the output PDF (in bytes) is also smaller, making your paper/report easier to upload/download or send by email.
This visual resolution is still more than enough for including on top of a column in your paper!
@cartouche
@noindent
@strong{I do not have the zero point of my image:} The absolute value of the zero point is irrelevant for the finally produced PDF.
We used it here because it was available and makes the numbers physically understandable.
If you do not have the zero point, just set it to zero (which is also the default zero point used by MakeCatalog when it estimates the surface brightness limit).
For the value of @option{--fluxlow} above, you can simply subtract @mymath{\sim10} from the surface brightness limit.
@end cartouche
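For instance, here is a minimal sketch of that approach on the shell (re-using the @code{sblimit} variable from above, with AWK doing the subtraction):

@example
$ fluxlow=$(echo $sblimit | awk '@{print $1-10@}')
$ astconvertt xdf-f160w-sb.fits --output=xdf-f160w-sb.pdf \
              --fluxlow=$fluxlow
@end example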
@noindent
To summarize, and to keep the image for the next section in a separate directory, here are the necessary commands:
@example
$ zeropoint=25.94
$ mkdir report-image
$ cd report-image
$ sblimit=$(astfits ../cat/xdf-f160w.fits --keyvalue=SBL -q)
$ astwarp ../nc/xdf-f160w.fits --scale=1/4 --centeroncorner \
          --output=warped.fits
$ pixarcsec2=$(astfits warped.fits --pixelareaarcsec2)
$ astarithmetic warped.fits $zeropoint $pixarcsec2 \
counts-to-sb set-sb \
sb sb $sblimit gt sb isblank or $sblimit where \
--output=sb.fits
$ astconvertt sb.fits --output=sb.pdf --fluxlow=20
@end example
@noindent
Finally, let's remove all the temporary files we built in the top-level tutorial directory:
@example
$ rm ../*.fits ../*.pdf
@end example
@cartouche
@noindent
@strong{Color images:} In this tutorial we just used one of the filters and showed the surface brightness image of that single filter as a grayscale image.
But the image can also be in color (using three filters) to better convey the physical properties of the objects in your image.
To create an image that shows the full dynamic range of your data, see this dedicated tutorial @ref{Color images with full dynamic range}.
@end cartouche
@node Marking objects for publication, Writing scripts to automate the steps, FITS images in a publication, General program usage tutorial
@subsection Marking objects for publication
In @ref{FITS images in a publication} we created a ready-to-print visualization of the FITS image used in this tutorial.
However, you rarely want to show a naked image like that!
You usually want to highlight some objects (that are the target of your science) over the image and show different marks for the various types of objects you are studying.
In this tutorial, we will do just that: select a sub-set of the full catalog of clumps, and show them with different mark shapes and colors, while also adding some text under each mark.
To add coordinates on the edges of the figure in your paper, see @ref{Annotations for figure in paper}.
To start with, let's put a red plus sign over the sub-sample of reddest clumps similar to @ref{Reddest clumps cutouts and parallelization}.
First, we will need to make the table of marks.
We will choose those with a color stronger than 1.5 magnitudes and a signal-to-noise ratio (in F160W) larger than 5.
We also only need the RA, Dec, color and magnitude (in F160W) columns (recall that at the end of the previous section we were already in the @file{report-image/} directory):
@example
$ asttable cat/mags-with-color.fits --range=F105W-F160W,1.5:inf \
           --range=sn-f160w,5:inf -cRA,DEC,MAG-F160W,F105W-F160W \
--output=reddest-cat.fits
@end example
Gnuastro's ConvertType program also has features to add marks over the finally produced PDF.
Below, we will start with the same @command{astconvertt} command of the previous section.
The positions of the marks should be given as a table to the @option{--marks} option.
Two other options are also mandatory: @option{--markcoords} identifies the columns that contain the coordinates of each mark and @option{--mode} specifies if the coordinates are in image or WCS coordinates.
@example
$ astconvertt sb.fits --output=reddest.pdf --fluxlow=20 \
--marks=reddest-cat.fits --mode=wcs \
--markcoords=RA,DEC
@end example
Open the output @file{reddest.pdf} and see the result.
You will see relatively thick red circles placed over the given coordinates.
In your PDF browser, zoom in to one of the marks: while the pixels of the background image become larger, the lines of these marks do not degrade!
This is the concept/power of Vector Graphics: ideal for publication!
For more on raster (pixelated) and vector (infinite-resolution) graphics, see @ref{Raster and Vector graphics}.
We had planned to put a plus-sign on each object.
However, because we did not explicitly ask for a certain shape, ConvertType put a circle.
Each mark can have its own separate shape.
Shapes can be given by a name or a code.
The full list of available shape names and codes is given in the description of the @option{--markshape} option in @ref{Drawing with vector graphics}.
To use a different shape, we need to add a new column to the base table, containing the identifier of the desired shape for each mark.
For example, the code for the plus sign is @code{2}.
With the commands below, we will add a new column with this fixed value.
With the first AWK command we will make a single-column file, where all the rows have the same value.
We pipe our base table into AWK, so it has the same number of rows.
With the second command, we concatenate (or append) the new column with Table, and give this new column the name @code{SHAPE} (to easily refer to it later and not have to count).
With the third command, we clean up after ourselves (deleting the extra @file{params.txt} file).
Finally, we use the @option{--markshape} option to tell ConvertType which column to use for the shape identifier.
@example
$ asttable reddest-cat.fits | awk '@{print 2@}' > params.txt
$ asttable reddest-cat.fits --catcolumnfile=params.txt \
--colmetadata=5,SHAPE,id,"Shape of mark" \
--output=reddest-marks.fits
$ rm params.txt
$ astconvertt sb.fits --output=reddest.pdf --fluxlow=20 \
--marks=reddest-marks.fits --mode=wcs \
--markcoords=RA,DEC --markshape=SHAPE
@end example
Open the PDF and have a look!
You do see red signs over the coordinates, but the thick plus-signs only become visible after you zoom in multiple times!
To make them larger, you can give another column to specify the size of each mark.
Let's set the full width of the plus sign to extend 3 arcseconds.
The commands are similar to above, try to follow the difference (in particular, how we use @option{--sizeinarcsec}).
@example
$ asttable reddest-cat.fits | awk '@{print 2, 3@}' > params.txt
$ asttable reddest-cat.fits --catcolumnfile=params.txt \
--colmetadata=5,SHAPE,id,"Shape of mark" \
--colmetadata=6,SIZE,arcsec,"Size in arcseconds" \
--output=reddest-marks.fits
$ rm params.txt
$ astconvertt sb.fits --output=reddest.pdf --fluxlow=20 \
--marks=reddest-marks.fits --mode=wcs \
--markcoords=RA,DEC --markshape=SHAPE \
--marksize=SIZE --sizeinarcsec
@end example
The power of this methodology is that each mark can be completely different!
For example, let's show the objects with a color less than 2 magnitudes with a circle, and those with a stronger color with a plus (recall that the code for a circle was @code{1} and that of a plus was @code{2}).
You only need to replace the first command above with the one below.
Afterwards, run the rest of the commands in the last code-block.
@example
$ asttable reddest-cat.fits -cF105W-F160W \
| awk '@{if($1<2) shape=1; else shape=2; print shape, 3@}' \
> params.txt
@end example
Have a look at the resulting @file{reddest.pdf}.
You see that the circles are much larger than the plus signs.
This is because the ``size'' of a cross is defined to be its full width, but for a circle, the value in the size column is the radius.
The way each shape interprets the value of the size column is fully described under @option{--markshape} of @ref{Drawing with vector graphics}.
To make them more comparable, let's set the circle sizes to be half of the cross sizes.
@example
$ asttable reddest-cat.fits -cF105W-F160W \
| awk '@{if($1<2) @{shape=1; size=1.5@} \
else @{shape=2; size=3@} \
print shape, size@}' \
> params.txt
@end example
Let's make things a little more complex (and show more information in the visualization) by using color.
Gnuastro recognizes the full @url{https://en.wikipedia.org/wiki/Web_colors#Extended_colors, extended web colors}, for their full list (containing names and codes) see @ref{Vector graphics colors}.
But like everything else, an even easier way to view and select the color for your figure is on the command-line!
If your terminal supports 24-bit true-color, you can see all the colors by running this command (supported on modern GNU/Linux distributions):
@example
$ astconvertt --listcolors
@end example
We will give a ``sienna'' color to the objects that are fainter than 29th magnitude and a ``deeppink'' color to the brighter ones (while keeping the same shape definitions as before).
Since there are many colors, using their codes can make the table hard for a human to read!
So let's use the color names instead of the color codes in the example below (this is also useful in other columns that require strings, like the font name).
The only intricacy is in the making of @file{params.txt}.
Recall that string columns need column metadata (@ref{Gnuastro text table format}).
In this particular case, since the string column is the last one, we can safely use AWK's @code{print} command.
But if you have multiple string columns, to be safe it is better to use AWK's @code{printf} and explicitly specify the number of characters in the string columns.
@example
$ asttable reddest-cat.fits -cF105W-F160W,MAG-F160W \
| awk 'BEGIN@{print "# Column 3: COLOR [name, str8]"@}\
@{if($1<2) @{shape=1; size=1.5@} \
else @{shape=2; size=3@} \
if($2>29) @{color="sienna"@} \
else @{color="deeppink"@} \
print shape, size, color@}' \
> params.txt
$ asttable reddest-cat.fits --catcolumnfile=params.txt \
--colmetadata=5,SHAPE,id,"Shape of mark" \
--colmetadata=6,SIZE,arcsec,"Size in arcseconds" \
--output=reddest-marks.fits
$ rm params.txt
$ astconvertt sb.fits --output=reddest.pdf --fluxlow=20 \
--marks=reddest-marks.fits --mode=wcs \
--markcoords=RA,DEC --markshape=SHAPE \
--marksize=SIZE --sizeinarcsec --markcolor=COLOR
@end example
As one final example, let's write the magnitude of each object under it.
Since the magnitude is already in the @file{reddest-marks.fits} table that we produced above, it is very easy to add it (just add the @option{--marktext} option to ConvertType):
@example
$ astconvertt sb.fits --output=reddest.pdf --fluxlow=20 \
--marks=reddest-marks.fits --mode=wcs \
--markcoords=RA,DEC --markshape=SHAPE \
--marksize=SIZE --sizeinarcsec \
--markcolor=COLOR --marktext=MAG-F160W
@end example
Open the final PDF (@file{reddest.pdf}) and you will see the magnitudes written under each mark in the same color.
In the case of magnitudes (where the magnitude error is usually much larger than 0.01 magnitudes), four decimals are not too meaningful.
By default, for printing floating point columns, we use the compiler's default precision (which is about 4 digits for 32-bit floating point numbers).
But you can over-write this (to only show two digits after the decimal point) with the @option{--marktextprecision=2} option.
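For example, repeating the last ConvertType command with this option:

@example
$ astconvertt sb.fits --output=reddest.pdf --fluxlow=20 \
              --marks=reddest-marks.fits --mode=wcs \
              --markcoords=RA,DEC --markshape=SHAPE \
              --marksize=SIZE --sizeinarcsec \
              --markcolor=COLOR --marktext=MAG-F160W \
              --marktextprecision=2
@end example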
You can customize the written text by specifying a different line-width (for the text, different from the main mark), or even specifying a different font for each mark!
You can see the full list of available fonts for the text under a mark with the first command below and with the second, you can actually see them in a custom PDF (to only show the fonts).
@example
$ astconvertt --listfonts
$ astconvertt --showfonts
@end example
As you see, there are many ways you can customize each mark!
The above examples were just the tip of the iceberg!
But this section has already become long so we will stop it here (see the box at the end of this section for yet another useful example).
Like above, each feature of a mark can be controlled with a column in the table of mark information.
Please see in @ref{Drawing with vector graphics} for the full list of columns/features that you can use.
@cartouche
@noindent
@strong{Drawing ellipses:} With the commands below, you can measure the elliptical properties of the objects and visualize them in a ready-to-publish PDF (we will only show the ellipses of the largest clumps):
@example
$ astmkcatalog ../seg/xdf-f160w.fits --ra --dec --semi-major \
--axis-ratio --position-angle --clumpscat \
--output=ellipseinfo.fits
$ asttable ellipseinfo.fits -hCLUMPS | awk '@{print 4@}' > params.txt
$ asttable ellipseinfo.fits -hCLUMPS --catcolumnfile=params.txt \
--range=SEMI_MAJOR,10,inf -oellipse-marks.fits \
--colmetadata=6,SHAPE,id,"Shape of mark"
$ astconvertt sb.fits --output=ellipse.pdf --fluxlow=20 \
--marks=ellipse-marks.fits --mode=wcs \
--markcoords=RA,DEC --markshape=SHAPE \
--marksize=SEMI_MAJOR,AXIS_RATIO --sizeinpix \
--markrotate=POSITION_ANGLE
@end example
@end cartouche
@cindex Point (Vector graphics; PostScript)
To conclude this section, let us highlight an important factor to consider in vector graphics.
In ConvertType, things like line width or font size are defined in units of @emph{points}.
In vector graphics standards, 72 points correspond to one inch.
Therefore, one way you can change these factors for all the objects is to assign a larger or smaller print size to the image.
The print size is just a meta-data entry, and will not affect the file's volume in bytes!
You can do this with the @option{--widthincm} option.
Try adding this option and giving it very different values like @code{5} or @code{30}.
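For example, try appending @option{--widthincm=5} to the last ConvertType command above (and then @option{--widthincm=30}), and compare the apparent thickness of the marks and text in the two outputs:

@example
$ astconvertt sb.fits --output=reddest.pdf --fluxlow=20 \
              --marks=reddest-marks.fits --mode=wcs \
              --markcoords=RA,DEC --markshape=SHAPE \
              --marksize=SIZE --sizeinarcsec \
              --markcolor=COLOR --marktext=MAG-F160W \
              --widthincm=5
@end example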
@node Writing scripts to automate the steps, Citing Gnuastro, Marking objects for publication, General program usage tutorial
@subsection Writing scripts to automate the steps
In the previous sub-sections, we went through a series of steps like downloading the necessary datasets (in @ref{Setup and data download}), detecting the objects in the image, and finally selecting a particular subset of them to inspect visually (in @ref{Reddest clumps cutouts and parallelization}).
To benefit most effectively from this subsection, please go through the previous sub-sections; if you have not actually done them, we recommend doing/running them before continuing here.
@cindex @command{history}
@cindex Shell history
Each sub-section/step of the sub-sections above involved several commands on the command-line.
Therefore, if you want to reproduce the previous results (for example, to only change one part, and see its effect), you'll have to go through all the sections above and read through them again.
If you have run the commands recently, you may also have them in the history of your shell (command-line environment).
You can see many of your previous commands on the shell (even if you have closed the terminal) with the @command{history} command, like this:
@example
$ history
@end example
@cindex GNU Bash
Try it in your terminal to see for yourself.
By default in GNU Bash, it shows the last 500 commands.
You can also save this ``history'' of previous commands to a file using shell redirection (to have it after your next 500 commands), with this command:
@example
$ history > my-previous-commands.txt
@end example
This is a good way to temporarily keep track of every single command you ran.
But in the middle of all the useful commands, you will have many extra commands, like tests that you did before/after the good output of a step (that you decided to continue working on), or an unrelated job you had to do in the middle of this project.
Because of these impurities, after a few days (when you have forgotten the context: tests you did not end up using, or unrelated jobs), reading this full history will be very frustrating.
Keeping the final commands that were used in each step of an analysis is a common problem for anyone who is doing something serious with the computer.
But simply keeping the most important commands in a text file is not enough: the small steps in the middle (like making a directory to keep the outputs of one step) are also important.
In other words, the only way you can be sure that you are under control of your processing (and actually understand how you produced your final result) is to run the commands automatically.
@cindex Shell script
@cindex Script, shell
Fortunately, typing commands interactively with your fingers is not the only way to operate the shell.
The shell can also take its orders/commands from a plain-text file, which is called a @emph{script}.
When given a script, the shell will read it line-by-line as if you have actually typed it manually.
@cindex Redirection in shell
@cindex Shell redirection
Let's continue with an example: try typing the commands below in your shell.
With these commands we are making a text file (@code{a.txt}) containing a simple @mymath{3\times3} matrix, converting it to a FITS image and computing its basic statistics.
After the first three commands, open @file{a.txt} with a text editor to actually see the values we wrote in it, and after the fourth, open the FITS file to see the matrix as an image.
@file{a.txt} is created through the shell's redirection feature: `@code{>}' overwrites the existing contents of a file, and `@code{>>}' appends the new contents after the old contents.
@example
$ echo "1 1 1" > a.txt
$ echo "1 2 1" >> a.txt
$ echo "1 1 1" >> a.txt
$ astconvertt a.txt --output=a.fits
$ aststatistics a.fits
@end example
To automate these series of commands, you should put them in a text file.
But that text file must have two special features:
1) It should tell the shell what program should interpret the script.
2) The operating system should know that the file can be directly executed.
@cindex Shebang
@cindex Hashbang
For the first, Unix-like operating systems define the @emph{shebang} concept (also known as @emph{sha-bang} or @emph{hashbang}).
In the shebang convention, the first two characters of a file should be `@code{#!}'.
When confronted with these characters, the script will be interpreted with the program that follows them.
In this case, we want to write a shell script and the most common shell program is GNU Bash which is installed in @file{/bin/bash}.
So the first line of your script should be `@code{#!/bin/bash}'@footnote{
When the script is to be run by the same shell that is calling it (like this script), the shebang is optional.
But it is still recommended, because it ensures that even if the user is not using GNU Bash, the script will be run in GNU Bash: given the differences between various shells, writing truly portable shell scripts, that can be run by many shell programs/implementations, is not easy (sometimes not possible!).}.
It may happen (rarely) that GNU Bash is in another location on your system.
In other cases, you may prefer to use a non-standard version of Bash installed in another location (that has higher priority in your @code{PATH}, see @ref{Installation directory}).
In such cases, you can use the `@code{#!/usr/bin/env bash}' shebang instead.
Through the @code{env} program, this shebang will look in your @code{PATH} and use the first @command{bash} it finds to run your script.
But for simplicity in the rest of the tutorial, we will continue with the `@code{#!/bin/bash}' shebang.
Using your favorite text editor, make a new empty file, let's call it @file{my-first-script.sh}.
Write the GNU Bash shebang (above) as its first line.
After the shebang, copy the series of commands we ran above.
Just note that the `@code{$}' sign at the start of every line above is the prompt of the interactive shell (you never actually typed it, remember?).
Therefore, commands in a shell script should not start with a `@code{$}'.
Once you add the commands, close the text editor and run the @command{cat} command to confirm its contents.
It should look like the example below.
Recall that you should only type the line that starts with a `@code{$}'; the lines without a `@code{$}' are printed automatically on the command-line (they are the contents of your script).
@example
$ cat my-first-script.sh
#!/bin/bash
echo "1 1 1" > a.txt
echo "1 2 1" >> a.txt
echo "1 1 1" >> a.txt
astconvertt a.txt --output=a.fits
aststatistics a.fits
@end example
@cindex File flags
@cindex Flags, file
The script contents are now ready, but to run it, you should activate the script file's @emph{executable flag}.
In Unix-like operating systems, every file has three types of flags: @emph{read} (or @code{r}), @emph{write} (or @code{w}) and @emph{execute} (or @code{x}).
To toggle a file's flags, you should use the @command{chmod} (for ``change mode'') command.
To activate a flag, you put a `@code{+}' before the flag character (for example, @code{+x}).
To deactivate it, you put a `@code{-}' (for example, @code{-x}).
In this case, you want to activate the script's executable flag, so you should run
@example
$ chmod +x my-first-script.sh
@end example
Your script is now ready to run/execute the series of commands.
To run it, you should call it while specifying its location in the file system.
Since you are currently in the same directory as the script, it is easiest to use relative addressing like below (where `@code{./}' means the current directory).
But before running your script, first delete the two @file{a.txt} and @file{a.fits} files that were created when you interactively ran the commands.
@example
$ rm a.txt a.fits
$ ls
$ ./my-first-script.sh
$ ls
@end example
@noindent
The script immediately prints the statistics while doing all the previous steps in the background.
With the last @command{ls}, you see that it automatically re-built the @file{a.txt} and @file{a.fits} files, open them and have a look at their contents.
An extremely useful feature of shell scripts is that the shell will ignore anything after a `@code{#}' character.
You can thus add descriptions/comments to the commands and make them much more useful for the future.
For example, after adding comments, your script might look like this:
@example
$ cat my-first-script.sh
#!/bin/bash
# This script is my first attempt at learning to write shell scripts.
# As a simple series of commands, I am just building a small FITS
# image, and calculating its basic statistics.
# Write the matrix into a file.
echo "1 1 1" > a.txt
echo "1 2 1" >> a.txt
echo "1 1 1" >> a.txt
# Convert the matrix to a FITS image.
astconvertt a.txt --output=a.fits
# Calculate the statistics of the FITS image.
aststatistics a.fits
@end example
@noindent
Is this not much easier to read now?
Comments help to provide human-friendly context to the raw commands.
At the time you make a script, comments may seem like an extra effort and slow you down.
But in one year, you will forget almost everything about your script and you will appreciate the effort so much!
Think of the comments as an email to your future-self and always put a well-written description of the context/purpose (most importantly, things that are not directly clear by reading the commands) in your scripts.
The example above was a very basic and mostly redundant series of commands, just to show the basic concepts behind scripts.
You can put any (arbitrarily long and complex) series of commands in a script by following the two rules: 1) add a shebang, and 2) enable the executable flag.
In fact, as you continue your own research projects, you will find that any time you are dealing with more than two or three commands, keeping them in a script (and modifying that script, and running it) is much easier, and more future-proof, than typing the commands directly on the command-line and relying on things like @command{history}.
As a more realistic example, let's have a look at a script that will do the steps of @ref{Setup and data download} and @ref{Dataset inspection and cropping}.
In particular note how often we are using variables to avoid repeating fixed strings of characters (usually file/directory names).
This greatly helps in scaling up your project, and avoiding hard-to-find bugs that are caused by typos in those fixed strings.
@example
$ cat gnuastro-tutorial-1.sh
#!/bin/bash
# Download the input datasets
# ---------------------------
#
# The default file names have this format (where `FILTER' differs for
# each filter):
# hlsp_xdf_hst_wfc3ir-60mas_hudf_FILTER_v1_sci.fits
# To make the script easier to read, a prefix and suffix variable are
# used to sandwich the filter name into one short line.
dldir=download
xdfsuffix=_v1_sci.fits
xdfprefix=hlsp_xdf_hst_wfc3ir-60mas_hudf_
xdfurl=http://archive.stsci.edu/pub/hlsp/xdf
# The file name and full URLs of the input data.
f105w_in=$xdfprefix"f105w"$xdfsuffix
f160w_in=$xdfprefix"f160w"$xdfsuffix
f105w_url=$xdfurl/$f105w_in
f160w_url=$xdfurl/$f160w_in
# Go into the download directory and download the images there,
# then come back up to the top running directory.
mkdir $dldir
cd $dldir
wget $f105w_url
wget $f160w_url
cd ..
# Only work on the deep region
# ----------------------------
#
# To help in readability, each vertex of the deep/flat field is stored
# as a separate variable. They are then merged into one variable to
# define the polygon.
flatdir=flat-ir
vertice1="53.187414,-27.779152"
vertice2="53.159507,-27.759633"
vertice3="53.134517,-27.787144"
vertice4="53.161906,-27.807208"
f105w_flat=$flatdir/xdf-f105w.fits
f160w_flat=$flatdir/xdf-f160w.fits
deep_polygon="$vertice1:$vertice2:$vertice3:$vertice4"
mkdir $flatdir
astcrop --mode=wcs -h0 --output=$f105w_flat \
--polygon=$deep_polygon $dldir/$f105w_in
astcrop --mode=wcs -h0 --output=$f160w_flat \
          --polygon=$deep_polygon $dldir/$f160w_in
@end example
The first thing you may notice is that even if you already have the downloaded input images, this script will always try to re-download them.
Also, if you re-run the script, you will notice that @command{mkdir} prints an error message that the download directory already exists.
Therefore, the script above is not too useful and some modifications are necessary to make it more generally useful.
Here are some general tips that are often very useful when writing scripts:
@table @strong
@item Stop script if a command crashes
By default, if a command in a script crashes (aborts and fails to do what it was meant to do), the script will continue onto the next command.
In GNU Bash, you can tell the shell to stop a script in the case of a crash by adding this line at the start of your script:
@example
set -e
@end example
@item Check if a file/directory exists to avoid re-creating it
Conditionals are a very useful feature in scripts.
One common conditional is to check if a file exists or not.
Assuming the file's name is @file{FILENAME}, you can check its existence (to avoid re-doing the commands that build it) like this:
@example
if [ -f FILENAME ]; then
echo "FILENAME exists"
else
# Some commands to generate the file
echo "done" > FILENAME
fi
@end example
To check the existence of a directory instead of a file, use @code{-d} instead of @code{-f}.
To negate a conditional, use `@code{!}', and note that conditionals can also be written in one line (useful when they are short).
One common scenario where you'll need to check the existence of a directory is when you are making it: the default @command{mkdir} command will crash if the desired directory already exists.
On some systems (including GNU/Linux distributions), @command{mkdir} has options to deal with such cases; but if you want your script to be portable, it is best to do the check yourself, like below:
@example
if ! [ -d DIRNAME ]; then mkdir DIRNAME; fi
@end example
@item Avoid changing directories (with `@code{cd}') within the script
You can directly read and write files within other directories.
Therefore using @code{cd} to enter a directory (like what we did above, around the @code{wget} commands), running commands there, and coming out again is extra work and not good practice.
This is because the running directory is part of the environment of a command.
You can simply give the directory name before the input and output file names to use them from anywhere on the file system.
See the same @code{wget} commands below for an example.
@end table
@cartouche
@noindent
@strong{Copyright notice:} A very important thing to put @emph{at the top} of your script is a one-line description of what it does and its copyright information (see the example below).
Here, we specify who the author(s) of the script are, in which years, and under what license others are allowed to use this file.
Without it, your script will not have credibility or identity, and others cannot trust, use or acknowledge your work on it.
Since Gnuastro is itself licensed under a @url{https://en.wikipedia.org/wiki/Copyleft,copyleft} license (see @ref{Your rights} and @ref{GNU General Public License}, or GNU GPL; the license finishes with a template on how to add it), any script that uses Gnuastro should also have a copyleft license: we recommend the same GNU GPL v3+, like below.
@end cartouche
Taking the above points into consideration, we can write a better version of the script above.
Please compare this script with the previous one carefully to spot the differences.
These are very important points that you will definitely encounter during your own research, and knowing them can greatly help your productivity, so pay close attention (even in the comments).
@example
#!/bin/bash
# Script to download and keep the deep region of the XDF survey.
#
# Copyright (C) 2025 Your Name <yourname@@email.company>
# Copyright (C) 2021-2025 Initial Author <incase@@there-is.any>
#
# This script is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# This script is distributed in the hope that it will be useful, but
# WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
# General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Gnuastro. If not, see <http://www.gnu.org/licenses/>.
# Abort the script in case of an error.
set -e
# Download the input datasets
# ---------------------------
#
# The default file names have this format (where `FILTER' differs for
# each filter):
# hlsp_xdf_hst_wfc3ir-60mas_hudf_FILTER_v1_sci.fits
# To make the script easier to read, a prefix and suffix variable are
# used to sandwich the filter name into one short line.
dldir=download
xdfsuffix=_v1_sci.fits
xdfprefix=hlsp_xdf_hst_wfc3ir-60mas_hudf_
xdfurl=http://archive.stsci.edu/pub/hlsp/xdf
# The file name and full URLs of the input data.
f105w_in=$xdfprefix"f105w"$xdfsuffix
f160w_in=$xdfprefix"f160w"$xdfsuffix
f105w_url=$xdfurl/$f105w_in
f160w_url=$xdfurl/$f160w_in
# Make sure the download directory exists, and download the images.
if ! [ -d $dldir ]; then mkdir $dldir; fi
if ! [ -f $dldir/$f105w_in ]; then wget $f105w_url -O $dldir/$f105w_in; fi
if ! [ -f $dldir/$f160w_in ]; then wget $f160w_url -O $dldir/$f160w_in; fi
# Crop out the deep region
# ------------------------
#
# To help in readability, each vertex of the deep/flat field is stored
# as a separate variable. They are then merged into one variable to
# define the polygon.
flatdir=flat-ir
vertice1="53.187414,-27.779152"
vertice2="53.159507,-27.759633"
vertice3="53.134517,-27.787144"
vertice4="53.161906,-27.807208"
f105w_flat=$flatdir/xdf-f105w.fits
f160w_flat=$flatdir/xdf-f160w.fits
deep_polygon="$vertice1:$vertice2:$vertice3:$vertice4"
if ! [ -d $flatdir ]; then mkdir $flatdir; fi
if ! [ -f $f105w_flat ]; then
    astcrop --mode=wcs -h0 --output=$f105w_flat \
            --polygon=$deep_polygon $dldir/$f105w_in
fi
if ! [ -f $f160w_flat ]; then
    astcrop --mode=wcs -h0 --output=$f160w_flat \
            --polygon=$deep_polygon $dldir/$f160w_in
fi
@end example
@node Citing Gnuastro, , Writing scripts to automate the steps, General program usage tutorial
@subsection Citing Gnuastro
After your exciting project is done and you are ready to publish/present your paper, poster or slides, you should cite all the software that you have used.
Armed with citations, the developers of the research software that you have used will be able to secure future funding to continue working on that software and improving it for you and the community.
So please take a few minutes to visit the software's webpage and find what resources the authors have listed for you to cite.
In Gnuastro, we have tried to make things as simple as possible for you: all Gnuastro programs have a @option{--cite} option that describes the relevant resources to cite along with BibTeX entries that you can simply copy-paste into your publication.
For example try the commands below (and read the outputs carefully):
@example
$ asttable --cite
$ astmkcatalog --cite
$ astnoisechisel --cite
@end example
@noindent
You will see that while the first two entries are the same for all three programs, MakeCatalog and NoiseChisel each add a different third entry, and Table only has those first two.
This is because MakeCatalog and NoiseChisel have dedicated papers that should be cited along with the main one (for all Gnuastro users), but we haven't had time to write a dedicated paper for Table.
So please read the output of all the programs you have used carefully and cite the relevant resources in your publications.
For the full library of Gnuastro's citable resources, see @url{https://ui.adsabs.harvard.edu/public-libraries/0QdYMuVCQdmygEh0Vs_4Ew} and to see Gnuastro's own acknowledgments (institutes, grants and people who made Gnuastro possible), see @ref{Acknowledgments and short history}.
@node Detecting large extended targets, Building the extended PSF, General program usage tutorial, Tutorials
@section Detecting large extended targets
The outer wings of large and extended objects can sink into the noise very gradually and can have a large variety of shapes (for example, due to tidal interactions).
Therefore separating the outer boundaries of the galaxies from the noise can be particularly tricky.
Besides causing an under-estimation in the total estimated brightness of the target, failure to detect such faint wings will also cause a bias in the noise measurements, thereby hampering the accuracy of any measurement on the dataset.
Therefore even if they do not constitute a significant fraction of the target's light, or are not your primary target, these regions must not be ignored.
In this tutorial, we will walk you through the strategy of detecting such targets using @ref{NoiseChisel}.
@cartouche
@noindent
@strong{Do not start with this tutorial:} If you have not already completed @ref{General program usage tutorial}, we strongly recommend going through that tutorial before starting this one.
Basic features like access to this book on the command-line, the configuration files of Gnuastro's programs, benefiting from the modular nature of the programs, viewing multi-extension FITS files, or using NoiseChisel's outputs are discussed in more detail there.
@end cartouche
@cindex M51
@cindex NGC5195
@cindex SDSS, Sloan Digital Sky Survey
@cindex Sloan Digital Sky Survey, SDSS
We will try to detect the faint tidal wings of the beautiful M51 group@footnote{@url{https://en.wikipedia.org/wiki/M51_Group}} in this tutorial.
We will use a dataset/image from the public @url{http://www.sdss.org/, Sloan Digital Sky Survey}, or SDSS.
Due to its more peculiar low surface brightness structure/features, we will focus on the dwarf companion galaxy of the group (or NGC 5195).
@menu
* Downloading and validating input data:: How to get and check the input data.
* NoiseChisel optimization:: Detect the extended and diffuse wings.
* Skewness caused by signal and its measurement:: How signal changes the distribution.
* Image surface brightness limit:: Standards to quantify the noise level.
* Achieved surface brightness level:: Calculate the outer surface brightness.
* Extract clumps and objects:: Find sub-structure over the detections.
@end menu
@node Downloading and validating input data, NoiseChisel optimization, Detecting large extended targets, Detecting large extended targets
@subsection Downloading and validating input data
To get the image, you can use the @url{https://skyserver.sdss.org/dr18/SearchTools/IQS, Imaging search} tool of SDSS.
As long as it is covered by the SDSS, you can find an image containing your desired target either by providing a standard name (if it has one), or its coordinates.
To access the dataset we will use here, under the ``position constraints'' section, select ``cone''.
Fill in the declination (47.194444) and right ascension (202.467917) of NGC5195 in the respective boxes and click on ``Submit''.
@cartouche
@noindent
@strong{Type the example commands:} Try to type the example commands on your terminal and use the history feature of your command-line (by pressing the ``up'' button to retrieve previous commands).
Do not simply copy and paste the commands shown here.
This will help simulate future situations when you are processing your own datasets.
@end cartouche
@cindex GNU Wget
You can see the list of available filters by clicking on the ``Get Wget'' file button at the bottom of the page.
For this demonstration, we will use the r-band filter image.
The original version of this tutorial was written for the previous (DR12) version of SDSS's interface.
Fortunately, while the web interface has changed, the DR12 data sets have not been removed.
To ensure reproducibility, let's use the same data set that this tutorial was initially written with.
You can do that by running the following command to download it with GNU Wget@footnote{To make the command easier to view on screen or in a page, we have defined the top URL of the image as the @code{topurl} shell variable.
You can just replace @code{$topurl} in the @command{wget} command with the value of this variable.}.
To keep things clean, let's also put it in a directory called @file{ngc5195}.
With the @option{-O} option, we are asking Wget to save the downloaded file with a more manageable name: @file{r.fits.bz2} (an r-band image of NGC 5195, matching the directory name).
@example
$ mkdir ngc5195
$ cd ngc5195
$ topurl=https://dr12.sdss.org/sas/dr12/boss/photoObj/frames
$ wget $topurl/301/3716/6/frame-r-003716-6-0117.fits.bz2 -O r.fits.bz2
@end example
When you want to reproduce a previous result (a known analysis, on a known dataset, to get a known result: like the case here!) it is important to verify that the file is correct: that the input file has not changed (on the remote server, or in your own archive), or there was no downloading problem.
Otherwise, if the data have changed in your server/archive, and you use the same script, you will get a different result, causing a lot of confusion!
@cindex Checksum
@cindex SHA-1 checksum
@cindex Verification, checksum
One good way to verify the contents of a file is to store its @emph{Checksum} in your analysis script and check it before any other operation.
The @emph{Checksum} algorithms look into the contents of a file and calculate a fixed-length string from them.
If any change (even a single bit or byte) is made within the file, the resulting string will change; for more, see @url{https://en.wikipedia.org/wiki/Checksum, Wikipedia}.
There are many common algorithms, but a simple one is the @url{https://en.wikipedia.org/wiki/SHA-1, SHA-1 algorithm} (Secure Hash Algorithm 1) that you can calculate easily with the command below (the second line is the output, and the checksum is the first/long string: it is independent of the file name).
@example
$ sha1sum r.fits.bz2
5fb06a572c6107c72cbc5eb8a9329f536c7e7f65 r.fits.bz2
@end example
If the checksum on your computer is different from this, either the file has been incorrectly downloaded (most probable), or it has changed on SDSS servers (very unlikely@footnote{If your checksum is different, try uncompressing the file with the @command{bunzip2} command after this, and open the resulting FITS file.
If it opens and you see the image of M51 and NGC5195, then there was no download problem, and the file has indeed changed on the SDSS servers!
In this case, please contact us at @code{bug-gnuastro@@gnu.org}.}).
To get a better feeling for checksums, open your favorite text editor and make a test file by writing something in it.
Save it and calculate the text file's SHA-1 checksum with @command{sha1sum}.
Try renaming that file, and you will see the checksum has not changed (checksums only look into the contents, not the name/location of the file).
Then open the file with your text editor again, make a change and re-calculate its checksum; you will see the checksum string has changed.
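For example, you can try this directly on the command-line (the file names here are arbitrary):
@example
$ echo "my first checksum test" > test.txt
$ sha1sum test.txt
$ mv test.txt renamed.txt
$ sha1sum renamed.txt
$ echo "an extra line" >> renamed.txt
$ sha1sum renamed.txt
@end example
@noindent
The first two calls print the same checksum (the name does not matter), while the last one prints a completely different string (the contents changed).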
It is always good to keep this short checksum string with your project's scripts and to validate your input data before using them.
You can do this with a shell conditional like this:
@example
filename=r.fits.bz2
expected=5fb06a572c6107c72cbc5eb8a9329f536c7e7f65
sum=$(sha1sum $filename | awk '@{print $1@}')
if [ "$sum" = "$expected" ]; then
    echo "$filename: validated"
else
    echo "$filename: wrong checksum!"
    exit 1
fi
@end example
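As an aside, GNU Coreutils' @command{sha1sum} can also do the check for you: keep the expected checksum and file name (separated by two spaces) in a small text file and use the @option{--check} (or @option{-c}) option.
For example (the second command's output is shown on the third line):
@example
$ echo "5fb06a572c6107c72cbc5eb8a9329f536c7e7f65  r.fits.bz2" > r.sha1
$ sha1sum -c r.sha1
r.fits.bz2: OK
@end example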
@cindex Bzip2
@noindent
Now that we know you have the same data that we wrote this tutorial with, let's continue.
The SDSS server keeps the files in a Bzip2 compressed format (with a @code{.bz2} suffix).
So we will first decompress it with the following command to use it as a normal FITS file.
By convention, compression programs delete the original file (the compressed file when uncompressing, or the uncompressed file when compressing).
To keep the original file, you can use the @option{--keep} or @option{-k} option, which is available in most compression programs for this job.
Here, we do not need the compressed file any more, so we will just let @command{bunzip2} delete it for us and keep the directory clean.
@example
$ bunzip2 r.fits.bz2
@end example
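As an aside, had we wanted to keep the compressed file too, the keep option mentioned above would do the job:
@example
$ bunzip2 --keep r.fits.bz2
@end example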
@node NoiseChisel optimization, Skewness caused by signal and its measurement, Downloading and validating input data, Detecting large extended targets
@subsection NoiseChisel optimization
In @ref{Downloading and validating input data} we downloaded the single exposure SDSS image.
Let's see how NoiseChisel operates on it with its default parameters:
@example
$ astnoisechisel r.fits -h0
@end example
As described in @ref{NoiseChisel and Multi-Extension FITS files}, NoiseChisel's default output is a multi-extension FITS file.
Open the output @file{r_detected.fits} file and have a look at the extensions: the 0-th extension is only meta-data, containing NoiseChisel's configuration parameters.
The rest are the Sky-subtracted input, the detection map, Sky values and Sky standard deviation.
@example
$ ds9 -mecube r_detected.fits -zscale -zoom to fit
@end example
Flipping through the extensions in a FITS viewer, you will see that the first image (Sky-subtracted image) looks reasonable: there are no major artifacts due to bad Sky subtraction compared to the input.
The second extension also seems reasonable with a large detection map that covers the whole of NGC5195, but also extends towards the bottom of the image where we actually see faint and diffuse signal in the input image.
Now try flipping between the @code{DETECTIONS} and @code{SKY} extensions.
In the @code{SKY} extension, you'll notice that there is still significant signal beyond the detected pixels.
You can tell that this signal belongs to the galaxy because the far-right side of the image (away from M51) is dark (has lower values) and the brighter parts in the Sky image (with larger values) are just under the detections and follow a similar pattern.
The fact that signal from the galaxy remains in the @code{SKY} HDU shows that NoiseChisel can be optimized for a much better result.
The @code{SKY} extension must not contain any light around the galaxy.
Generally, any time your target is much larger than the tile size and the signal is very diffuse and extended at low signal-to-noise values (like this case), this @emph{will} happen.
Therefore, when there are large objects in the dataset, @strong{the best place} to check the accuracy of your detection is the estimated Sky image.
When dominated by the background, noise has a symmetric distribution.
However, signal is not symmetric (we do not have negative signal).
Therefore when non-constant@footnote{by constant, we mean that it has a single value in the region we are measuring.} signal is present in a noisy dataset, the distribution will be positively skewed.
For a demonstration, see Figure 1 of Akhlaghi and Ichikawa @url{https://arxiv.org/abs/1505.01664,2015}.
This skewness is a good measure of how much faint signal we have in the distribution.
The skewness can be accurately measured by the difference in the mean and median (assuming no strong outliers): the more distant they are, the more skewed the dataset is.
This important concept will be discussed more extensively in the next section (@ref{Skewness caused by signal and its measurement}).
However, skewness is only a proxy for signal when the signal has structure (varies per pixel).
Therefore, when it is approximately constant over a whole tile, or sub-set of the image, the constant signal's effect is just to shift the symmetric center of the noise distribution to the positive and there will not be any skewness (major difference between the mean and median).
This positive@footnote{In processed images, where the Sky value can be over-estimated, this constant shift can be negative.} shift that preserves the symmetric distribution is the Sky value.
When there is a gradient over the dataset, different tiles will have different constant shifts/Sky-values, for example, see Figure 11 of Akhlaghi and Ichikawa @url{https://arxiv.org/abs/1505.01664,2015}.
To make this very large diffuse/flat signal detectable, you will therefore need a larger tile to contain a larger change in the values within it (and improve number statistics, for less scatter when measuring the mean and median).
So let's play with the tessellation a little to see how it affects the result.
In Gnuastro, you can see the option values (@option{--tilesize} in this case) by adding the @option{-P} option to your last command.
Try running NoiseChisel with @option{-P} to see its default tile size.
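For example, piping the long output of @option{-P} into @command{grep} isolates the relevant line:
@example
$ astnoisechisel r.fits -h0 -P | grep tilesize
@end example
@noindent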
You can clearly see that the default tile size is indeed much smaller than this (huge) galaxy and its tidal features.
As a result, NoiseChisel was unable to identify the skewness within the tiles under the outer parts of M51 and NGC 5195, and the threshold has been over-estimated on those tiles.
To see which tiles were used for estimating the quantile threshold (no skewness was measured), you can use NoiseChisel's @option{--checkqthresh} option:
@example
$ astnoisechisel r.fits -h0 --checkqthresh
@end example
Did you see how NoiseChisel aborted after finding and applying the quantile thresholds?
When you call any of NoiseChisel's @option{--check*} options, by default, it will abort as soon as all the check steps have been written in the check file (a multi-extension FITS file).
This allows you to focus on the problem you wanted to check as soon as possible (you can disable this feature with the @option{--continueaftercheck} option).
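If you also want NoiseChisel's final output in the same run as this check file, you can do so like this:
@example
$ astnoisechisel r.fits -h0 --checkqthresh --continueaftercheck
@end example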
To optimize the threshold-related settings for this image, let's play with this quantile threshold check image a little.
Do not forget that ``@emph{Good statistical analysis is not a purely routine matter, and generally calls for more than one pass through the computer}'' (Anscombe 1973, see @ref{Science and its tools}).
A good scientist must have a good understanding of her tools to make a meaningful analysis.
So do not hesitate in playing with the default configuration and reviewing the manual when you have a new dataset (from a new instrument) in front of you.
Robust data analysis is an art, therefore a good scientist must first be a good artist.
So let's open the check image as a multi-extension cube:
@example
$ ds9 -mecube r_qthresh.fits -zscale -cmap sls -zoom to fit
@end example
The first extension (called @code{CONVOLVED}) of @file{r_qthresh.fits} is the convolved input image, on which the thresholds are defined (and later applied).
For more on the effect of convolution and thresholding, see Sections 3.1.1 and 3.1.2 of Akhlaghi and Ichikawa @url{https://arxiv.org/abs/1505.01664,2015}.
The second extension (@code{QTHRESH_ERODE}) has a blank/white value for all the pixels of any tile that was identified as having significant signal.
The other tiles have the measured threshold over them.
The next two extensions (@code{QTHRESH_NOERODE} and @code{QTHRESH_EXPAND}) are the other two quantile thresholds that are necessary in NoiseChisel's later steps.
Every step in this file is repeated on the three thresholds.
Play a little with the color bar of the @code{QTHRESH_ERODE} extension; you will clearly see how the non-blank tiles around NGC 5195 have a gradient.
As one line of attack against discarding too much signal below the threshold, NoiseChisel rejects outlier tiles.
Go forward by three extensions to @code{VALUE1_NO_OUTLIER} and you will see that many of the tiles over the galaxy have been removed in this step.
For more on the outlier rejection algorithm, see the latter half of @ref{Quantifying signal in a tile}.
Even though much of the galaxy's footprint has been rejected as outliers, there are still tiles with signal remaining:
play with the DS9 color-bar and you still see a gradient near the outer tidal feature of the galaxy.
Before trying to correct this, let's look at the other extensions of this check image.
We will use a @code{*} as a wild-card that can be 1, 2 or 3.
In the @code{THRESH*_INTERP} extensions, you see that all the blank tiles have been interpolated using their nearest neighbors (the relevant option here is @option{--interpnumngb}).
In the following @code{THRESH*_SMOOTH} extensions, you can see the tile values after smoothing (configured with @option{--smoothwidth} option).
Finally, in @code{QTHRESH-APPLIED}, you see the thresholded image: pixels with a value of 1 will be eroded later, but pixels with a value of 2 will pass the erosion step un-touched.
Let's get back to the problem of optimizing the result.
You have two strategies for detecting the outskirts of the merging galaxies:
1) Increase the tile size to get more accurate measurements of skewness.
2) Strengthen the outlier rejection parameters to discard more of the tiles with signal (primarily by increasing @option{--outliernumngb}).
Fortunately in this image we have a sufficiently large region on the right side of the image that the galaxy does not extend to.
So we can use the more robust first solution.
In situations where this does not happen (for example, if the field of view in this image was shifted to the left to have more of M51 and less sky) you are limited to a combination of the two solutions or just to the second solution.
@cartouche
@noindent
@strong{Skipping convolution for faster tests:} The slowest step of NoiseChisel is the convolution of the input dataset.
Therefore when your dataset is large (unlike the one in this test), and you are not changing the input dataset or kernel in multiple runs (as in the tests of this tutorial), it is faster to do the convolution separately once (using @ref{Convolve}) and use NoiseChisel's @option{--convolved} option to directly feed the convolved image and avoid convolution.
For more on @option{--convolved}, see @ref{NoiseChisel input}.
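As a rough sketch of that workflow (assuming a @file{kernel.fits} that you have already built, and noting that you may also need to specify the HDU of the convolved file; see @ref{NoiseChisel input} for the details):
@example
$ astconvolve image.fits --kernel=kernel.fits --domain=spatial \
              --output=image-conv.fits
$ astnoisechisel image.fits --convolved=image-conv.fits
@end example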
@end cartouche
To better identify the skewness caused by the flat NGC 5195 and M51 tidal features on the tiles under it, we have to choose a larger tile size.
Let's try a tile size of 100 by 100 pixels and inspect the check image.
@example
$ astnoisechisel r.fits -h0 --tilesize=100,100 --checkqthresh
$ ds9 -mecube r_qthresh.fits -zscale -cmap sls -zoom to fit
@end example
You can clearly see the effect of this increased tile size: the tiles are much larger and when you look into @code{VALUE1_NO_OUTLIER}, you see that all the tiles are nicely grouped on the right side of the image (the farthest from M51, where we do not see a gradient in @code{QTHRESH_ERODE}).
Things look good now, so let's remove @option{--checkqthresh} and let NoiseChisel proceed with its detection.
@example
$ astnoisechisel r.fits -h0 --tilesize=100,100
$ ds9 -mecube r_detected.fits -zscale -cmap sls -zoom to fit
@end example
The detected pixels of the @code{DETECTIONS} extension have expanded a little, but not by much.
Also, the gradient in the @code{SKY} image is almost fully removed (and does not fall over M51 anymore).
However, on the bottom-right of the M51 detection, we see many holes gradually increasing in size.
This hints that there is still signal out there.
Let's check the next series of detection steps by adding the @code{--checkdetection} option this time:
@example
$ astnoisechisel r.fits -h0 --tilesize=100,100 --checkdetection
$ ds9 -mecube r_detcheck.fits -zscale -cmap sls -zoom to fit
@end example
@cindex Erosion (image processing)
The output now has 16 extensions, showing every step that is taken by NoiseChisel.
The first and second (@code{INPUT} and @code{CONVOLVED}) are clear from their names.
The third (@code{THRESHOLDED}) is the thresholded image after finding the quantile threshold (last extension of the output of @code{--checkqthresh}).
The fourth HDU (@code{ERODED}) is new: it is the namesake of NoiseChisel, eroding pixels that are above the threshold.
By erosion, we mean that all pixels with a value of @code{1} (above the threshold) that are touching a pixel with a value of @code{0} (below the threshold) will be flipped to zero (or ``carved'' out)@footnote{Pixels with a value of @code{2} are very high signal-to-noise pixels, they are not eroded, to preserve sharp and bright sources.}.
You can see its effect directly by going back and forth between the @code{THRESHOLDED} and @code{ERODED} extensions.
@cindex Dilation (image processing)
In the fifth extension (@code{OPENED-AND-LABELED}) the image is ``opened'', which is a name for eroding once, then dilating (dilation is the inverse of erosion).
This is good to remove thin connections that are only due to noise.
Each separate connected group of pixels is also given its unique label here.
Do you see how, just beyond the large M51 detection, there are many smaller detections that get smaller as you go farther out?
This hints at the solution: the default number of erosions is too much.
Let's see how many erosions take place by default (by adding @command{-P | grep erode} to the previous command):
@example
$ astnoisechisel r.fits -h0 --tilesize=100,100 -P | grep erode
@end example
@noindent
We see that the value of @code{erode} is @code{2}.
The default NoiseChisel parameters are primarily targeted to processed images (where there is correlated noise due to all the processing that has gone into the warping and coadding of raw images, see @ref{NoiseChisel optimization for detection}).
In those scenarios 2 erosions are commonly necessary.
But here, we have a single-exposure image where there is no correlated noise (the pixels are not mixed).
So let's see how things change with only one erosion:
@example
$ astnoisechisel r.fits -h0 --tilesize=100,100 --erode=1 \
--checkdetection
$ ds9 -mecube r_detcheck.fits -zscale -cmap sls -zoom to fit
@end example
Looking at the @code{OPENED-AND-LABELED} extension again, we see that the main/large detection is now much larger than before.
While the immediately-outer connected regions are still present, they have decreased dramatically, so we can pass this step.
After the @code{OPENED-AND-LABELED} extension, NoiseChisel goes onto finding false detections using the undetected pixels.
The process is fully described in Section 3.1.5 (Defining and Removing False Detections) of Akhlaghi and Ichikawa @url{https://arxiv.org/pdf/1505.01664.pdf,2015}.
Please compare the extensions to what you read there and things will be very clear.
In the last HDU (@code{DETECTION-FINAL}), we have the final detected pixels that will be used to estimate the Sky and its Standard deviation.
We see that the main detection has indeed been detected very far out, so let's see how the full NoiseChisel will estimate the Sky and its standard deviation (by removing @code{--checkdetection}):
@example
$ astnoisechisel r.fits -h0 --tilesize=100,100 --erode=1
$ ds9 -mecube r_detected.fits -zscale -cmap sls -zoom to fit
@end example
The @code{DETECTIONS} extension of @file{r_detected.fits} closely follows the @code{DETECTION-FINAL} extension of the check image (looks good!).
If you go ahead to the @code{SKY} extension, things still look good.
But it can still be improved.
Look at the @code{DETECTIONS} again, you will see the right-ward edges of M51's detected pixels have many ``holes'' that are fully surrounded by signal (value of @code{1}) and the signal stretches out in the noise very thinly (the size of the holes increases as we go out).
This suggests that there is still undetected signal and that we can still dig deeper into the noise.
With the @option{--detgrowquant} option, NoiseChisel will ``grow'' the detections in to the noise.
Its value is the ultimate limit of the growth in units of quantile (between 0 and 1).
Therefore @option{--detgrowquant=1} means no growth and @option{--detgrowquant=0.5} means an ultimate limit of the Sky level (which is usually too much and will cover the whole image!).
See Figure 2 of Akhlaghi @url{https://arxiv.org/pdf/1909.11230.pdf,2019} for more on this option.
Try running the previous command with various values (from 0.6 to higher values) to see this option's effect on this dataset.
For this particularly huge galaxy (with signal that extends very gradually into the noise), we will set it to @option{0.75}:
@example
$ astnoisechisel r.fits -h0 --tilesize=100,100 --erode=1 \
--detgrowquant=0.75
$ ds9 -mecube r_detected.fits -zscale -cmap sls -zoom to fit
@end example
Beyond this level (smaller @option{--detgrowquant} values), you will see many of the smaller background galaxies (towards the right side of the image) start to create thin spider-leg-like features, showing that we are following correlated noise too far.
Please try it for yourself by changing it to @code{0.6} for example.
When you look at the @code{DETECTIONS} extension of the command shown above, you see the wings of the galaxy being detected much farther out, but you also see many holes that are clearly just caused by noise.
After growing the objects, NoiseChisel also allows you to fill such holes when they are smaller than a certain size through the @option{--detgrowmaxholesize} option.
In this case, a maximum area/size of 10,000 pixels seems to be good:
@example
$ astnoisechisel r.fits -h0 --tilesize=100,100 --erode=1 \
--detgrowquant=0.75 --detgrowmaxholesize=10000
$ ds9 -mecube r_detected.fits -zscale -cmap sls -zoom to fit
@end example
When looking at the raw input image (which is very ``shallow'': less than a minute exposure!), you do not see anything so far out of the galaxy.
You might just think to yourself that ``this is all noise, I have just dug too deep and I'm following systematics''!
If you feel like this, have a look at the deep images of this system in Watkins @url{https://arxiv.org/abs/1501.04599,2015}, or a 12 hour deep image of this system (with a 12-inch telescope):
@url{https://i.redd.it/jfqgpqg0hfk11.jpg}@footnote{The image is taken from this Reddit discussion: @url{https://www.reddit.com/r/Astronomy/comments/9d6x0q/12_hours_of_exposure_on_the_whirlpool_galaxy/}}.
In these deeper images you clearly see how the outer edges of the M51 group follow this exact structure; below, in @ref{Achieved surface brightness level}, we will measure the exact level.
As the gradient in the @code{SKY} extension shows, and the deep images cited above confirm, the galaxy's signal extends even beyond this.
But this is already far deeper than what most (if not all) other tools can detect.
Therefore, we will stop configuring NoiseChisel at this point in the tutorial and let you play with the other options a little more, while reading more about it in the papers: Akhlaghi and Ichikawa @url{https://arxiv.org/abs/1505.01664,2015} and @url{https://arxiv.org/abs/1909.11230,2019} and @ref{NoiseChisel}.
When you do find a better configuration feel free to contact us for feedback.
Do not forget that good data analysis is an art, so like a sculptor, master your chisel for a good result.
To avoid typing all these options every time you run NoiseChisel on this image, you can use Gnuastro's configuration files, see @ref{Configuration files}.
For an applied example of setting/using them, see @ref{Option management and configuration files}.
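For example, a minimal sketch of a project-specific configuration (a hypothetical @file{.gnuastro/astnoisechisel.conf} file in your project's running directory; see @ref{Configuration files} for the exact lookup rules) could contain the values found above:
@example
# Hypothetical project-specific NoiseChisel configuration.
tilesize            100,100
erode               1
detgrowquant        0.75
detgrowmaxholesize  10000
@end example
@noindent
With this in place, a plain @command{astnoisechisel r.fits -h0} would pick up these values automatically.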
@cartouche
@noindent
@strong{This NoiseChisel configuration is NOT GENERIC:} Do not use the configuration derived above, on another instrument's image @emph{blindly}.
If you are unsure, just use the default values.
As you saw above, the reason we chose this particular configuration for NoiseChisel to detect the wings of the M51 group was strongly influenced by the noise properties of this particular image.
Remember @ref{NoiseChisel optimization for detection}, where we looked into the very deep XDF image which had strong correlated noise?
As long as your other images have similar noise properties (from the same data-reduction step of the same instrument), you can use your configuration on any of them.
But for images from other instruments, please follow a similar logic to what was presented in these tutorials to find the optimal configuration.
@end cartouche
@cartouche
@noindent
@strong{Smart NoiseChisel:} As you saw during this section, there is a clear logic behind the optimal parameter value for each dataset.
Therefore, we plan to add capabilities to (optionally) automate some of the choices made here based on the actual dataset, please join us in doing this if you are interested.
However, given the many problems in existing ``smart'' solutions, such automatic changing of the configuration may cause more problems than it solves.
So even when they are implemented, we would strongly recommend quality checks for a robust analysis.
@end cartouche
@node Skewness caused by signal and its measurement, Image surface brightness limit, NoiseChisel optimization, Detecting large extended targets
@subsection Skewness caused by signal and its measurement
In the previous section (@ref{NoiseChisel optimization}) we showed how to customize NoiseChisel for a single-exposure SDSS image of the M51 group.
During the customization, we also discussed the skewness caused by signal.
In the next section (@ref{Image surface brightness limit}), we will use this to measure the surface brightness limit of the image.
However, to better understand NoiseChisel, and also the image surface brightness limit, it is very important to understand the skewness caused by signal and how to measure it properly.
Therefore now that we have separated signal from noise, let's pause for a moment and look into skewness, how signal creates it, and find the best way to measure it.
Let's start by masking all the detected pixels found at the end of the previous section (@ref{NoiseChisel optimization}) and having a look at the noise distribution with Gnuastro's Arithmetic and Statistics programs as shown below (visually inspecting the masked image with DS9 in between).
@example
$ astarithmetic r_detected.fits -hINPUT-NO-SKY set-in \
r_detected.fits -hDETECTIONS set-det \
in det nan where -odet-masked.fits
$ ds9 det-masked.fits
$ aststatistics det-masked.fits
@end example
@noindent
You will see that Gnuastro's Statistics program prints an ASCII histogram when no option is given (it is shown below).
This is done to give you a fast and easy view of the distribution of values in the dataset (pixels in an image, or rows in a table's column).
@example
-------
Input: det-masked.fits (hdu: 1)
-------
Number of elements: 903920
Minimum: -0.113543
Maximum: 0.130339
Median: -0.00216306
Mean: -0.0001893073877
Standard deviation: 0.02569057188
-------
Histogram:
| ** *
| * ** * *
| ** ** * *
| * ** ** ** *
| ** ** ** ** * **
| ** ** ** ** * ** *
| * ** ** ** ** * ** **
| ** ** ** ** **** ** ** *
| ** ** ** ** ** **** ** ** ** *
| ** ** ** ** ** ** ******* ** ** ** *
|*********** ** ** ** ******************* ** ** ** ** ***** ** ***** **
|----------------------------------------------------------------------
@end example
@noindent
@cindex Skewness
This histogram shows a roughly symmetric noise distribution, so let's have a look at its skewness.
The most commonly used definition of skewness is known as the ``Pearson's first skewness coefficient''.
It measures the difference between the mean and median, in units of the standard deviation (STD):
@dispmath{\rm{Skewness}\equiv\frac{(\rm{mean}-\rm{median})}{\rm{STD}}}
The logic behind this definition is simple: as more signal is added to the same pixels that originally only had raw noise (increasing the skewness), the mean shifts to the positive faster than the median, so the distance between the mean and median should increase.
Let's measure the skewness (as defined above) over the image without any signal.
It is very easy with Gnuastro's Statistics program (piping the output to AWK):
@verbatim
$ aststatistics det-masked.fits --mean --median --std \
| awk '{print ($1-$2)/$3}'
0.0768279
@end verbatim
@noindent
We see that the mean and median are only @mymath{0.08\sigma} (rounded) away from each other (which is very close)!
All pixels with significant signal are masked, so this is expected, and everything is fine.
Now, let's check the pixel distribution of the sky-subtracted input (where pixels with significant signal remain, and are not masked):
@example
$ ds9 r_detected.fits
$ aststatistics r_detected.fits -hINPUT-NO-SKY
-------
Input: r_detected.fits (hdu: INPUT-NO-SKY)
Unit: nanomaggy
-------
Number of elements: 3049472
Minimum: -0.113543
Maximum: 159.25
Median: 0.0241158
Mean: 0.1057885317
Standard deviation: 0.698167489
-------
Histogram:
|*
|*
|*
|*
|*
|*
|*
|*
|*
|*
|******************************************* *** ** **** * * * * *
|----------------------------------------------------------------------
@end example
@noindent
Comparing the distributions above, you can see that the @emph{minimum} value of the image has not changed because we have not masked the minimum values.
However, as expected, the @emph{maximum} value of the image has changed (from @mymath{0.13} to @mymath{159.25}).
This is clearly evident from the ASCII histogram: the distribution is very elongated because the galaxy inside the image is extremely bright.
Now, let's limit the displayed information with the @option{--lessthan=0.13} option of Statistics as shown below (to only use values less than 0.13; the maximum of the image where all signal is masked).
@example
$ aststatistics r_detected.fits -hINPUT-NO-SKY --lessthan=0.13
-------
Input: r_detected.fits (hdu: INPUT-NO-SKY)
Range: up to (exclusive) 0.13.
Unit: nanomaggy
-------
Number of elements: 2531949
Minimum: -0.113543
Maximum: 0.126233
Median: 0.0137138
Mean: 0.01735551527
Standard deviation: 0.03590550597
-------
Histogram:
| *
| * ** **
| * * ** ** **
| * * ** ** ** *
| * ** * ** ** ** *
| ** ** * ** ** ** * *
| * ** ** * ** ** ** * *
| ** ** ** * ** ** ** ** * ** *
| * ** ** **** ** ** ** **** ** ** **
| * ** ** ** **** ** ** ** ******* ** ** ** * ** ** **
|***** ** ********** ** ** ********** ** ********** ** ************* **
|----------------------------------------------------------------------
@end example
The improvement is obvious: the ASCII histogram better shows the pixel values near the noise level.
We can now compare with the distribution of @file{det-masked.fits} that we found earlier.
The ASCII histogram of @file{det-masked.fits} was approximately symmetric, while this one is asymmetric in this range, especially in the outer (to the right, or positive) direction.
The heavier right-side tail is a clear visual demonstration of the skewness that is caused by the signal in the un-masked image.
Having visually confirmed the skewness, let's quantify it with Pearson's first skewness coefficient.
Like before, we can simply use Gnuastro's Statistics and AWK for the measurement and calculation:
@verbatim
$ aststatistics r_detected.fits --mean --median --std \
| awk '{print ($1-$2)/$3}'
0.116982
@end verbatim
The difference between the mean and median is now approximately @mymath{0.12\sigma}.
This is larger than the skewness of the masked image (which was approximately @mymath{0.08\sigma}).
At a glance (only looking at the numbers), it seems that there is not much difference between the two distributions.
However, visually looking at the non-masked image, or the ASCII histogram, you would expect the quantified skewness to be much larger than that of the masked image, but that has not happened!
Why is that?
The reason is that the presence of signal does not only shift the mean and median, it @emph{also} increases the standard deviation!
To see this for yourself, compare the standard deviation of @file{det-masked.fits} (which was approximately @mymath{0.025}) to @file{r_detected.fits} (without @option{--lessthan}; which was approximately @mymath{0.699}).
The latter is almost 28 times larger!
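If you would like to verify these numbers yourself, the two standard deviations can be printed directly:
@example
$ aststatistics det-masked.fits --std
$ aststatistics r_detected.fits -hINPUT-NO-SKY --std
@end example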
This happens because the standard deviation is only a meaningful measure of width for a symmetric (Gaussian-like) distribution.
In a non-Gaussian distribution, the standard deviation is poorly defined and is not a good measure of ``width''.
Since Pearson's first skewness coefficient is defined in units of the standard deviation, this very large increase in the standard deviation has hidden the much increased distance between the mean and median after adding signal.
@cindex Quantile
We therefore need a better unit or scale to quantify the distance between the mean and median: one that is less affected by skewness or outliers.
One solution that we have found to be very useful is the quantile scale.
The quantile scale is defined by first sorting the dataset (which has @mymath{N} elements).
If we want the quantile of a value @mymath{V} in a distribution, we first find the nearest data element to @mymath{V} in the sorted dataset.
Let's assume the nearest element is the @mymath{i}-th element, counting from 0, after sorting.
The quantile of V in that distribution is then defined as @mymath{i/(N-1)} (which will have a value between 0 and 1).
The quantile of the median is obvious from its definition: 0.5.
This is because the median is defined to be the middle element of the distribution after sorting.
We can therefore define skewness as the quantile of the mean (@mymath{q_m}).
If @mymath{q_m\sim0.5} (the median), then the distribution (of signal blended in noise) is symmetric (possibly Gaussian, but the functional form is irrelevant here).
A larger value for @mymath{|q_m-0.5|} quantifies a more skewed distribution.
Furthermore, a @mymath{q_m>0.5} signifies a positive skewness, while @mymath{q_m<0.5} signifies a negative skewness.
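As a toy example (with made-up numbers, only to fix the definition), consider the sorted dataset (1, 2, 3, 4, 10): it has @mymath{N=5} elements, a median of 3 and a mean of 4.
The nearest element to the mean is the fourth one (@mymath{i=3}, counting from 0), so @mymath{q_m=3/(5-1)=0.75}: a clear positive skewness caused by the single large value, even though the mean is only one unit away from the median.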
Let's put this definition to a test on the same two images we have already created.
Fortunately Gnuastro's Statistics program has the @option{--quantofmean} option to easily calculate @mymath{q_m} for you.
So testing is easy:
@example
$ aststatistics det-masked.fits --quantofmean
0.51295636
$ aststatistics r_detected.fits -hINPUT-NO-SKY --quantofmean
0.8105163158
@end example
The two quantiles of mean are now very distinctly different (@mymath{0.51} and @mymath{0.81}): differing by about @mymath{0.3} (on a scale of 0 to 1)!
Recall that when defining skewness with Pearson's first skewness coefficient, their difference was negligible (@mymath{0.04\sigma})!
You can now better appreciate why we discussed quantile so extensively in @ref{NoiseChisel optimization}.
In case you would like to know more about the usage of the quantile of the mean in Gnuastro, please see @ref{Quantifying signal in a tile}, or watch this video demonstration: @url{https://peertube.stream/w/35b7c398-9fd7-4bcf-8911-1e01c5124585}.
@node Image surface brightness limit, Achieved surface brightness level, Skewness caused by signal and its measurement, Detecting large extended targets
@subsection Image surface brightness limit
@cindex Surface brightness limit
@cindex Limit, surface brightness
When your science is related to extended emission (like the example here) and you are presenting your results in a scientific conference, usually the first thing that someone will ask (if you do not explicitly say it!), is the dataset's @emph{surface brightness limit} (a standard measure of the noise level), and your target's surface brightness (a measure of the signal, either in the center or outskirts, depending on context).
For more on the basics of these important concepts please see @ref{Metameasurements on full input}.
So in this section of the tutorial, we will measure these values for the single-exposure SDSS image of the M51 group that we downloaded in @ref{Downloading and validating input data}.
@noindent
Before measuring the surface brightness limit, let's see how reliable our detection was.
In other words, let's see how ``clean'' our noise is (after masking all detections, as described previously in @ref{Skewness caused by signal and its measurement}):
@example
$ aststatistics det-masked.fits --quantofmean
0.5111848629
@end example
@noindent
This shows that the mean is indeed very close to the median, being just about 0.01 larger in quantile units.
As we saw in @ref{NoiseChisel optimization}, a very small residual signal still remains in the undetected regions and this very small difference is a quantitative measure of that undetected signal.
It was up to you as an exercise to improve it, so we will continue with this dataset.
The surface brightness limit of the image can be measured from the masked image and the equation in @ref{Metameasurements on full input}.
Let's do it for a @mymath{3\sigma} surface brightness limit over an area of @mymath{25 \rm{arcsec}^2}:
@example
$ nsigma=3
$ zeropoint=22.5
$ areaarcsec2=25
$ std=$(aststatistics det-masked.fits --sigclip-std)
$ pixarcsec2=$(astfits det-masked.fits --pixelscale --quiet \
| awk '@{print $3*3600*3600@}')
$ astarithmetic --quiet $nsigma $std x \
$areaarcsec2 $pixarcsec2 x \
sqrt / $zeropoint counts-to-mag
26.0241
@end example
The customizable steps above are good for any type of mask.
For example, your field of view may contain a very deep part, so you need to mask all the shallow parts @emph{as well as} the detections before these steps.
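As a hypothetical sketch (assuming a @file{shallow-mask.fits} image that is 1 on the shallow parts and 0 elsewhere), you could extend the masking with Arithmetic's @code{where} operator before measuring the standard deviation:
@example
$ astarithmetic det-masked.fits -h1 shallow-mask.fits -h1 \
                nan where --output=deep-masked.fits
$ aststatistics deep-masked.fits --sigclip-std
@end example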
But when your image is flat (like this), there is a much simpler method to obtain the same value through MakeCatalog (when the standard deviation image is made by NoiseChisel).
NoiseChisel has already calculated the minimum (@code{MINSTD}), maximum (@code{MAXSTD}) and median (@code{MEDSTD}) standard deviation within the tiles during its processing and has stored them as FITS keywords within the @code{SKY_STD} HDU.
You can see them by piping all the keywords in this HDU into @command{grep}.
In Grep, each @samp{.} represents one character that can be anything, so @code{M..STD} will match all three keywords mentioned above.
@example
$ astfits r_detected.fits --hdu=SKY_STD | grep 'M..STD'
@end example
The @code{MEDSTD} value is very similar to the standard deviation derived above, so we can safely use it instead of having to mask and run Statistics.
@cartouche
@noindent
@strong{@code{MEDSTD} is more reliable than the standard deviation of masked pixels:} it may happen that the difference between these two becomes more significant than in the experiment above.
In such cases, the @code{MEDSTD} is more reliable because NoiseChisel estimates it within the tiles, after several steps of outlier rejection (for example, due to undetected signal) and before interpolation.
The standard deviation of the masked image, on the other hand, is calculated from the final detection map, involves no higher-level outlier rejection, and is based on the interpolated image.
Therefore, it can be easily biased by signal or artifacts in the image and besides being easier to measure, @code{MEDSTD} is also more statistically robust.
@end cartouche
Fortunately, MakeCatalog will use this keyword and will report the dataset's @mymath{n\sigma} surface brightness limit over a certain area as keywords in the output.
However, as we saw above, these types of measurements are generic to the whole image, not to any specific object.
In other words, they are measurements that are used to evaluate other measurements.
Therefore, in MakeCatalog, we call these ``meta-measures'' which are more completely described in @ref{Metameasurements on full input}.
In summary, to activate these types of measurements, you need to use the @option{--meta-measures} option:
@example
$ astmkcatalog r_detected.fits -hDETECTIONS --output=sbl.fits \
--ids --meta-measures
@end example
@noindent
Before looking into the measured surface brightness limits, let's review an important point about this call to MakeCatalog:
we are only concerned with the noise (not the signal), so we do not ask for any further measurements, because they would unnecessarily slow it down.
However, MakeCatalog requires at least one column, so we only ask for the @option{--ids} column (which does not need any measurement!).
The output catalog will therefore have a single row and a single column, with 1 as its value@footnote{Recall that NoiseChisel's output is a binary image: 0-valued pixels are noise and 1-valued pixel are signal.
NoiseChisel does not identify sub-structure over the signal, this is the job of Segment, see @ref{Extract clumps and objects}.}.
Since all the meta-measurements get written in the first (0th) HDU, with the command below you can see all the keywords that were measured with the table.
Notice the group of keywords that are under the ``Noise-based metameasures'' title.
@example
$ astfits sbl.fits -h0
@end example
Above the noise-based meta-measure group, you see another group of keywords under the title ``Input pixel grid and value properties''.
Notice how the @code{MMSTD} has the same value as NoiseChisel's @code{MEDSTD} above: this is used directly for both the measured surface brightness limit and the noise-based magnitude limit.
Using @code{MMSTD}, MakeCatalog has determined the @mymath{n\sigma} surface brightness limiting magnitude in these header keywords.
The multiple of @mymath{\sigma}, or @mymath{n}, and area used are parameters that you should set at run-time as command-line options.
Therefore you should look for those under the ``Option values'' group of keywords: for example, the multiple of sigma is the value of the @code{SBL-SIGMA} keyword, which you can change with @option{--sbl-sigma}.
The surface brightness limiting magnitude within a single pixel, and within a shape-agnostic area of @code{SBL-AREA} arcsec@mymath{^2} (the latter stored in @code{SBL}), are both reported there.
@cindex SDSS
@cindex Nanomaggy
@cindex Zero point magnitude
You will notice that the two surface brightness limiting magnitudes above have values around 3 and 4 (which is not correct!).
This is because we have not given a zero point magnitude to MakeCatalog, so it uses the default value of @code{0}.
SDSS image pixel values are calibrated in units of ``nanomaggy'' which are defined to have a zero point magnitude of 22.5@footnote{From @url{https://www.sdss.org/dr12/algorithms/magnitudes}}.
So with the first command below we give the zero point value, and with the second we can see the surface brightness limiting magnitudes with the correct values (around 25 and 26):
@example
$ astmkcatalog r_detected.fits -hDETECTIONS --output=sbl.fits \
--ids --meta-measures --zeropoint=22.5
$ astfits sbl.fits -h0 | grep SBL
@end example
As you see from @code{SBL-SIGMA} and @code{SBL-AREA}, the default multiple of sigma is 3 and the default area is 100 arcsec@mymath{^2} (which is commonly used in the literature).
But as mentioned above, you can easily change those in every run.
In the manual example above, we assumed the multiple of sigma to be 3 and the area to be 25 arcsec@mymath{^2}, so we just need to modify the area:
@example
$ astmkcatalog r_detected.fits -hDETECTIONS --output=sbl.fits \
--ids --meta-measures --zeropoint=22.5 --sbl-area=25
$ astfits sbl.fits -h0 | awk '$1=="SBL" @{print $3@}'
26.02296
@end example
You see that the value is identical to the custom surface brightness limiting magnitude we measured above (a difference of @mymath{0.00114} magnitudes is negligible: hundreds of times smaller than the typical errors in the zero point magnitude or magnitude measurements).
But it is much easier to have MakeCatalog do this measurement, because these values will be appended (as keywords) to your final catalog of objects within that image.
@cartouche
@noindent
@strong{Custom STD for MakeCatalog's Surface brightness limit:} You can manually change/set the value of the @code{MEDSTD} keyword in the input standard deviation HDU with @ref{Fits}:
@example
$ std=$(aststatistics masked.fits --sigclip-std)
$ astfits noisechisel.fits -hSKY_STD --update=MEDSTD,$std
@end example
With this change, MakeCatalog will use your custom standard deviation for the surface brightness limit.
This can be necessary in scenarios where your image has multiple depths and NoiseChisel is given the full image (including the shallow parts).
In such cases, the shallow parts can bias the measured surface brightness to brighter values.
But generally, in such cases, it is recommended to mask or crop out the shallow parts before running NoiseChisel, not to manually edit the header keywords.
@end cartouche
We have successfully measured the image's @mymath{3\sigma} surface brightness limiting magnitude over 25 arcsec@mymath{^2}.
However, as discussed in @ref{Metameasurements on full input} this value is just an extrapolation of the per-pixel standard deviation.
Issues like correlated noise will cause the real noise over a large area to be different.
So for a more robust measurement, let's use the upper-limit magnitude of a similarly sized region.
For more on the upper-limit magnitude, see the respective item in @ref{Metameasurements on full input}.
In summary, the upper-limit measurements involve randomly placing the footprint of an object in undetected parts of the image many times.
This results in a random distribution of brightness measurements; the standard deviation of that distribution is then converted into magnitudes.
To be comparable with the results above, let's make a circular aperture that has an area of 25 arcsec@mymath{^2} (thus with a radius of @mymath{2.82095} arcsec).
@example
zeropoint=22.5
r_arcsec=2.82095
## Convert the radius (in arcseconds) to pixels.
r_pixel=$(astfits r_detected.fits --pixelscale -q \
| awk '@{print '$r_arcsec'/($1*3600)@}')
## Make a circular aperture at pixel (100,100); the position is irrelevant.
echo "1 100 100 5 $r_pixel 0 0 1 1 1" \
| astmkprof --background=r_detected.fits \
--clearcanvas --mforflatpix --type=uint8 \
--output=lab.fits
## Do the upper-limit measurement, ignoring all NoiseChisel's
## detections as a mask for the upper-limit measurements.
astmkcatalog lab.fits -h1 --zeropoint=$zeropoint -osbl.fits \
--sbl-area=25 --sbl-sigma=3 --meta-measures \
--valuesfile=r_detected.fits --valueshdu=INPUT-NO-SKY \
--upmaskfile=r_detected.fits --upmaskhdu=DETECTIONS \
--upnsigma=3 --checkuplim=1 --upnum=1000 \
--ids --upperlimit-sb
@end example
The @file{sbl.fits} catalog now contains the upper-limit surface brightness for a circle with an area of 25 arcsec@mymath{^2}.
You can check the value with the command below; the great thing is that you now have both the measured surface brightness limit in the headers (discussed above) and the upper-limit surface brightness within the table.
You can also add more profiles with different shapes and sizes if necessary (just make sure they do not overlap: their position is irrelevant!).
Of course, you can also use @option{--upperlimit-sb} in your actual science objects and clumps to get an object-specific or clump-specific value.
@example
$ asttable sbl.fits -cUPPERLIMIT_SB
25.9119
@end example
@cindex Random number generation
@cindex Seed, random number generator
@noindent
You will probably get a slightly different value from the command above.
In fact, if you run the MakeCatalog command again and look at the measured upper-limit surface brightness, it will be slightly different from your first trial!
Please try exactly the same MakeCatalog command above a few times to see how it changes.
This is because of the @emph{random} factor in the upper-limit measurements: every time you run it, different random points will be checked, resulting in a slightly different distribution.
You can decrease the random scatter by increasing the number of random checks (for example, setting @option{--upnum=100000}, compared to 1000 in the command above).
But this will be slower, and the results will still not be exactly reproducible.
The only way to ensure you get an identical result later is to fix the random number generator function and seed like the command below@footnote{You can use any integer for the seed. One recommendation is to run MakeCatalog without @option{--envseed} once and use the randomly generated seed that is printed on the terminal.}.
This is a very important point for the reproducibility of any statistical process involving random numbers, please see @ref{Generating random numbers}.
@example
export GSL_RNG_TYPE=ranlxs1
export GSL_RNG_SEED=1616493518
astmkcatalog lab.fits -h1 --zeropoint=$zeropoint -osbl.fits \
--sbl-area=25 --sbl-sigma=3 --meta-measures \
--valuesfile=r_detected.fits --valueshdu=INPUT-NO-SKY \
--upmaskfile=r_detected.fits --upmaskhdu=DETECTIONS \
--upnsigma=3 --checkuplim=1 --upnum=1000 \
--ids --upperlimit-sb --envseed
@end example
But where do all the random apertures of the upper-limit measurement fall on the image?
It is good to actually inspect their location to get a better understanding for the process and also detect possible bugs/biases.
When MakeCatalog is run with the @option{--checkuplim} option, it will save all the random locations and their measured brightness as an extra table HDU in the output (called @code{CHECK-UPPERLIMIT}).
With the first command below you can use Gnuastro's @command{asttable} and @command{astscript-ds9-region} to convert the successful aperture locations into a DS9 region file, and with the second you can load the region file over the detections and sky-subtracted image to visually see where they are.
@example
## Create a DS9 region file from the check table (activated
## with '--checkuplim')
$ asttable sbl.fits -hCHECK-UPPERLIMIT --noblank=RANDOM_SUM \
| astscript-ds9-region -c1,2 --mode=img \
--radius=$r_pixel \
--output=apers.reg
## Have a look at the regions in relation with NoiseChisel's
## detections.
$ astscript-fits-view r_detected.fits --ds9region=apers.reg
@end example
In this example, we were looking at a single-exposure image that has no correlated noise.
Because of this, the surface brightness limit and the upper-limit surface brightness are very close.
They will have a bigger difference on deep datasets with stronger correlated noise (that are the result of coadding many individual exposures).
As an exercise, please try measuring the upper-limit surface brightness level and surface brightness limit for the deep HST data that we used in the previous tutorial (@ref{General program usage tutorial}).
@node Achieved surface brightness level, Extract clumps and objects, Image surface brightness limit, Detecting large extended targets
@subsection Achieved surface brightness level
In @ref{NoiseChisel optimization} we customized NoiseChisel for a single-exposure SDSS image of the M51 group and in @ref{Image surface brightness limit} we measured the surface brightness limit and the upper-limit surface brightness level (which are both measures of the noise level).
In this section, let's do some measurements on the outer-most edges of the M51 group to see how they relate to the noise measurements found in the previous section.
@cindex Opening
For this measurement, we will need to estimate the average flux on the outer edges of the detection.
Fortunately all this can be done with a few simple commands using @ref{Arithmetic} and @ref{MakeCatalog}.
First, let's separate each detected region, or give a unique label/counter to all the connected pixels of NoiseChisel's detection map with the command below.
Recall that with the @code{set-} operator, the popped operand will be given a name (@code{det} in this case) for easy usage later.
@example
$ astarithmetic r_detected.fits -hDETECTIONS set-det \
det 2 connected-components -olabeled.fits
@end example
You can find the label of the main galaxy visually (by opening the image and hovering your mouse over the M51 group's label).
But to have a little more fun, let's do this automatically (which is necessary in a general scenario).
The M51 group detection is by far the largest detection in this image; this allows us to find its ID/label easily.
We will first run MakeCatalog to find the area of all the labels, then we will use Table to find the ID of the largest object and keep it as a shell variable (@code{id}):
@example
## Run MakeCatalog to find the area of each label.
$ astmkcatalog labeled.fits --ids --geo-area -h1 -ocat.fits
## Sort the table by the area column.
$ asttable cat.fits --sort=AREA_FULL
## The largest object is the last one, so we will use '--tail'.
$ asttable cat.fits --sort=AREA_FULL --tail=1
## We only want the ID, so let's only ask for that column:
$ asttable cat.fits --sort=AREA_FULL --tail=1 --column=OBJ_ID
## Now, let's put this result in a variable (instead of printing)
$ id=$(asttable cat.fits --sort=AREA_FULL --tail=1 --column=OBJ_ID)
## Just to confirm everything is fine.
$ echo $id
@end example
@noindent
We can now use the @code{id} variable to reject all other detections:
@example
$ astarithmetic labeled.fits $id eq -oonly-m51.fits
@end example
Open the image and have a look.
To separate the outer edges of the detections, we will need to ``erode'' the M51 group detection.
So in the same Arithmetic command as above, we will erode three times (to have more pixels and thus less scatter), using a maximum connectivity of 2 (8-connected neighbors).
We will then save the output in @file{eroded.fits}.
@example
$ astarithmetic labeled.fits $id eq 2 erode 2 erode 2 erode \
-oeroded.fits
@end example
@noindent
In @file{labeled.fits}, we can now set all the 1-valued pixels of @file{eroded.fits} to 0 using Arithmetic's @code{where} operator added to the previous command.
We will need the pixels of the M51 group in @code{labeled.fits} two times: once to do the erosion, another time to find the outer pixel layer.
To do this (and be efficient and more readable) we will use the @code{set-i} operator (to give this image the name `@code{i}').
In this way we can use it any number of times afterwards, while only reading it from disk and finding M51's pixels once.
@example
$ astarithmetic labeled.fits $id eq set-i i \
i 2 erode 2 erode 2 erode 0 where -oedge.fits
@end example
Open the image and have a look.
You'll see that the detected edge of the M51 group is now clearly visible.
You can use @file{edge.fits} to mark (set to blank) this boundary on the input image and get a visual feeling of how far it extends:
@example
$ astarithmetic r.fits -h0 edge.fits nan where -oedge-masked.fits
@end example
To quantify how deep we have detected the low-surface brightness regions (in units of signal-to-noise ratio), we will use the command below.
In short, it just divides all the non-zero pixels of @file{edge.fits} in the sky-subtracted input (first extension of NoiseChisel's output) by the pixel standard deviation of the same pixel.
This will give us a signal-to-noise ratio image.
The mean value of this image shows the level of surface brightness that we have achieved.
You can also break the command below into multiple calls to Arithmetic and create temporary files to understand it better.
However, if you have a look at @ref{Reverse polish notation} and @ref{Arithmetic operators}, you should be able to easily understand what your computer does when you run this command@footnote{@file{edge.fits} (extension @code{1}) is a binary (0 or 1 valued) image.
Applying the @code{not} operator on it, just flips all its pixels (from @code{0} to @code{1} and vice-versa).
Using the @code{where} operator, we are then setting all the newly 1-valued pixels (pixels that are not on the edge) to NaN/blank in the sky-subtracted input image (@file{r_detected.fits}, extension @code{INPUT-NO-SKY}, which we call @code{skysub}).
We are then dividing all the non-blank pixels (only those on the edge) by the sky standard deviation (@file{r_detected.fits}, extension @code{SKY_STD}, which we called @code{skystd}).
This gives the signal-to-noise ratio (S/N) for each of the pixels on the boundary.
Finally, with the @code{meanvalue} operator, we are taking the mean value of all the non-blank pixels and reporting that as a single number.}.
@example
$ astarithmetic edge.fits -h1 set-edge \
r_detected.fits -hSKY_STD set-skystd \
r_detected.fits -hINPUT-NO-SKY set-skysub \
skysub skystd / edge not nan where meanvalue --quiet
@end example
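If you would like to try the multi-call approach mentioned above, here is a minimal sketch (the intermediate file names are only illustrative):
@example
## Keep only the edge pixels of the sky-subtracted image.
$ astarithmetic r_detected.fits -hINPUT-NO-SKY edge.fits -h1 \
                not nan where --output=edge-skysub.fits
## Divide them by the sky standard deviation (per-pixel S/N).
$ astarithmetic edge-skysub.fits -h1 r_detected.fits -hSKY_STD / \
                --output=edge-sn.fits
## The mean of the non-blank pixels is the reported value.
$ aststatistics edge-sn.fits --mean
@end example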
@cindex Surface brightness
We have thus detected the wings of the M51 group down to roughly 1/3rd of the noise level in this image, which is a very good achievement!
But the per-pixel S/N is a relative measurement.
Let's also measure the depth of our detection in absolute surface brightness units, or magnitudes per square arcsecond (see @ref{Brightness flux magnitude}).
We will also ask for the S/N and magnitude of the full edge we have defined.
Fortunately doing this is very easy with Gnuastro's MakeCatalog:
@example
$ astmkcatalog edge.fits -h1 --valuesfile=r_detected.fits \
--zeropoint=22.5 --ids --sb --sn --magnitude
$ asttable edge_cat.fits
1 25.6971 55.2406 15.8994
@end example
We have thus reached an outer surface brightness of @mymath{25.70} magnitudes/arcsec@mymath{^2} (second column in @file{edge_cat.fits}) on this single exposure SDSS image!
This is very similar to the surface brightness limit measured in @ref{Image surface brightness limit} (which is a big achievement!).
But another point in the result above is very interesting: the total S/N of the edge is @mymath{55.24} with a total edge magnitude@footnote{You can run MakeCatalog on @file{only-m51.fits} instead of @file{edge.fits} to see the full magnitude of the M51 group in this image.} of 15.90!
This is very large for such a faint signal (recall that the mean S/N per pixel was 0.32) and shows a very important point in the study of galaxies:
while the per-pixel signal in their outer edges may be very faint (and invisible to the eye in noise), a lot of signal hides deeply buried in the noise (for roughly uncorrelated noise, the total S/N grows with the square root of the number of pixels, so many thousands of individually faint pixels can add up to a highly significant total).
In interpreting this value, you should just keep in mind that NoiseChisel works based on the contiguity of signal in the pixels.
Therefore the larger the object, the deeper NoiseChisel can carve it out of the noise (for the same outer surface brightness).
In other words, this reported depth is the depth we have reached for this object in this dataset, processed with this particular NoiseChisel configuration.
If the M51 group in this image was larger/smaller than this (the field of view was smaller/larger), or if the image was from a different instrument, or if we had used a different configuration, we would go deeper/shallower.
@node Extract clumps and objects, , Achieved surface brightness level, Detecting large extended targets
@subsection Extract clumps and objects (Segmentation)
In @ref{NoiseChisel optimization} we found a good detection map over the image, so pixels harboring signal have been differentiated from those that do not.
For noise-related measurements like the surface brightness limit, this is fine.
However, after finding the pixels with signal, you are most likely interested in knowing the sub-structure within them.
For example, how many star-forming regions (those bright dots along the spiral arms) of M51 are within this image?
What are the colors of each of these star-forming regions?
In the outermost wings of M51, which pixels belong to background galaxies and which to foreground stars?
And many more similar questions.
To address these questions, you can use @ref{Segment} to identify all the ``clumps'' and ``objects'' over the detection.
@example
$ astsegment r_detected.fits --output=r_segmented.fits
$ ds9 -mecube r_segmented.fits -cmap sls -zoom to fit -scale limits 0 2
@end example
@cindex DS9
@cindex SAO DS9
Open the output @file{r_segmented.fits} as a multi-extension data cube with the second command above and flip through the first and second extensions, zoom-in to the spiral arms of M51 and see the detected clumps (all pixels with a value larger than 1 in the second extension).
To optimize the parameters and make sure you have detected what you wanted, we recommend visually inspecting the detected clumps on the input image.
For visual inspection, you can make a simple shell script like below.
It will first call MakeCatalog to estimate the positions of the clumps, then make an SAO DS9 region file and open ds9 with the image and region file.
Recall that in a shell script, the numeric variables (like @code{$1}, @code{$2}, and @code{$3} in the example below) represent the arguments given to the script.
To create the shell script, using your favorite text editor, put the contents below into a file called @file{check-clumps.sh}.
Recall that everything after a @code{#} is just a comment to help you understand the command (so read them!).
Also note that if you are copying from the PDF version of this book, fix the quotation marks manually (PDF viewers can replace them with characters the shell does not recognize).
@example
#! /bin/bash
set -e # Stop execution when there is an error.
set -u # Stop execution when a variable is not initialized.
# Run MakeCatalog to write the coordinates into a FITS table.
# Default output is `$1_cat.fits'.
astmkcatalog $1.fits --clumpscat --ids --ra --dec
# Use Gnuastro's Table and astscript-ds9-region to build the DS9
# region file (a circle of radius 1 arcsecond at each point).
asttable $1"_cat.fits" -hCLUMPS -cRA,DEC \
| astscript-ds9-region -c1,2 --mode=wcs --radius=1 \
--output=$1.reg
# Show the image (with the requested color scale) and the region file.
ds9 -geometry 1800x3000 -mecube $1.fits -zoom to fit \
-scale limits $2 $3 -regions load all $1.reg
# Clean up (delete intermediate files).
rm $1"_cat.fits" $1.reg
@end example
@noindent
Finally, you just have to activate the script's executable flag with the command below.
This will enable you to directly/easily call the script as a command.
@example
$ chmod +x check-clumps.sh
@end example
@cindex AWK
@cindex GNU AWK
This script does not expect the @file{.fits} suffix of the input's filename as the first argument, because the script produces intermediate files (a catalog and DS9 region file, which are later deleted).
We do not want multiple instances of the script (running on different files in the same directory) to collide (read/write the same intermediate files).
Therefore, we have used suffixes added to the input's name to identify the intermediate files; note how all the @code{$1} instances in the commands are followed by a suffix.
If you want to keep the intermediate files, put a @code{#} at the start of the last line.
The few, but high-valued, bright pixels in the central parts of the galaxies can hinder easy visual inspection of the fainter parts of the image.
With the second and third arguments to this script, you can set the numerical values of the color map (first is minimum/black, second is maximum/white).
You can call this script with any@footnote{Some modifications are necessary based on the input dataset: depending on the dynamic range, you have to adjust the second and third arguments.
But more importantly, depending on the dataset's world coordinate system, you have to adjust the @option{--radius} option of @command{astscript-ds9-region}.
Otherwise the circle regions can be too small/large.} output of Segment (when @option{--rawoutput} is @emph{not} used) with a command like this:
@example
$ ./check-clumps.sh r_segmented -0.1 2
@end example
Go ahead and run this command.
You will see the intermediate processing being done and finally it opens SAO DS9 for you with the regions superimposed on all the extensions of Segment's output.
The script will only finish (and give you control of the command-line) when you close DS9.
If you need access to the command-line before closing DS9, add an @code{&} after the end of the command above.
@cindex Purity
@cindex Completeness
While DS9 is open, slide the dynamic range (values for black and white, or minimum/maximum values in different color schemes) and zoom into various regions of the M51 group to see if you are satisfied with the detected clumps.
Do not forget that through the ``Cube'' window that is opened along with DS9, you can flip through the extensions and see the actual clumps also.
The questions you should be asking yourself are these: 1) Which real clumps (as you visually @emph{feel}) have been missed? In other words, is the @emph{completeness} good? 2) Are there any clumps which you @emph{feel} are false? In other words, is the @emph{purity} good?
Note that completeness and purity are not independent of each other, they are anti-correlated: the higher your purity, the lower your completeness and vice-versa.
You can see this by playing with the purity level using the @option{--snquant} option.
Run Segment as shown above again with @code{-P} and see its default value.
Then increase/decrease it for higher/lower purity and check the result as before.
You will see that if you want the best purity, you have to sacrifice completeness and vice versa.
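For example (just a sketch, with an arbitrary value of 0.99), you can check the current threshold and then re-run Segment with a stricter one:
@example
$ astsegment r_detected.fits -P | grep snquant
$ astsegment r_detected.fits --snquant=0.99 \
             --output=r_segmented-pure.fits
@end example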
One interesting region to inspect in this image is the many bright peaks around the central parts of M51.
Zoom into that region and inspect how many of them have actually been detected as true clumps.
Do you have a good balance between completeness and purity? Also look out far into the wings of the group and inspect the completeness and purity there.
An easier way to inspect completeness (and only completeness) is to mask all the pixels detected as clumps and visually inspect the remaining pixels.
You can do this using Arithmetic in a command like below.
For easy reading of the command, we will define the shell variables @code{in} and @code{clumps} for the respective image and extension names, and save the output in @file{clumps-masked.fits}.
@example
$ in="r_segmented.fits -hINPUT-NO-SKY"
$ clumps="r_segmented.fits -hCLUMPS"
$ astarithmetic $in $clumps 0 gt nan where -oclumps-masked.fits
@end example
Inspecting @file{clumps-masked.fits}, you can see some very diffuse peaks that have been missed, especially as you go farther away from the group center and into the diffuse wings.
This is due to the fact that with this configuration, we have focused more on the sharper clumps.
To put the focus more on diffuse clumps, you can use a wider convolution kernel.
Using a larger kernel can also help in detecting the existing clumps to fainter levels (thus better separating them from the surrounding diffuse signal).
You can make any kernel easily using the @option{--kernel} option in @ref{MakeProfiles}.
But note that a larger kernel will also wash out many of the sharp/small clumps close to the center of M51, as well as some smaller peaks on the wings.
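As a sketch of this experiment (the FWHM of 3 pixels below is only an illustrative choice), you could build a wider kernel with MakeProfiles and give it to Segment:
@example
$ astmkprof --kernel=gaussian,3,5 --oversample=1 \
            --output=kernel-wide.fits
$ astsegment r_detected.fits --kernel=kernel-wide.fits \
             --output=r_segmented-wide.fits
@end example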
Please continue playing with Segment's configuration to obtain a more complete result (while keeping reasonable purity).
We will finish the discussion on finding true clumps at this point.
The properties of the clumps within M51, or the background objects can then easily be measured using @ref{MakeCatalog}.
To measure the properties of the background objects (detected as clumps over the diffuse region), you should not mask the diffuse region.
When measuring clump properties with @ref{MakeCatalog} and the @option{--clumpscat} option, the ambient flux (from the diffuse region) is calculated and subtracted.
If the diffuse region is masked, its effect on the clump brightness cannot be calculated and subtracted.
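For example, a minimal sketch of such a measurement (the zero point of 22.5 is the SDSS value used earlier, and the output name is just illustrative) would be:
@example
$ astmkcatalog r_segmented.fits --clumpscat --ids --magnitude \
               --zeropoint=22.5 --output=clumps-cat.fits
@end example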
To keep this tutorial short, we will stop here.
See @ref{Segmentation and making a catalog} and @ref{Segment} for more on using Segment, producing catalogs with MakeCatalog and using those catalogs.
@node Building the extended PSF, Sufi simulates a detection, Detecting large extended targets, Tutorials
@section Building the extended PSF
Deriving the extended PSF of an image is very important in many aspects of the analysis of the objects within it.
Gnuastro has a set of installed scripts designed to simplify this process, following the recipes that were initially introduced in Infante-Sainz et al. @url{https://arxiv.org/abs/1911.01430,2020} and further improved in Eskandarlou et al. @url{https://arxiv.org/abs/2510.12940,2025} (adding more visualizations to help follow the process), as they are now implemented in Gnuastro.
This tutorial is for those who are not yet familiar with the process and would like to start from scratch on a pre-defined dataset, with lots of extra information on how to interpret the results.
For more advanced users (who have already read this), the full documentation of each of the scripts is available in @ref{PSF construction and subtraction} and an overview of the process is given in @ref{Overview of the PSF scripts}.
@menu
* Preparing input for extended PSF:: Which stars should be used?
* Saturated pixels and Segment's clumps:: Saturation is a major hurdle!
* One object for the whole detection:: Avoiding over-segmentation in objects.
* Building outer part of PSF:: Building the outermost PSF wings.
* Inner part of the PSF:: Going towards the PSF center.
* Uniting the different PSF components:: Merging all the components into one PSF.
* Subtracting the PSF:: Having the PSF, we now want to subtract it.
@end menu
@node Preparing input for extended PSF, Saturated pixels and Segment's clumps, Building the extended PSF, Building the extended PSF
@subsection Preparing input for extended PSF
We will use an image of the M51 galaxy group in the r (SDSS) band of the Javalambre Photometric Local Universe Survey (J-PLUS) to extract its extended PSF.
For more information on J-PLUS, and its unique features visit: @url{http://www.j-plus.es}.
First, let's download the image from the J-PLUS web page using @code{wget}.
But to have a generalizable and easy-to-read command, we will first define some base variables.
After the download is complete, open the image with SAO DS9 (or any other FITS viewer you prefer!) to have a feeling of the data (and of course, enjoy the beauty of M51 in such a wide field of view):
@example
$ urlend="jplus-dr2/get_fits?id=67510"
$ urlbase="http://archive.cefca.es/catalogues/vo/siap/"
$ mkdir jplus-dr2
$ wget $urlbase$urlend -O jplus-dr2/67510.fits.fz
$ astscript-fits-view jplus-dr2/67510.fits.fz
@end example
After enjoying the large field of view, have a closer look at the edges of the image.
Please zoom in to the corners.
You will see that on the edges, the pixel values are either zero or differ significantly from the main body of the image.
This is due to the dithering pattern that was used to make this image and happens in all imaging surveys@footnote{Recall the cropping in a previous tutorial for a similar reason (varying ``depth'' across the image): @ref{Dataset inspection and cropping}.}.
To avoid potential issues or problems that these regions may cause, we will first crop out the main body of the image with the command below.
To keep the top-level directory clean, let's also put the crop in a directory called @file{flat}.
@example
$ mkdir flat
$ astcrop jplus-dr2/67510.fits.fz --section=225:9275,150:9350 \
--mode=img -oflat/67510.fits
$ astscript-fits-view flat/67510.fits
@end example
@noindent
Please zoom into the edges again, you will see that they now have the same noise-level as the rest of the image (the problematic parts are now gone).
@node Saturated pixels and Segment's clumps, One object for the whole detection, Preparing input for extended PSF, Building the extended PSF
@subsection Saturated pixels and Segment's clumps
A constant-depth (flat) image was created in the previous section (@ref{Preparing input for extended PSF}).
As explained in @ref{Overview of the PSF scripts}, an important step when building the PSF is to mask other sources in the image.
Therefore, before going onto selecting stars, let's detect all significant signal, and identify the clumps of background objects over the wings of the extended PSF.
There is a problem, however: the saturated pixels of the bright stars are going to cause problems in the segmentation phase.
To see this problem, let's make a @mymath{1000\times1000} crop around a bright star to speed up the test (and its solution).
Afterwards we will apply the solution to the whole image.
@example
$ astcrop flat/67510.fits --mode=wcs --widthinpix --width=1000 \
--center=203.3916736,46.7968652 --output=saturated.fits
$ astnoisechisel saturated.fits --output=sat-nc.fits
$ astsegment sat-nc.fits --output=sat-seg.fits
$ astscript-fits-view sat-seg.fits
@end example
Have a look at the @code{CLUMPS} extension.
You will see that instead of a single clump at the center of the bright star, we have many clumps!
This has happened because of the saturated pixels!
When saturation occurs, the sharp peak of the profile is lost (like cutting off the tip of a mountain to build a telescope!) and all saturated pixels get a noisy value close to the saturation level.
To see this saturation noise run the last command again and in SAO DS9, set the ``Scale'' to ``min max'' and zoom into the center.
You will see the noisy saturation pixels at the center of the star in red.
This noise-at-the-peak disrupts Segment's assumption that clumps expand from a local maximum: each noisy peak is treated as a separate local maximum and thus a separate clump.
For more on how Segment defines clumps, see Section 3.2.1 and Figure 8 of Akhlaghi and Ichikawa @url{https://arxiv.org/abs/1505.01664,2015}.
To have the center identified as a single clump, we should mask these saturated pixels in a way that suits Segment's non-parametric methodology.
First we need to find the saturation level!
The saturation level is usually fixed for any survey or input data that you receive from a certain database, so you will usually have to do this only once (the first time you get data from that database).
Let's make a smaller crop of @mymath{50\times50} pixels around the star with the first command below.
With the next command, please look at the crop with DS9 to visually understand the problem.
You will see the saturated pixels as the noisy red pixels in the center of the image.
A non-saturated star will have a single pixel as the maximum and will not have such a large area covered by a noisy constant value (find a few stars in the image and see for yourself).
Visual and qualitative inspection of the process is very important for understanding the solution.
@example
$ astcrop saturated.fits --mode=wcs --widthinpix --width=50 \
--center=203.3916736,46.7968652 --output=sat-center.fits
$ astscript-fits-view sat-center.fits --ds9scale=minmax
@end example
@noindent
To quantitatively identify the saturation level in this image, let's have a look at the distribution of pixels with a value larger than 100 (above the noise level):
@example
$ aststatistics sat-center.fits --greaterequal=100
Histogram:
|*
|*
|*
|*
|* *
|** *
|*** **
|**** **
|****** ****
|********** * * * ******
|************************* ************ * *** ******* *** ************
|----------------------------------------------------------------------
@end example
The peak you see in the right end (larger values) of the histogram shows the saturated pixels (a constant level, with some scatter due to the large Poisson noise).
If there were no saturation, the number of pixels would decrease with increasing value, until reaching the maximum value of the profile in one pixel.
But that is not the case here.
Please try this experiment on a non-saturated (fainter) star to see what we mean.
If you still have not experimented on a non-saturated star, please stop reading this tutorial!
Please open @file{flat/67510.fits} in DS9, select a fainter/smaller star and repeat the last three commands (with a different center).
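As a sketch of that experiment (the @code{RA,DEC} below are placeholders; use the coordinates of the fainter star you selected in DS9):
@example
$ astcrop flat/67510.fits --mode=wcs --widthinpix --width=50 \
          --center=RA,DEC --output=faint-center.fits
$ astscript-fits-view faint-center.fits --ds9scale=minmax
$ aststatistics faint-center.fits --greaterequal=100
@end example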
After you have confirmed the point above (visually, and with the histogram), please continue with the rest of this tutorial.
Finding the saturation level is easy with Statistics (by using the @option{--lessthan} option until the histogram becomes as expected: only decreasing).
First, let's try @option{--lessthan=3000}:
@example
$ aststatistics sat-center.fits --greaterequal=100 --lessthan=3000
-------
Histogram:
|*
|*
|*
|*
|*
|**
|*** *
|**** *
|******* **
|*********** * * * * * * * ****
|************************* * ***** ******* ***** ** ***** * ********
|----------------------------------------------------------------------
@end example
@noindent
We still see an increase in the histogram around 3000.
Let's try a threshold of 2500:
@example
$ aststatistics sat-center.fits --greaterequal=100 --lessthan=2500
-------
Histogram:
|*
|*
|**
|**
|**
|**
|****
|*****
|*********
|************* * * * *
|********************************* ** ** ** *** ** * **** ** *****
|----------------------------------------------------------------------
@end example
The peak at the large end of the histogram has gone!
But let's have a closer look at the values (the resolution of an ASCII histogram is limited!).
To do this, we will ask Statistics to save the histogram into a table with the @option{--histogram} option, then look at the last 20 rows:
@example
$ aststatistics sat-center.fits --greaterequal=100 --lessthan=2500 \
--histogram --output=sat-center-hist.fits
$ asttable sat-center-hist.fits --tail=20
2021.1849112701 1
2045.0495397186 0
2068.9141681671 1
2092.7787966156 1
2116.6434250641 0
2140.5080535126 0
2164.3726819611 0
2188.2373104095 0
2212.101938858 1
2235.9665673065 1
2259.831195755 2
2283.6958242035 0
2307.560452652 0
2331.4250811005 1
2355.289709549 1
2379.1543379974 1
2403.0189664459 2
2426.8835948944 1
2450.7482233429 2
2474.6128517914 2
@end example
Since the number of points at the extreme end is increasing (from 1 to 2), a value of 2500 is still above the saturation level; a more reasonable saturation level for this image would be 2200.
Therefore, we can set the saturation level in this image@footnote{In raw exposures, this value is usually around 65000 (close to @mymath{2^{16}}, since most CCDs have 16-bit pixels; see @ref{Numeric data types}). But that is not the case here, because this is a processed/coadded image that has been calibrated.} to be 2200.
As an exercise, you can try automating this selection with AWK.
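A rough sketch of such automation (the stopping heuristic below, walking back from the brightest bin while the counts rise toward the tail, is only one of many possibilities; on a noisy histogram like this it may stop early, so always confirm the result visually) could be:
@example
$ asttable sat-center-hist.fits \
           | awk '@{v[NR]=$1; c[NR]=$2@}
                  END@{i=NR; while(i>1 && c[i-1]<=c[i]) i--;
                      print v[i]@}'
@end example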
Let's mask all such pixels with the command below:
@example
$ astarithmetic saturated.fits set-i i i 2200 gt nan where \
--output=sat-masked.fits
$ astscript-fits-view sat-masked.fits --ds9scale=minmax
@end example
Have a look at the peaks of several bright stars, not just the central very bright star.
Zoom into each of the peaks you see.
Besides the central very bright one that we were looking at closely until now, only one other star is saturated (its center is NaN, or Not-a-Number).
Try to find it.
But we are not done yet!
Please zoom-in to that central bright star and have another look at the edges of the vertical ``bleeding'' saturated pixels: there are strong positive/negative values touching them (almost like ``waves'').
These will also cause problems and have to be masked!
So with a small addition to the previous command, let's dilate the saturated regions (with 2-connectivity, or 8-connected neighbors) four times and have another look:
@example
$ astarithmetic saturated.fits set-i i i 2200 gt \
2 dilate 2 dilate 2 dilate 2 dilate \
nan where --output=sat-masked.fits
$ astscript-fits-view sat-masked.fits --ds9scale=minmax
@end example
Now that saturated pixels (and their problematic neighbors) have been masked, we can convolve the image (recall that Segment will use the convolved image for identifying clumps) with the command below.
However, we will use spatial-domain convolution, which can account for blank pixels (for more on the pros and cons of spatial and frequency domain convolution, see @ref{Spatial vs. Frequency domain}).
We will also create a Gaussian kernel with @mymath{\rm{FWHM}=2} pixels, truncated at @mymath{5\times\rm{FWHM}}.
@example
$ astmkprof --kernel=gaussian,2,5 --oversample=1 -okernel.fits
$ astconvolve sat-masked.fits --kernel=kernel.fits --domain=spatial \
--output=sat-masked-conv.fits
$ astscript-fits-view sat-masked-conv.fits --ds9scale=minmax
@end example
@noindent
Please zoom-in to the star and look closely to see how after spatial-domain convolution, the problematic pixels are still NaN.
But Segment requires the profile to start with a maximum value and decrease.
So before feeding into Segment, let's fill the blank values with the maximum value of the neighboring pixels in both the input and convolved images (see @ref{Interpolation operators}):
@example
$ astarithmetic sat-masked.fits 2 2 interpolate-maxofregion \
--output=sat-fill.fits
$ astarithmetic sat-masked-conv.fits 2 2 interpolate-maxofregion \
--output=sat-fill-conv.fits
$ astscript-fits-view sat-fill* --ds9scale=minmax
@end example
@noindent
Have a closer look at the opened images.
Please zoom-in (you will notice that they are already matched and locked, so they will both zoom-in together).
Go to the centers of the saturated stars and confirm how they are filled with the largest non-blank pixel.
We can now feed this image to NoiseChisel and Segment as the convolved image:
@example
$ astnoisechisel sat-fill.fits --convolved=sat-fill-conv.fits \
--output=sat-nc.fits
$ astsegment sat-nc.fits --convolved=sat-fill-conv.fits \
--output=sat-seg.fits --rawoutput
$ ds9 -mecube sat-seg.fits -zoom to fit -scale limits -1 1
@end example
@noindent
See the @code{CLUMPS} extension.
Do you see how the whole center of the star has indeed been identified as a single clump?
We thus achieved our aim and did not let the saturated pixels harm the identification of the center!
If the issue were only clumps (as in normal deep-image processing), this would be the end of Segment's special considerations.
However, in the scenario here, with the very extended wings of the bright stars, it usually happens that background objects become ``clumps'' in the outskirts and will rip the bright star outskirts into separate ``objects''.
In the next section (@ref{One object for the whole detection}), we will describe how you can modify Segment to avoid this issue.
@node One object for the whole detection, Building outer part of PSF, Saturated pixels and Segment's clumps, Building the extended PSF
@subsection One object for the whole detection
In @ref{Saturated pixels and Segment's clumps}, we described how you can run Segment such that saturated pixels do not interfere with its clumps.
However, due to the very extended wings of the PSF, the default definition of ``objects'' should also be modified for the scenario here.
To better see the problem, let's now inspect the @code{OBJECTS} extension, focusing on those objects with a label between 50 and 150 (which include the main star):
@example
$ astscript-fits-view sat-seg.fits -hOBJECTS --ds9scale="limits 50 150"
@end example
We can see that the detection corresponding to the star has been broken into different objects.
This is not a good object segmentation image for our scenario here, since those objects in the outer wings of the bright star's detection harbor a lot of the extended PSF.
We want to keep them with the same ``object'' label as the star (we only need to mask the ``clumps'' of the background sources).
To do this, we will make the following changes to Segment's options (see @ref{Segmentation options} for more on these options):
@itemize
@item
Since we want the extended diffuse flux of the PSF to be taken as a single object, we want all the grown clumps to touch.
Therefore, it is necessary to decrease @option{--gthresh} to very low values, like @mymath{-10}.
Recall that its value is in units of the input standard deviation, so @option{--gthresh=-10} corresponds to @mymath{-10\sigma}.
The default value is not intended for such extended sources, which dominate all the background sources.
@item
Since we want all connected grown clumps to be counted as a single object in any case, we will set @option{--objbordersn=0} (its smallest possible value).
@end itemize
@noindent
Let's make these changes and check if the star has been kept as a single object in the @code{OBJECTS} extension or not:
@example
$ astsegment sat-nc.fits --convolved=sat-fill-conv.fits \
--gthresh=-10 --objbordersn=0 \
--output=sat-seg.fits --rawoutput
$ astscript-fits-view sat-seg.fits -hOBJECTS --ds9scale="limits 50 150"
@end example
Now we can extend these same steps to the whole image.
To detect signal, we can run NoiseChisel using the command below.
We modified the default values of two of its options; the reasons for these changes are described below.
See @ref{Detecting large extended targets} for more on optimizing NoiseChisel.
@itemize
@item
Since the image is so large, we have increased @option{--outliernumngb} to get better outlier statistics on the tiles.
The default value is primarily for small images, so this is usually the first thing you should do when running NoiseChisel on a real/large image.
@item
Since the image is not too deep (made from few exposures), it does not have strong correlated noise, so we will decrease @option{--detgrowquant} and increase @option{--detgrowmaxholesize} to better extract signal.
@end itemize
@noindent
Furthermore, since both NoiseChisel and Segment need a convolved image, we will do the convolution before and feed it to both (to save running time).
But in the first command below, let's delete all the temporary files we made above.
Since the image is large (over 300 MB), to avoid wasting storage, any temporary file that is no longer necessary for later processing is deleted after it is used.
You can visually check each of them with DS9 before deleting them (or not delete them at all!).
Generally, within a pipeline it is best to remove such large temporary files, because space runs out much faster than you think (for example, once you get good results and want to use more fields).
@example
$ rm *.fits
$ mkdir label
$ astmkprof --kernel=gaussian,2,5 --oversample=1 \
-olabel/kernel.fits
$ astarithmetic flat/67510.fits set-i i i 2200 gt \
2 dilate 2 dilate 2 dilate 2 dilate nan where \
--output=label/67510-masked-sat.fits
$ astconvolve label/67510-masked-sat.fits --kernel=label/kernel.fits \
--domain=spatial --output=label/67510-masked-conv.fits
$ rm label/kernel.fits
$ astarithmetic label/67510-masked-sat.fits 2 2 interpolate-maxofregion \
--output=label/67510-fill.fits
$ astarithmetic label/67510-masked-conv.fits 2 2 interpolate-maxofregion \
--output=label/67510-fill-conv.fits
$ rm label/67510-masked-conv.fits
$ astnoisechisel label/67510-fill.fits --outliernumngb=100 \
--detgrowquant=0.8 --detgrowmaxholesize=100000 \
--convolved=label/67510-fill-conv.fits \
--output=label/67510-nc.fits
$ rm label/67510-fill.fits
$ astsegment label/67510-nc.fits --output=label/67510-seg-raw.fits \
--convolved=label/67510-fill-conv.fits --rawoutput \
--gthresh=-10 --objbordersn=0
$ rm label/67510-fill-conv.fits
$ astscript-fits-view label/67510-seg-raw.fits
@end example
We see that the saturated pixels have not caused any problem and the central clumps/objects of bright stars are now a single clump/object.
We can now proceed to estimating the outer PSF.
But before that, let's make a ``standard'' Segment output: one that can safely be fed into MakeCatalog for measurements and that contains all the necessary outputs of this whole process in a single file (as multiple extensions).
The main problem is again the saturated pixels: we interpolated them to be the maximum of their nearby pixels.
But this will cause problems in any measurement that is done over those regions.
To let MakeCatalog know that those pixels should not be used, the first extension of the file given to MakeCatalog should have blank values on those pixels.
We will do this with the commands below:
@example
## First HDU of Segment (Sky-subtracted input)
$ astarithmetic label/67510-nc.fits -hINPUT-NO-SKY \
label/67510-masked-sat.fits isblank nan where \
--output=label/67510-seg.fits
$ astfits label/67510-seg.fits --update=EXTNAME,INPUT-NO-SKY
## Second and third HDUs: CLUMPS and OBJECTS
$ astfits label/67510-seg-raw.fits --copy=CLUMPS --copy=OBJECTS \
--output=label/67510-seg.fits
## Fourth HDU: Sky standard deviation (from NoiseChisel):
$ astfits label/67510-nc.fits --copy=SKY_STD \
--output=label/67510-seg.fits
## Clean up all the unnecessary files:
$ rm label/67510-masked-sat.fits label/67510-nc.fits \
label/67510-seg-raw.fits
@end example
@noindent
You can now simply run MakeCatalog on this image and be sure that saturated pixels will not affect the measurements.
As one example, you can use MakeCatalog to find the clumps containing saturated pixels: recall that the @option{--area} column only calculates the area of non-blank pixels, while @option{--geo-area} calculates the area of the label (independent of their blank-ness in the values image):
@example
$ astmkcatalog label/67510-seg.fits --ids --ra --dec --area \
--geo-area --clumpscat --output=cat.fits
@end example
The information of the clumps that have been affected by saturation can easily be found by selecting those with a differing value in the @code{AREA} and @code{AREA_FULL} columns:
@example
## With AWK (the second command counts the number of rows)
$ asttable cat.fits -hCLUMPS | awk '$5!=$6'
$ asttable cat.fits -hCLUMPS | awk '$5!=$6' | wc -l
## Using Table arithmetic (compared to AWK, you can use column
## names, save as FITS, and be faster):
$ asttable cat.fits -hCLUMPS -cRA,DEC --noblankend=3 \
-c'arith AREA AREA AREA_FULL eq nan where'
## Remove the table (which was just for a demo)
$ rm cat.fits
@end example
@noindent
We are now ready to start building the outer parts of the PSF in @ref{Building outer part of PSF}.
@node Building outer part of PSF, Inner part of the PSF, One object for the whole detection, Building the extended PSF
@subsection Building outer part of PSF
In @ref{Saturated pixels and Segment's clumps}, and @ref{One object for the whole detection}, we described how to create a Segment clump and object map, while accounting for saturated stars and not having over-fragmentation of objects in the outskirts of stars.
We are now ready to start building the extended PSF.
First we will build the outer parts of the PSF, so we want the brightest stars.
You will see that we have several bright stars in this very large field of view, but we do not yet have a feeling for how many there are, or at what magnitudes.
So let's use Gnuastro's Query program to find the magnitudes of the brightest stars (those brighter than g-magnitude 10 in Gaia data release 3, or DR3).
For more on Query, see @ref{Query}.
@example
$ astquery gaia --dataset=dr3 --overlapwith=flat/67510.fits \
--range=phot_g_mean_mag,-inf,10 \
--output=flat/67510-bright.fits
@end example
Now, we can easily visualize the magnitudes and positions of these stars using @command{astscript-ds9-region} and the command below (for more on this script, see @ref{SAO DS9 region files from table}).
@example
$ astscript-ds9-region flat/67510-bright.fits -cra,dec \
--namecol=phot_g_mean_mag \
--command="ds9 flat/67510.fits -zoom to fit -zscale"
@end example
You can see that we have several stars between magnitudes 6 and 10.
Let's use @file{astscript-psf-select-stars} in the command below to select the relevant stars in the image (the brightest, with a magnitude between 6 and 10).
The advantage of using this script (instead of a simple @option{--range} in Table), is that it will also check distances to nearby stars and reject those that are too close (and not good for constructing the PSF).
Since we have very bright stars in this very wide-field image, we will also increase the distance to nearby neighbors with brighter or similar magnitudes (the default value is 1 arcmin).
To do this, we will set @option{--mindistdeg=0.02}, which corresponds to 1.2 arcmin.
The details of the options for this script are discussed in @ref{Invoking astscript-psf-select-stars}.
@example
$ mkdir outer
$ astscript-psf-select-stars flat/67510.fits \
--magnituderange=6,10 --mindistdeg=0.02 \
--output=outer/67510-6-10.fits
@end example
@noindent
Let's have a look at the selected stars in the image (it is very important to visually check every step when you are first discovering a new dataset).
@example
$ astscript-ds9-region outer/67510-6-10.fits -cra,dec \
--namecol=phot_g_mean_mag \
--command="ds9 flat/67510.fits -zoom to fit -zscale"
@end example
Now that the catalog of good stars is ready, it is time to construct the individual stamps from the catalog above.
To create stamps, first, we need to crop a fixed-size box around each isolated star in the catalog.
The contaminating objects in the crop should be masked and, finally, the fluxes in these cropped images should be normalized.
To do these, we will use @file{astscript-psf-stamp} (for more on this script see @ref{Invoking astscript-psf-stamp}).
One of the most important parameters for this script is the normalization radii, @option{--normradii}.
This parameter defines a ring for the flux normalization of each star stamp.
The normalization of the flux is necessary because each star has a different brightness, and consequently, it is crucial to have all the stamps at the same flux level in the same region.
Otherwise the final coadd of the different stamps would make no sense.
Depending on the PSF shape, internal reflections, ghosts, saturated pixels, and other systematics, it is necessary to choose @option{--normradii} appropriately.
The selection of the normalization radii is something that requires a good understanding of the data.
To do that, let's use two useful options that will help us check the data: @option{--tmpdir} and @option{--keeptmp}:
@itemize
@item
With @option{--tmpdir=finding-normradii}, all temporary files, including the radial profiles, will be saved in that directory (instead of an internally-created name).
@item
With @option{--keeptmp} we will not remove the temporary files, so it is possible to have a look at them (by default the temporary directory gets deleted at the end).
Note that it is necessary to specify @option{--normradii} even if we do not yet know the final values; otherwise the script will not generate the radial profiles.
@end itemize
@noindent
As a consequence, in this step we set the normalization radii equal to the size of the stamps.
By doing this, the script will generate the radial profile of the entire stamp.
In this particular step we set it to @code{--normradii=500,510}.
We also use the @option{--nocentering} option to disable sub-pixel warping in this phase (it is only relevant for the central part of the PSF).
Furthermore, since there are several stars, we iterate over each row of the catalog using a while loop.
@example
$ counter=1
$ mkdir finding-normradii
$ asttable outer/67510-6-10.fits \
| while read -r ra dec mag; do
astscript-psf-stamp label/67510-seg.fits \
--mode=wcs \
--nocentering \
--center=$ra,$dec \
--normradii=500,510 \
--widthinpix=1000,1000 \
--segment=label/67510-seg.fits \
--output=finding-normradii/$counter.fits \
--tmpdir=finding-normradii --keeptmp; \
counter=$((counter+1)); \
done
@end example
First let's have a look at all the masked postage stamps of the cropped stars.
Once they all open, feel free to zoom-in, they are all matched and locked.
It is always good to check the different stamps to ensure their quality and to spot possible two-dimensional features that are difficult to detect from the radial profiles (such as ghosts and internal reflections).
@example
$ astscript-fits-view finding-normradii/cropped-masked*.fits
@end example
@noindent
If everything looks good in the image, let's open all the radial profiles and visually check those with the command below.
Note that @command{astscript-fits-view} calls the @command{topcat} graphical user interface (GUI) program to visually inspect (plot) tables.
If you do not already have it, see @ref{TOPCAT}.
@example
$ astscript-fits-view finding-normradii/rprofile*.fits
@end example
After some study of this data, we could say that a good normalization ring covers the pixels between R=20 and R=30.
Such a ring ensures a high number of pixels, so the estimation of the flux normalization will be robust.
Also, at such a distance from the center, the signal-to-noise ratio is high and there are no obvious features that can affect the normalization.
Note that the profiles differ because we are considering a wide range of magnitudes, so the fainter stars are much noisier.
However, in this tutorial we will keep these stars in order to have a higher number of stars for the outer part.
In a real-case scenario, we should look for stars with much more similar brightness (a smaller range of magnitudes), to avoid losing signal-to-noise ratio through the inclusion of fainter stars.
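In such a scenario, a sketch of the narrower selection (the 6 to 8 range below is only illustrative) could be:
@example
$ astscript-psf-select-stars flat/67510.fits \
           --magnituderange=6,8 --mindistdeg=0.02 \
           --output=outer/67510-6-8.fits
@end example

For this tutorial, however, let's continue with the wider selection and build the final stamps: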
@example
$ rm -r finding-normradii
$ counter=1
$ mkdir outer/stamps
$ asttable outer/67510-6-10.fits \
| while read -r ra dec mag; do
astscript-psf-stamp label/67510-seg.fits \
--mode=wcs \
--nocentering \
--center=$ra,$dec \
--normradii=20,30 \
--widthinpix=1000,1000 \
--segment=label/67510-seg.fits \
--output=outer/stamps/67510-$counter.fits; \
counter=$((counter+1)); \
done
@end example
After the stamps are created, we need to coadd them together with a simple Arithmetic command (see @ref{Coadding operators}).
The coadd is done using the sigma-clipped mean operator that will preserve more of the signal, while rejecting outliers (more than @mymath{3\sigma} with a tolerance of @mymath{0.2}, for more on sigma-clipping see @ref{Sigma clipping}).
Just recall that we need to specify the number of inputs to the coadding operators, so we read the list of images and count them in a separate shell variable before calling Arithmetic.
@c We are not using something like 'ls' because on some systems, 'ls'
@c always prints outputs on a one-file-per-line format and this will cause
@c a problem in Arithmetic, see:
@c https://lists.gnu.org/archive/html/bug-gnuastro/2023-01/msg00007.html
@example
$ numimgs=$(echo outer/stamps/*.fits | wc -w)
$ astarithmetic outer/stamps/*.fits $numimgs 3 0.2 sigclip-mean \
-g1 --output=outer/coadd.fits --wcsfile=none \
--writeall
@end example
@noindent
Did you notice the @option{--wcsfile=none} option above?
With it, the coadded image no longer has any WCS information.
This is natural, because the coadded image does not correspond to any specific region of the sky any more.
Also note the @option{--writeall} option: it is necessary because @code{sigclip-mean} returns two datasets: the actual coadded image and an image showing how many inputs were ultimately used for each pixel.
Because of this, the output has two HDUs, containing both these images respectively.
This is a good check to help improve/debug your results.
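To confirm this yourself, you can list the HDUs of the coadded file (the second HDU holds the number of inputs used for each pixel):
@example
$ astfits outer/coadd.fits
@end example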
Let's compare this coadded PSF with the images of the individual stars that were used to create it.
You can clearly see that the number of masked pixels is significantly decreased and the PSF is much ``cleaner''.
@example
$ astscript-fits-view outer/coadd.fits outer/stamps/*.fits
@end example
However, the saturation in the center still remains.
Also, because we did not have too many images, some regions are still very noisy.
If we had more bright stars in our selected magnitude range, we could have filled those outer remaining patches.
In a large survey like J-PLUS (that we are using here), you can simply look into other fields that were observed soon before/after the image ID 67510 that we used here (to have a similar PSF) and get more stars in those images to add to these.
In fact, the J-PLUS DR2 image ID of this field (67510) was intentionally kept in the file names during the steps above, to show how easy it is to use images from other fields and blend them all into the output PSF.
@node Inner part of the PSF, Uniting the different PSF components, Building outer part of PSF, Building the extended PSF
@subsection Inner part of the PSF
In @ref{Building outer part of PSF}, we were able to create a coadd of the outer-most behavior of the PSF in a J-PLUS survey image.
But the central part, which was affected by saturation and non-linearity, still remains, and we do not yet have a ``complete'' PSF!
In this section, we will use the same steps as before to make coadds of more inner regions of the PSF, to ultimately unite them all into a single PSF in @ref{Uniting the different PSF components}.
For the outer PSF, we selected stars in the magnitude range of 6 to 10.
So let's have a look and see how many stars we have in the magnitude range of 12 to 13 with a more relaxed condition on the minimum distance for neighbors, @option{--mindistdeg=0.01} (36 arcsec, since these stars are fainter), and use the ds9 region script to visually inspect them:
@example
$ mkdir inner
$ astscript-psf-select-stars flat/67510.fits \
--magnituderange=12,13 --mindistdeg=0.01 \
--output=inner/67510-12-13.fits
$ astscript-ds9-region inner/67510-12-13.fits -cra,dec \
--namecol=phot_g_mean_mag \
--command="ds9 flat/67510.fits -zoom to fit -zscale"
@end example
We have 41 stars, but if you zoom into their centers, you will see that they no longer have any major vertical-bleeding saturation.
Only the very central core of some of the stars is saturated.
We can therefore use these stars to fill the strong bleeding footprints that were present in the outer coadd of @file{outer/coadd.fits}.
Similar to before, let's build ready-to-coadd crops of these stars.
To get a better feeling of the normalization radii, follow the same steps of @ref{Building outer part of PSF} (setting @option{--tmpdir} and @option{--keeptmp}).
In this case, since the stars are fainter, we can set a smaller size for the individual stamps, @option{--widthinpix=500,500}, to speed up the calculations:
@c For 'numimgs' and the 'astarithmetic' commands, see the comments above
@c the same step for the outer part.
@example
$ counter=1
$ mkdir inner/stamps
$ asttable inner/67510-12-13.fits \
| while read -r ra dec mag; do
astscript-psf-stamp label/67510-seg.fits \
--mode=wcs \
--normradii=5,10 \
--center=$ra,$dec \
--widthinpix=500,500 \
--segment=label/67510-seg.fits \
--output=inner/stamps/67510-$counter.fits; \
counter=$((counter+1)); \
done
$ numimgs=$(echo inner/stamps/*.fits | wc -w)
$ astarithmetic inner/stamps/*.fits $numimgs 3 0.2 sigclip-mean \
-g1 --output=inner/coadd.fits --wcsfile=none \
--writeall
$ astscript-fits-view inner/coadd.fits inner/stamps/*.fits
@end example
@noindent
We are now ready to unite the two coadds we have constructed: the outer and the inner parts.
@node Uniting the different PSF components, Subtracting the PSF, Inner part of the PSF, Building the extended PSF
@subsection Uniting the different PSF components
In @ref{Building outer part of PSF} we built the outer part of the extended PSF and the inner part was built in @ref{Inner part of the PSF}.
The outer part was built with very bright stars, and the inner part using fainter stars to not have saturation in the core of the PSF.
The next step is to join these two parts in order to have a single PSF.
First of all, let's have a look at the two coadds and also at their radial profiles to get a good feeling of the task.
Note that you will need to have TOPCAT to run the last command and plot the radial profile (see @ref{TOPCAT}).
@example
$ astscript-fits-view outer/coadd.fits inner/coadd.fits
$ astscript-radial-profile outer/coadd.fits -o outer/profile.fits
$ astscript-radial-profile inner/coadd.fits -o inner/profile.fits
$ astscript-fits-view outer/profile.fits inner/profile.fits
@end example
From the visual inspection of the images and the radial profiles, it is clear that we have saturation in the center for the outer part.
Note that the absolute flux values of the PSFs are meaningless since they depend on the normalization radii we used to obtain them.
The uniting step consists of scaling up (or down) the inner part of the PSF to have the same flux at the junction radius, and then using that flux-scaled inner part to fill the center of the outer PSF.
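Schematically, calling the scale factor @mymath{f} and the junction radius @mymath{R}, the united profile is @mymath{P(r)=f\,P_{\rm{inner}}(r)} for @mymath{r<R}, and @mymath{P(r)=P_{\rm{outer}}(r)} for @mymath{r\geq R} (this notation is only for illustration; the uniting script below does exactly this).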
To get a feeling of the process, let's first open the two radial profiles and find the factor manually:
@enumerate
@item
Run this command to open the two tables in @ref{TOPCAT}:
@example
$ astscript-fits-view outer/profile.fits inner/profile.fits
@end example
@item
On the left side of the screen, under ``Table List'', you will see the two imported tables.
Click on the first one (profile of the outer part) so it is shown first.
@item
Under the ``Graphics'' menu item, click on ``Plane plot''.
A new window will open with the plot of the first two columns: @code{RADIUS} on the horizontal axis and @code{MEAN} on the vertical.
The rest of the steps are done in this window.
@item
In the bottom settings, within the left panel, click on the ``Axes'' item.
This will allow customization of the plot axes.
@item
In the bottom-right panel, click on the box in front of ``Y Log'' to make the vertical axis logarithmic-scaled.
@item
On the ``Layers'' menu, select ``Add Position Control'' to allow adding the profile of the inner region.
After it, you will see that a new red-blue scatter plot icon has opened in the bottom-left menu (with a title of @code{<no table>}).
@item
On the bottom-right panel, in the drop-down menu in front of @code{Table:}, select @code{2: profile.fits}.
Afterwards, you will see the radial profile of the inner coadd as the newly added blue plot.
Our goal here is to find the factor that is necessary to multiply with the inner profile so it matches the outer one.
@item
On the bottom-right panel, in front of @code{Y:}, you will see @code{MEAN}.
Click in the white-space after it, and type this: @code{*100}.
This will display the @code{MEAN} column of the inner profile, after multiplying it by 100.
Afterwards, you will see that the inner profile (blue) matches more cleanly with the outer (red); especially in the smaller radii.
At larger radii, it does not drop like the red plot.
This is because of the extremely low signal-to-noise ratio at those regions in the fainter stars used to make this coadd.
@item
Take your mouse cursor over the profile, in particular over the bump around a radius of 100 pixels.
Scroll your mouse down-ward to zoom-in to the profile and up-ward to zoom-out.
You can also click-and-hold any part of the profile and move your cursor (while still holding the mouse-button) to look at different parts of the profile.
This is particularly helpful when you have zoomed-in to the profile.
@item
Zoom-in to the bump around a radius of 100 pixels until the horizontal axis range becomes around 50 to 130 pixels.
@item
You clearly see that the inner coadd (blue) is much more noisy than the outer (red) coadd.
By ``noisy'', we mean that the scatter of the points is much larger.
If you further zoom-out, you will see that the shallow slope at the larger radii of the inner (blue) profile has also affected the height of this bump in the inner profile.
This is a @emph{very important} point: this clearly shows that the inner profile is too noisy at these radii.
@item
Click-and-hold your mouse to see the inner parts of the two profiles (in the range 0 to 80).
You will see that for radii less than 40 pixels, the inner profile (blue) points lose their scatter (and thus have a good signal-to-noise ratio).
@item
Zoom-in to the plot and follow the profiles until smaller radii (for example, 10 pixels).
You see that for each radius, the inner (blue) points are consistently above the outer (red) points.
This shows that the @mymath{\times100} factor we selected above was too much.
@item
In the bottom-right panel, change the @code{100} to @code{80} and zoom-in to the same region.
At each radius, the blue points are now below the red points, so the scale-factor 80 is not enough.
So let's increase it and try @code{90}.
After zooming-in, you will notice that in the inner-radii (less than 30 pixels), they are now very similar.
The ultimate aim of the steps below is to find this factor automatically.
@cindex Saturation (CCDs)
@cindex Non-linearity (CCDs)
@item
But before continuing, let's focus on another important point about the central regions: non-linearity and saturation.
While you are zoomed-in (from the step above), follow (click-and-drag) the profile towards smaller radii.
You will see that at radii smaller than 10, they start to diverge.
But this time, the outer (red) profile is getting a shallower slope and diverges significantly from about the radius of 8.
We had masked all saturated pixels before, so this divergence for radii smaller than 10 shows the effect of the CCD's non-linearity (where the number of electrons will not be linearly correlated with the number of incident photons).
This is present in all CCDs and pixels beyond this level should not be used in measurements (or properly corrected).
@end enumerate
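By the way, the visual scaling check above can also be sketched on the command-line.
For example, with Table's column arithmetic you could write a scaled copy of the inner profile's @code{MEAN} column into a new table (the file names below are assumptions: adjust them to wherever you saved the radial profile tables):

@example
$ asttable inner/profile.fits -c1 -c'arith MEAN 90 x' \
           --output=inner-x90.fits
@end example

@noindent
You can then load @file{inner-x90.fits} into TOPCAT and compare it directly with the outer profile.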
The items above were only listed so you get a good mental/visual understanding of the logic behind the operation of the next script (and to learn how to tune its parameters where necessary): @file{astscript-psf-scale-factor}.
This script is more general than this particular problem, but it can also be used for this special case.
Its job is to take a model of an object (PSF, or inner coadd in this case) and the position of an instance of that model (a star, or the outer coadd in this case) in a larger image.
Instead of dealing with radial profiles (that enforce a certain shape), this script will put the centers of the inner and outer PSFs over each other and divide the outer by the inner.
Let's have a look with the command below.
Just note that we are running it with @option{--keeptmp}, so the temporary directory with all the intermediate files remains for further clarification:
@example
$ astscript-psf-scale-factor outer/coadd.fits \
--psf=inner/coadd.fits --center=501,501 \
--mode=img --normradii=10,15 --keeptmp
$ astscript-fits-view coadd_psfmodelscalefactor/cropped-*.fits \
coadd_psfmodelscalefactor/for-factor-*.fits
@end example
With the second command, you see the four steps of the process: the first two images show the outer and inner coadds (cropped to the same width).
The third shows the radial position of each pixel (which is used to only keep the pixels within the desired radial range).
The fourth shows the per-pixel division of the outer by the inner within the requested radii.
The sigma-clipped median and standard deviation of these pixels are finally reported.
The standard deviation is useful, for example, when you try different center positions (with small offsets) and select the one that provides the lowest standard deviation value.
Unlike the radial profile method (which averages over a circular/elliptical annulus for each radius), this method imposes no a-priori shape on the PSF.
This makes it very useful for complex PSFs (like the case here).
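About trying different centers (mentioned above): a small loop like the sketch below would try a few candidate positions and print the one with the smallest standard deviation (the candidate positions here are arbitrary; tune them to your data):

@example
$ for x in 500 501 502; do
    for y in 500 501 502; do
      v=$(astscript-psf-scale-factor outer/coadd.fits \
             --psf=inner/coadd.fits --center=$x,$y \
             --mode=img --normradii=10,15 --quiet)
      echo "$x $y $v"
    done
  done | sort -g -k4 | head -1
@end example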
To continue, let's remove the temporary directory and re-run the script but with @option{--quiet} mode so we can put the outputs in a shell variable.
@example
$ rm -r coadd_psfmodelscalefactor
$ values=$(astscript-psf-scale-factor outer/coadd.fits \
--psf=inner/coadd.fits --center=501,501 \
--mode=img --normradii=10,15 --quiet)
$ scale=$(echo $values | awk '@{print $1@}')
$ stdval=$(echo $values | awk '@{print $2@}')
$ echo "$scale $stdval"
@end example
Now that we know the scaling factor, we are ready to unite the outer and the inner part of the PSF.
To do that, we will use the script @file{astscript-psf-unite} with the command below (for more on this script, see @ref{Invoking astscript-psf-unite}).
The basic parameters are the inner part of the PSF (given to @option{--inner}), the inner part's scale factor (@option{--scale}), and the junction radius (@option{--radius}).
The inner part is first scaled, and all the pixels of the outer image within the given radius are replaced with the pixels of the inner image.
Since the flux factor was computed for a ring of pixels between 10 and 15 pixels, let's set the junction radius to be 12 pixels (roughly in between 10 and 15):
@example
$ astscript-psf-unite outer/coadd.fits \
--inner=inner/coadd.fits --radius=12 \
--scale=$scale --output=psf.fits
@end example
@noindent
Let's have a look at the outer coadd and the final PSF with the command below.
Since we want several other DS9 settings to help you directly see the main point, we are using @option{--ds9extra}.
After DS9 is opened, you can see that the center of the PSF has now been nicely filled.
You can click on the ``Edit'' button, then ``Colorbar'', and then hold your cursor over the image and move it.
You can see that besides filling the inner regions nicely, there is also no major discontinuity in the 2D image around the union radius of 12 pixels around the center.
@example
$ astscript-fits-view outer/coadd.fits psf.fits --ds9scale=minmax \
--ds9extra="-scale limits 0 22000 -match scale" \
--ds9extra="-lock scale yes -zoom 4 -scale log"
@end example
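If you prefer numbers over visual inspection, you could also check the junction through the united PSF's radial profile; a sketch (with options similar to the profiles built earlier in this tutorial):

@example
$ astscript-radial-profile psf.fits --mode=img \
           --center=501,501 --rmax=30 \
           --output=check-junction.fits
@end example

@noindent
A smooth decrease (with no jump around a radius of 12 pixels) in the @code{MEAN} column of @file{check-junction.fits} confirms the continuity.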
Nothing demonstrates the effect of a bad analysis better than actually seeing a bad result!
So let's choose a bad normalization radial range (50 to 60 pixels) and unite the inner and outer parts based on that.
The last command will open the two PSFs together in DS9; you should be able to immediately see the discontinuity at the union radius.
@example
$ values=$(astscript-psf-scale-factor outer/coadd.fits \
--psf=inner/coadd.fits --center=501,501 \
--mode=img --normradii=50,60 --quiet)
$ scale=$(echo $values | awk '@{print $1@}')
$ astscript-psf-unite outer/coadd.fits \
--inner=inner/coadd.fits --radius=55 \
--scale=$scale --output=psf-bad.fits
$ astscript-fits-view psf-bad.fits psf.fits --ds9scale=minmax \
--ds9extra="-scale limits 0 50 -match scale" \
--ds9extra="-lock scale yes -zoom 4 -scale log"
@end example
As you see, the selection of the normalization radii and the union radius is very important.
The first time you are trying to build the PSF of a new dataset, it has to be explored with a visual inspection of the images and radial profiles.
Once you have found a good normalization radius for a certain part of the PSF in a survey, you can generally use it comfortably without change.
But for a new survey, or a different part of the PSF, be sure to repeat the visual checks above to choose the best radii.
As a summary, a good junction radius is one that:
@itemize
@item
Is large enough to not let saturation and non-linearity (from the outer profile) into the inner region.
@item
Is small enough to have a sufficiently high signal-to-noise ratio (from the inner profile) to avoid adding noise at the union radius (see the sketch after this list).
@end itemize
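If you would like to explore the effect of the junction radius yourself, a small loop like the sketch below builds united PSFs with several radii that you can then blink through in DS9.
Just re-measure @code{$scale} with the good normalization radii (10 to 15) first; the last commands above over-wrote it with the bad value.

@example
$ for r in 10 12 14; do
    astscript-psf-unite outer/coadd.fits \
                        --inner=inner/coadd.fits --radius=$r \
                        --scale=$scale --output=psf-r$r.fits
  done
$ astscript-fits-view psf-r*.fits
@end example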
Now that the complete PSF has been obtained, let's remove that bad-looking PSF, and stick with the nice and clean PSF for the next step in @ref{Subtracting the PSF}.
@example
$ rm -rf psf-bad.fits
@end example
@node Subtracting the PSF, , Uniting the different PSF components, Building the extended PSF
@subsection Subtracting the PSF
Previously (in @ref{Uniting the different PSF components}) we constructed a full PSF, from the central pixel to a radius of 500 pixels.
Now, let's use the PSF to subtract the scattered light from each individual star in the image.
By construction, the pixel values of the PSF came from the normalization of the individual stamps (that were created for stars of different magnitudes).
As a consequence, it is necessary to compute a scale factor to fit that PSF image to each star.
This is done with the same @file{astscript-psf-scale-factor} command that we used previously in @ref{Uniting the different PSF components}.
The difference is that now we are not aiming to join two different PSF parts but looking for the necessary scale factor to match the star with the PSF.
Afterwards, we will use @file{astscript-psf-subtract} for placing the PSF image at the desired coordinates within the same pixel grid as the image.
Finally, once the stars have been modeled by the PSF, we will subtract it.
First, let's start with a single star.
Later, when the basic idea has been explained, we will generalize the method for any number of stars.
With the following command we obtain the coordinates (RA and DEC) and magnitude of the brightest star in the image (which is on the top edge of the image):
@example
$ mkdir single-star
$ center=$(asttable flat/67510-bright.fits --sort phot_g_mean_mag \
--column=ra,dec --head 1 \
| awk '@{printf "%s,%s", $1, $2@}')
$ echo $center
@end example
With the center position of that star, let's obtain the flux factor using the same normalization ring we used for the creation of the outer part of the PSF.
Remember that two values are computed: the median and the standard deviation values, see @ref{Invoking astscript-psf-scale-factor}.
@example
$ values=$(astscript-psf-scale-factor label/67510-seg.fits \
--mode=wcs --quiet \
--psf=psf.fits \
--center=$center \
--normradii=10,15 \
--segment=label/67510-seg.fits)
$ scale=$(echo $values | awk '@{print $1@}')
@end example
Now we have all the information necessary to model the star using the PSF: the position on the sky and the flux factor.
Let's use this data with the script @file{astscript-psf-subtract} for modeling this star and have a look with DS9.
@example
$ astscript-psf-subtract label/67510-seg.fits \
--mode=wcs \
--psf=psf.fits \
--scale=$scale \
--center=$center \
--output=single-star/subtracted.fits
$ astscript-fits-view label/67510-seg.fits single-star/subtracted.fits \
--ds9center=$center --ds9mode=wcs --ds9extra="-zoom 4"
@end example
You will notice that there is something wrong with this ``subtraction''!
The box of the extended PSF is clearly visible!
The sky noise under the box is clearly larger than the rest of the noise in the image.
Before reading on, please try to think about the cause of this yourself.
To understand the cause, let's look at the scale factor, the number of stamps used to build the outer part (and its square root):
@example
$ echo $scale
$ ls outer/stamps/*.fits | wc -l
$ ls outer/stamps/*.fits | wc -l | awk '@{print sqrt($1)@}'
@end example
You see that the scale is almost 19!
As a result, the PSF has been multiplied by 19 before being subtracted.
However, the outer part of the PSF was created with only a handful of star stamps.
When you coadd @mymath{N} images, the coadd's signal-to-noise ratio (S/N) improves by @mymath{\sqrt{N}}.
We had 8 images for the outer part, so the S/N has only improved by a factor of just under 3!
When we multiply the final coadded PSF by 19, we also scale up its noise by that same factor (most importantly in the outermost regions, where there is almost no signal).
So the coadded image's noise-level is @mymath{19/3=6.3} times larger than the noise of the input image.
This terrible noise-level is what you clearly see as the footprint of the PSF.
To confirm this, let's use the commands below to subtract the faintest star of the bright-stars catalog (note the use of @option{--tail} when finding the central position).
You will notice that the scale factor (@mymath{\sim1.3}) is now smaller than 3.
So when we multiply the PSF with this factor, the PSF's noise level is lower than our input image and we should not see any footprint like before.
Note also that we are using a larger zoom factor, because this star is smaller in the image.
@example
$ center=$(asttable flat/67510-bright.fits --sort phot_g_mean_mag \
--column=ra,dec --tail 1 \
| awk '@{printf "%s,%s", $1, $2@}')
$ values=$(astscript-psf-scale-factor label/67510-seg.fits \
--mode=wcs --quiet \
--psf=psf.fits \
--center=$center \
--normradii=10,15 \
--segment=label/67510-seg.fits)
$ scale=$(echo $values | awk '@{print $1@}')
$ echo $scale
$ astscript-psf-subtract label/67510-seg.fits \
--mode=wcs \
--psf=psf.fits \
--scale=$scale \
--center=$center \
--output=single-star/subtracted.fits
$ astscript-fits-view label/67510-seg.fits single-star/subtracted.fits \
--ds9center=$center --ds9mode=wcs --ds9extra="-zoom 10"
@end example
In a large survey like J-PLUS, it is easy to use more and more bright stars from different pointings (ideally with similar FWHM and similar telescope properties@footnote{For example, in J-PLUS, the baffle of the secondary mirror was adjusted in 2017 because it produced extra spikes in the PSF. So all images after that date have a PSF with 4 spikes (like this one), while those before it have many more spikes.}) to improve the S/N of the PSF.
As explained before, we designed the output file names of this tutorial with the @code{67510} label (this image's pointing label in J-PLUS) where necessary, so you can see how easy it is to add more pointings when creating the PSF.
Let's now consider more than a single star.
We should have two things in mind:
@itemize
@item
The brightest (subtract-able, see the point below) star should be the first star to be subtracted.
This is because of its extended wings which may affect the scale factor of nearby stars.
So we should sort the catalog by magnitude and work down from the brightest.
@item
We should only subtract stars where the scale factor is less than the S/N of the PSF (in relation to the data).
@end itemize
Since it can get a little complex, it is easier to implement this step as a script (that is heavily commented for you to easily understand every step; especially if you put it in a good text editor with color-coding!).
You will notice that the script also creates a @file{.log} file, which shows which star was subtracted and which was not (this is important, and will be used below!).
@example
#!/bin/bash
# Abort the script on first error.
set -e
# ID of image to subtract stars from.
imageid=67510
# Get S/N level of the final PSF in relation to the actual data:
snlevel=$(ls outer/stamps/*.fits | wc -l | awk '@{print sqrt($1)@}')
# Put a copy of the image we want to subtract the PSF from in the
# final file (this will be over-written after each subtraction).
subtracted=subtracted/$imageid.fits
cp label/$imageid-seg.fits $subtracted
# Name of log-file to keep status of the subtraction of each star.
logname=subtracted/$imageid.log
echo "# Column 1: RA [deg, f64] Right ascension of star." > $logname
echo "# Column 2: Dec [deg, f64] Declination of star." >> $logname
echo "# Column 3: Stat [deg, f64] Status (1: subtracted)" >> $logname
# Go over each item in the bright star catalog:
asttable flat/$imageid-bright.fits -cra,dec --sort phot_g_mean_mag \
| while read -r ra dec; do
# Put a comma between the RA/Dec to pass to options.
center=$(echo $ra $dec | awk '@{printf "%s,%s", $1, $2@}')
# Calculate the scale value
values=$(astscript-psf-scale-factor label/$imageid-seg.fits \
--mode=wcs --quiet \
--psf=psf.fits \
--center=$center \
--normradii=10,15 \
--segment=label/$imageid-seg.fits)
scale=$(echo $values | awk '@{print $1@}')
# Subtract this star if the scale factor is less than the S/N
# level calculated above.
check=$(echo $snlevel $scale \
| awk '@{if($1>$2) c="good"; else c="bad"; print c@}')
if [ $check = good ]; then
# A temporary file to subtract this star.
subtmp=subtracted/$imageid-tmp.fits
# Subtract this star from the image where all previous stars
# were subtracted.
astscript-psf-subtract $subtracted \
--mode=wcs \
--psf=psf.fits \
--scale=$scale \
--center=$center \
--output=$subtmp
# Rename the temporary subtracted file to the final one:
mv $subtmp $subtracted
# Keep the status for this star.
status=1
else
# Let the user know this star did not work, and keep the status
# for this star.
echo "$center: $scale is larger than $snlevel"
status=0
fi
# Keep the status in a log file.
echo "$ra $dec $status" >> $logname
done
@end example
Copy the contents above into a file called @file{subtract-psf-from-cat.sh} and run the following commands.
Just note that in the script above, we assumed the output is written into the @file{subtracted/} directory, so we will first make that.
@example
$ mkdir subtracted
$ chmod +x subtract-psf-from-cat.sh
$ ./subtract-psf-from-cat.sh
$ astscript-fits-view label/67510-seg.fits subtracted/67510.fits
@end example
Can you visually find the stars that have been subtracted?
It is a little hard, is it not?
This shows that you have done a good job this time (the sky-noise is not significantly affected)!
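To put a number on this (rather than only trusting the eye), you could compare the sigma-clipped standard deviation of the image before and after the subtraction; a sketch (the two values should be very close):

@example
$ aststatistics label/67510-seg.fits --sigclip-std
$ aststatistics subtracted/67510.fits --sigclip-std
@end example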
So let's subtract the PSF-subtracted image from the actual image to see the scattered light field of the subtracted stars.
With the second command below we will zoom into the brightest subtracted star, but of course feel free to zoom-out and inspect the others also.
@example
$ astarithmetic label/67510-seg.fits subtracted/67510.fits - \
--output=scattered-light.fits -g1
$ center=$(asttable subtracted/67510.log --equal=Stat,1 --head=1 \
-cra,dec | awk '@{printf "%s,%s", $1, $2@}')
$ astscript-fits-view label/67510-seg.fits subtracted/67510.fits \
scattered-light.fits \
--ds9center=$center --ds9mode=wcs \
--ds9extra="-scale limits -0.5 1.5 -match scale" \
--ds9extra="-lock scale yes -zoom 10" \
--ds9extra="-tile mode column"
## We can always make this file again easily, so let's remove it.
$ rm scattered-light.fits
@end example
You will probably have noticed that in the scattered light field there are some patches that correspond to the saturated centers of the stars.
Since we obtained the scattered light field by subtracting the PSF-subtracted image from the original image, it is natural to have such saturated regions.
To avoid this inconvenience, the script also has an option to output the modeled star instead of subtracting it: just run it with the @option{--modelonly} option.
We encourage the reader to build such a scattered light field model.
In some scenarios this way of modeling the PSF could be useful; for example, when there are many faint stars whose fluxes do not affect each other, so they can all be modeled at the same time.
In such a situation, the task can easily be parallelized, without having to wait for the brighter stars to be modeled before the fainter ones.
In the end, once all the stars have been modeled, a simple Arithmetic command can sum the modeled-PSF stamps into the complete scattered light field (as sketched below).
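For example, if (hypothetically) three such modeled stars were saved as @file{model-1.fits}, @file{model-2.fits} and @file{model-3.fits}, Arithmetic's @code{sum} operator could stack them in a single command:

@example
$ astarithmetic model-1.fits model-2.fits model-3.fits \
                3 sum -g1 --output=scattered-model.fits
@end example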
In general you see that the subtraction has been done nicely and almost all the extended wings of the PSF have been subtracted.
The central regions of the stars are not perfectly subtracted:
@itemize
@item
Some may get too dark at the center.
This may be due to the non-linearity of the CCD counting (as discussed previously in @ref{Uniting the different PSF components}).
@item
Others may have a strong gradient: one side is too positive and one side is too negative (only in the very central few pixels).
This is due to inaccurate positioning: most probably this happens because of imperfect astrometry.
@end itemize
Note also that during this process we assumed that the PSF does not vary with the CCD position or any other parameter.
In other words, we are obtaining an averaged PSF model from a few star stamps that are naturally different, and this also explains the residuals on each subtracted star.
We leave the modeling and subtraction of other stars (for example, the non-saturated stars of the image) as an interesting exercise.
By doing this, you will notice that the core-region residuals differ from those of the brighter stars we obtained above.
In general, this tutorial has shown how to deal with the most important challenges in constructing an extended PSF.
Each image or dataset will have its own particularities that you will have to take into account when constructing the PSF.
@node Sufi simulates a detection, Detecting lines and extracting spectra in 3D data, Building the extended PSF, Tutorials
@section Sufi simulates a detection
@cindex Azophi
@cindex Abd al-rahman Sufi
@cindex Sufi, Abd al-rahman
It is the year 953 A.D. and Abd al-rahman Sufi (903 -- 986 A.D.)@footnote{In Latin Sufi is known as Azophi.
He was an Iranian astronomer.
His manuscript ``Book of fixed stars'' contains the first recorded observations of the Andromeda galaxy, the Large Magellanic Cloud and seven other non-stellar or `nebulous' objects.} is in Shiraz as a guest astronomer.
He had come there to use the advanced 123 centimeter astrolabe for his studies on the ecliptic.
However, something was bothering him for a long time.
While mapping the constellations, there were several non-stellar objects that he had detected in the sky; one of them was in the Andromeda constellation.
During a trip to Yemen, Sufi had seen another such object in the southern skies looking over the Indian ocean.
He was not sure if such cloud-like non-stellar objects (which he was the first to call `Sah@={a}bi' in Arabic or `nebulous') were real astronomical objects or if they were only the result of some bias in his observations.
Could such diffuse objects actually be detected at all with his detection technique?
@cindex Almagest
@cindex Claudius Ptolemy
@cindex Ptolemy, Claudius
He still had a few hours left until nightfall (when he would continue his studies on the ecliptic) so he decided to find an answer to this question.
He had thoroughly studied Claudius Ptolemy's (90 -- 168 A.D) Almagest and had made lots of corrections to it, in particular in measuring the brightness.
Using that same experience, he was able to measure a magnitude for the objects and wanted to simulate his observation to see if a simulated object with the same brightness and size could be detected in simulated noise with the same detection technique.
The general outline of the steps he wants to take are:
@enumerate
@item
Make some mock profiles in an over-sampled image.
The initial mock image has to be over-sampled prior to convolution or other forms of transformation in the image.
Through his experiences, Sufi knew that this is because the image of heavenly bodies is actually transformed by the atmosphere or other sources outside the atmosphere (for example, gravitational lenses) prior to being sampled on an image.
Since that transformation occurs on a continuous grid, to best approximate it, he should do all the work on a finer pixel grid.
In the end he can resample the result to the initially desired grid size.
@item
@cindex PSF
Convolve the image with a point spread function (PSF, see @ref{PSF}) that is over-sampled to the same resolution as the mock image.
Since he wants to finish in a reasonable time and the PSF kernel will be very large due to oversampling, he has to use frequency domain convolution which has the side effect of dimming the edges of the image.
So in the first step above he also has to build the image to be larger by at least half the width of the PSF convolution kernel on each edge.
@item
With all the transformations complete, the image should be resampled to the same pixel size as his detector.
@item
He should crop out those extra pixels on all edges, removing the frequency domain convolution artifacts from the final product.
@item
He should add noise to the (until now, noise-less) mock image.
After all, all observations have noise associated with them.
@end enumerate
Fortunately Sufi had heard of GNU Astronomy Utilities from a colleague in Isfahan (where he worked) and had installed it on his computer a year before.
It had tools to do all the steps above.
He had used MakeProfiles before, but was not sure which columns he had chosen in his user or system-wide configuration files for which parameters, see @ref{Configuration files}.
So to start his simulation, Sufi runs MakeProfiles with the @option{-P} option to make sure what columns in a catalog MakeProfiles currently recognizes, and confirm the output image parameters.
In particular, Sufi is interested in the recognized columns (shown below).
@example
$ astmkprof -P
[[[ ... Truncated lines ... ]]]
# Output:
type float32 # Type of output: e.g., int16, float32, etc.
mergedsize 1000,1000 # Number of pixels along first FITS axis.
oversample 5 # Scale of oversampling (>0 and odd).
[[[ ... Truncated lines ... ]]]
# Columns, by info (see `--searchin'), or number (starting from 1):
ccol 2 # Coord. columns (one call for each dim.).
ccol 3 # Coord. columns (one call for each dim.).
fcol 4 # sersic (1), moffat (2), gaussian (3), point
# (4), flat (5), circumference (6), distance
# (7), custom-prof (8), azimuth (9),
# custom-img (10).
rcol 5 # Effective radius or FWHM in pixels.
ncol 6 # Sersic index or Moffat beta.
pcol 7 # Position angle.
qcol 8 # Axis ratio.
mcol 9 # Magnitude.
tcol 10 # Truncation in units of radius or pixels.
[[[ ... Truncated lines ... ]]]
@end example
@noindent
In Gnuastro, column counting starts from 1, so the columns are ordered such that the first column (number 1) can be an ID he specifies for each object (and MakeProfiles ignores); each subsequent column is used for another property of the profile.
It is also possible to use column names for the values of these options and change these defaults, but Sufi preferred to stick to the defaults.
Fortunately, MakeProfiles can also make the PSF that is to be used on the mock image; and with the @option{--prepforconv} option, it will automatically enlarge the mock image and shift all the sources by the correct amount.
For his initial check he decides to simulate the nebula in the Andromeda constellation.
The night he was observing, the PSF had a FWHM of about 5 pixels, so he defines the PSF parameters as the first row (profile) in the table below.
Sufi sets the radius column (@code{rcol} above, fifth column) to @code{5.000}; he also chooses a Moffat function for its functional form.
Remembering how diffuse the nebula in the Andromeda constellation was, he decides to simulate it with a mock S@'{e}rsic index 1.0 profile.
He wants the output to be 499 pixels by 499 pixels, so he can put the center of the mock profile in the central pixel of the image which is the 250th pixel along both dimensions (note that an even number does not have a ``central'' pixel).
Looking at his drawings of it, he decides a reasonable effective radius for it would be 40 pixels on this image pixel scale (second row, 5th column below).
He also sets the axis ratio (0.4) and position angle (-25 degrees) to approximately correct values too, and finally he sets the total magnitude of the profile to 3.44 which he had measured.
Sufi also decides to truncate both the mock profile and PSF at 5 times the respective radius parameters.
In the end he decides to put four stars on the four corners of the image at very low magnitudes as a visual scale.
While he was preparing the catalog, one of his students approached him and was also following the steps.
@noindent
As described above, the catalog of profiles to build will be a table (multiple columns of numbers) like below:
@example
0 0.000 0.000 2 5 4.7 0.0 1.0 30.0 5.0
1 250.0 250.0 1 40 1.0 -25 0.4 3.44 5.0
2 50.00 50.00 4 0 0.0 0.0 0.0 6.00 0.0
3 450.0 50.00 4 0 0.0 0.0 0.0 6.50 0.0
4 50.00 450.0 4 0 0.0 0.0 0.0 7.00 0.0
5 450.0 450.0 4 0 0.0 0.0 0.0 7.50 0.0
@end example
This contains all the ``data'' to build the profile, and you can easily pass it to Gnuastro's MakeProfiles: since Sufi already knows the columns and expected values very well, he has placed the information in the proper columns.
However, when the student sees this, he just sees a jumble of numbers!
Generally, Sufi explains to the student that even if you know the number positions and values very well today, in a couple of months you will forget!
It will then be very hard for you to interpret the numbers properly.
So you should never use naked data (or data without any extra information).
@cindex Metadata
Data (or information) that describes other data is called ``metadata''!
One common example is column names (the name of a column is itself data, but it describes the lower-level data within that column: how to interpret the numbers within it).
Sufi explains to his student that Gnuastro has a convention for adding metadata within a plain-text file; and guides him to @ref{Gnuastro text table format}.
Because we do not want metadata to be confused with the actual data, in a plain-text file, we start lines containing metadata with a `@code{#}'.
For example, see the same data above, but this time with metadata for every column:
@example
# Column 1: ID [counter, u8] Identifier
# Column 2: X [pix, f32] Horizontal position
# Column 3: Y [pix, f32] Vertical position
# Column 4: PROFILE [name, u8] Radial profile function
# Column 5: R [pix, f32] Effective radius
# Column 6: N [n/a, f32] Sersic index
# Column 7: PA [deg, f32] Position angle
# Column 8: Q [n/a, f32] Axis ratio
# Column 9: MAG [log, f32] Magnitude
# Column 10: TRUNC [n/a, f32] Truncation (multiple of R)
0 0.000 0.000 2 5 4.7 0.0 1.0 30.0 5.0
1 250.0 250.0 1 40 1.0 -25 0.4 3.44 5.0
2 50.00 50.00 4 0 0.0 0.0 0.0 6.00 0.0
3 450.0 50.00 4 0 0.0 0.0 0.0 6.50 0.0
4 50.00 450.0 4 0 0.0 0.0 0.0 7.00 0.0
5 450.0 450.0 4 0 0.0 0.0 0.0 7.50 0.0
@end example
@noindent
The numbers now make much more sense for the student!
Before continuing, Sufi reminded the student that even though metadata may not be strictly/technically necessary (for the computer programs), they are critical for human readers!
Therefore, a good scientist should never forget to keep metadata with any data that they create, use or archive.
To start simulating the nebula, Sufi creates a directory named @file{simulationtest} in his home directory.
Note that the @command{pwd} command will print the ``present working directory'' (it is a good way to confirm/check your current location in the full file system: the output always starts from the root, or `@code{/}').
@example
$ mkdir ~/simulationtest
$ cd ~/simulationtest
$ pwd
/home/rahman/simulationtest
@end example
@cindex Redirection
@cindex Standard output
It is possible to use a plain-text editor to manually put the catalog contents above into a plain-text file.
But to easily automate catalog production (in later trials), Sufi decides to fill the input catalog with the redirection features of the command-line (or shell).
Sufi's student was not familiar with this feature of the shell!
So Sufi decided to do a fast demo; giving the following explanations while running the commands:
Shell redirection allows you to ``re-direct'' the ``standard output'' of a program (which is usually printed by the program on the command-line during its execution; like the output of @command{pwd} above) into a file.
For example, let's simply ``echo'' (or print to standard output) the line ``This is a test.'':
@example
$ echo "This is a test."
This is a test.
@end example
@noindent
As you see, our statement was simply ``echo''-ed to the standard output!
To redirect this sentence into a file (instead of simply printing it on the standard output), we can simply use the @code{>} character, followed by the name of the file we want it to be dumped in.
@example
$ echo "This is a test." > test.txt
@end example
This time, the @command{echo} command did not print anything in the terminal.
Instead, the shell (command-line environment) took the output, and ``re-directed'' it into a file called @file{test.txt}.
Let's confirm this with the @command{ls} command (@command{ls} is short for ``list'' and will list all the files in the current directory):
@example
$ ls
test.txt
@end example
@noindent
Now that you confirm the existence of @file{test.txt}, you can see its contents with the @command{cat} command (short for ``concatenation''; because it can also merge multiple files together):
@example
$ cat test.txt
This is a test.
@end example
@noindent
Now that we have written our first line in @file{test.txt}, let's try adding a second line (do not forget that our final catalog of objects to simulate will have multiple lines):
@example
$ echo "This is my second line." > test.txt
$ cat test.txt
This is my second line.
@end example
As you see, the first line that you put in the file is no longer present!
This happens because `@code{>}' always starts dumping content to a file from the start of the file.
In effect, this means that any possibly pre-existing content is over-written by the new content!
To append new lines (or dumping new content at the end of existing content), you can use `@code{>>}'.
For example, with the commands below, first we will write the first sentence (using `@code{>}'), then use `@code{>>}' to add the second and third sentences.
Finally, we will print the contents of @file{test.txt} to confirm that all three lines are preserved.
@example
$ echo "My first sentence." > test.txt
$ echo "My second sentence." >> test.txt
$ echo "My third sentence." >> test.txt
$ cat test.txt
My first sentence.
My second sentence.
My third sentence.
@end example
The student thanked Sufi for this explanation and now feels more comfortable with redirection.
Therefore Sufi continues with the main project.
But before that, he deletes the temporary test file:
@example
$ rm test.txt
@end example
To put the catalog of profile data and their metadata (that was described above) into a file, Sufi uses the commands below.
While Sufi was writing these commands, the student complained that ``I could have done this in a text editor''.
Sufi reminded the student that it is indeed possible; but it requires manual intervention.
The advantage of a solution like below is that it can be automated (for example, adding more rows; for more profiles in the final image).
@example
$ echo "# Column 1: ID [counter, u8] Identifier" > cat.txt
$ echo "# Column 2: X [pix, f32] Horizontal position" >> cat.txt
$ echo "# Column 3: Y [pix, f32] Vertical position" >> cat.txt
$ echo "# Column 4: PROF [name, u8] Radial profile function" \
>> cat.txt
$ echo "# Column 5: R [pix, f32] Effective radius" >> cat.txt
$ echo "# Column 6: N [n/a, f32] Sersic index" >> cat.txt
$ echo "# Column 7: PA [deg, f32] Position angle" >> cat.txt
$ echo "# Column 8: Q [n/a, f32] Axis ratio" >> cat.txt
$ echo "# Column 9: MAG [log, f32] Magnitude" >> cat.txt
$ echo "# Column 10: TRUNC [n/a, f32] Truncation (multiple of R)" \
>> cat.txt
$ echo "0 0.000 0.000 2 5 4.7 0.0 1.0 30.0 5.0" >> cat.txt
$ echo "1 250.0 250.0 1 40 1.0 -25 0.4 3.44 5.0" >> cat.txt
$ echo "2 50.00 50.00 4 0 0.0 0.0 0.0 6.00 0.0" >> cat.txt
$ echo "3 450.0 50.00 4 0 0.0 0.0 0.0 6.50 0.0" >> cat.txt
$ echo "4 50.00 450.0 4 0 0.0 0.0 0.0 7.00 0.0" >> cat.txt
$ echo "5 450.0 450.0 4 0 0.0 0.0 0.0 7.50 0.0" >> cat.txt
@end example
@noindent
To make sure the catalog's content is correct (and that there was no typo, for example!), Sufi runs `@command{cat cat.txt}', and confirms that it is correct.
@cindex Zero point
Now that the catalog is created, Sufi is ready to call MakeProfiles to build the image containing these objects.
He looks into his records and finds that the zero point magnitude for that night, and that particular detector, was 18 magnitudes.
The student was a little confused on the concept of zero point, so Sufi pointed him to @ref{Brightness flux magnitude}, which the student can study in detail later.
Sufi therefore runs MakeProfiles with the command below:
@example
$ astmkprof --prepforconv --mergedsize=499,499 --zeropoint=18.0 cat.txt
MakeProfiles @value{VERSION} started on Sat Oct 6 16:26:56 953
- 6 profiles read from cat.txt
- Random number generator (RNG) type: ranlxs1
- Basic RNG seed: 1652884540
- Using 12 threads.
---- row 3 complete, 5 left to go
---- row 4 complete, 4 left to go
---- row 6 complete, 3 left to go
---- row 5 complete, 2 left to go
---- ./0_cat_profiles.fits created.
---- row 1 complete, 1 left to go
---- row 2 complete, 0 left to go
- ./cat_profiles.fits created. 0.092573 seconds
-- Output: ./cat_profiles.fits
MakeProfiles finished in 0.293644 seconds
@end example
Sufi encourages the student to read through the printed output.
As the statements say, two FITS files should have been created in the running directory.
So Sufi ran the command below to confirm:
@example
$ ls
0_cat_profiles.fits cat_profiles.fits cat.txt
@end example
@cindex Oversample
@noindent
The file @file{0_cat_profiles.fits} is the PSF Sufi had asked for, and @file{cat_profiles.fits} is the image containing the main objects in the catalog.
Sufi opened the main image with the command below (using SAO DS9):
@example
$ astscript-fits-view cat_profiles.fits --ds9scale=95
@end example
The student could clearly see the main elliptical structure in the center.
However, the size of @file{cat_profiles.fits} was surprising for the student: instead of 499 by 499 (as requested), it was 2615 by 2615 pixels (from the command below):
@example
$ astfits cat_profiles.fits
Fits (GNU Astronomy Utilities) @value{VERSION}
Run on Sat Oct 6 16:26:58 953
-----
HDU (extension) information: 'cat_profiles.fits'.
Column 1: Index (counting from 0, usable with '--hdu').
Column 2: Name ('EXTNAME' in FITS standard, usable with '--hdu').
Column 3: Image data type or 'table' format (ASCII or binary).
Column 4: Size of data in HDU.
-----
0 MKPROF-CONFIG no-data 0
1 Mock profiles float32 2615x2615
@end example
@noindent
So Sufi explained why oversampling is important in modeling, especially for parts of the image where the flux change is significant over a pixel.
Recall that when you oversample the model (for example, by 5 times), for every desired pixel, you get 25 pixels (@mymath{5\times5}).
Sufi then explained that after convolving (next step below) we will down-sample the image to get our originally desired size/resolution.
After seeing the image, the student complained that only the large elliptical model for the Andromeda nebula can be seen in the center.
He could not see the four stars that we had also requested in the catalog.
So Sufi had to explain that the stars are there in the image, but the reason that they are not visible when looking at the whole image at once, is that they only cover a single pixel!
To prove it, he centered the image around the coordinates 2308 and 2308, where one of the stars is located in the over-sampled image [you can do this in @command{ds9} by selecting ``Pan'' in the ``Edit'' menu, then clicking around that position].
Sufi then zoomed in to that region and soon, the star's non-zero pixel could be clearly seen.
Sufi explained that the stars will take the shape of the PSF (cover an area of more than one pixel) after convolution.
If we did not have an atmosphere and we did not need an aperture, then stars would only cover a single pixel with normal CCD resolutions.
So Sufi convolved the image with this command:
@example
$ astconvolve --kernel=0_cat_profiles.fits cat_profiles.fits \
--output=cat_convolved.fits
Convolve started on Sat Oct 6 16:35:32 953
- Using 8 CPU threads.
- Input: cat_profiles.fits (hdu: 1)
- Kernel: 0_cat_profiles.fits (hdu: 1)
- Input and Kernel images padded. 0.075541 seconds
- Images converted to frequency domain. 6.728407 seconds
- Multiplied in the frequency domain. 0.040659 seconds
- Converted back to the spatial domain. 3.465344 seconds
- Padded parts removed. 0.016767 seconds
- Output: cat_convolved.fits
Convolve finished in: 10.422161 seconds
@end example
@noindent
When convolution finished, Sufi opened @file{cat_convolved.fits} and the four stars could be easily seen now:
@example
$ astscript-fits-view cat_convolved.fits --ds9scale=95
@end example
It was interesting for the student that all the flux in that single pixel is now distributed over so many pixels (the sum of all the pixels in each convolved star is actually equal to the value of the single pixel before convolution).
Sufi explained how a PSF with a larger FWHM would make the points even wider than this (distributing their flux in a larger area).
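To convince yourself of this flux conservation, you could crop a box around the star we visited above (in the images before and after convolution) and compare the sum of the pixels; a sketch (tiny differences can remain because of the kernel's truncated edges):

@example
$ astcrop cat_profiles.fits --mode=img --center=2308,2308 \
          --width=150 --zeroisnotblank --output=star-before.fits
$ astcrop cat_convolved.fits --mode=img --center=2308,2308 \
          --width=150 --zeroisnotblank --output=star-after.fits
$ aststatistics star-before.fits --sum
$ aststatistics star-after.fits --sum
@end example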
With the convolved image ready, they were prepared to resample it to the original pixel scale Sufi had planned [from the @command{$ astmkprof -P} command above, recall that MakeProfiles had over-sampled the image by 5 times].
Sufi explained the basic concepts of warping the image to his student and ran Warp with the following command:
@example
$ astwarp --scale=1/5 --centeroncorner cat_convolved.fits
Warp started on Sat Oct 6 16:51:59 953
Using 8 CPU threads.
Input: cat_convolved.fits (hdu: 1)
matrix:
0.2000 0.0000 0.4000
0.0000 0.2000 0.4000
0.0000 0.0000 1.0000
$ astfits cat_convolved_scaled.fits --quiet
0 WARP-CONFIG no-data 0
1 Warped float32 523x523
@end example
@noindent
@file{cat_convolved_scaled.fits} now has the correct pixel scale.
However, the image is still larger than what we had wanted, it is @mymath{523\times523} pixels (not our desired @mymath{499\times499}).
The student is slightly confused, so Sufi also resamples the PSF with the same scale by running:
@example
$ astwarp --scale=1/5 --centeroncorner 0_cat_profiles.fits
$ astfits 0_cat_profiles_scaled.fits --quiet
0 WARP-CONFIG no-data 0
1 Warped float32 25x25
@end example
@noindent
Sufi notes that @mymath{25=12+12+1} and that @mymath{523=499+12+12}.
He goes on to explain that frequency space convolution will dim the edges and that is why he added the @option{--prepforconv} option to MakeProfiles above.
Now that convolution is done, Sufi can remove those extra pixels using Crop with the command below.
Crop's @option{--section} option accepts coordinates inclusively and counting from 1 (according to the FITS standard), so the crop region's first pixel has to be 13, not 12.
@example
$ astcrop cat_convolved_scaled.fits --section=13:*-12,13:*-12 \
--mode=img --zeroisnotblank
Crop started on Sat Oct 6 17:03:24 953
- Read metadata of 1 image. 0.001304 seconds
---- ...nvolved_scaled_cropped.fits created: 1 input.
Crop finished in: 0.027204 seconds
@end example
@noindent
To fully convince the student, Sufi checks the size of the output of the crop command above:
@example
$ astfits cat_convolved_scaled_cropped.fits --quiet
0 n/a no-data 0
1 n/a float32 499x499
@end example
@noindent
Finally, @file{cat_convolved_scaled_cropped.fits} is @mymath{499\times499} pixels and the mock Andromeda galaxy is centered on the central pixel.
This is the same dimensions as Sufi had desired in the beginning.
All this trouble was certainly worth it because now there is no dimming on the edges of the image and the profile centers are more accurately sampled.
The final step to simulate a real observation would be to add noise to the image.
Sufi set the zero point magnitude to the same value that he used when making the mock profiles; looking again at his observation log, he saw that the background flux near the nebula had a @emph{per-pixel} magnitude of 7 that night.
For more on how the background value determines the noise, see @ref{Noise basics}.
So using these values he ran Arithmetic's @code{mknoise-sigma-from-mean} operator, and with the second command, he visually inspected the image.
The @code{mknoise-sigma-from-mean} operator takes the background's mean value in linear units (counts), not magnitudes (which are logarithmic).
Therefore, within the same Arithmetic command, he converted the sky background magnitude to counts using Arithmetic's @code{mag-to-counts} operator.
@example
$ astarithmetic cat_convolved_scaled_cropped.fits \
7 18 mag-to-counts mknoise-sigma-from-mean \
--output=out.fits
$ astscript-fits-view out.fits
@end example
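As an aside, Arithmetic can also operate on single numbers, so you can inspect this conversion on its own; from the definition of magnitudes, the command below should print roughly @mymath{10^{(18-7)/2.5}\approx25119}:

@example
$ astarithmetic 7 18 mag-to-counts --quiet
@end example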
@noindent
The @file{out.fits} file now contains the noised image of the mock catalog Sufi had asked for.
The student had not observed the nebula in the sky, so when he saw the mock image in SAO DS9 (with the second command above), he understood why Sufi was dubious: it was very diffuse!
Seeing how the @option{--output} option allows the user to specify the name of the output file, the student was confused and wanted to know why Sufi had not used it more regularly before.
Sufi explained that for intermediate steps, you can rely on the automatic output of the programs (see @ref{Automatic output}).
Doing so will give all the intermediate files a similar basic name structure, so in the end you can simply remove them all with the Shell's capabilities, and it will be familiar for other users.
So Sufi decided to show this to the student by making a shell script from the commands he had used before.
The command-line shell has the capability to read all the separate input commands from a file.
This is useful when you want to do the same thing multiple times, with only the names of the files or minor parameters changing between the different instances.
Using the shell's history (by pressing the up arrow key), Sufi reviewed all the commands and then retrieved the last 5 of them with the @command{$ history 5} command.
He selected all those lines he had input and put them in a text file named @file{mymock.sh}.
Then he defined the @code{edge} and @code{base} shell variables for easier customization later, and before every command, he added some comments (lines starting with @key{#}) for future readability.
Finally, Sufi pointed the student to Gnuastro's @ref{General program usage tutorial}, which has a full section on @ref{Writing scripts to automate the steps}.
@example
#!/bin/bash
edge=12
base=cat
# Stop running next commands if one fails.
set -e
# Remove any (possibly) existing output (from previous runs)
# before starting.
rm -f out.fits
# Run MakeProfiles to create an oversampled FITS image.
astmkprof --prepforconv --mergedsize=499,499 --zeropoint=18.0 \
"$base".txt
# Convolve the created image with the kernel.
astconvolve "$base"_profiles.fits \
--kernel=0_"$base"_profiles.fits \
--output="$base"_convolved.fits
# Scale the image back to the intended resolution.
astwarp --scale=1/5 --centeroncorner "$base"_convolved.fits
# Crop the edges out (dimmed during convolution). '--section'
# accepts inclusive coordinates, so the start of the section
# must be one pixel more than the number of edge pixels.
st_edge=$(( edge + 1 ))
astcrop "$base"_convolved_scaled.fits --zeroisnotblank \
--mode=img --section=$st_edge:*-$edge,$st_edge:*-$edge
# Add noise to the image.
astarithmetic "$base"_convolved_scaled_cropped.fits \
7 18 mag-to-counts mknoise-sigma-from-mean \
--output=out.fits
# Remove all the temporary files.
rm 0*.fits "$base"*.fits
@end example
@cindex Comments
He used this chance to remind the student of the importance of comments in code or shell scripts!
Just like metadata in a dataset, when writing the code, you have a good mental picture of what you are doing, so writing comments might seem superfluous and excessive.
However, in one month when you want to re-use the script, you have lost that mental picture and remembering it can be time-consuming and frustrating.
The importance of comments is further amplified when you want to share the script with a friend/colleague.
So it is good to accompany any step of a script, or code, with useful comments while you are writing it (create a good mental picture of why you are doing something: do not just describe the command, but its purpose).
@cindex Gedit
@cindex GNU Emacs
Sufi then explained to the eager student that you define a variable by giving it a name, followed by an @code{=} sign and the value you want.
Then you can reference that variable from anywhere in the script by calling its name with a @code{$} prefix.
So in the script whenever you see @code{$base}, the value we defined for it above is used.
If you use advanced editors like GNU Emacs or even simpler ones like Gedit (part of the GNOME graphical user interface) the variables will become a different color which can really help in understanding the script.
We have put all the @code{$base} variables in double quotation marks (@code{"}) so the variable name and the following text do not get mixed, the shell is going to ignore the @code{"} after replacing the variable value.
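For example, the toy commands below (which you can try directly on your command-line) show the danger: without the quotes, the shell looks for a variable called @code{base_profiles} (which does not exist, so it expands to an empty string):

@example
$ base=cat
$ echo "$base"_profiles.fits
cat_profiles.fits
$ echo $base_profiles.fits
.fits
@end example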
To make the script executable, Sufi ran the following command:
@example
$ chmod +x mymock.sh
@end example
@noindent
Then finally, Sufi ran the script, simply by calling its file name:
@example
$ ./mymock.sh
@end example
After the script finished, the only FITS file remaining is the @file{out.fits} file that Sufi had wanted in the beginning.
Sufi then explained to the student how he could run this script anywhere that he has a catalog if the script is in the same directory.
The only thing the student had to modify in the script was the name of the catalog (the value of the @code{base} variable in the start of the script) and the value to the @code{edge} variable if he changed the PSF size.
The student was also happy to hear that he will not need to make it executable again when he makes changes later, it will remain executable unless he explicitly changes the executable flag with @command{chmod}.
The student was really excited, since now, through simple shell scripting, he could really speed up his work and run any command in any fashion he likes, allowing him to be much more creative in his work.
Until now, he had been using the graphical user interface, which has no such facility; doing repetitive things with it was really frustrating and sometimes he would make mistakes.
So he left to go and try scripting on his own computer.
He later reminded Sufi that the second tutorial in the Gnuastro book has more complex commands in data analysis, and a more advanced introduction to scripting (see @ref{General program usage tutorial}).
Sufi could now get back to his own work and see if the simulated nebula which resembled the one in the Andromeda constellation could be detected or not.
It was extremely faint@footnote{The brightness of a diffuse object is added over all its pixels to give its final magnitude, see @ref{Brightness flux magnitude}.
So although the magnitude 3.44 (of the mock nebula) is brighter than 6 (of the stars), the central galaxy appears much fainter.
Put another way, the brightness is distributed over a large area in the case of a nebula.}.
Therefore, Sufi ran Gnuastro's detection software (@ref{NoiseChisel}) to see if this object is detectable or not.
NoiseChisel's output (@file{out_detected.fits}) is a multi-extension FITS file, so he used Gnuastro's @command{astscript-fits-view} program in the second command to see the output:
@example
$ astnoisechisel out.fits
$ astscript-fits-view out_detected.fits
@end example
In the ``Cube'' window (that was opened with DS9), Sufi clicked on the ``Next'' button to see the pixels that were detected as containing significant signal.
Fortunately the nebula's shape was detectable, and he could finally confirm that the nebula he had recorded in his notebook was actually observable.
He wrote this result in the draft manuscript that would later become ``Book of fixed stars''@footnote{@url{https://en.wikipedia.org/wiki/Book_of_Fixed_Stars}}.
He still had to check the other nebula he saw from Yemen and several other such objects, but they could wait until tomorrow (thanks to the shell script, he only has to define a new catalog).
It was nearly sunset and they had to begin preparing for the night's measurements on the ecliptic.
@node Detecting lines and extracting spectra in 3D data, Color images with full dynamic range, Sufi simulates a detection, Tutorials
@section Detecting lines and extracting spectra in 3D data
@cindex IFU
@cindex MUSE
@cindex ACIS
@cindex Pixel
@cindex Voxel
@cindex Spectrum
@cindex 3D data cube
@cindex Cube (3D) spectra
@cindex Integral field unit
@cindex Hyperspectral imaging
3D data cubes are an increasingly common format of data products in observational astronomy.
As opposed to 2D images (where each 2D ``picture element'' or ``pixel'' covers an infinitesimal area on the surface of the sky), 3D data cubes contain ``volume elements'' or ``voxels'' that are also connected in a third dimension.
The most common case of 3D data in observational astrophysics is when the first two dimensions are spatial (RA and Dec on the sky), and the third dimension is wavelength.
This type of data is generically (also outside of astronomy) known as Hyperspectral imaging@footnote{@url{https://en.wikipedia.org/wiki/Hyperspectral_imaging}}.
Examples include the high-level data products of Integral Field Units (IFUs) like MUSE@footnote{@url{https://en.wikipedia.org/wiki/Multi-unit_spectroscopic_explorer}} in the optical, ACIS@footnote{@url{https://en.wikipedia.org/wiki/Advanced_CCD_Imaging_Spectrometer}} in the X-ray, and radio astronomy, where most data are 3D cubes.
@cindex Abell 370 galaxy cluster
In this tutorial, we'll use a small crop of a reduced deep MUSE cube centered on the @url{https://en.wikipedia.org/wiki/Abell_370, Abell 370} galaxy cluster from the Pilot-WINGS survey; see Lagattuta et al. @url{https://arxiv.org/abs/2202.04663,2022}.
Abell 370 has a spiral galaxy in its background that is stretched due to the cluster's gravitational potential to create a beautiful arch.
If you haven't seen it yet, have a look at some of its images in the Wikipedia link above before continuing.
The Pilot-WINGS survey data are available in its webpage@footnote{@url{https://astro.dur.ac.uk/~hbpn39/pilot-wings.html}}.
The cube of the @emph{core} region is 10.2 GB.
This can be prohibitively large to download (and later process) on many networks and smaller computers.
Therefore, in this demonstration we won't be using the full cube.
We have prepared a small crop@footnote{You can download the full cube and create the crop yourself with the commands below.
Because the downloaded file is compressed (note its @file{.fits.gz} suffix), Crop first has to decompress the +10GB file, so the command will take a little long.
@example
$ wget https://astro.dur.ac.uk/~hbpn39/pilotWINGS/A370_PilotWINGS_CORE.fits.gz
$ astcrop A370_PilotWINGS_CORE.fits.gz -hDATA --mode=img \
--section=200:300,100:200 -oa370-crop.fits --metaname=DATA
$ astcrop A370_PilotWINGS_CORE.fits.gz -hSTAT --mode=img --append \
--section=200:300,100:200 -oa370-crop.fits --metaname=STAT
@end example
} of the full cube that you can download with the first command below.
The randomly selected crop is centered on (RA,Dec) of (39.96769,-1.58930), with a width of about 27 arcseconds.
@example
$ mkdir tutorial-3d
$ cd tutorial-3d
$ wget http://akhlaghi.org/data/a370-crop.fits # Downloads 287 MB
@end example
In the sections below, we will first review how you can visually inspect a 3D data cube in DS9 and interactively see the spectra of any region.
We will then subtract the continuum emission, detect the emission-lines within this cube and extract their spectra.
We will finish with creating synthetic narrow-band images optimized for some of the emission lines.
@menu
* Viewing spectra and redshifted lines:: Interactively see the spectra of an object
* Sky lines in optical IFUs:: How to see sky lines in a cube.
* Continuum subtraction:: Removing the continuum from a data cube.
* 3D detection with NoiseChisel:: Finding emission-lines and their spectra.
* 3D measurements and spectra:: Measuring 3d properties including spectra.
* Extracting a single spectrum and plotting it:: Extracting a single vector row.
* Cubes with logarithmic third dimension:: When the wavelength/frequency is logarithmic.
* Synthetic narrow-band images:: Collapsing the third dimension into a 2D image.
@end menu
@node Viewing spectra and redshifted lines, Sky lines in optical IFUs, Detecting lines and extracting spectra in 3D data, Detecting lines and extracting spectra in 3D data
@subsection Viewing spectra and redshifted lines
In @ref{Detecting lines and extracting spectra in 3D data} we downloaded a small crop from the Pilot-WINGS survey of Abell 370 cluster; observed with MUSE.
In this section, we will review how you can visualize/inspect a data cube using that example.
With the first command below, we'll open DS9 such that each 2D slice of the cube (at a fixed wavelength) is seen as a single image.
If you move the slider in the ``Cube'' window (that also opens), you can view the same field at different wavelengths.
We are ending the first command with a `@code{&}' so you can continue viewing DS9 while using the command-line (press one extra @code{ENTER} to see the prompt).
With the second command, you can see that the spacing between each slice is @mymath{1.25\times10^{-10}} meters (or 1.25 Angstroms).
@example
$ astscript-fits-view a370-crop.fits -h1 --ds9scale="limits -5 20" &
$ astfits a370-crop.fits --pixelscale
Basic info. for --pixelscale (remove info with '--quiet' or '-q')
Input: a370-crop.fits (hdu 1) has 3 dimensions.
Pixel scale in each FITS dimension:
1: 5.55556e-05 (deg/pixel) = 0.2 (arcsec/pixel)
2: 5.55556e-05 (deg/pixel) = 0.2 (arcsec/pixel)
3: 1.25e-10 (m/slice)
Pixel area (on each 2D slice) :
3.08642e-09 (deg^2) = 0.04 (arcsec^2)
Voxel volume:
3.85802e-19 (deg^2*m) = 5e-12 (arcsec^2*m) = 0.05 (arcsec^2*A)
@end example
In the DS9 ``Cube'' window, you will see two numbers on the two sides of the scroller.
The left number is the wavelength in meters (the WCS coordinate in the 3rd dimension) and the right number is the slice number (the array coordinate in the 3rd dimension).
You can manually edit any of these numbers and press ENTER to go to that slice in any coordinate system.
If you want to go one-by-one, simply press the ``Next'' button.
The first few slices are very noisy, but in the rest, the noise level decreases and the galaxies are more obvious.
As you slide between the different wavelengths, you see that the noise-level is not constant and in some slices, the sky noise is very strong (for example, go to slice 3201 and press the ``Next'' button).
We will discuss these issues below (in @ref{Sky lines in optical IFUs}).
To view the spectra of a region in DS9 take the following steps:
@enumerate
@item
Click somewhere on the image (to make sure DS9 receives your keyboard inputs), then press @code{Ctrl+R} to activate regions and click on the brightest galaxy of this cube (center-right, at RA, Dec of 39.9659175 and -1.5893075).
@item
A thin green circle will show up; this is called a ``region'' in DS9.
@item
Double-click on the region, and you will see a ``Circle'' window.
@item
Within the ``Circle'' window, click on the ``Analysis'' menu and select ``Plot 3D''.
@item
A second ``Circle'' window will open that shows the spectra within your selected region.
This is just the sum of values on each slice within the region.
@item
Don't close the second ``circle'' window (that shows the spectrum).
Click and hold the region in DS9, and move it to other objects within the cube.
You will see that the spectrum changes as you move the region, and you can see that different objects have very different spectra.
You can even see the spectra of only one part of a galaxy, not the whole galaxy.
@item
Take the region back to the first (brightest) galaxy that we originally started with.
@item
Slide through the wavelengths in the ``Cube'' window; you will see a light-blue line moving through the spectrum as you do so.
This line marks the wavelength of the slice that is currently displayed in the main window.
@cindex H-alpha
@item
The strongest emission line in this galaxy appears to be around 8500 Angstroms or @mymath{8.5\times10^{-7}} meters.
From the position of the @url{https://en.wikipedia.org/wiki/Balmer_jump, Balmer break} (blue-ward of 5000 Angstroms for this galaxy), the strongest line seems to be H-alpha.
@item
To confirm that this is H-alpha, you can select the ``Edit'' menu in the spectrum window and select ``Zoom''.
@item
Double-click and hold (keep holding for the next step) somewhere before the strongest line and slightly above the continuum (for example at @code{8E-07} on the horizontal axis and @mymath{50\times10^{-20}}erg/Angstrom/cm@mymath{^2}/s on the vertical).
As you move your cursor (while holding), you will see a rectangular box getting created.
@item
Drag the opposite corner of the box to somewhere after the strongest line and below the continuum.
For example at @code{9E-07} and @mymath{20\times10^{-20}}erg/Angstrom/cm@mymath{^2}/s.
@item
Once you release the mouse button (or lift your finger from the touchpad), it will zoom in to that part of the spectrum.
@item
To zoom out to the full spectrum, just press the right mouse button over the spectra (or tap with two fingers on a touchpad).
@item
Select that zoom-box again to see the brightest line much more clearly.
You can also see the two lines of the Nitrogen II doublet that sandwich H-alpha.
Besides its position relative to the Balmer break, this is further evidence that the strongest line is H-alpha.
@item
@cindex NII doublet
Let's have a look at the galaxy in its full glory, right over the H-alpha line:
Move the wavelength slider accurately (by pressing the ``Previous'' or ``Next'' buttons) such that the blue line falls in the middle of the H-alpha line.
We see that the wavelength at this slice is @code{8.56593e-07} meters or 8565.93 Angstroms.
Please compare the image of the galaxy at this wavelength with the wavelengths before (by pressing ``Next'' or ``Previous'').
You will see that the galaxy is much more extended and brighter at this wavelength than at the others!
H-alpha shows the unobscured star formation of the galaxy!
@end enumerate
@cartouche
@noindent
@strong{Automatically going to the next slice:} When you want to get a general feeling of the cube, pressing the ``Next'' button many times is annoying and slow.
To automatically shift between the slices, you can press the ``Play'' button in the DS9 ``Cube'' window.
You can adjust the time it stays on each slice by clicking on the ``Interval'' menu and selecting lower values.
@end cartouche
Knowing that this is H-alpha at 8565.93 Angstroms, you can get the redshift of the galaxy with the first command below and the location of all other expected lines in Gnuastro's spectral line database with the second command.
Because the second command prints many lines (more than 200!), with the third command we limit the output to only the Balmer series (lines that start with @code{H-}) using @command{grep}.
The second command also prints metadata at the top of its output (no longer shown in the third command, due to the @command{grep} call).
For completeness: the first column is the observed wavelength of each line at the given redshift and the second column is the name of the line.
@example
# Redshift where H-alpha falls on 8565.93.
$ astcosmiccal --obsline=H-alpha,8565.93 --usedredshift
0.305221
# Wavelength of all lines in Gnuastro's database at this redshift
$ astcosmiccal --obsline=H-alpha,8565.93 --listlinesatz
# Only the Balmer series (Lines starting with 'H-'; given to Grep).
$ astcosmiccal --obsline=H-alpha,8565.93 --listlinesatz | grep H-
4812.13 H-19
4818.29 H-18
4825.61 H-17
4834.36 H-16
4844.95 H-15
4857.96 H-14
4874.18 H-13
4894.79 H-12
4921.52 H-11
4957.1 H-10
5006.03 H-9
5076.09 H-8
5181.83 H-epsilon
5353.68 H-delta
5665.27 H-gamma
6345.11 H-beta
8565.93 H-alpha
4758.84 H-limit
@end example
@cindex H-beta
Zoom out to the full spectrum, move the displayed slice to the location of the first emission line that is blue-ward (at shorter wavelengths) of H-alpha (at around 6300 Angstroms), and follow the previous steps to confirm that you are on its center.
You will see that it falls exactly on @mymath{6.34468\times10^{-7}} m or 6344.68 Angstroms.
Now, have a look at the Balmer lines above.
You have found the H-beta line!
The rest of the @url{https://en.wikipedia.org/wiki/Balmer_series, Balmer series} that you see in the list above (like H-gamma, H-delta and H-epsilon) are visible only as absorption lines.
Please check their locations by moving the blue line to the wavelengths listed above and confirming that the spectrum indeed shows absorption there.
The Balmer break is caused by the fact that these stronger Balmer absorption lines become too close to each other as they approach the Balmer limit.
Looking back at the full spectrum, you can also confirm that the only other relatively strong emission line of this galaxy on the blue side of the spectrum is the OII line that is approximately located at 4864 Angstroms in the observed spectrum.
The numbers after the various OII emission lines show their rest-frame wavelengths (``OII'' can correspond to many electron transitions, so we should be clear about which one we are talking about).
@example
$ astcosmiccal --obsline=H-alpha,8565.93 --listlinesatz | grep O-II-
4863.3 O-II-3726
4866.93 O-II-3728
5634.82 O-II-4317
5762.42 O-II-4414
9554.21 O-II-7319
9568.22 O-II-7330
@end example
Please stop here and spend some time doing the exercise above on other galaxies in this cube to get a feeling for the types of galaxy spectral features (and later on, do the same in the full/large cube).
You will notice that only star-forming galaxies have such strong emission lines!
If you enjoy it, go get the full non-cropped cube and investigate the spectra, redshifts and emission/absorption lines of many more galaxies.
But going into those higher-level details of the physical meaning of the spectra (as intriguing as they are!) is beyond the scope of this tutorial.
So we have to stop at this stage unfortunately.
Now that you have a relatively good feeling of this small cube, let's start doing some analysis to extract the spectra of the objects in this cube.
@node Sky lines in optical IFUs, Continuum subtraction, Viewing spectra and redshifted lines, Detecting lines and extracting spectra in 3D data
@subsection Sky lines in optical IFUs
@cindex Sky emission-lines
@cindex O-H lines (from atmosphere)
As we were visually inspecting the cube in @ref{Viewing spectra and redshifted lines}, we noticed some slices with very bad noise.
They will later affect our detection within the cube, so let's have a quick look at them in this section.
We'll start by looking at the two cubes within the downloaded FITS file:
@example
$ astscript-fits-view a370-crop.fits
@end example
The cube on the left is the same cube we studied before.
The cube on the right (which is called @code{STAT}) shows the variance of each voxel.
Go to slice 3195 and press ``Next'' to view the subsequent slices.
Initially (for the first 5 or 6 slices), the noise looks reasonable.
But as you pass slice 3206, you will see that the noise becomes very bad in both cubes.
It stays like this until about slice 3238!
As you go through the whole cube, you will notice that these slices are much more frequent in the reddest wavelengths.
@cindex Sky
@cindex Atmosphere emission lines
These slices are affected by the emission lines from our own atmosphere!
The atmosphere's emission in these wavelengths significantly raises the background level in these slices.
As a result, the Poisson noise also increases significantly (see @ref{Photon counting noise}).
During the data reduction, the excess background flux of each slice is removed as the Sky (or the mean of undetected pixels, see @ref{Sky value}).
However, the increased Poisson noise (scatter of pixel values) remains!
To see the spectrum of the sky emission lines, simply put a region somewhere in the @code{STAT} cube and generate its spectrum (as we did in @ref{Viewing spectra and redshifted lines}).
You will clearly see the comb-like shape of atmospheric emission lines and can use this to know where to expect them.
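If you would also like a rough non-interactive check, the sketch below (the slice numbers and output file names are just illustrative choices, based on the slices mentioned above) crops single slices out of the @code{STAT} cube with Crop's @ref{Crop section syntax} and prints their median variance with Statistics:
@example
## Median variance of a relatively clean slice (3195)...
$ astcrop a370-crop.fits -hSTAT --mode=img --section=:,:,3195:3195 \
          --output=slice-clean.fits
$ aststatistics slice-clean.fits --median

## ...and of a sky-line-affected slice (3220).
$ astcrop a370-crop.fits -hSTAT --mode=img --section=:,:,3220:3220 \
          --output=slice-line.fits
$ aststatistics slice-line.fits --median
@end example
Comparing the two printed medians, the sky-line-affected slice should be noticeably larger.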
@node Continuum subtraction, 3D detection with NoiseChisel, Sky lines in optical IFUs, Detecting lines and extracting spectra in 3D data
@subsection Continuum subtraction
@cindex Continuum subtraction
In @ref{Viewing spectra and redshifted lines}, we visually inspected some of the most prominent emission lines of the brightest galaxy of the demo MUSE cube (see @ref{Detecting lines and extracting spectra in 3D data}).
Here, we will remove the ``continuum'' flux from under the emission lines to see them more distinctly.
Within a spectrum, the continuum is the local ``background'' flux in the third/wavelength dimension.
In other words, it is the flux that would be present at that wavelength if the emission line didn't exist.
Therefore, to accurately measure the flux of the emission line, we first need to subtract the continuum.
One crude way of estimating the continuum flux at every slice is to use the sigma-clipped median value of that same pixel in the @mymath{\pm{N/2}} slices around it (for more on sigma-clipping, see @ref{Sigma clipping}).
In this case, @mymath{N=100} should be a good first approximation (since it is much larger than the width of any of the absorption or emission lines).
With the first command below, let's use Arithmetic's filtering operators for estimating the sigma-clipped median only along the third dimension for every pixel in every slice (see @ref{Filtering operators}).
With the second command, have a look at the filtered cube and spectra.
Note that the first command is computationally expensive and may take a minute or so.
@example
$ astarithmetic a370-crop.fits set-i --output=filtered.fits \
3 0.2 1 1 100 i filter-sigclip-median
$ astscript-fits-view filtered.fits -h1 --ds9scale="limits -5 20"
@end example
Looking at the filtered cube above, and sliding through the different wavelengths, you will see that the noise in each slice has been significantly reduced!
This is expected because each pixel's value is now calculated from 100 others (along the third dimension)!
Using the same steps as @ref{Viewing spectra and redshifted lines}, plot the spectrum of the brightest galaxy and have a look at it.
You see that the emission lines have been significantly smoothed out to become almost@footnote{For more on why Sigma-clipping is only a crude solution to background removal, see Akhlaghi and Ichikawa @url{https://arxiv.org/abs/1505.01664,2015}.} invisible.
You can now subtract this ``continuum'' cube from the input cube to create the emission-line cube.
In fact, as you see below, we can do it in a single Arithmetic command (blending the filtering and subtraction in one command).
Note how the only difference with the previous Arithmetic command is that we added an @code{i} before the @code{3} and a @code{-} after @code{filter-sigclip-median}.
For more on Arithmetic's powerful notation, see @ref{Reverse polish notation}.
With the second command below, let's view the input @emph{and} continuum-subtracted cubes together:
@example
$ astarithmetic a370-crop.fits set-i --output=no-continuum.fits \
i 3 0.2 1 1 100 i filter-sigclip-median -
$ astscript-fits-view a370-crop.fits no-continuum.fits -g1 \
--ds9scale="limits -5 20"
@end example
Once the cubes are open, slide through the different wavelengths.
Comparing the left (input) and right (continuum-subtracted) slices, you will rarely see any galaxy in the continuum-subtracted one!
As its name suggests, the continuum flux is continuously present in all the wavelengths (with gradual change)!
But the continuum has been subtracted now; so in the right-side image, you don't see anything on wavelengths that don't contain a spectral emission line.
Some dark regions also appear; these are absorption lines!
Please spend a few minutes sliding through the wavelengths and seeing how the emission lines pop-up and disappear again.
It is almost like scuba diving, with fish appearing out of nowhere and passing by you.
@cindex Doppler effect
@cindex Galaxy kinematics
@cindex Kinematics (galaxies)
Let's go to slice 3046 (corresponding to 8555.93 Angstroms; just before the H-alpha line for the brightest galaxy in @ref{Viewing spectra and redshifted lines}).
Now press the ``Next'' button to change slices one by one until there is no more emission in the brightest galaxy.
As you go to redder slices, you will see that not only does the brightness increase, but the position of the emission also changes.
This is the @url{https://en.wikipedia.org/wiki/Doppler_effect, Doppler effect} caused by the rotation of the galaxy: the side that is rotating towards us is blue-shifted to bluer slices and the side that is moving away from us is redshifted to redder slices.
If you go to the emission lines of the other galaxies, you will see that they move with a different angle!
We can use this to derive the galaxy's rotational properties and kinematics (Gnuastro doesn't have this feature yet).
To see the Doppler shift in the spectrum, plot the spectrum over the top-side of the galaxy (which is visible in slice 3047).
Then Zoom-in to the H-alpha line (as we did in @ref{Viewing spectra and redshifted lines}) and press ``Next'' until you reach the end of the H-alpha emission-line.
You see that by the time H-alpha disappears in the spectrum, within the cube, the emission shifts in the vertical axis by about 15 pixels!
Then, move the region across the same path that the emission passed.
You will clearly see that the H-alpha and Nitrogen II lines also move with you, in the zoomed-in spectra.
Again, try this for several other emission lines, and several other galaxies to get a good feeling of this important concept when using hyper-spectral 3D data.
@node 3D detection with NoiseChisel, 3D measurements and spectra, Continuum subtraction, Detecting lines and extracting spectra in 3D data
@subsection 3D detection with NoiseChisel
In @ref{Continuum subtraction} we subtracted the continuum emission, leaving us with only noise and the absorption and emission lines.
The absorption lines are negative and will be missed by detection methods that look for a positive skewness@footnote{But if you want to detect the absorption lines, just multiply the cube by @mymath{-1} and repeat the same steps here (the noise is symmetric around 0).} (like @ref{NoiseChisel}).
So we will focus on the detection and extraction of emission lines here.
The first step is to extract the voxels that contain emission signal.
To do that, we will be using @ref{NoiseChisel}.
NoiseChisel and @ref{Segment} operate on 2D images or 3D cubes.
But by default, they are configured for 2D images (some parameters like tile size take a different number of values based on the dimensionality).
Therefore, to do 3D detection, the first necessary step is to run NoiseChisel with the default 3D configuration file.
To see where Gnuastro's programs are installed, you can run the following command.
The printed output is the default location when you install Gnuastro from source; if you used another installation method or manually set a different location, you will see a different path (just use that):
@example
$ which astnoisechisel
/usr/local/bin/astnoisechisel
@end example
As you see, the compiled binary programs (like NoiseChisel) are installed in the @file{bin/} sub-directory of the install path (@file{/usr/local} in the example above, may be different on your system).
The configuration files are in the @file{etc/gnuastro/} sub-directory of the install path (here only showing NoiseChisel's configuration files):
@example
$ ls /usr/local/etc/gnuastro/astnoisechisel*.conf
/usr/local/etc/gnuastro/astnoisechisel-3d.conf
/usr/local/etc/gnuastro/astnoisechisel.conf
@end example
@noindent
We should therefore call NoiseChisel with the 3D configuration file like below (please change @file{/usr/local} to any directory that you find from the @code{which} command above):
@example
$ astnoisechisel no-continuum.fits --output=det.fits \
--config=/usr/local/etc/gnuastro/astnoisechisel-3d.conf
@end example
But having to add this long @option{--config} option is annoying and makes the command hard to read!
To simplify calling NoiseChisel in 3D, let's first make a shell alias called @command{astnoisechisel-3d} using the @command{alias} command.
Afterwards (in the second command below), we call the alias, producing the same output as above.
Finally (with the last command), let's have a look at NoiseChisel's output:
@example
$ alias astnoisechisel-3d="astnoisechisel \
--config=/usr/local/etc/gnuastro/astnoisechisel-3d.conf"
$ astnoisechisel-3d no-continuum.fits --output=det.fits
$ astscript-fits-view det.fits
@end example
Similar to its 2D outputs, NoiseChisel's output contains four extensions/HDUs (see @ref{NoiseChisel output}).
For a multi-extension file with 3D data, @code{astscript-fits-view} shows each cube as a separate DS9 ``Frame''.
In this way, as you slide through the wavelengths, you see the same slice in all the cubes.
The third and fourth extensions are the Sky and Sky standard deviation, which are not relevant here, so you can close them.
To do that, press on the ``Frame'' button (in the top row of buttons), then press ``delete'' two times in the second row of buttons.
As a final preparation, manually set the scale of @code{INPUT-NO-SKY} cube to a fixed range so the changing flux/noise in each slice doesn't interfere with visually comparing the data in the slices as you move around:
@enumerate
@item
Click on the @code{INPUT-NO-SKY} cube, so it is selected.
@item
Click on the ``Scale'' menu, then the ``Scale Parameters''.
@item
For the ``Low'' value set -2 and for the ``High'' value set 5.
@item
In the ``Cube'' window, slide between the slices to confirm that the noise level is visually fixed.
@item
Go back to the first slice for the next steps.
Note that the first and last couple of slices have much higher noise, don't worry about those.
@end enumerate
As you press the ``Next'' button in the first few slices, you will notice that the @code{DETECTION} cube is fully black: nothing has been detected yet.
The first detection pops up in the 55th slice for the galaxy on the top of this cube.
As you press ``Next'' you will see that the detection fades away and other detections pop up.
Spend a few minutes shifting between the different slices and comparing the detected voxels with the emission lines in the continuum-subtracted cube (the @code{INPUT-NO-SKY} extension).
Go ahead to slice 2933 and press ``Next'' a few times.
You will notice that the detections suddenly start covering the whole slice; this continues until slice 2943, where the detection map becomes normal again (no spurious detections!).
This is the effect of the sky lines we mentioned before in @ref{Sky lines in optical IFUs}.
The increased noise makes the reduction very hard and as a result, a lot of artifacts appear.
To reduce the effect of sky lines, we can divide the cube by its standard deviation (the square root of the variance or @code{STAT} extension; see @ref{Sky lines in optical IFUs}) and run NoiseChisel afterwards.
@example
$ astarithmetic no-continuum.fits -h1 a370-crop.fits -hSTAT sqrt / \
--output=sn.fits
$ astnoisechisel-3d sn.fits --output=det.fits
$ astscript-fits-view det.fits
@end example
After the new detection map opens, have another look at the specific slices mentioned above (from slice 2933 to 2943).
You will see that there are no more detection maps that cover the whole field of view.
Scroll the slice counter across the whole cube; you will rarely see such sky-line effects any more.
But this is just a crude solution and doesn't remove all sky line artifacts.
For example, go to slice 650 and press ``Next''.
You will see that the artifacts caused by this sky line are so strong that the solution above wasn't successful.
For these very strong emission lines, we need to improve the reduction.
But generally, since the number of sky-line affected slices has significantly decreased, we can go ahead.
@node 3D measurements and spectra, Extracting a single spectrum and plotting it, 3D detection with NoiseChisel, Detecting lines and extracting spectra in 3D data
@subsection 3D measurements and spectra
In the context of optical IFUs or radio IFUs in astronomy, a ``Spectrum'' is defined as separate measurements on each 2D slice of the 3D cube.
Each 2D slice is defined by the first two FITS dimensions: the first FITS dimension is the horizontal axis and the second is the vertical axis.
As with the tutorial on 2D image analysis (in @ref{Segmentation and making a catalog}), let's run Segment to see how it works in 3D.
Like NoiseChisel above, to simplify the commands, let's make an alias (@ref{3D detection with NoiseChisel}):
@example
$ alias astsegment-3d="astsegment \
--config=/usr/local/etc/gnuastro/astsegment-3d.conf"
$ astsegment-3d det.fits --output=seg.fits
$ astscript-fits-view seg.fits
@end example
You see that we now have 3D clumps and 3D objects.
So we can go ahead to do measurements.
MakeCatalog can do single-valued measurements (as in 2D) on 3D datasets also.
For example, with the command below, let's get the flux-weighted center (in the three dimensions) and sum of pixel values.
There isn't usually a standard name for the third WCS dimension (unlike RA/Dec).
So in Gnuastro, we just call it @option{--w3}.
With the second command, we are having a look at the first 5 rows.
Note that we are not using @option{-Y} with @command{asttable} anymore because the wavelength column would only be shown as zero (since it is in meters!).
@example
$ astmkcatalog seg.fits --ids --ra --dec --w3 --sum --output=cat.fits
$ asttable cat.fits -h1 -O --txtf64p=5 --head=5
# Column 1: OBJ_ID [counter ,i32,] Object identifier.
# Column 2: RA [deg ,f64,] Flux weighted center (WCS axis 1).
# Column 3: DEC [deg ,f64,] Flux weighted center (WCS axis 2).
# Column 4: AWAV [m ,f64,] Flux weighted center (WCS axis 3).
# Column 5: SUM [input-units,f32,] Sum of sky subtracted values.
1 3.99677e+01 -1.58660e+00 4.82994e-07 7.311189e+02
2 3.99660e+01 -1.58927e+00 4.86411e-07 7.872681e+03
3 3.99682e+01 -1.59141e+00 4.90609e-07 1.314548e+03
4 3.99677e+01 -1.58666e+00 4.90816e-07 7.798024e+02
5 3.99659e+01 -1.58930e+00 4.93657e-07 3.255210e+03
@end example
Besides the single-valued measurements above (that are shared with 2D inputs), on 3D cubes, MakeCatalog can also do per-slice measurements.
The options for these measurements are formatted as @option{--*in-slice}.
With the command below, you can check their list:
@example
$ astmkcatalog --help | grep in-slice
--area-in-slice [3D input] Number of labeled in each slice.
--area-other-in-slice [3D input] Area of other lab. in projected area.
--area-proj-in-slice [3D input] Num. voxels in '--sum-proj-in-slice'.
--sum-err-in-slice [3D input] Error in '--sum-in-slice'.
--sum-in-slice [3D input] Sum of values in each slice.
--sum-other-err-in-slice [3D input] Area in '--sum-other-in-slice'.
--sum-other-in-slice [3D input] Sum of other lab. in projected area.
--sum-proj-err-in-slice [3D input] Error of '--sum-proj-in-slice'.
--sum-proj-in-slice [3D input] Sum of projected area in each slice.
@end example
For every label and measurement, these options will give many values in a vector column (see @ref{Vector columns}).
Let's have a look by asking for the sum of values and the area of each label in each slice, with the command below.
There is just one important point: in @ref{3D detection with NoiseChisel}, we ran NoiseChisel on the signal-to-noise image, not the continuum-subtracted image!
So the values to use for the measurement of each label should come from the @file{no-continuum.fits} file (not @file{seg.fits}).
@example
$ astmkcatalog seg.fits --ids --ra --dec --w3 --sum \
--area-in-slice --sum-in-slice --output=cat.fits \
--valuesfile=no-continuum.fits --valueshdu=1
$ asttable -i cat.fits
--------
cat.fits (hdu: 1)
------- ----- ---- -------
No.Name Units Type Comment
------- ----- ---- -------
1 OBJ_ID counter int32 Object identifier.
2 RA deg float64 Flux wht center (WCS 1).
3 DEC deg float64 Flux wht center (WCS 2).
4 AWAV m float64 Flux wht center (WCS 3).
5 SUM input-units float32 Sum of sky-subed values.
6 AREA-IN-SLICE counter int32(3681) Number of pix. in each slice.
7 SUM-IN-SLICE input-units float32(3681) Sum of values in each slice.
--------
Number of rows: 194
--------
@end example
You can see that the new @code{AREA-IN-SLICE} and @code{SUM-IN-SLICE} columns have a @code{(3681)} in their types.
This shows that unlike the single-valued columns before them, in these columns, each row has 3681 values (a ``vector'' column).
If you are not already familiar with vector columns, please take a few minutes to read @ref{Vector columns}.
Since a MUSE data cube has 3681 slices, this is effectively the spectrum of each object.
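If you want to confirm the number of slices yourself, it is simply the length of the cube's third FITS dimension (the @code{NAXIS3} keyword, read here with the same @option{--keyvalue} feature of Fits that we use later in this tutorial):
@example
## Should print 3681 (possibly prefixed by the file name).
$ astfits a370-crop.fits --keyvalue=NAXIS3 -q
@end example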
Let's find the object that corresponds to the H-alpha emission of the brightest galaxy (that we found in @ref{Viewing spectra and redshifted lines}).
That emission line was around 8565.93 Angstroms, so let's look for the objects within @mymath{\pm5} Angstroms of that value (between 8560 to 8570 Angstroms):
@example
$ asttable cat.fits --range=AWAV,8.560e-7,8.570e-7 -cobj_id,ra,dec -Y
181 39.965897 -1.589279
@end example
From the command above, we see that at this wavelength, there was only one object.
Let's extract its spectrum by asking for the @code{area-in-slice} and @code{sum-in-slice} columns:
@example
$ asttable cat.fits --range=AWAV,8.560e-7,8.570e-7 \
-carea-in-slice,sum-in-slice
@end example
If you look at the output, you will see that it is a single (very long) line!
It contains a long list of 0 values at the start and @code{nan} values at the end.
If you scroll slowly, in the middle of each list you will see some non-zero and non-NaN numbers.
To help interpret this more easily, let's transpose these vector columns (so each value of the vector column becomes a row in the output).
We will use the @option{--transpose} option of Table for this (just note that since transposition changes the number of rows, it can only be used when your table only has vector columns and they all have the same number of elements, as in this case):
@example
$ asttable cat.fits --range=AWAV,8.560e-7,8.570e-7 \
-carea-in-slice,sum-in-slice --transpose
@end example
We now see the measurements on each slice printed on a separate line (making them much easier to read visually).
However, without a counter, it is very hard to interpret them.
Let's pipe the output to a new Table command and use column arithmetic's @code{counter} operator for displaying the slice number (see @ref{Size and position operators}).
Note that since we are piping the output, we also added @option{-O} so the column metadata are also passed to the new instance of Table:
@example
$ asttable cat.fits --range=AWAV,8.560e-7,8.570e-7 -O \
-carea-in-slice,sum-in-slice --transpose \
| asttable -c'arith $1 counter swap',2
...[[truncated]]...
3040 0 nan
3041 0 nan
3042 0 nan
3043 0 nan
3044 1 4.311140e-01
3045 18 3.936019e+00
3046 161 -5.800080e+00
3047 360 2.967184e+02
3048 625 1.912855e+03
3049 823 5.140487e+03
3050 945 7.174101e+03
3051 999 6.967604e+03
3052 1046 6.468591e+03
3053 1025 6.457354e+03
3054 996 6.599119e+03
3055 966 6.762280e+03
3056 873 5.014052e+03
3057 649 2.003334e+03
3058 335 3.167579e+02
3059 131 1.670975e+01
3060 25 -2.953789e+00
3061 0 nan
3062 0 nan
3063 0 nan
3064 0 nan
...[[truncated]]...
$ astscript-fits-view seg.fits
@end example
After DS9 opens with the last command above, go to slice 3044 (which is the first non-NaN slice in the spectrum above).
In the @code{OBJECTS} extension of this slice, you see several non-zero pixels.
The few non-zero pixels on the bottom have a label of 180 and the single non-zero pixel at a higher Y axis position has a label of 181 (which as we saw above, was the label of the H-alpha emission of this galaxy).
The few pixels labeled 197 in this slice are the last voxels of the NII emission line that is just blue-ward of H-alpha.
The single pixel with label 181 in slice 3044 is why you see a value of 1 in the @code{AREA-IN-SLICE} column for that slice.
As you go to the next slices, if you count the pixels, you will see they add up to the same number you see in that column.
The values in the @code{SUM-IN-SLICE} are the sum of values in the continuum-subtracted cube for those same voxels.
You should now be able to understand why the @option{--sum-in-slice} column has NaN values in all other slices: because this label doesn't exist in any other slice!
Also, within slices that contain label 181, this column only uses the voxels that have the label.
So as you see in the second column above, the area used in each slice changes.
Therefore @option{--sum-in-slice} and @option{--area-in-slice} give the raw 3D spectrum of each 3D emission-line.
This is a different concept from the traditional ``spectrum'' where the same area is used over all the slices.
To get that, you should use the @option{--sum-proj-in-slice} column of MakeCatalog.
All the @option{--*in-slice} options that contain a @code{proj} in their name are measurements over the fixed ``projection'' of the 3D volume on the 2D surface of each slice.
To see the effect, let's also ask MakeCatalog to measure this projected sum column:
@example
$ astmkcatalog seg.fits --ids --ra --dec --w3 --sum \
--area-in-slice --sum-in-slice --sum-proj-in-slice \
--output=cat.fits --valuesfile=no-continuum.fits \
--valueshdu=1
$ asttable cat.fits --range=AWAV,8.560e-7,8.570e-7 -O \
-carea-in-slice,sum-in-slice,sum-proj-in-slice \
--transpose \
| asttable -c'arith $1 counter swap',2,3
...[[truncated]]...
3040 0 nan 8.686357e+02
3041 0 nan 4.384907e+02
3042 0 nan 4.994813e+00
3043 0 nan -1.595918e+02
3044 1 4.311140e-01 -2.793141e+02
3045 18 3.936019e+00 -3.251023e+02
3046 161 -5.800080e+00 -2.709914e+02
3047 360 2.967184e+02 1.049625e+02
3048 625 1.912855e+03 1.841315e+03
3049 823 5.140487e+03 5.108451e+03
3050 945 7.174101e+03 7.149740e+03
3051 999 6.967604e+03 6.913166e+03
3052 1046 6.468591e+03 6.442184e+03
3053 1025 6.457354e+03 6.393185e+03
3054 996 6.599119e+03 6.572642e+03
3055 966 6.762280e+03 6.716916e+03
3056 873 5.014052e+03 4.974084e+03
3057 649 2.003334e+03 1.870787e+03
3058 335 3.167579e+02 1.057906e+02
3059 131 1.670975e+01 -2.415764e+02
3060 25 -2.953789e+00 -3.534623e+02
3061 0 nan -3.745465e+02
3062 0 nan -2.532008e+02
3063 0 nan -2.372232e+02
3064 0 nan -2.153670e+02
...[[truncated]]...
@end example
As you see, in the new @code{SUM-PROJ-IN-SLICE} column, we have a measurement in each slice: including slices that do not have the label of 181 at all.
Also, the area used to measure this sum is the same in all slices (similar to a classical spectrometer's output).
However, there is a big problem: have a look at the sums in slices 3040 and 3041: the values grow as you move blue-ward!
This is because of the emission in the NII line that also falls over the projected area of H-alpha.
This shows the power of IFUs as opposed to classical spectrometers: we can distinguish between individual lines based on spatial position and do measurements in 3D!
Finally, in case you want the spectrum with the continuum, you just have to change the file given to @option{--valuesfile}:
@example
$ astmkcatalog seg.fits --ids --ra --dec --w3 --sum \
--area-in-slice --sum-in-slice --sum-proj-in-slice \
--valuesfile=a370-crop.fits --valueshdu=1 \
--output=cat-with-continuum.fits
@end example
@node Extracting a single spectrum and plotting it, Cubes with logarithmic third dimension, 3D measurements and spectra, Detecting lines and extracting spectra in 3D data
@subsection Extracting a single spectrum and plotting it
In @ref{3D measurements and spectra} we measured the spectra of all the objects within the MUSE data cube of this demonstration tutorial.
Let's now write the resulting spectrum of our object 181 into a file, so we can view it in TOPCAT for a more visual inspection.
But we don't want slice numbers (which are specific to MUSE), we want the horizontal axis to be in Angstroms.
To do that, we can use the WCS information:
@table @code
@item CRPIX3
The ``Coordinate Reference PIXel'' in the 3rd dimension (or the slice number of the reference).
Let's call this @mymath{s_r}.
@item CRVAL3
The ``Coordinate Reference VALue'' in the 3rd dimension (the WCS coordinate of the slice in @code{CRPIX3}).
Let's call this @mymath{\lambda_r}.
@item CDELT3
The ``Coordinate DELTa'' in the 3rd dimension, or how much the WCS changes with every slice.
Let's call this @mymath{\delta}.
@end table
@noindent
To find the @mymath{\lambda} (wavelength) of any slice with number @mymath{s}, we can simply use this equation:
@dispmath{\lambda=\lambda_r+\delta(s-s_r)}
Let's extract these three values from the FITS WCS keywords as shell variables to automatically do this within Table's column arithmetic.
Here we are using the technique that is described in @ref{Separate shell variables for multiple outputs}.
@example
$ eval $(astfits seg.fits --keyvalue=CRPIX3,CRVAL3,CDELT3 -q \
| xargs printf "sr=%s; lr=%s; d=%s;")
## Just for a check:
$ echo $sr
1.000000e+00
$ echo $lr
4.749679687500000e-07
$ echo $d
1.250000000000000e-10
@end example
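As a quick sanity check of the equation above (a sketch using Arithmetic on plain numbers; see @ref{Reverse polish notation}), let's compute the wavelength of slice 3050, which we saw contains H-alpha emission in @ref{3D measurements and spectra}:
@example
## Should print about 8.56093e-07 (meters), or 8560.93 Angstroms.
$ astarithmetic --quiet $lr $d 3050 $sr - x +
@end example
This is indeed within the 8560 to 8570 Angstrom range that we used to find this object.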
Now that we have the necessary constants, we can simply convert the equation above into @ref{Reverse polish notation} and use column arithmetic to convert the slice counter into wavelength in the command of @ref{3D measurements and spectra}.
@example
$ asttable cat.fits --range=AWAV,8.560e-7,8.570e-7 -O \
-carea-in-slice,sum-in-slice,sum-proj-in-slice \
--transpose \
| asttable -c'arith $1 counter '$sr' - '$d' x '$lr' + f32 swap' \
-c2,3 --output=spectrum-obj-181.fits \
--colmetadata=1,WAVELENGTH,m,"Wavelength of slice." \
--colmetadata=2,"AREA-IN-SLICE",voxel,"No. of voxels."
$ astscript-fits-view spectrum-obj-181.fits
@end example
Once TOPCAT opens, take the following steps:
@enumerate
@item
In the ``Graphics'' menu, select ``Plane plot''.
@item
Change @code{AREA-IN-SLICE} to @code{SUM-PROJ-IN-SLICE}.
@item
Select the ``Form'' tab.
@item
Click on the button with the large green ``+'' sign and select ``Add line''.
@item
Un-select the ``Mark'' item that was originally selected.
@end enumerate
@noindent
Of course, the table in @file{spectrum-obj-181.fits} can be plotted using any other plotting tool you prefer to use in your scientific papers.
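For example, if your preferred tool reads plain text, Table can convert the FITS table for you (the @file{.txt} suffix triggers plain-text output; the output name is just a suggestion):
@example
$ asttable spectrum-obj-181.fits --output=spectrum-obj-181.txt
@end example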
In the next section (@ref{Cubes with logarithmic third dimension}), we'll review the necessary modifications to the recipes in this section for cubes where the third dimension is logarithmic, not linear (as in MUSE cubes).
Finally, in @ref{Synthetic narrow-band images}, you'll see how you can make narrow-band images of your desired target around your desired emission line.
@node Cubes with logarithmic third dimension, Synthetic narrow-band images, Extracting a single spectrum and plotting it, Detecting lines and extracting spectra in 3D data
@subsection Cubes with logarithmic third dimension
In @ref{Extracting a single spectrum and plotting it}, a single object's spectrum was extracted from the catalog and plotted.
Extracting the wavelength of each slice was easy there because MUSE data cubes provide a linear third dimension.
However, it can happen that the third axis of a cube is logarithmic, not linear (unlike the MUSE cube used in this tutorial).
An example in the optical regime is the data cubes of the @url{https://www.sdss4.org/surveys/manga/, MaNGA survey}@footnote{An example data cube from the MaNGA survey can be downloaded from here: @url{https://data.sdss.org/sas/dr17/manga/spectro/redux/v3_1_1/7443/stack/manga-7443-12703-LOGCUBE.fits.gz}}.
To identify if an axis is logarithmic or linear, the @url{https://ui.adsabs.harvard.edu/abs/2006A&A...446..747G, FITS WCS standard} (Section 3.2) says that you should look at the @code{CTYPE} keywords and check if any have a @code{-LOG} suffix.
For example, here is the output on a MaNGA data cube:
@example
$ astfits manga.fits -h1 | grep CTYPE
CTYPE1 = 'RA---TAN'
CTYPE2 = 'DEC--TAN'
CTYPE3 = 'WAVE-LOG'
@end example
In the same section, the FITS standard describes how the ``world coordinate'' (wavelength in this case) can be calculated in such cases.
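Using the notation of @ref{Extracting a single spectrum and plotting it}, our summary of that logarithmic-axis relation for the wavelength @mymath{\lambda} of slice @mymath{s} is:
@dispmath{\lambda=\lambda_r e^{\delta(s-s_r)/\lambda_r}}
Note that the command below implements @mymath{s-s_r} with column arithmetic's 0-based @code{index} operator, so it implicitly assumes @mymath{s_r=1} (as was the case for the cube in the previous sections).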
The column arithmetic command to add the wavelength to each slice's measurement is shown below (just for the first object in the catalog; adjust @option{--head=1} as you wish).
For the @code{d}, @code{lr} and @code{sr} shell variables that are used in this command, see @ref{Extracting a single spectrum and plotting it}.
@example
$ asttable cat.fits --head=1 -csum-in-slice --transpose \
| asttable -c'arith $1 index '$d' x '$lr' / set-p set-s \
e p pow '$lr' x s'
@end example
@node Synthetic narrow-band images, , Cubes with logarithmic third dimension, Detecting lines and extracting spectra in 3D data
@subsection Synthetic narrow-band images
In @ref{Continuum subtraction} we subtracted/separated the continuum from the emission/absorption lines of our galaxy in the MUSE cube.
Let's visualize the morphology of the galaxy at some of the spectral lines to see how it looks.
To do this, we will create synthetic narrow-band 2D images by collapsing the cube along the third dimension, over a wavelength range that is optimized for each line's flux.
Let's find the wavelength range that corresponds to H-alpha emission we studied in @ref{Extracting a single spectrum and plotting it}.
Fortunately, MakeCatalog can calculate the minimum and maximum position of each label along each dimension with the command below.
If you always need these values, you can include these columns in the same MakeCatalog call that we ran with @option{--sum-proj-in-slice}.
Here we are running it separately to help you follow the discussion.
@example
$ astmkcatalog seg.fits --output=cat-ranges.fits \
--ids --min-x --max-x --min-y --max-y --min-z --max-z
@end example
Let's extract the minimum and maximum positions of this particular object with the first command below.
With the second command, we'll write those six values into a single shell variable in the format of Crop's @ref{Crop section syntax}.
For more on the @code{eval}-based shell trick we used here, see @ref{Separate shell variables for multiple outputs}.
Finally, we are running Crop and viewing the cropped 3D cube.
@example
$ asttable cat-ranges.fits --equal=OBJ_ID,181 \
-cMIN_X,MAX_X,MIN_Y,MAX_Y,MIN_Z,MAX_Z
56 101 11 61 3044 3060
$ eval $(asttable cat-ranges.fits --equal=OBJ_ID,181 \
-cMIN_X,MAX_X,MIN_Y,MAX_Y,MIN_Z,MAX_Z \
| xargs printf "section=%s:%s,%s:%s,%s:%s; ")
$ astcrop no-continuum.fits --mode=img --section=$section \
--output=crop-no-continuum.fits
$ astscript-fits-view crop-no-continuum.fits
@end example
Go through the slices and you will only see this particular region of the full cube.
We can now collapse the third dimension of this image into a 2D synthetic narrow-band image with Arithmetic's @ref{Dimensionality changing operators}:
@example
$ astarithmetic crop-no-continuum.fits 3 collapse-sum \
--output=collapsed-all.fits
$ astscript-fits-view collapsed-all.fits
@end example
During the collapse, all the pixels in each slice were used.
This is not good for the faint outskirts: the noise of the slices (and pixels) with little signal decreases the overall signal-to-noise ratio in the synthetic narrow-band image.
So let's set all the pixels that aren't labeled with this object as NaN, then collapse.
To do that, we first need to crop the @code{OBJECT} cube in @file{seg.fits}.
With the second command, please have a look to confirm how the labels change as a function of wavelength.
@example
$ astcrop seg.fits -hOBJECTS --mode=img --section=$section \
--output=crop-obj.fits
$ astscript-fits-view crop-obj.fits
@end example
Let's use Arithmetic to set all the pixels of @file{crop-no-continuum.fits} that are not labeled 181 in @file{crop-obj.fits} to NaN, and then collapse.
With the second command, we are opening the two collapsed images together:
@example
$ astarithmetic crop-no-continuum.fits set-i \
crop-obj.fits set-o \
i o 181 ne nan where 3 collapse-sum \
-g1 --output=collapsed-obj.fits
$ astscript-fits-view collapsed-all.fits collapsed-obj.fits \
--ds9extra="-lock scalelimits yes -blink"
@end example
Let it blink a few times and focus on the outskirts: you will see that the diffuse flux in the outskirts has indeed been preserved better in the object-based collapsed narrow-band image.
But this is a little hard to appreciate in the 2D image.
To see it better in practice, let's compare the two radial profiles.
We will approximately assume a position angle of -80 and axis ratio of 0.7@footnote{To derive the axis ratio and position angle automatically, you can take the following steps. Note that we are not using NoiseChisel because this crop has been intentionally selected to contain signal, so there is no raw noise inside of it.
@example
$ aststatistics collapsed-all.fits --sky --tilesize=3,3
$ astarithmetic collapsed-all.fits -h1 collapsed-all_sky.fits -hSKY_STD / 5 gt
$ astmkcatalog collapsed-all_arith.fits -h1 --valuesfile=collapsed-all.fits \
--valueshdu=1 --position-angle --axis-ratio
$ asttable collapsed-all_arith_cat.fits -Y
-79.100 0.700
@end example
}.
With the final command below, we are opening both radial profiles in TOPCAT to visualize them.
We are also undersampling the radial profile to have better signal-to-noise ratio in the outer radii:
@example
$ astscript-radial-profile collapsed-all.fits \
--position-angle=-80 --axis-ratio=0.7 \
--undersample=2 --output=collapsed-all-rad.fits
$ astscript-radial-profile collapsed-obj.fits \
--position-angle=-80 --axis-ratio=0.7 \
--undersample=2 --output=collapsed-obj-rad.fits
@end example
To view the difference, let's merge the two profiles (the @code{MEAN} column) into one table and simply print the two profiles beside each other.
We will then pipe the resulting table containing both columns to a second call to Gnuastro's Table and use column arithmetic to subtract the two mean values and divide them by the optimized one (to get the fractional difference):
@example
$ asttable collapsed-all-rad.fits --catcolumns=MEAN -O \
--catcolumnfile=collapsed-obj-rad.fits \
| asttable -c1,2,3 -c'arith $3 $2 - $3 /' \
--colmetadata=2,MEAN-ALL \
--colmetadata=3,MEAN-OBJ \
--colmetadata=4,DIFF,frac,"Fractional diff." -YO
# Column 1: RADIUS [pix ,f32,] Radial distance
# Column 2: MEAN-ALL [input-units,f32,] Mean of sky subtracted values.
# Column 3: MEAN-OBJ [input-units,f32,] Mean of sky subtracted values.
# Column 4: DIFF [frac ,f32,] Fractional diff.
0.000 436.737 450.256 0.030
2.000 371.880 384.071 0.032
4.000 313.429 320.138 0.021
6.000 275.744 280.102 0.016
8.000 152.214 154.470 0.015
10.000 59.311 62.207 0.047
12.000 18.466 20.396 0.095
14.000 6.940 8.671 0.200
16.000 3.052 4.256 0.283
18.000 1.590 2.848 0.442
20.000 1.430 2.550 0.439
22.000 0.838 1.975 0.576
@end example
@noindent
As you see, beyond a radius of 10, the last fractional difference column becomes very large, showing that a lot of signal is missing in the @code{MEAN-ALL} column.
For a more visual comparison of the two profiles, you can use the command below to open both tables in TOPCAT:
@example
$ astscript-fits-view collapsed-all-rad.fits \
collapsed-obj-rad.fits
@end example
Once TOPCAT has opened take the following steps:
@enumerate
@item
Select @file{collapsed-all-rad.fits}
@item
In the ``Graphics'' menu, select ``Plane Plot''.
@item
Click on the ``Axes'' side-bar (by default, at the bottom half of the window), and click on ``Y Log'' to view the vertical axis in logarithmic scale.
@item
In the ``Layers'' menu, select ``Add Position Control''.
You will see that at the bottom half, a new scatter plot information is displayed.
@item
Click on the scroll-down menu in front of ``Table'' and select @file{2: collapsed-obj-rad.fits}.
Afterwards, you will see the optimized synthetic-narrow-band image radial profile as blue points.
@end enumerate
@node Color images with full dynamic range, Zero point of an image, Detecting lines and extracting spectra in 3D data, Tutorials
@section Color images with full dynamic range
Color images are fundamental tools for visualizing astronomical datasets, allowing us to see valuable physical information within them.
A color image is a composite representation derived from different channels.
Each channel usually corresponds to a different filter (each showing a wavelength interval of the object's spectrum).
In general, the most common color image formats (like JPEG, PNG or PDF) are defined from a combination of Red-Green-Blue (RGB) channels (to cover the optical range with normal cameras).
These three filters are hard-wired in your monitor and in the pixels of most normal cameras (for example smartphones or DSLRs).
For more on the concept and usage of colors, see @ref{Color} and @ref{Colormaps for single-channel pixels}.
@cindex Dynamic range
However, normal images (for example, those you take with your smartphone during the day) have a very limited dynamic range (the difference between the brightest and faintest parts of an image).
For example, in an image you take of a farm, the brightest pixel (the sky) cannot be more than 255 times the faintest/darkest shadow in the image (because normal cameras produce unsigned 8-bit integers, containing @mymath{2^8=256} levels; see @ref{Numeric data types}).
However, astronomical sources span a much wider dynamic range such that their central parts can be tens of millions of times brighter than their larger outer regions.
Our astronomical images in the FITS format are therefore usually 32-bit floating points to preserve this information.
As a result, a simple linear scaling of 32-bit astronomical data to the 8-bit range will put most of the pixels on the darkest level and barely show anything!
This presents a major challenge in visualizing our astronomical images on a monitor, in print or for a projector when showing slides.
In this tutorial, we review how to prepare your images and create informative RGB images for your PDF reports.
We start with aligning the images to the same pixel grid (which is usually necessary!) and using the low-level engine (Gnuastro's @ref{ConvertType} program) directly to create an RGB image.
Afterwards, we will use a higher-level installed script (@ref{Color images with gray faint regions}).
This is a high-level wrapper over ConvertType that does some pre-processing and stretches the pixel values to enhance their 8-bit representation before calling ConvertType.
@menu
* Color channels in same pixel grid:: Warping all inputs to the same pixel grid.
* Color image using linear transformation:: A linear color mapping won't show much!
* Color for bright regions and grayscale for faint:: Show the full dynamic range.
* Manually setting color-black-gray regions:: Physically motivated regions.
* Weights contrast markers and other customizations:: Nice ways to enhance visual appearance.
@end menu
@node Color channels in same pixel grid, Color image using linear transformation, Color images with full dynamic range, Color images with full dynamic range
@subsection Color channels in same pixel grid
In order to use different images as color channels, it is important that the images be properly aligned and on the same pixel grid.
When your inputs are high-level products of the same survey, this is usually the case.
However, in many other situations the images you plan to use as different color channels lie on different sky positions, even if they may have the same number of pixels.
In this section we will show how to solve this problem.
For an example dataset, let's use the same SDSS field that we used in @ref{Detecting large extended targets}: the field covering the outer parts of the M51 group.
With the commands below, we'll make an @file{in} directory and download and prepare the three g, r and i band images of SDSS over the same field there:
@example
$ mkdir in
$ sdssurl=https://dr12.sdss.org/sas/dr12/boss/photoObj/frames
$ for f in g r i; do \
wget $sdssurl/301/3716/6/frame-$f-003716-6-0117.fits.bz2 \
-O$f.fits.bz2; \
bunzip2 $f.fits.bz2; \
astfits $f.fits --copy=0 -oin/$f-sdss.fits; \
rm $f.fits; \
done
@end example
Let's have a look at the three images with the first command, and get their number of pixels with the second:
@example
## Open the images locked by image coordinates
$ astscript-fits-view in/*-sdss.fits
## Check the number of pixels along each axis of all images.
$ astfits in/*-sdss.fits --keyvalue=NAXIS1,NAXIS2
in/g-sdss.fits 2048 1489
in/i-sdss.fits 2048 1489
in/r-sdss.fits 2048 1489
@end example
From the first command, the images look like they cover the same astronomical object (M51) in the same region of the sky, and with the second, we see that they have the same number of pixels.
But this general visual inspection does not guarantee that the astronomical objects within the pixel grid cover exactly the same positions (within a pixel!) on the sky.
Let's open the images again, but this time asking DS9 to only show one at a time, and to ``blink'' between them:
@example
$ astscript-fits-view in/*-sdss.fits \
--ds9extra="-single -zoom to fit -blink"
@end example
If you pay attention, you will see that the objects within each image are at slightly different locations.
If you don't immediately see it, try zooming in to any star within the image and let DS9 continue blinking.
You will see that the star jumps a few pixels between each blink.
In essence, the images are not aligned on the same pixel grid, therefore, the same source does not share identical image coordinates across these three images.
As a consequence, it is necessary to align the images before making the color image, otherwise this misalignment will generate multiply-peaked point-sources (stars and centers of galaxies) and artificial color gradients in the more diffuse parts.
To align the images to the same pixel grid, we will employ Gnuastro's @ref{Warp} program.
In particular, its features to @ref{Align pixels with WCS considering distortions}.
Let's take the middle (r band) filter as the reference to define our grid.
With the first command after building the @file{aligned/} directory, let's align the r filter to the celestial coordinates (so the M51 group's position angle doesn't depend on the orientation of the telescope when it took this image).
For the other two filters, we will use Warp's @option{--gridfile} option to ensure that their pixel grid and WCS exactly match the r band image, while the pixel values come from each respective filter.
Finally, in the last command, we'll visualize the three aligned images.
@example
## Put all three channels in the same pixel grid.
$ mkdir aligned
$ astwarp in/r-sdss.fits --output=aligned/r-sdss.fits
$ astwarp in/g-sdss.fits --output=aligned/g-sdss.fits \
--gridfile=aligned/r-sdss.fits
$ astwarp in/i-sdss.fits --output=aligned/i-sdss.fits \
--gridfile=aligned/r-sdss.fits
## Open the images locked by image coordinates
$ astscript-fits-view aligned/*-sdss.fits \
--ds9extra="-single -zoom to fit -blink"
@end example
As the images blink between each other, zoom in to some of the smaller stars and you will see that they no longer jump from one blink to the next.
These images are now precisely pixel-aligned.
We are now equipped with the essential data to proceed with the color image generation in @ref{Color image using linear transformation}.
@node Color image using linear transformation, Color for bright regions and grayscale for faint, Color channels in same pixel grid, Color images with full dynamic range
@subsection Color image using linear transformation
Previously (in @ref{Color channels in same pixel grid}), we downloaded three SDSS filters of M51 and described how you can put them all in the same pixel grid.
In this section, we will explore the raw and low-level process of generating color images using the input images (without modifying the pixel value distributions).
We will use Gnuastro's ConvertType program (with executable name @command{astconvertt}).
Let's create our first color image using the aligned SDSS images mentioned in the previous section.
The order in which you provide the images matters, so ensure that you sort the filters from redder to bluer (iSDSS and gSDSS are respectively the reddest and bluest of the three filters used here).
@example
$ astconvertt aligned/i-sdss.fits aligned/r-sdss.fits \
aligned/g-sdss.fits -g1 --output=m51.pdf
@end example
@cartouche
@noindent
@strong{Other color formats:} In the example above, we are using PDF because this is usually the best format to later also insert marks that are commonly necessary in scientific publications (see @ref{Marking objects for publication}).
But you can also generate JPEG and TIFF outputs simply by using a different suffix for your output file (for example @option{--output=m51.jpg} or @option{--output=m51.tiff}).
@end cartouche
Open the image with your PDF viewer and have a look.
Do you see something?
Initially, it appears predominantly black.
However, upon closer inspection, you will discern very tiny points where some color is visible.
These points correspond to the brightest part of the brightest sources in this field!
The reason you saw much more structure when looking at the image in DS9 previously in @ref{Color channels in same pixel grid} was that @code{astscript-fits-view} used DS9's @option{-zscale} option to scale the values in a non-linear way!
Let's have another look at the images with the linear @code{minmax} scaling of DS9:
@example
$ astscript-fits-view aligned/*-sdss.fits \
--ds9extra="-scale minmax -lock scalelimits"
@end example
You see that it looks very similar to the PDF we generated above: almost fully black!
This phenomenon exemplifies the challenge discussed at the start of this tutorial (in @ref{Color images with full dynamic range}).
Given the vast number of pixels close to the sky background level compared to the relatively few very bright pixels, visualizing the entire dynamic range simultaneously is tricky.
To address this challenge, the low-level ConvertType program allows you to selectively choose the pixel value ranges to be displayed in the color image.
This can be accomplished using the @option{--fluxlow} and @option{--fluxhigh} options of ConvertType.
Pixel values below @option{--fluxlow} are mapped to the minimum value (displayed as black in the default colormap), and pixel values above @option{--fluxhigh} are mapped to the maximum value (displayed as white).
The choice of these values depends on the pixel value distribution of the images.
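To get a first feeling for that distribution, you can call Statistics with no special options; it prints basic statistics and a rough ASCII histogram (we only show the i band here as an example):
@example
$ aststatistics aligned/i-sdss.fits -h1
@end example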
But before that, we have to account for an important difference between the filters: the brightness of the background also differs between them (the sky has colors!).
So generally, before making more progress, you first have to subtract the sky from all three images that you want to feed into the color channels.
In a previous tutorial (@ref{Detecting large extended targets}) we used these same images to show how you can do perfect sky subtraction in the presence of large extended objects like M51.
Here we are just doing a visualization and bringing pixels to 8-bit, so we don't need the level of precision reached there (we won't be doing photometry!).
Therefore, let's just keep the @option{--tilesize=100,100} option of NoiseChisel.
@example
$ mkdir no-sky
$ for f in i r g; do \
astnoisechisel aligned/$f-sdss.fits --tilesize=100,100 \
--output=no-sky/$f-sdss.fits; \
done
@end example
@cartouche
@noindent
@strong{Accounting for zero points:} An important step that we have not implemented in this section is to unify the zero points of the three filters.
In the case of SDSS (and some other surveys), the images have already been brought to the same zero point, but that is not generally the case.
So before subtracting sky (and estimating the standard deviation) you should also unify the zero points of your images (for example through Arithmetic's @code{counts-to-customzp}, @code{counts-to-nanomaggy} or @code{counts-to-jy} described in @ref{Unit conversion operators}).
If you don't already have the zero point of your images, see the dedicated tutorial to measure it: @ref{Zero point of an image}.
@end cartouche
Now that we know the noise fluctuates around zero in all three images, we can start to define the values for @option{--fluxlow} and @option{--fluxhigh}.
But the sky standard deviation depends on the sky brightness, which differs between filters!
Let's have a look by taking the median value of the @code{SKY_STD} extension of NoiseChisel's output:
@example
$ aststatistics no-sky/i-sdss.fits -hSKY_STD --median
2.748338e-02
$ aststatistics no-sky/r-sdss.fits -hSKY_STD --median
1.678463e-02
$ aststatistics no-sky/g-sdss.fits -hSKY_STD --median
9.687680e-03
@end example
You see that the sky standard deviation of the reddest filter (i) is almost three times that of the bluest filter (g)!
This is usually the case (redder photons carry less energy and are absorbed less, so the background is usually brighter in the redder filters).
As a result, we should define our limits based on the noise of the reddest filter.
Let's set the minimum flux to 0 and the maximum flux to ~50 times the noise of the i-band image (@mymath{0.027\times50=1.35}).
@example
$ astconvertt no-sky/i-sdss.fits no-sky/r-sdss.fits no-sky/g-sdss.fits \
-g1 --fluxlow=0.0 --fluxhigh=1.35 --output=m51.pdf
@end example
After opening the new color image, you will observe that a spiral arm of M51 and M51B (or NGC5195, which is interacting with M51) become visible.
However, the majority of the image remains black.
Feel free to experiment with different values for @option{--fluxhigh} to set the maximum value closer to the noise-level and see the more diffuse structures.
For instance, with @option{--fluxhigh=0.27} the brightest pixels will have a signal-to-noise ratio of 10, and with @option{--fluxhigh=0.135}, a signal-to-noise ratio of 5.
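For example, you can compare several values with a small loop like the one below (a sketch; the output names such as @file{m51-0.27.pdf} are only for this comparison):
@example
$ for h in 0.27 0.135; do \
    astconvertt no-sky/i-sdss.fits no-sky/r-sdss.fits \
                no-sky/g-sdss.fits -g1 --fluxlow=0.0 \
                --fluxhigh=$h --output=m51-$h.pdf; \
  done
@end example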
But you will notice that the brighter areas of the galaxy become ``saturated'': you no longer see the structure of the brighter parts of the galaxy.
As you bring down the maximum threshold, the saturated areas also increase in size: losing some useful information on the bright side!
Let's go to the extreme and decrease the threshold to close to the noise level (for example @option{--fluxhigh=0.027}, to have a signal-to-noise ratio of 1)!
You will see that the noise now becomes colored!
You generally don't want this because the difference in filter values of one pixel is only physically meaningful when it has a high signal-to-noise ratio.
For lower signal-to-noise ratios, we should avoid color.
Ideally, we want to see both the brighter parts of the central galaxy, as well as the fainter diffuse parts together!
But with the simple linear transformation here, that is not possible!
You need some pre-processing (before calling ConvertType) to scale the images.
For example, you can experiment with taking the logarithm or the square root of the images (using @ref{Arithmetic}) before creating the color image.
These non-linear functions transform pixel values, mapping them to a new range.
After applying such transformations, you can use the transformed images as inputs to @command{astconvertt} to generate color images (just as we used the sky-subtracted images; sky subtraction is a linear operation).
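For example, a minimal sketch of the square root option would be something like this (the @file{sqrt/} directory and output names are just for illustration; note that negative noise pixels will become NaN after the square root):
@example
$ mkdir sqrt
$ for f in i r g; do \
    astarithmetic no-sky/$f-sdss.fits -g1 sqrt \
                  --output=sqrt/$f-sdss.fits; \
  done
$ astconvertt sqrt/i-sdss.fits sqrt/r-sdss.fits \
              sqrt/g-sdss.fits -g1 --output=m51-sqrt.pdf
@end example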
In addition, it is possible to use a different color scheme for showing the different brightness ranges, as explained in the next section.
In the next section (@ref{Color for bright regions and grayscale for faint}), we'll review one high-level installed script which will simplify all these pre-processings and help you produce images with more information in them.
@node Color for bright regions and grayscale for faint, Manually setting color-black-gray regions, Color image using linear transformation, Color images with full dynamic range
@subsection Color for bright regions and grayscale for faint
In the previous sections we aligned three SDSS images of the M51 group (in @ref{Color channels in same pixel grid}) and created a linearly-scaled color image (using only the @command{astconvertt} program) in @ref{Color image using linear transformation}.
But we saw that showing the brighter and fainter parts of the galaxy in a single image is impossible in the linear scale!
In this section, we will use Gnuastro's @command{astscript-color-faint-gray} installed script to address this problem and create images which visualize a major fraction of the contents of our astronomical data.
This script aims to solve the problems mentioned in the previous section.
See Infante-Sainz et al. @url{https://arxiv.org/abs/2401.03814,2024}, which first introduced this script, for examples of the final images we will be producing in this tutorial.
This script uses a non-linear transformation to modify the bright input values before combining them to produce the color image.
Furthermore, for the faint regions of the image, it will use grayscale and avoid color altogether (as we saw, colored noise is not nice to look at!).
The faint regions are also inverted: the brightest pixel in the faint (black-and-white or grayscale) region is black and the faintest pixels will be white.
Black therefore creates a smooth transition from the colored bright pixels: the faintest colored pixel is also black.
Since the background is white and the diffuse parts are black, the final product will also look nice in print or on a projector (the background is white, not black!).
The SDSS image we used in the previous sections doesn't show the full glory of the M51 group!
Therefore, in this section, we will use the wider images from the @url{https://www.j-plus.es, J-PLUS survey}.
Fortunately J-PLUS includes the SDSS filters, so we can use the same iSDSS, rSDSS, and gSDSS filters of J-PLUS.
As a consequence, similar to the previous section, the R, G, and B channels are respectively mapped to the iSDSS, rSDSS and gSDSS filters of J-PLUS.
The J-PLUS identification numbers for the images containing the M51 galaxy group in these three filters are respectively: 92797, 92801, and 92803.
The J-PLUS images are already sky subtracted and aligned into the same pixel grid (so we will not need the @command{astwarp} and @command{astnoisechisel} steps before).
However, the zero point magnitudes of the J-PLUS images are different: 23.43, 23.74 and 23.74, respectively.
Also, the field of view of the J-PLUS Camera is very large and we only need a small region to see the M51 galaxy group.
Therefore, we will crop the regions around the M51 group with a width of 0.35 degrees (or 21 arcmin) and put the crops in the same @file{aligned/} directory we made before (which contains the inputs to the color images).
With all the above information, let's download, crop, and have a look at the images to check that everything is fine.
Finally, let's run @command{astscript-color-faint-gray} on the three cropped images.
@example
## Download
$ url=https://archive.cefca.es/catalogues/vo/siap/jplus-dr3/get_fits?id=
$ wget "$url"92797 -Oin/i-jplus.fits.fz
$ wget "$url"92801 -Oin/r-jplus.fits.fz
$ wget "$url"92803 -Oin/g-jplus.fits.fz
## Crop
$ widthdeg=0.35
$ ra=202.4741207
$ dec=47.2171879
$ for f in i r g; do \
astcrop in/$f-jplus.fits.fz --center=$ra,$dec \
--width=$widthdeg --output=aligned/$f-jplus.fits; \
done
## Visual inspection of the images used for the color image
$ astscript-fits-view aligned/*-jplus.fits
## Create colored image.
$ R=aligned/i-jplus.fits
$ G=aligned/r-jplus.fits
$ B=aligned/g-jplus.fits
$ astscript-color-faint-gray $R $G $B -g1 --output=m51.pdf
@end example
After opening the PDF, you will notice that it is a color image with a gray background, making the M51 group and background galaxies visible together.
However, the image does not look nice and there is significant room for improvement!
You will notice that at the end of its operation, the script printed some numerical values for four options in a table, to show automatically estimated parameter values.
To enhance the output, let's go through and explain these step by step.
@cartouche
@noindent
@strong{Zero as blank value:}
@cindex Blank values
@cindex Zero as blank/NaN
@cindex NaN (Not a Number)
@cindex Not a Number (NaN)
Some astronomical data analysis software do not put ``Not a Number'' (NaN) in pixels that do not have data (for example there was no exposure there); instead they put a value of zero (or any other arbitrary number)!
When present, such pixels usually occur on the outer edges of images (for example the image was taken at a rotated angle to the equatorial coordinates of the pixel grid).
However, zero (or any arbitrary number) is statistically meaningful and will bias the measurements done in this (or any other) analysis.
The examples here don't have such regions, but it is important to be prepared.
If your inputs suffer from this problem, run the command below to convert the zero (or any other arbitrary value) to a NaN before starting to use this script:
@example
$ astarithmetic img.fits set-i i i 0 eq nan where --output=good.fits
@end example
@end cartouche
The first important point to take into account is the photometric calibration.
If the images are photometrically calibrated, then it is necessary to use the calibration to put the images in the same physical units and create ``real'' colors.
The script can apply the calibration through the zero point magnitudes with the @option{--zeropoint} (or @option{-z}) option.
With this option, the images are internally transformed to have the same pixel units before the color image is created.
Since the magnitude zero points are 23.43, 23.74, and 23.74 for the i, r, and g images respectively, let's use them in the command below:
@example
$ astscript-color-faint-gray $R $G $B -g1 --output=m51.pdf \
-z23.43 -z23.74 -z23.74
@end example
Open the image and have a look.
This image does not differ too much from the one generated by default (not using the zero point magnitudes).
This is because the zero point values used here are similar for the three images.
But in other datasets the calibration could make a big difference!
Let's consider another vital parameter: the minimum value to be displayed (@option{--minimum} or @option{-m}).
Pixel values below this number will not be shown on the color image.
In general, if the sky background has been subtracted (see @ref{Color image using linear transformation}), you can use the same value (0) for all three.
However, it is possible to consider different minimum values for the inputs (in this case use as many @option{-m} as input images).
In this particular case, a minimum value of zero for all images is suitable.
To keep the command simple, we'll add the zero point, minimum and HDU of each image to the variable that also holds its file name.
@example
$ R="aligned/i-jplus.fits -h1 --zeropoint=23.43 --minimum=0.0"
$ G="aligned/r-jplus.fits -h1 --zeropoint=23.74 --minimum=0.0"
$ B="aligned/g-jplus.fits -h1 --zeropoint=23.74 --minimum=0.0"
$ astscript-color-faint-gray $R $G $B --output=m51.pdf
@end example
In contrast to the previous image, the new PDF (with a minimum value of zero) exhibits a better background visualization because negative pixels are no longer included in the scaling (they are shown as white).
Now let's briefly review how the script modifies the pixel value distribution in order to show the entire dynamic range in an appropriate way.
The script combines the three images into a single one using the mean operator; as a consequence, the combined image is the average of the three R, G, and B images.
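For reference, a minimal sketch of that internal combination with Gnuastro's Arithmetic would be the command below (ignoring the zero point transformation that the script applies internally; @file{mean.fits} is just an illustrative name and is not used later):
@example
$ astarithmetic aligned/i-jplus.fits aligned/r-jplus.fits \
                aligned/g-jplus.fits 3 mean -g1 \
                --output=mean.fits
@end example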
This averaged image is used for performing the asinh transformation of Lupton et al. @url{https://ui.adsabs.harvard.edu/abs/2004PASP..116..133L, 2004} that is controlled by two parameters: @option{--qbright} (@mymath{q}) and @option{--stretch} (@mymath{s}).
The asinh transformation consists of transforming the combined image (@mymath{I}) according to the expression: @mymath{f(I) = asinh(q\times{}s\times{}I)/q}.
When @mymath{q\rightarrow0}, the expression becomes linear with a slope of the ``stretch'' (@mymath{s}) parameter: @mymath{f(I) = s\times{}I}.
In practice, we can use this characteristic to first set a low value for @option{--qbright} and see the brighter parts in color, while increasing the @option{--stretch} parameter to show the fainter regions linearly (for example, the outskirts of the galaxies).
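To get a feeling for this compression, the tiny sketch below evaluates the transformation for two pixel values with plain @command{awk} (using @mymath{asinh(x)=\ln(x+\sqrt{x^2+1})}), assuming the @mymath{q=0.01} and @mymath{s=100} values that we will adopt below:
@example
$ echo "1 100" | awk '@{q=0.01; s=100;
      for(i=1;i<=NF;++i)@{x=q*s*$i;
        print $i, log(x+sqrt(x^2+1))/q@}@}'
1 88.1374
100 529.834
@end example
A pixel value of 1 is mapped to roughly 88 (close to the linear @mymath{s\times{}I=100}), while a pixel value of 100 is mapped to only about 530 (instead of 10000): bright values are strongly compressed while faint values stay nearly linear.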
The image obtained previously was computed with the default parameters (@option{--qbright=1.0} and @option{--stretch=1.0}).
So, let's set a lower value for @option{--qbright} and check the result.
@example
$ astscript-color-faint-gray $R $G $B --output=m51-qlow.pdf \
--qbright=0.01
@end example
Comparing @file{m51.pdf} and @file{m51-qlow.pdf}, you will see that a large area of the previously colored pixels has become black.
Only the very brightest pixels (core of the galaxies and stars) are shown in color.
Now, let's bring out the fainter regions around the brightest pixels linearly by increasing @option{--stretch}.
This allows you to reveal fainter regions, such as outer parts of galaxies, spiral arms, stellar streams, and similar structures.
Please, try different values to see the effect of changing this parameter.
Here, we will use the value of @option{--stretch=100}.
@example
$ astscript-color-faint-gray $R $G $B --output=m51-qlow-shigh.pdf \
--qbright=0.01 --stretch=100
@end example
Do you see how the spiral arms and the outskirts of the galaxies have become visible as @option{--stretch} is increased?
After some trials, you will have the necessary feeling to see how it works.
Please, play with these two parameters until you obtain the desired results.
Depending on the absolute pixel values of the input images and the photometric calibration, these two parameters will be different.
So, when using this script on your own data, take your time to study and analyze which parameters are good for showing the entire dynamical range.
For this tutorial, we will keep it simple and use the previous parameters.
Let's define a new variable to keep the parameters already discussed so we have short command-line examples.
@example
$ params="--qbright=0.01 --stretch=100"
$ astscript-color-faint-gray $R $G $B $params --output=m51.pdf
$ rm m51-qlow.pdf m51-qlow-shigh.pdf
@end example
Having a separate color map for the fainter parts is generally a good thing, but in some situations you may not want it!
To disable this feature, you can use the @option{--coloronly} option:
@example
$ astscript-color-faint-gray $R $G $B $params --coloronly \
--output=m51-coloronly.pdf
@end example
Open the image and note that now the coloring goes all the way into the noise (producing a black background).
In contrast to the gray-background images before, the fainter/smaller stars/galaxies and the low surface brightness features are not visible anymore!
These faint features include the signatures of the interaction between the two galaxies, as well as all the other background galaxies and foreground stars; they were entirely hidden in the ``only-color'' image.
Consequently, the gray-background color scheme is particularly useful for visualizing the full range of features in your data, and you will rarely need the @option{--coloronly} option.
We will therefore not use this option any more in this tutorial; let's clean up the temporary file made before:
@example
$ rm m51-coloronly.pdf
@end example
Now that the basic parameters are set, let's consider other parameters that allow you to fine-tune the three ranges of values: color for the brightest pixel values, black for intermediate pixel values, and gray for the faintest pixel values:
@itemize
@item
@option{--colorval} defines the boundary between the color and black regions (the lowest pixel value that is colored).
@item
@option{--grayval} defines the boundary between the black and gray regions (the highest gray value).
@end itemize
Looking at the last lines that the script prints, we see that the default values estimated for @option{--colorval} and @option{--grayval} are both roughly 1.4.
What do they mean?
To answer this question it is necessary to have a look at the image that is used to separate those different regions.
By default, this image is computed internally by the script and removed at the end.
To have a look at it, you need to use the option @option{--keeptmp} to keep the temporary files.
Let's put the temporary files into a @file{tmp} directory with the options @option{--tmpdir=tmp --keeptmp}.
The first will use the name @file{tmp} for the temporary directory and with the second, we ask the script to not delete (to keep) it after all operations are done.
@example
$ astscript-color-faint-gray $R $G $B $params --output=m51.pdf \
--tmpdir=tmp --keeptmp
@end example
The image that defines the thresholds is @file{./tmp/colorgray_threshold.fits}.
By default, this image is the asinh-transformed image with the pixel values between 0 (faint) and 100 (bright).
If you obtain the statistics of this image, you will see that the median value is exactly the value that the script is giving as the @option{--colorval}.
@example
$ aststatistics ./tmp/colorgray_threshold.fits
@end example
In other words, all pixels between 100 and this value (1.4) on the threshold image will be shown in color.
To see its effect, let's increase this parameter to @option{--colorval=25}.
By doing this, we expect that only bright pixels (those between 100 and 25 in the threshold image) will be in color.
@example
$ astscript-color-faint-gray $R $G $B $params --colorval=25 \
--output=m51-colorval.pdf
@end example
Open @file{m51-colorval.pdf} and check that it is true!
Only the central part of the objects (very bright pixels, those between 100 and 25 on the threshold image) are shown in color.
Fainter pixels (below 25 on the threshold image) are shown in black and gray.
However, in many situations it is good to be able to show the outskirts of galaxies and low surface brightness features in pure black, while showing the background in gray.
To do that, we can use another threshold that separates the black and gray pixels: @option{--grayval}.
Similar to @option{--colorval}, the @option{--grayval} option defines the separation between the pure black and the gray pixels from the threshold image.
For example, by setting @option{--grayval=5}, pixels below 5 in the threshold image will be shown in gray, and brighter pixels will be shown in black up to the value 25.
Pixels brighter than 25 are shown in color.
@example
$ astscript-color-faint-gray $R $G $B $params --output=m51-check.pdf \
--colorval=25 --grayval=5
@end example
Open the image and check that the regions shown in color are smaller (as before), and that there is now a region around those color pixels that is shown in pure black.
Beyond the black pixels, toward the fainter ones, pixels are shown in gray.
As explained above, in the gray region, the brightest are black and the faintest are white.
It is recommended to experiment with different values around the estimated ones to get a feeling for how they change the image.
To get an even better idea of those regions, run the following example to keep the temporary files and check the labeled image it has produced:
@example
$ astscript-color-faint-gray $R $G $B $params --output=m51-check.pdf \
--colorval=25 --grayval=5 \
--tmpdir=tmp --keeptmp
$ astscript-fits-view tmp/total_mask-2color-1black-0gray.fits
@end example
In this segmentation image, pixels equal to 2 will be shown in color, pixels equal to 1 will be shown as pure black, and pixels equal to zero are shown in gray.
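For instance, here is a quick sketch to count how many pixels fall in each of the three regions (the value-selection options just isolate each label):
@example
$ for v in 0 1 2; do \
    aststatistics tmp/total_mask-2color-1black-0gray.fits \
                  --greaterequal=$v --lessthan=$((v+1)) \
                  --number; \
  done
@end example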
By default, the script sets the same value for both thresholds, which means that there are not many pure black pixels.
By adjusting the @option{--colorval} and @option{--grayval} parameters, you can obtain an optimal result to show the bright and faint parts of your data within one printable image.
The values used here are somewhat extreme to illustrate the logic of the procedure, but we encourage you to experiment with values close to those estimated by default in order to have a smooth transition between the three regions (color, black, and gray).
The script can provide additional information about the pixel value distributions used to estimate the parameters by using the @option{--checkparams} option.
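For example (the printed distributions help in choosing the thresholds; the output image itself is the same as before):
@example
$ astscript-color-faint-gray $R $G $B $params --checkparams \
                             --output=m51-check.pdf
@end example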
To conclude this section of the tutorial, let's clean up the temporary test files:
@example
$ rm m51-check.pdf m51-colorval.pdf
@end example
@node Manually setting color-black-gray regions, Weights contrast markers and other customizations, Color for bright regions and grayscale for faint, Color images with full dynamic range
@subsection Manually setting color-black-gray regions
In @ref{Color for bright regions and grayscale for faint}, we created a non-linear colored image.
We used the @option{--colorval} and @option{--grayval} options to specify which regions to show in gray (faintest values), black (intermediate values) and color (brightest values).
We also saw that the script uses a labeled image with three possible values for each pixel to identify how that pixel should be colored.
A useful feature of this script is the possibility of providing this labeled image as an input directly.
This expands the possibilities of generating color images in a more quantitative way.
In this section, we'll use this feature to select the three regions with a more physically motivated criterion: the surface brightness in the reddest band.
First, let's generate a surface brightness image from the R channel.
That is, the value of each pixel will be in units of surface brightness (mag/arcsec@mymath{^2}).
To do that, we need to obtain the pixel area (in arcsec@mymath{^2}) and use the zero point value of the image.
Then, the @code{counts-to-sb} operator of @command{astarithmetic} is used.
For more on the conversion of NaN surface brightness values and the value of @code{sb_sbl} (which is roughly the surface brightness limit of this image), see @ref{FITS images in a publication}.
@example
$ sb_sbl=26
$ sb_zp=23.43
$ sb_img=aligned/i-jplus.fits
$ pixarea=$(astfits $sb_img --pixelareaarcsec2 --quiet)
# Compute the SB image (set NaNs to SB of 26!)
$ astarithmetic $sb_img $sb_zp $pixarea counts-to-sb set-sb \
sb sb isblank sb $sb_sbl gt or $sb_sbl where \
--output=sb.fits
# Have a look at the image
$ astscript-fits-view sb.fits --ds9scale=minmax \
--ds9extra="-invert"
@end example
Remember that @file{sb.fits} is a surface brightness image: lower values are brighter and higher values are fainter.
Let's build the labeled image that defines the regions (@file{regions.fits}) step-by-step with the following criteria in surface brightness (SB):
@table @asis
@item @mymath{\rm{SB}<23}
These are the brightest pixels, we want these in color.
In the regions labeled image, these should get a value of 2.
@item @mymath{23<\rm{SB}<25}
These are the intermediate pixel values, to see the fainter parts better, we want these in pure black (no change in color in this range).
In the regions labeled image, these should get a value of 1.
@item @mymath{\rm{SB}>25}
These are the faintest pixel values, we want these in a gray color map (pixels with an SB of 25 will be black and as they become fainter, they will become lighter shades of gray).
In the regions labeled image, these should get a value of 0.
@end table
@example
# SB thresholds (low and high)
$ sb_faint=25
$ sb_bright=23
# Select the three ranges of pixels.
$ astarithmetic sb.fits set-sb \
sb $sb_bright lt set-color \
sb $sb_bright ge sb $sb_faint lt and set-black \
color 2 u8 x black + \
--output=regions.fits
# Check the images
$ astscript-fits-view regions.fits
@end example
We can now use this labeled image with the @option{--regions} option for obtaining the final image with the desired regions (the @code{R}, @code{G}, @code{B} and @code{params} shell variables were set previously in @ref{Color for bright regions and grayscale for faint}):
@example
$ astscript-color-faint-gray $R $G $B $params --output=m51-sb.pdf \
--regions=regions.fits
@end example
Open @file{m51-sb.pdf} and have a look.
Do you see how the different regions (SB intervals) have been colored differently?
They come from the SB levels we defined; because absolute thresholds in physical units of surface brightness are used, the visualization is not only a nice-looking color image, but can also be used in scientific analysis.
This is really interesting because now it is possible to use color images for detecting low surface brightness features while also providing quantitative measurements.
Of course, here we have defined the region label image using only two surface brightness thresholds, but it is possible to define any other labeled region image that you may need for your particular purpose.
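As a sketch of that flexibility, the hypothetical example below would color only NoiseChisel's detected pixels and show everything else in gray (no pure black region; the file names are only for illustration, and NoiseChisel's default parameters may need tuning for your data):
@example
$ astnoisechisel aligned/i-jplus.fits --output=det.fits
$ astarithmetic det.fits -hDETECTIONS 2 u8 x \
                --output=regions-det.fits
$ astscript-color-faint-gray $R $G $B $params \
                --regions=regions-det.fits \
                --output=m51-det.pdf
@end example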
@node Weights contrast markers and other customizations, , Manually setting color-black-gray regions, Color images with full dynamic range
@subsection Weights, contrast, markers and other customizations
Previously (in @ref{Manually setting color-black-gray regions}) we used an absolute (in units of surface brightness) thresholding for selecting which regions to show by color, black and gray.
To keep the previous configurations and avoid long commands, let's add the previous options to the @code{params} shell variable.
To help in readability, we will repeat the other shell variables from previous sections also:
@example
$ R="aligned/i-jplus.fits -h1 --zeropoint=23.43 --minimum=0.0"
$ G="aligned/r-jplus.fits -h1 --zeropoint=23.74 --minimum=0.0"
$ B="aligned/g-jplus.fits -h1 --zeropoint=23.74 --minimum=0.0"
$ params="--regions=regions.fits --qbright=0.01 --stretch=100"
$ astscript-color-faint-gray $R $G $B $params --output=m51.pdf
@end example
To modify the color balance of the output image, you can weigh the three channels differently with the @option{--weight} or @option{-w} option.
For example, by using @option{-w1 -w1 -w2}, you give two times more weight to the blue channel than to the red and green channels:
@example
$ astscript-color-faint-gray $R $G $B $params -w1 -w1 -w2 \
--output=m51-weighted.pdf
@end example
The colored pixels of the output are much bluer now and the distinction between the two merging galaxies is clearer.
However, keep in mind that weighting the different filters can mislead subsequent analyses by the readers/viewers of this work (for example, they may falsely think that the galaxy is blue, and not red!).
If the reduction and photometric calibration are correct, and the images represent what you consider as the red, green, and blue channels, then the output color image should be suitable without weights.
In certain situations, the combination of channels may not have a traditional color interpretation.
For instance, combining an X-ray channel with an optical filter and a far-infrared image can complicate the interpretation in terms of human understanding of color.
But the physical interpretation remains valid as the different channels (colors in the output) represent different physical phenomena of astronomical sources.
Another, simpler example is the use of narrow-band filters, such as the H-alpha filter of the J-PLUS survey.
This is shown in the bottom-right panel of Figure 1 of Infante-Sainz et al. @url{https://arxiv.org/abs/2401.03814,2024}: there, the G channel was substituted by the image of the H-alpha filter to show the star-forming regions.
Therefore, please use the weights with caution, as they can significantly affect the output and misinform your readers/viewers.
If you do apply weights, be sure to report them in the caption of the image (besides the filters that were used for each channel).
With great power there must also come great responsibility!
Two additional transformations are available to modify the appearance of the output color image.
The linear transformation combines bias adjustment and contrast enhancement through the @option{--bias} and @option{--contrast} options.
In most cases, only the contrast adjustment is necessary to improve the quality of the color image.
To illustrate the impact of adjusting image contrast, we will generate an image with higher contrast and compare with the previous one.
@example
$ astscript-color-faint-gray $R $G $B $params --contrast=2 \
--output=m51-contrast.pdf
@end example
When you compare this (@file{m51-contrast.pdf}) with the previous output (@file{m51.pdf}), you will see that the colored parts are now much clearer!
Use this option also with caution because it may happen that the bright parts become saturated.
Another option available for transforming the image appearance is the gamma correction, a non-linear transformation that can be useful in specific cases.
You can experiment with different gamma values to observe the impact on the resulting image.
Lower gamma values will enhance faint structures, while higher values will emphasize brighter regions.
Let's have a look by giving two very different values to it with the simple loop below:
@example
$ for g in 0.4 2.0; do \
astscript-color-faint-gray $R $G $B $params --contrast=2 \
--gamma=$g --output=m51-gamma-$g.pdf; \
done
@end example
Comparing the last three files (@file{m51-contrast.pdf}, @file{m51-gamma-0.4.pdf} and @file{m51-gamma-2.0.pdf}), you will clearly see the effect of the @option{--gamma}.
Instead of using a combination of the three input images for the gray background, you can introduce a fourth image that will be used for generating the gray background.
This image is referred to as the ``K'' channel and may be useful when a particular filter is deeper, has unique characteristics, or when you have built it through some custom processing to show the diffuse features better.
In this case, this image will be used for defining the @option{--colorval} and @option{--grayval} thresholds, but the rationale remains the same as explained earlier.
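For example, assuming a fourth input image is taken as the K channel, a sketch of such a call would be the following (here simply re-using the r-band image as a stand-in for a deeper gray-background image):
@example
$ K="aligned/r-jplus.fits -h1"
$ astscript-color-faint-gray $R $G $B $K $params \
                             --output=m51-kchannel.pdf
@end example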
Two additional options are available to smooth different regions by convolving with a Gaussian kernel: @option{--colorkernelfwhm} for smoothing color regions and @option{--graykernelfwhm} for convolving gray regions.
The value specified for these options represents the full width at half maximum of the Gaussian kernel.
Finally, another commonly useful feature is @option{--markoptions}: it allows you to mark and label the final output image with vector graphics over the color image.
The arguments passed through this option are directly passed to ConvertType for the generation of the output image.
This feature was already used in @ref{Marking objects for publication} of the @ref{General program usage tutorial}; see there for a more complete introduction.
Let's create four marks/labels to illustrate the procedure within @command{astscript-color-faint-gray}.
First we need to create a table that contains the parameters of the marks (coordinates, shape, size, color, etc.).
In order to have an example that can easily be scaled to more marks with more elaborate options, let's create it in parts: the header with the column names, then the marker parameters.
With the following commands, we'll create the header that contains the column metadata.
@example
echo "# Column 1: ra [pix, f32] RA coordinate" > markers.txt
echo "# Column 2: dec [pix, f32] Dec coordinate" >> markers.txt
echo "# Column 3: shape [none, u8] Marker shape" >> markers.txt
echo "# Column 4: size [pix, f32] Marker Size" >> markers.txt
echo "# Column 5: aratio [none, f32] Axis ratio" >> markers.txt
echo "# Column 6: angle [deg, f32] Position angle" >> markers.txt
echo "# Column 7: color [none, u8] Marker color" >> markers.txt
@end example
Next is to create the parameters that define the markers.
In this case, with the lines below we create four markers (cross, ellipse, square, and line) at different positions, with different shapes, and colors.
These lines are appended to the header file created previously.
@example
echo "400.00 400.00 3 60.000 0.50 0.000 8" >> markers.txt
echo "1800.0 400.00 4 120.00 0.30 45.00 58" >> markers.txt
echo "400.00 1800.0 6 180.00 1.00 0.000 85" >> markers.txt
echo "1800.0 1800.0 8 240.00 1.00 -45.0 25" >> markers.txt
@end example
Now that we have the table containing the definition of the markers, we use the @option{--markoptions} option of this script.
This option passes whatever is given to it directly to ConvertType, so you can use all the options in @ref{Drawing with vector graphics}.
For this basic example, let's give it the following options:
@example
markoptions="--mode=img \
--sizeinarcsec \
--markshape=shape \
--markrotate=angle \
--markcolor=color \
--marks=markers.txt \
--markcoords=ra,dec \
--marksize=size,aratio"
@end example
The last step consists of executing the script with the option that provides all the marker options:
@example
$ astscript-color-faint-gray $R $G $B $params --contrast=2 \
--markoptions="$markoptions" \
--output=m51-marked.pdf
@end example
Open the @file{m51-marked.pdf} and check that the four markers have been printed on the image.
This quick example just shows how easily markers can be drawn on images.
The task can be automated, for example by plotting markers at the positions of the objects of a given catalog, as sketched below.
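For instance, the sketch below would append one circular marker (shape code 1) per row of a hypothetical catalog @file{cat.fits} that has @code{X} and @code{Y} columns in pixels (radius of 60, interpreted in arcsec here because of the @option{--sizeinarcsec} above; axis ratio 1, angle 0, color 8):
@example
$ asttable cat.fits -cX,Y \
      | awk '@{print $1, $2, 1, 60, 1.0, 0.0, 8@}' \
      >> markers.txt
@end example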
Note that there are many other options for customizing your markers/drawings over an output of ConvertType; see @ref{Drawing with vector graphics} and @ref{Marking objects for publication}.
Congratulations!
By following the tutorial up to this point, we have been able to reproduce three images of Infante-Sainz et al. @url{https://arxiv.org/abs/2401.03814,2024}.
You can see the commands that were used to generate them within the reproducible source of that paper at @url{https://codeberg.org/gnuastro/papers/src/branch/color-faint-gray}.
Remember that this paper is exactly reproducible with Maneage, so you can explore and build the entire paper by yourself.
For more on Maneage, see Akhlaghi et al. @url{https://ui.adsabs.harvard.edu/abs/2021CSE....23c..82A, 2021}.
This tutorial provided a general overview of the various options to construct a color image from three different FITS images using the @command{astscript-color-faint-gray} script.
Keep in mind that the optimal parameters for generating the best color image depend on your specific goals and the quality of your input images.
We encourage you to follow this tutorial with the provided J-PLUS images and later with your own dataset.
See @ref{Color images with gray faint regions} for more information, and please consider citing Infante-Sainz et al. @url{https://arxiv.org/abs/2401.03814,2024} if you use this script in your work (the full Bib@TeX{} entry of this paper will be given to you with the @option{--cite} option).
@node Zero point of an image, Pointing pattern design, Color images with full dynamic range, Tutorials
@section Zero point of an image
The ``zero point'' of an image is astronomical jargon for the calibration factor of its pixel values; allowing us to convert the raw pixel values to physical units.
It is therefore a critical step during data reduction.
For more on the definition and importance of the zero point magnitude, see @ref{Brightness flux magnitude} and @ref{Zero point estimation}.
@cindex SDSS
In this tutorial, we will use Gnuastro's @command{astscript-zeropoint}, to estimate the zero point of a single exposure image from the @url{https://www.j-plus.es, J-PLUS survey}, while using an @url{http://www.sdss.org, SDSS} image as reference (recall that all SDSS images have been calibrated to have a fixed zero point of 22.5).
In this case, both images that we are using were taken with the SDSS @emph{r} filter.
See Eskandarlou et al. @url{https://arxiv.org/abs/2312.04263,2023}.
@cartouche
@cindex Johnson filters
@cindex Johnson vs. SDSS filters
@cindex SDSS vs. Johnson filters
@cindex Filter transmission curve
@cindex Transmission curve of filters
@cindex SVO database (filter transmission curve)
@noindent
@strong{Same filters and SVO filter database:} It is very important that both your images are taken with the same filter.
When looking at filter names, don't forget that different filter systems sometimes have the same names for one filter, such as the name ``R''; which is used in both the Johnson and SDSS filter systems.
Hence if you confront an image in the ``R'' or ``r'' filter, double check to see exactly which filter system it corresponds to.
If you know which observatory your data came from, you can use the @url{http://svo2.cab.inta-csic.es/theory/fps, SVO database} to confirm the similarity of the transmission curves of the filters of your input and reference images.
SVO contains the filter data for many of the observatories world-wide.
@end cartouche
@menu
* Zero point tutorial with reference image:: Using a reference image.
* Zero point tutorial with reference catalog:: Using a reference catalog.
@end menu
@node Zero point tutorial with reference image, Zero point tutorial with reference catalog, Zero point of an image, Zero point of an image
@subsection Zero point tutorial with reference image
First, let's create a directory named @file{tutorial-zeropoint} to keep things clean, and work inside it.
Then, with the commands below, you can download an image from J-PLUS and SDSS.
To speed up the analysis, the image is cropped to have a smaller region around its center.
@example
$ mkdir tutorial-zeropoint
$ cd tutorial-zeropoint
$ jplusdr2=http://archive.cefca.es/catalogues/vo/siap/jplus-dr2/reduced
$ wget $jplusdr2/get_fits?id=771463 -O jplus.fits.fz
$ astcrop jplus.fits.fz --center=107.7263,40.1754 \
--width=0.6 --output=jplus-crop.fits
@end example
Although we cropped the J-PLUS image, it is still very large in comparison with the SDSS image (the J-PLUS field of view is almost @mymath{1.5\times1.5} deg@mymath{^2}, while the field of view of SDSS in each filter is almost @mymath{0.3\times0.5} deg@mymath{^2}).
Therefore, let's download two SDSS images (and then decompress them) in the region of the cropped J-PLUS image to have a more accurate result compared to a single SDSS footprint: generally, your zero point estimation will have less scatter with more overlap between your reference image(s) and your input image.
@example
$ sdssbase=https://dr12.sdss.org/sas/dr12/boss/photoObj/frames
$ wget $sdssbase/301/6509/5/frame-r-006509-5-0115.fits.bz2 \
-O sdss1.fits.bz2
$ wget $sdssbase/301/6573/5/frame-r-006573-5-0174.fits.bz2 \
-O sdss2.fits.bz2
$ bunzip2 sdss1.fits.bz2
$ bunzip2 sdss2.fits.bz2
@end example
To have a feeling of the data, let's open the three images with @command{astscript-fits-view} using the command below.
Wait a few seconds to see the three images ``blinking'' one after another.
The largest one is the J-PLUS crop and the two smaller ones that partially cover it in different regions are from SDSS.
@example
$ astscript-fits-view sdss1.fits sdss2.fits jplus-crop.fits \
--ds9extra="-lock frame wcs -single -zoom to fit -blink yes"
@end example
The test above showed that the three images are already astrometrically calibrated (the coverage of the pixel positions on the sky is correct in both).
To confirm, you can zoom-in to a certain object and confirm it on a pixel level.
It is always good to do the visual check above when you are confronted with new images (and may not be confident about the accuracy of the astrometry).
Do not forget that the goal here is to find the calibration of pixel values; and that we assume pixel positions are already calibrated (the image already has a good astrometry).
The SDSS images are Sky subtracted, while this single-exposure J-PLUS image still contains the counts related to the Sky emission within it.
In the J-PLUS survey, the sky-level in each pixel is kept in a separate @code{BACKGROUND_MODEL} HDU of @file{jplus.fits.fz}; this allows you to use a different sky if you like.
The SDSS image FITS files also have multiple extensions.
To understand our inputs, let's have a fast look at the basic info of each:
@example
$ astfits sdss1.fits
Fits (GNU Astronomy Utilities) @value{VERSION}
Run on Fri Apr 14 11:24:03 2023
-----
HDU (extension) information: 'sdss1.fits'.
Column 1: Index (counting from 0, usable with '--hdu').
Column 2: Name ('EXTNAME' in FITS standard, usable with '--hdu').
('n/a': no name in HDU metadata)
Column 3: Image data type or 'table' format (ASCII or binary).
Column 4: Size of data in HDU.
Column 5: Units of data in HDU (only images).
('n/a': no unit in HDU metadata, or HDU is a table)
-----
0 n/a float32 2048x1489 nanomaggy
1 n/a float32 2048 n/a
2 n/a table_binary 1x3 n/a
3 n/a table_binary 1x31 n/a
$ astfits jplus.fits.fz
Fits (GNU Astronomy Utilities) @value{VERSION}
Run on Fri Apr 14 11:21:30 2023
-----
HDU (extension) information: 'jplus.fits.fz'.
Column 1: Index (counting from 0, usable with '--hdu').
Column 2: Name ('EXTNAME' in FITS standard, usable with '--hdu').
('n/a': no name in HDU metadata)
Column 3: Image data type or 'table' format (ASCII or binary).
Column 4: Size of data in HDU.
Column 5: Units of data in HDU (only images).
('n/a': no unit in HDU metadata, or HDU is a table)
-----
0 n/a no-data 0 n/a
1 IMAGE float32 9216x9232 adu
2 MASKED_PIXELS int16 9216x9232 n/a
3 BACKGROUND_MODEL float32 9216x9232 n/a
4 MASK_MODEL uint8 9216x9232 n/a
@end example
Therefore, in order to be able to compare the SDSS and J-PLUS images, we should first subtract the sky from the J-PLUS image.
To do that, we can either subtract the @code{BACKGROUND_MODEL} HDU from the @code{IMAGE} HDU using @ref{Arithmetic}, or we can use @ref{NoiseChisel} to find a good sky ourselves.
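For reference, a minimal sketch of the first alternative would be the single Arithmetic call below (the output name is just for illustration; we will not use it in what follows):
@example
$ astarithmetic jplus.fits.fz jplus.fits.fz - \
                -hIMAGE -hBACKGROUND_MODEL \
                --output=jplus-bgsub.fits
@end example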
As scientists we like to tweak and be creative, so let's estimate it ourselves with the command below.
Generally, you will not have a pre-made Sky model like this, so you should be prepared to subtract the sky yourself.
@example
$ astnoisechisel jplus-crop.fits --output=jplus-nc.fits
$ astscript-fits-view jplus-nc.fits
@end example
Notice that there is a relatively bright star in the center-bottom of the image.
In the ``Cube'' window, click on the ``Next'' button to see the @code{DETECTIONS} HDU.
The large footprint of the bright star is obvious.
Press the ``Next'' button one more time to get to the @code{SKY} HDU.
You see that in the center-bottom, the footprint of the large star is clearly visible in the measured Sky level.
This is not good: the measured Sky level goes above 54 ADU in the center of the star (the white pixels)!
This over-subtracted Sky level in part of the image will affect your magnitude measurements and thus the zero point!
In @ref{General program usage tutorial}, we have a section on @ref{NoiseChisel optimization for detection}; there is also a full tutorial on this in @ref{Detecting large extended targets}.
Therefore, we will not go into the details of NoiseChisel optimization here.
Given the large images of J-PLUS, we will increase the tile-size to @mymath{100\times100} pixels and the number of neighbors to identify outlying tiles to 50 (these are usually the first parameters you should start editing when you are confronted with a new image).
After the second command, check the @code{SKY} extension to confirm that there is no footprint of any bright object there.
You will still see a gradient, but note the minimum and maximum values of the Sky level: their difference is more than 25 times smaller than the noise standard deviation (so statistically speaking, it is pretty flat!).
@example
$ astnoisechisel jplus-crop.fits --output=jplus-nc.fits \
--tilesize=100,100 --outliernumngb=50
$ astscript-fits-view jplus-nc.fits
## Check that the gradient in the sky is statistically negligible.
$ aststatistics jplus-nc.fits -hSKY --minimum --maximum \
| awk '@{print $2-$1@}'
0.32809
$ aststatistics jplus-nc.fits -hSKY_STD --median
8.377977e+00
@end example
We are now ready to find the zero point!
First, let's run the @command{astscript-zeropoint} with @option{--help} to see the option names (recall that you can see more details of each option in @ref{Invoking astscript-zeropoint}).
For the first time, let's use the script in the most simple state possible.
We will keep only the essential options: the names of the input and reference images (and their HDUs), the name of the output, and also two apertures with radii of 3 arcsec to start with:
@example
$ astscript-zeropoint --help
$ astscript-zeropoint jplus-nc.fits --hdu=INPUT-NO-SKY \
--refimgs=sdss1.fits,sdss2.fits \
--output=jplus-zeropoint.fits \
--refimgszp=22.5,22.5 \
--refimgshdu=0,0 \
--aperarcsec=3
@end example
The output is a FITS table (because generally, you will give more apertures and choose the best one based on a higher-level analysis).
Let's check the output's internal structure with Gnuastro's @command{astfits} program.
@example
$ astfits jplus-zeropoint.fits
-----
0 n/a no-data 0 n/a
1 ZEROPOINTS table_binary 1x3 n/a
2 APER-3 table_binary 321x2 n/a
@end example
You can see that there are two HDUs in this file.
The HDU names give a hint, so let's have a look at each extension with Gnuastro's @command{asttable} program:
@example
$ asttable jplus-zeropoint.fits --hdu=1 -i
--------
jplus-zeropoint.fits (hdu: 1)
------- ----- ---- -------
No.Name Units Type Comment
------- ----- ---- -------
1 APERTURE arcsec float32 n/a
2 ZEROPOINT mag float32 n/a
3 ZPSTD mag float32 n/a
--------
Number of rows: 1
--------
@end example
@noindent
As you can see, in the first extension, for each of the apertures you requested (@code{APERTURE}), there is a zero point (@code{ZEROPOINT}) and the standard deviation of the measurements on the apertures (@code{ZPSTD}).
In this case, we only requested one aperture, so it only has one row.
Now, let's have a look at the next extension:
@example
$ asttable jplus-zeropoint.fits --hdu=2 -i
--------
jplus-zeropoint.fits (hdu: 2)
------- ----- ---- -------
No.Name Units Type Comment
------- ----- ---- -------
1 MAG-REF f32 float32 Magnitude of reference.
2 MAG-DIFF f32 float32 Magnitude diff with input.
--------
Number of rows: 321
--------
@end example
It contains a table of measurements for the aperture with the least scatter.
In this case, we only gave one aperture, so it is the same.
If you give multiple apertures, only the one with least scatter will be present by default.
In the @code{MAG-REF} column you see the magnitudes within each aperture on the reference (SDSS) image(s).
The @code{MAG-DIFF} column contains the difference of the input (J-PLUS) and reference (SDSS) magnitudes for each aperture (see @ref{Zero point estimation}).
The two catalogs, created by the aperture photometry from the SDSS images, are merged into one so that there are more stars to compare.
Therefore, no matter how many reference images you provide, there will only be a single table here.
If the two SDSS images overlapped, each object in the overlap region would have two rows (one row for the measurement from one SDSS image, and another from the measurement from the other).
Now that we have obtained the zero point of the J-PLUS image, let's go a little deeper into lower-level details of how this script operates.
This will help you better understand what happened and how to interpret and improve the outputs when you are confronted with a new image and strange outputs.
During its operation, the @command{astscript-zeropoint} script keeps its intermediate results in a temporary directory, and deletes it (with all the intermediate products) once finished.
If you would like to check the intermediate files of each step, you can use the @option{--keeptmp} option to not remove them.
Let's take a closer look into the contents of each HDU.
First, we'll use Gnuastro's @command{asttable} to see the measured zero point for this aperture.
We are using @option{-Y} to have human-friendly (non-scientific!) numbers (which are sufficient here) and @option{-O} to also show the metadata of each column at the start.
@example
$ asttable jplus-zeropoint.fits -Y -O
# Column 1: APERTURE [arcsec,f32,] Aperture used.
# Column 2: ZEROPOINT [mag ,f32,] Zero point (sig-clip median).
# Column 3: ZPSTD [mag ,f32,] Zero point Standard deviation.
3.000 26.435 0.057
@end example
@noindent
Now, let's have a look at the first 10 rows of the second (@code{APER-3}) extension.
From the previous check we did above, we see that it contains 321 rows!
@example
$ asttable jplus-zeropoint.fits -Y -O --hdu=APER-3 --head=10
# Column 1: MAG-REF [f32,f32,] Magnitude of reference.
# Column 2: MAG-DIFF [f32,f32,] Magnitude diff with input.
16.461 30.035
16.243 28.209
15.427 26.427
20.064 26.459
17.334 26.425
20.518 26.504
17.100 26.400
16.919 26.428
17.654 26.373
15.392 26.429
@end example
But the table above is hard to interpret, so let's plot it.
To do this, we'll use the same @command{astscript-fits-view} command above that we used for images.
It detects whether the file has an image or a table HDU and will call DS9 or TOPCAT respectively.
You can also use any other plotter you like (TOPCAT is not part of Gnuastro), this script just calls it.
@example
$ astscript-fits-view jplus-zeropoint.fits --hdu=APER-3
@end example
After @code{TOPCAT} opens, you can select the ``Graphics'' menu and then ``Plain plot''.
This will show a plot with the SDSS (reference image) magnitude on the horizontal axis and the difference of magnitudes between the input and reference (the zero point) on the vertical axis.
In an ideal world, the zero point should be independent of the magnitude of the different stars that were used.
Therefore, this plot should be a horizontal line (with some scatter as we go to fainter stars).
But as you can see in the plot, in the real world, this expected behavior is seen only for stars with magnitudes about 16 to 19 in the reference SDSS images.
The stars that are brighter than 16 are saturated in one (or both) surveys@footnote{To learn more about saturated pixels and recognition of the saturated level of the image, please see @ref{Saturated pixels and Segment's clumps}}.
Therefore, they do not have the correct magnitude or mag-diff.
You can check some of these stars visually by using the blinking command above and zooming into some of the brighter stars in the SDSS images.
@cindex Depth of data
On the other hand, it is natural that we cannot measure accurate magnitudes for the fainter stars because the noise level (or ``depth'') of each image is limited.
As a result, the horizontal line becomes wider (scattered) as we go to the right (fainter magnitudes on the horizontal axis).
So, let's limit the range of used magnitudes from the SDSS catalog to calculate a more accurate zero point for the J-PLUS image.
For this reason, we have the @option{--magnituderange} option in @command{astscript-zeropoint}.
@cartouche
@noindent
@strong{Necessity of sky subtraction:}
To obtain this horizontal line, it is very important that both your images have been sky subtracted.
Please, repeat the last @command{astscript-zeropoint} command above only by changing the input file to @file{jplus-crop.fits}.
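For example, with something like the command below (a sketch: the output name is only for this check, and we assume the cropped image is in HDU 1 of @file{jplus-crop.fits}):
@example
$ astscript-zeropoint jplus-crop.fits --hdu=1 \
           --refimgs=sdss1.fits,sdss2.fits \
           --output=jplus-zeropoint-check.fits \
           --refimgszp=22.5,22.5 \
           --refimgshdu=0,0 \
           --aperarcsec=3
@end example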
Then use Gnuastro's @command{astscript-fits-view} again to draw a plot with @code{TOPCAT} (also same as above).
Instead of a horizontal line, you will see @emph{a sloped line} in the magnitude range above!
This happens because the sky level acts as a source of constant signal in all apertures, so the magnitude difference will not be independent of the star's magnitude, but dependent on it (the measurement on a fainter star will be dominated by the sky level).
@strong{Remember:} if you see a sloped line instead of a horizontal line, the input or reference image(s) are not sky subtracted.
@end cartouche
Another key parameter of this script is the aperture size (@option{--aperarcsec}) for the aperture photometry of images.
On one hand, if the selected aperture is too small, you will be at the mercy of the differing PSFs between your input and reference image(s): part of the light of the star will be lost in the image with the worse PSF.
On the other hand, with large aperture size, the light of neighboring objects (stars/galaxies) can affect the photometry.
We should select an aperture radius of the same order as the one used in the reference image, typically 2 to 3 times the PSF FWHM of the images.
For now, let's assume the values 2, 3, 4, 5, and 6 arcsec for the aperture sizes parameter.
The script will compare the results for the different aperture sizes and choose the one with the smallest standard deviation (the @code{ZPSTD} column of the @code{ZEROPOINTS} HDU).
Let's re-run the script with the following changes:
@itemize
@item
Using @option{--magnituderange} to limit the stars used for estimating the zero point.
@item
Giving more values for aperture size to find the best for these two images as explained above.
@item
Using the @option{--keepzpap} option to keep the results of matching the catalogs with each aperture in separate extensions of the output file.
@end itemize
@example
$ astscript-zeropoint jplus-nc.fits --hdu=INPUT-NO-SKY \
--refimgs=sdss1.fits,sdss2.fits \
--output=jplus-zeropoint.fits \
--refimgszp=22.5,22.5 \
--aperarcsec=2,3,4,5,6 \
--magnituderange=16,18 \
--refimgshdu=0,0 \
--keepzpap
@end example
Now, check the number of extensions with @command{astfits}:
@example
$ astfits jplus-zeropoint.fits
-----
0 n/a no-data 0 n/a
1 ZEROPOINTS table_binary 5x3 n/a
2 APER-2 table_binary 319x2 n/a
3 APER-3 table_binary 321x2 n/a
4 APER-4 table_binary 323x2 n/a
5 APER-5 table_binary 323x2 n/a
6 APER-6 table_binary 325x2 n/a
@end example
You can see that the output file now has a separate HDU for each aperture (thanks to @option{--keepzpap}).
The @code{ZEROPOINTS} HDU contains the final zero point values for each aperture and their error.
The best zero point value belongs to the aperture that has the least scatter (has the lowest standard deviation).
The rest of the extensions contain the individual measurements made within each aperture (as discussed above).
Let's check the different tables by plotting all magnitude tables at the same time with @code{TOPCAT}.
@example
$ astscript-fits-view jplus-zeropoint.fits
@end example
@noindent
After @code{TOPCAT} has opened take the following steps:
@enumerate
@item
From the ``Graphics'' menu, select ``Plain plot''.
You will see the last HDU's scatter plot open in a new window (for @code{APER-6}, with red points).
The bottom-left panel shows the icon of a red-blue scatter plot, with @code{6:jplus-zeropoint.fits} written in front of it (showing that this is the 6th HDU of this file).
In the bottom-right panel, you see the names of the columns that are being displayed.
@item
In the ``Layers'' menu, Click on ``Add Position Control''.
On the bottom-left panel, you will notice that a new blue-red scatter plot has appeared but it just says @code{<no table>}.
In the bottom-right panel, in front of ``Table:'', select any other extension.
This will plot the same two columns of that extension as blue points.
Zoom-in to the region of the horizontal line to see/compare the different scatters.
Change the HDU given to ``Table:'' and see the distribution of zero points for the different apertures.
@end enumerate
The manual/visual operation above is critical if this is your first time with a new dataset (it can reveal all kinds of systematic biases, like the Sky issue above)!
But once you know your data has no systematic biases, choosing between the different apertures is not easy visually!
Let's have a look at the table in the @code{ZEROPOINTS} HDU (we don't need to explicitly call this HDU since it is the first one):
@example
$ asttable jplus-zeropoint.fits -O -Y
# Column 1: APERTURE [arcsec,f32,] Aperture used.
# Column 2: ZEROPOINT [mag ,f32,] Zero point (sig-clip median).
# Column 3: ZPSTD [mag ,f32,] Zero point Standard deviation.
2.000 26.405 0.028
3.000 26.436 0.030
4.000 26.448 0.035
5.000 26.458 0.042
6.000 26.466 0.056
@end example
The most accurate zero point is the one where @code{ZPSTD} is the smallest.
In this case, the minimum of @code{ZPSTD} occurs for the 2 and 3 arcsec radii.
Run the @command{astscript-fits-view} command above again to open TOPCAT.
Let's focus on the magnitude plots in these two apertures and determine a more accurate range of magnitude.
The more reliable option is the range between 16.4 (where we have no saturated stars) and 18.5 mag (fainter than this, the scatter becomes too strong).
Finally, let's set some more apertures between 2 and 3 arcseconds radius:
@example
$ astscript-zeropoint jplus-nc.fits --hdu=INPUT-NO-SKY \
--refimgs=sdss1.fits,sdss2.fits \
--output=jplus-zeropoint.fits \
--magnituderange=16.4,18.5 \
--refimgszp=22.5,22.5 \
--aperarcsec=2,2.5,3,3.5,4 \
--refimgshdu=0,0 \
--keepzpap
$ asttable jplus-zeropoint.fits -Y
2.000 26.405 0.037
2.500 26.425 0.033
3.000 26.436 0.034
3.500 26.442 0.039
4.000 26.449 0.044
@end example
The aperture with the least scatter is therefore the 2.5 arcsec radius aperture, giving a zero point of 26.425 magnitudes for this image.
However, you can see that the scatter for the 3 arcsec aperture is also acceptable.
Actually, the @code{ZPSTD} of the 2.5 and 3 arcsec apertures only differ by @mymath{3\%} (@mymath{=(0.034-0.033)/0.033\times100}).
So simply choosing the minimum is just a first-order approximation (which is accurate to within @mymath{26.436-26.425=0.011} magnitudes).
Note that in aperture photometry, the PSF plays an important role (because the aperture is fixed, but the two images can have very different PSFs); the aperture with the least scatter therefore also accounts for the differing PSFs.
Overall, always check the different intermediate steps to make sure the parameters are good, so the estimation of the zero point is correct.
If you are happy with the minimum, you don't have to search for the minimum aperture or its corresponding zero point yourself.
This script has written it in the @code{ZPVALUE} keyword of the output.
With the first command below, we also see the name of the file (useful when you run it on many files, for example).
With the second command, we are only printing the number by adding the @option{-q} (or @option{--quiet}) option (this is useful in a script where you want to write the value in a shell variable to use later).
@example
$ astfits jplus-zeropoint.fits --keyvalue=ZPVALUE
jplus-zeropoint.fits 2.642512e+01
$ astfits jplus-zeropoint.fits --keyvalue=ZPVALUE -q
2.642512e+01
@end example
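For example, a minimal sketch of how you might capture this value in a shell variable for later use in a script:
@example
$ zp=$(astfits jplus-zeropoint.fits --keyvalue=ZPVALUE -q)
$ echo "The zero point is $zp"
The zero point is 2.642512e+01
@end example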
Generally, this script will write the following FITS keywords (all starting with @code{ZP}) for your future reference in its output:
@example
$ astfits jplus-zeropoint.fits -h1 | grep ^ZP
ZPAPER = 2.5 / Best aperture.
ZPVALUE = 26.42512 / Best zero point.
ZPSTD = 0.03276644 / Best std. dev. of zeropoint.
ZPMAGMIN= 16.4 / Min mag for obtaining zeropoint.
ZPMAGMAX= 18.5 / Max mag for obtaining zeropoint.
@end example
Using the @option{--keyvalue} option of the @ref{Fits} program, you can easily get multiple keyword values in one run (where necessary):
@example
$ astfits jplus-zeropoint.fits --hdu=1 --quiet \
--keyvalue=ZPAPER,ZPVALUE,ZPSTD
2.500000e+00 2.642512e+01 3.276644e-02
@end example
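For example, within a Bash script, you might read all three values into separate variables in one call (a minimal sketch):
@example
$ read zpaper zpvalue zpstd <<< $(astfits jplus-zeropoint.fits \
                                          --hdu=1 --quiet \
                                          --keyvalue=ZPAPER,ZPVALUE,ZPSTD)
$ echo "aperture: $zpaper, zero point: $zpvalue"
aperture: 2.500000e+00, zero point: 2.642512e+01
@end example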
@node Zero point tutorial with reference catalog, , Zero point tutorial with reference image, Zero point of an image
@subsection Zero point tutorial with reference catalog
In @ref{Zero point tutorial with reference image}, we explained how to use the @command{astscript-zeropoint} for estimating the zero point of one image based on a reference image.
Sometimes a reference image is not available, and we need to use a reference catalog instead.
Fortunately, @command{astscript-zeropoint} can also use a catalog instead of an image to find the zero point.
To show this, let's download a catalog of SDSS in the area that overlaps with the cropped J-PLUS image (used in the previous section).
For more on Gnuastro's Query program, please see @ref{Query}.
The ID, RA, Dec and magnitude (in the SDSS @emph{r} filter) columns are requested by their names in the SDSS catalog.
@example
$ astquery vizier \
--dataset=sdss12 \
--overlapwith=jplus-crop.fits \
--column=objID,RA_ICRS,DE_ICRS,rmag \
--output=sdss-catalog.fits
@end example
To visualize the position of the SDSS objects over the J-PLUS image, let's use @command{astscript-ds9-region} (for more details please see @ref{SAO DS9 region files from table}) with the command below (it will automatically open DS9 and load the regions it created):
@example
$ astscript-ds9-region sdss-catalog.fits \
--column=RA_ICRS,DE_ICRS \
--color=red --width=3 --output=sdss.reg \
--command="ds9 jplus-nc.fits[INPUT-NO-SKY] \
-scale zscale"
@end example
Now, we are ready to estimate the zero point of the J-PLUS image based on the SDSS catalog.
To download the input image and understand how to use the @command{astscript-zeropoint}, please see @ref{Zero point tutorial with reference image}.
Many of the options (like the aperture size) and magnitude range are the same so we will not discuss them further.
You will notice that the only substantive difference between the command below and the last command in the previous section is that we are using @option{--refcat} instead of @option{--refimgs}.
There are also some cosmetic differences: for example, a new output name, not using @option{--refimgszp} (since it is only necessary for images), and the @option{--refcat*} options which are used to identify the names of the necessary columns of the reference catalog:
@example
$ astscript-zeropoint jplus-nc.fits --hdu=INPUT-NO-SKY \
--refcat=sdss-catalog.fits \
--refcatmag=rmag \
--refcatra=RA_ICRS \
--refcatdec=DE_ICRS \
--output=jplus-zeropoint-cat.fits \
--magnituderange=16.4,18.5 \
--aperarcsec=2,2.5,3,3.5,4 \
--keepzpap
@end example
@noindent
Let's inspect the output with the command below.
@example
$ asttable jplus-zeropoint-cat.fits -Y
2.000 26.337 0.034
2.500 26.386 0.036
3.000 26.417 0.041
3.500 26.439 0.043
4.000 26.455 0.050
@end example
As you see, the values and standard deviations are very similar to the results we got previously in @ref{Zero point tutorial with reference image}.
The standard deviations are generally a little higher here because we didn't do the photometry ourselves, but they are statistically similar.
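If you would like a more direct numerical comparison without TOPCAT, one option is to merge the two tables with Table's @option{--catcolumnfile} option (a sketch; duplicate column names in the merged output will be renamed to stay unique):
@example
$ asttable jplus-zeropoint.fits --hdu=1 -Y \
           --catcolumnfile=jplus-zeropoint-cat.fits \
           --catcolumnhdu=1
@end example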
Before we finish, let's open the two outputs (from a reference image and reference catalog) with the command below.
To confirm how they compare, we are showing the result for @code{APER-3} extension in both (following the TOPCAT plotting recipe in @ref{Zero point tutorial with reference image}).
@example
$ astscript-fits-view jplus-zeropoint.fits jplus-zeropoint-cat.fits \
-hAPER-3
@end example
@node Pointing pattern design, Moire pattern in coadding and its correction, Zero point of an image, Tutorials
@section Pointing pattern design
@cindex Dithering
@cindex Pointings
@cindex Observing strategy
@cindex Offset (in observing strategy)
A dataset that is ready for scientific analysis is usually composed of many separate exposures; how they are taken is known as the ``observing strategy''.
This tutorial describes Gnuastro's tools to simplify the process of deciding the pointing pattern of your observing strategy.
A ``pointing'' is the location on the sky that each exposure is aimed at.
Each exposure's pointing is usually moved (on the sky) compared to the previous exposure.
This is done for reasons like improving calibration, increasing resolution, expanding the area of the observation, and so on.
Therefore, deciding a suitable pointing pattern is one of the most important steps when planning your observation strategy.
There are commonly two types of pointings: ``dither'' and ``offset''.
These are sometimes used interchangeably with ``pointing'' (especially when the final coadd is roughly the same area as the field of view).
Alternatively, ``dither'' and ``offset'' are used to distinguish pointings with large or small (on the scale of the field of view) movement compared to a previous one.
When a pointing has a large distance to the previous pointing, it is known as an ``offset'', while pointings with a small displacement are known as ``dithers''.
This distinction originates from the mechanics and optics of most modern telescopes: the overhead (for example, the need to re-focus the camera) for small movements is usually less than that of large movements.
In this tutorial, let's simulate a hypothetical pointing pattern using Gnuastro's @command{astscript-pointing-simulate} installed script (see @ref{Pointing pattern simulation}).
Since we will be testing very different displacements between pointings, we'll ignore the difference between offset and dither here, and only use the term pointing.
Let's assume you want to observe @url{https://en.wikipedia.org/wiki/Messier_94, M94} in the H-alpha and rSDSS filters (to study the extended star formation in the outer rings of this beautiful galaxy!).
Including the outer parts of the rings, the galaxy is half a degree in diameter!
This is very large, and you want to design a pointing pattern that covers as much area as possible, without losing your ability to calibrate properly.
@cartouche
@noindent
@strong{Do not start with this tutorial:} If you are new to Gnuastro and have not already completed @ref{General program usage tutorial}, we recommend going through that tutorial before starting this one.
Basic features like access to this book on the command-line, the configuration files of Gnuastro's programs, benefiting from the modular nature of the programs, viewing multi-extension FITS files, and many others are discussed in more detail there.
@end cartouche
@menu
* Preparing input and generating exposure map:: Download an image and build the exposure map.
* Area of non-blank pixels on sky:: Account for the curved area on the sky.
* Script with pointing simulation steps so far:: Summary of steps for easy testing.
* Larger steps sizes for better calibration:: The initial small dither is not enough.
* Pointings that account for sky curvature:: Sky curvature will cause problems!
* Accounting for non-exposed pixels:: Parts of the detector do not get exposed to light.
@end menu
@node Preparing input and generating exposure map, Area of non-blank pixels on sky, Pointing pattern design, Pointing pattern design
@subsection Preparing input and generating exposure map
As mentioned in @ref{Pointing pattern design}, the assumed goal here is to plan an observing strategy for M94.
Let's assume that after some searching, you decide to write a proposal for the @url{https://oaj.cefca.es/telescopes/jast80, JAST80 telescope} at the @url{https://oaj.cefca.es, Observatorio Astrofísico de Javalambre}, OAJ@footnote{For full disclosure, Gnuastro is being developed at CEFCA (Centro de Estudios de F@'isica del Cosmos de Arag@'on); which also hosts OAJ.}, in Teruel (Spain).
The field of view of this telescope's camera is almost 1.4 degrees wide, nicely fitting M94!
It also has these two filters that you need@footnote{For the full list of available filters, see the @url{https://oaj.cefca.es/telescopes/t80cam, T80Cam description}.}.
Before we start, as described in @ref{Pointing pattern simulation}, it is important to remember that the ideal pointing pattern depends primarily on your scientific objective, as well as the limitations of the instrument you are observing with.
Therefore, there is no single pointing pattern for all purposes.
However, the tools, methods, criteria or logic to check if your pointing pattern satisfies your scientific requirement are similar.
Therefore, you can use the same methods, tools or logic here to simulate or verify that your pointing pattern will produce the products you expect after the observation.
To start simulating a pointing pattern for a certain telescope, you just need a single-exposure image of that telescope with WCS information.
In other words, after astrometry, but before warping into any other pixel grid (to combine into a deeper coadd).
The image will give us the default number of the camera's pixels, its pixel scale (width of pixel in arcseconds) and the camera distortion.
These are reference parameters that are independent of the position of the image on the sky.
Because the actual position of the reference image is irrelevant, let's assume that in a previous project, presumably on @url{https://en.wikipedia.org/wiki/NGC_4395, NGC 4395}, you already had the download command of the following single exposure image.
With the last command below, please take a look at this image and explore it before continuing.
@example
$ mkdir pointing-tutorial
$ cd pointing-tutorial
$ mkdir input
$ siapurl=https://archive.cefca.es/catalogues/vo/siap
$ wget $siapurl/jplus-dr3/reduced/get_fits?id=1050345 \
-O input/jplus-1050345.fits.fz
$ astscript-fits-view input/jplus-1050345.fits.fz
@end example
@cartouche
@noindent
@strong{This is the first time I am using an instrument:} In case you have not already used images from your desired instrument (to use as a reference), you can find such images in its public archive, or by contacting the observatory.
A single-exposure image is rarely of any scientific value (post-processing and coadding are necessary to make high-level and science-ready products).
Therefore, they become publicly available very soon after the observation date; furthermore, calibration images are usually public immediately.
@end cartouche
As you see from the image above, the T80Cam images are large (9216 by 9232 pixels).
Therefore, to speed up the pointing testing, let's down-sample the image by a factor of 10.
This step is optional and you can safely use the full resolution, which will give you a more precise coadd.
But it will be much slower (maybe good after you have an almost final solution on the down-sampled image).
We will call the output @file{ref.fits} (since it is the ``reference'' for our test).
We are putting these two ``input'' files (to the script) in a dedicated directory to keep the running directory clean (and be able to easily delete temporary/test files for a fresh start with a `@command{rm *.fits}').
@example
$ astwarp input/jplus-1050345.fits.fz --scale=1/10 -oinput/ref.fits
@end example
For a first trial, let's create a cross-shaped pointing pattern with 5 points around M94, centered on its RA and Dec of 192.721250, 41.120556.
We'll center one exposure on the center of the galaxy, and include 4 more exposures that are each 1 arc-minute away along the RA and Dec axes.
To simplify the actual command later@footnote{Instead of this, later when you call @command{astscript-pointing-simulate}, you could pass the @option{--racol=1} and @option{--deccol=2} options.
But having metadata is always preferred (it will avoid many bugs/frustrations in the long run!).}, let's also include the column names in @file{pointing.txt} through two lines of metadata.
Also note that the @file{pointing.txt} file can be made in any manner you like; for example, by writing the coordinates manually in your favorite text editor, or through another programming language or logic, and so on.
Here, we are using AWK because it is sufficiently powerful for this job, and it is a very small program that is available on any Unix-based operating system (allowing you to easily run your programs on any computer).
@example
$ step_arcmin=1
$ center_ra=192.721250
$ center_dec=41.120556
$ echo "# Column 1: RA [deg, f64] Right Ascension" > pointing.txt
$ echo "# Column 2: Dec [deg, f64] Declination" >> pointing.txt
$ echo $center_ra $center_dec \
| awk '@{s='$step_arcmin'/60; fmt="%-10.6f %-10.6f\n"; \
printf fmt, $1, $2; \
printf fmt, $1+s, $2; \
printf fmt, $1, $2+s; \
printf fmt, $1-s, $2; \
printf fmt, $1, $2-s@}' \
>> pointing.txt
@end example
With the commands below, let's have a look at the produced file, first as plain-text, then with TOPCAT (which needs conversion to FITS).
After TOPCAT is opened, in the ``Graphics'' menu, select ``Plane plot'' to see the five points in a flat RA, Dec plot.
@example
$ cat pointing.txt
# Column 1: RA [deg, f64] Right Ascension
# Column 2: Dec [deg, f64] Declination
192.721250 41.120556
192.737917 41.120556
192.721250 41.137223
192.704583 41.120556
192.721250 41.103889
$ asttable pointing.txt -opointing.fits
$ astscript-fits-view pointing.fits
$ rm pointing.fits
@end example
We are now ready to generate the exposure map of the pointing pattern above using the reference image that we downloaded before.
Let's place the center of our final coadd on the center of the galaxy, and assume the coadd has a size of 2 degrees.
With the second command, you can see the exposure map of the final coadd.
Recall that in this image, each pixel shows the number of input images that went into it.
@example
$ astscript-pointing-simulate pointing.txt --output=coadd.fits \
--img=input/ref.fits --center=$center_ra,$center_dec \
--width=2
$ astscript-fits-view coadd.fits
@end example
You will see that except for a thin boundary, we have a depth of 5 exposures over the area of the single exposure.
Let's see what the width of the deepest part of the image is.
First, we'll use Arithmetic to set all pixels that contain less than 5 exposures (the outer pixels) to NaN (Not a Number).
In the same Arithmetic command, let's trim all the blank rows and columns, so the output only contains the pixels that are exposed 5 times.
With the next command, let's view the deep region; and with the last command below, let's use the @option{--skycoverage} option of the Fits program to see the coverage of the deep part on the sky.
@example
$ deep_thresh=5
$ astarithmetic coadd.fits set-s s s $deep_thresh lt nan where trim \
--output=deep.fits
$ astscript-fits-view deep.fits
$ astfits deep.fits --skycoverage
Input file: deep.fits (hdu: 1)
Sky coverage by center and (full) width:
Center: 192.72125 41.120556
Width: 1.880835157 1.392461166
Sky coverage by range along dimensions:
RA 191.7808324 193.6616676
DEC 40.42058203 41.81304319
@end example
@cindex Sky value
@cindex Flat field
As we see, in declination, the width of this deep field is about 1.4 degrees.
Recall that a difference in RA only corresponds to the same true angle on the equator; the actual coverage in RA depends on the declination due to the spherical nature of the sky.
This area therefore nicely covers the expected outer parts of M94.
On first thought, it may seem that we are now finished, but that is not the case unfortunately!
There is a problem: with a step size of 1 arc-minute, the brighter central parts of this large galaxy will always be on very similar pixels; making it hard to calibrate those pixels properly.
If you are interested in the low surface brightness parts of this galaxy, it is even worse: the outer parts of the galaxy will always cover similar parts of the detector in all the exposures; and they cover a large area on your image.
To be able to accurately calibrate the image (in particular to estimate the flat field pattern and subtract the sky), you do not want this to happen!
You want each exposure to cover very different sources of astrophysical signal, so you can accurately calibrate the artifacts created by the instrument or environment (for example flat field) or of natural causes (for example the Sky).
For an example of how these calibration issues can ruin low surface brightness science, please see the image of M94 in the @url{https://www.legacysurvey.org/viewer,Legacy Survey interactive viewer}.
After it is loaded, at the bottom-left corner of the window, write ``M94'' in the box of ``Jump to object'' and press ENTER.
At first, M94 looks good with a black background, but as you increase the ``Brightness'' (by scrolling it to the right and seeing what is under the originally black pixels), you will see the calibration artifacts clearly.
@node Area of non-blank pixels on sky, Script with pointing simulation steps so far, Preparing input and generating exposure map, Pointing pattern design
@subsection Area of non-blank pixels on sky
In @ref{Preparing input and generating exposure map} we generated a pointing pattern with very small steps, showing how this can cause calibration problems.
Later (in @ref{Larger steps sizes for better calibration}) using larger steps is discussed.
In this section, let's see how we can get an accurate measure of the area that is covered in a certain depth.
A first thought would be to simply multiply the widths along RA and Dec reported before: @mymath{1.8808\times1.3924=2.6189} degrees squared.
But there are several problems with this:
@itemize
@item
It ignores the fact that a degree of RA only corresponds to a true angular degree on the equator: at other declinations, differences in RA should be scaled to true angles.
This is discussed further in this tutorial: @ref{Pointings that account for sky curvature}.
@item
It doesn't take into account the thin rows/columns of blank pixels (NaN) that are on the four edges of the @file{deep.fits} image.
@item
The area of a pixel on the spherical sky differs across the image; together with the blank values, this can bias a naive estimate of the area.
@end itemize
Let's get a very accurate estimation of the area that will not be affected by the issues above.
With the first command below, we'll use the @option{--pixelareaonwcs} option of the Fits program that will return the area of each pixel (in pixel units of degrees squared).
After running the second command, please have a look at the produced image.
@example
$ astfits deep.fits --pixelareaonwcs --output=deep-pix-area.fits
$ astfits deep.fits --pixelscale
Basic info. for --pixelscale (remove extra info with '--quiet' or '-q')
Input: deep.fits (hdu 1) has 2 dimensions.
Pixel scale in each FITS dimension:
1: 0.00154403 (deg/pixel) = 5.5585 (arcsec/pixel)
2: 0.00154403 (deg/pixel) = 5.5585 (arcsec/pixel)
Pixel area:
2.38402e-06 (deg^2) = 30.8969 (arcsec^2)
$ astscript-fits-view deep-pix-area.fits
@end example
@cindex Gnomonic projection (@code{TAN} in WCS)
@cindex @code{TAN} in WCS (Gnomonic projection)
You see a donut-like shape in DS9.
Move your mouse over the central (white) part of the region and look at the values.
You will see that the pixel area (in degrees squared) is exactly the same as we saw in the output of @option{--pixelscale}.
As you move your mouse away to other colors, you will notice that the area covered by each pixel (its value in this image) decreases very slightly (in the 5th decimal!).
This is the effect of the @url{https://en.wikipedia.org/wiki/Gnomonic_projection, Gnomonic projection}; summarized as @code{TAN} (for ``tangential'') in the FITS WCS standard, the most commonly used in optical astronomical surveys and the default in this script.
Having @file{deep-pix-area.fits}, we can now use Arithmetic to set the areas of all the pixels that were NaN in @file{deep.fits} to NaN, and sum all the remaining values to get an accurate estimate of the area we get from this pointing pattern:
@example
$ astarithmetic deep-pix-area.fits deep.fits isblank nan where -g1 \
sumvalue --quiet
1.93836806631634e+00
@end example
Therefore, the actual area that is covered is less than the simple multiplication above.
At these declinations, the dominant cause of this difference is the first point above (that RA needs correction); this will be discussed in more detail later in this tutorial (see @ref{Pointings that account for sky curvature}).
Generally, using this method to measure the area of the non-NaN pixels in an image is easy and robust (it automatically takes into account the curvature, coordinate system, projection and blank pixels of the image).
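If you find yourself doing this often, you can package the two commands into a small shell function (a minimal sketch; it assumes the input's data are in HDU 1, and it temporarily writes a @file{pix-area.fits} file in the running directory):
@example
$ area_on_sky() @{
    astfits "$1" --pixelareaonwcs --output=pix-area.fits
    astarithmetic pix-area.fits "$1" isblank nan where -g1 \
                  sumvalue --quiet
    rm pix-area.fits
  @}
$ area_on_sky deep.fits
1.93836806631634e+00
@end example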
@node Script with pointing simulation steps so far, Larger steps sizes for better calibration, Area of non-blank pixels on sky, Pointing pattern design
@subsection Script with pointing simulation steps so far
In @ref{Preparing input and generating exposure map} and @ref{Area of non-blank pixels on sky}, the basic steps to simulate a pointing pattern's exposure map and measure the final output area on the sky were described in detail.
From this point on in the tutorial, we will be experimenting with the shell variables that were set above, but the actual commands will rarely change.
If a change is necessary in a command, it is clearly mentioned in the text.
Therefore, it is better to write the steps above (after downloading the reference image) as a script.
In this way, you can simply change those variables and see the final result fast by running your script.
For more on writing scripts, see @ref{Writing scripts to automate the steps}.
Here is a summary of some points to remember when transferring the code in the sections before into a script:
@itemize
@item
Where the commands are edited/changed, please also update them in your script.
@item
Keep all the variables at the top, even if they are used later.
This allows you to easily view or change them without digging into the script.
@item
You do not need to include visual check commands like the @code{astscript-fits-view} or @code{cat} commands above.
Those can be run interactively after your script is finished; recall that a script is for batch (non-interactive) processing.
@item
Put all your intermediate products inside a ``build'' directory.
@end itemize
Here is the script that summarizes the steps in @ref{Preparing input and generating exposure map} (after download) and @ref{Area of non-blank pixels on sky}:
@verbatim
#!/bin/bash
#
# Copyright (C) 2024-2025 Mohammad Akhlaghi <mohammad@akhlaghi.org>
#
# Copying and distribution of this file, with or without modification,
# are permitted in any medium under the GNU GPL v3+, without royalty
# provided the copyright notice and this notice are preserved. This
# file is offered as-is, without any warranty.

# Parameters of the script
deep_thresh=5
step_arcmin=1
center_ra=192.721250
center_dec=41.120556

# Input and build directories (can be anywhere in your file system)
indir=input
bdir=build

# Abort the script in case of an error.
set -e

# Make the build directory if it doesn't already exist.
if ! [ -d $bdir ]; then mkdir $bdir; fi

# Build the 5-pointing pointing pattern (with the step size above).
pointingcat=$bdir/pointing.txt
echo "# Column 1: RA [deg, f64] Right Ascension" > $pointingcat
echo "# Column 2: Dec [deg, f64] Declination" >> $pointingcat
echo $center_ra $center_dec \
     | awk '{s='$step_arcmin'/60; fmt="%-10.6f %-10.6f\n"; \
             printf fmt, $1, $2; \
             printf fmt, $1+s, $2; \
             printf fmt, $1, $2+s; \
             printf fmt, $1-s, $2; \
             printf fmt, $1, $2-s}' \
     >> $pointingcat

# Simulate the pointing pattern.
coadd=$bdir/coadd.fits
astscript-pointing-simulate $pointingcat --output=$coadd \
    --img=$indir/ref.fits --center=$center_ra,$center_dec \
    --width=2

# Trim the regions shallower than the threshold.
deep=$bdir/deep.fits
astarithmetic $coadd set-s s s $deep_thresh lt nan where trim \
              --output=$deep

# Calculate the area of each pixel on the curved celestial sphere:
pixarea=$bdir/deep-pix-area.fits
astfits $deep --pixelareaonwcs --output=$pixarea

# Report the final area (the empty 'echo's are for visual help in outputs)
echo; echo
echo "Area with step of $step_arcmin arcminutes, at $deep_thresh depth:"
astarithmetic $pixarea $deep isblank nan where -g1 \
              sumvalue --quiet
@end verbatim
For a description of how to make it executable and how to run it, see @ref{Writing scripts to automate the steps}.
Note that as you start adding your own text to the script, be sure to add your name (and year that you modified) in the copyright notice at the start of the script (this is very important!).
@node Larger steps sizes for better calibration, Pointings that account for sky curvature, Script with pointing simulation steps so far, Pointing pattern design
@subsection Larger steps sizes for better calibration
In @ref{Preparing input and generating exposure map} we saw that a small pointing pattern is not good for the reduction of data from a large object like M94!
M94 is about half a degree in diameter; so let's set @code{step_arcmin=15}.
This is one quarter of a degree and will put the centers of the four outer exposures on the four corners of M94's main ring.
Furthermore, in @ref{Script with pointing simulation steps so far}, the steps were summarized into a script to allow easy changing of variables without manually re-entering the individual/separate commands.
After you change @code{step_arcmin=15} and re-run the script, you will get a total area (from summing the per-pixel areas) of approximately 0.96 degrees squared.
This is just roughly half the previous area and will barely fit M94!
To understand the cause, let's have a look at the full coadd (not just the deepest area):
@example
$ astscript-fits-view build/coadd.fits
@end example
Compared to the first run (with @code{step_arcmin=1}), we clearly see that there are indeed fewer pixels that receive photons in all 5 exposures.
As the area of the deepest part has decreased, the areas with fewer exposures have also grown.
Let's define our deep region to be the pixels with 3 or more exposures.
Please set @code{deep_thresh=3} in the script and re-run it.
You will see that the ``deep'' area is now almost 2.02 degrees squared!
This is (slightly) larger than the first run (with @code{step_arcmin=1})!
The difference between 3 exposures and 5 exposures seems a lot at first.
But let's calculate how much it actually affects the achieved signal-to-noise ratio and the surface brightness limit.
Both the surface brightness limit and the upper-limit surface brightness are calculated by applying the definition of magnitude to the standard deviation of the background.
So we should first calculate how much this difference in depth affects the sky standard deviation.
For a complete discussion on the definition of the surface brightness limit, see @ref{Metameasurements on full input}.
@cindex Poisson noise
Deep images will usually be dominated by @ref{Photon counting noise} (or Poisson noise).
Therefore, if a single exposure image has a sky standard deviation of @mymath{\sigma_s}, and we combine @mymath{N} such exposures by taking their mean, the final/coadded sky standard deviation (@mymath{\sigma}) will be @mymath{\sigma=\sigma_s/\sqrt{N}}.
As a result, the surface brightness limit between the regions with @mymath{N} exposures and @mymath{M} exposures differs by @mymath{2.5\times log_{10}(\sqrt{N/M}) = 1.25\times log_{10}(N/M)} magnitudes.
If we set @mymath{N=3} and @mymath{M=5}, we get a surface brightness magnitude difference of 0.28!
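You can verify this with a quick AWK calculation (recall that AWK's @code{log} is the natural logarithm, so we divide by @code{log(10)}):
@example
$ echo 3 5 | awk '@{print 1.25*log($1/$2)/log(10)@}'
-0.277311
@end example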
This is a very small difference (given all the other sources of error that will be present), especially when compared to how much it improves the calibration.
Therefore at the cost of decreasing our surface brightness limit by 0.28 magnitudes, we are now able to calibrate the individual exposures much better, and even cover a larger area!
The argument above didn't involve any image and was primarily theoretical.
For the more visually-inclined readers, let's add raw Gaussian noise (with a @mymath{\sigma} of 100 counts) over each simulated exposure.
We will then instruct @command{astscript-pointing-simulate} to coadd them as we would coadd actual data (by taking the sigma-clipped mean).
The command below is identical to the previous call to the pointing simulation script with the following differences.
Note that this is just for demonstration, so you should not include this in your script (unless you want to see the noisy coadd every time; at double the processing time).
@table @option
@item --output
We are using a different output name, so we can compare the output of the new command with the previous one.
@item --coadd-operator
This should be one of the Arithmetic program's @ref{Coadding operators}.
By default the value is @code{sum}; because by default, each pixel of each exposure is given a value of 1.
When coadding is defined through the summation operator, we can obtain the exposure map that you have already seen above.
But in this run, we are adding noise to each input exposure (through the hook that is described below) and coadding them (as we would coadd actual science images).
Since the purpose differs here, we are using this option to change the operator.
@item --hook-warp-after
@cindex Hook (programming)
This is the most visible difference between this command and the previous one.
Through a ``hook'', you can give any arbitrarily long (series of) command(s) that will be added to the processing of this script at a certain location.
This particular hook gets applied ``after'' the ``warp''ing phase of each exposure (when the pixels of each exposure are mapped to the final pixel grid; but not yet coadded).
Since the script runs in parallel (the actual work-horse is a Makefile!), you can't assume any fixed file name for the input(s) and output.
Therefore the inputs to, and output(s) of, hooks are some pre-defined shell variables that you should use in the command(s) that you hook into the processing.
They are written in full-caps to be clear and separate from your own variables.
In this case, they are the @code{$WARPED} (input file of the hook) and @code{$TARGET} (output name that next steps in the script will operate on).
As you see from the command below, through this hook we are calling the Arithmetic program to add noise to all non-zero pixels in the warped image.
For more on the noise-adding operators, see @ref{Random number generators}.
@end table
@example
$ center_ra=192.721250
$ center_dec=41.120556
$ astscript-pointing-simulate build/pointing.txt --img=input/ref.fits \
--center=$center_ra,$center_dec \
--width=2 --coadd-operator="3 0.2 sigclip-mean swap free" \
--output=build/coadd-noised.fits \
--hook-warp-after='astarithmetic $WARPED set-i \
i i 0 uint8 eq nan where \
100 mknoise-sigma \
--output=$TARGET'
$ astscript-fits-view build/coadd.fits build/coadd-noised.fits
@end example
When you visually compare the two images of the last command above, you will see that (at least by eye) it is almost impossible to distinguish the differing noise pattern in the regions with 3 exposures from the regions with 5 exposures.
But the regions with a single exposure are clearly visible!
This is because the surface brightness limit in the single-exposure regions is @mymath{1.25\times\log_{10}(1/5)=-0.87} magnitudes brighter.
This almost one magnitude difference in surface brightness is significant and clearly visible in the coadded image (recall that magnitudes are measured in a logarithmic scale).
Thanks to the argument above, we can now have a sufficiently large area with a usable depth.
However, the center of each pointing will still contain the central part of the galaxy.
In other words, M94 will be present in all the exposures while doing the calibrations.
Even in not-too-deep observations, we already see a large ring around this galaxy.
When we do a low surface brightness optimized reduction, there is a good chance that the size of the galaxy is much larger than that ring.
This very extended structure will make it hard to do the calibrations on very accurate scales.
Accurate calibration is necessary if you do not want to lose the faint photons that have been recorded in your exposures.
@cartouche
@noindent
@strong{Calibration is very important:} Better calibration can result in a fainter surface brightness limit than more exposures with poor calibration; especially for very low surface brightness signal that covers a large area and is systematically affected by calibration issues.
@end cartouche
Ideally, you want your target to be on the four edges/corners of each image.
This will make sure that a large fraction of each exposure is not covered by your final target, allowing you to calibrate much more accurately.
@node Pointings that account for sky curvature, Accounting for non-exposed pixels, Larger steps sizes for better calibration, Pointing pattern design
@subsection Pointings that account for sky curvature
In @ref{Larger steps sizes for better calibration}, we saw how a small loss in surface brightness limit can allow better calibration and even a larger area.
Let's extend this by setting @code{step_arcmin=40} (almost half the width of the detector) inside your script (see @ref{Script with pointing simulation steps so far}).
After running the script with this change, take a look at @file{build/deep.fits}:
@example
$ astscript-fits-view build/deep.fits --ds9scale=minmax
@end example
You will see that the region with 5 exposure depth is a horizontally elongated rectangle now!
Also, the vertical component of the cross with four exposures is much thicker than the horizontal component!
Where does this asymmetry come from? All the steps in our pointing strategy had the same (fixed) size of 40 arc minutes.
This happens because the same numerical change in RA corresponds to a smaller true angle on the sky as we move away from the equator (while a change in Dec always corresponds to the same true angle).
To visually see this, let's look at the pointing positions in TOPCAT:
@example
$ cat build/pointing.txt
# Column 1: RA [deg, f64] Right Ascension
# Column 2: Dec [deg, f64] Declination
192.721250 41.120556
193.387917 41.120556
192.721250 41.787223
192.054583 41.120556
192.721250 40.453889
$ asttable build/pointing.txt -obuild/pointing.fits
$ astscript-fits-view build/pointing.fits
@end example
After TOPCAT opens, in the ``Graphics'' menu, select ``Plane Plot''.
In the newly opened window, click on the ``Axes'' item on the bottom-left list of items.
Then activate the ``Aspect lock'' box so the vertical and horizontal axes have the same scaling.
You will see what you expect from the numbers: we have a beautifully symmetric set of 5 points shaped like a `+' sign.
Keep the previous window, and let's go back to the original TOPCAT window.
In the first TOPCAT window, click on ``Graphics'' again, but this time, select ``Sky plot''.
You will notice that the vertical component of the cross is now longer than the horizontal component!
If you zoom-out (by scrolling your mouse over the plot) a lot, you will see that this is actually on the spherical surface of the sky!
In other words, as you see here, on the sky, the horizontal points are closer to each other than the vertical points; causing a larger overlap between them, making the vertical overlap thicker in @file{build/pointing.fits}.
@cindex Declination
@cindex Right Ascension
@cindex Celestial sphere
On the celestial sphere, only the declination is measured in degrees.
In other words, the difference in declination of two points can be calculated only with their declination.
However, except for points that are on the equator, differences in right ascension depend on the declination.
Therefore, the origin of this problem is that we did the additions and subtractions for defining the pointing points in a flat space: based on a step size in arc minutes that was applied identically to RA and Dec (in @ref{Preparing input and generating exposure map}).
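To get a feeling for the size of this effect, recall that a step of @mymath{\Delta\alpha} in RA at declination @mymath{\delta} corresponds to a true angle of roughly @mymath{\Delta\alpha\cos\delta}.
With a small AWK command, we can see that our 40 arc minute step in RA only spans about 30 arc minutes on the sky at this declination:
@example
$ echo 40 41.120556 \
       | awk '@{pi=atan2(0,-1); print $1*cos($2*pi/180)@}'
30.1329
@end example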
To fix this problem, we need to convert our points from the flat RA/Dec into the spherical RA/Dec.
In the FITS standard, we have the ``World Coordinate System'' (WCS) that defines this type of conversion, using pre-defined projections in the @code{CTYPEi} keyword (short for ``Coordinate TYPE in dimension i'').
Let's have a look at the coadd to see the default projection of our final coadd:
@verbatim
$ astfits build/coadd.fits -h1 | grep CTYPE
CTYPE1 = 'RA---TAN' / Right ascension, gnomonic projection
CTYPE2 = 'DEC--TAN' / Declination, gnomonic projection
@end verbatim
We therefore see that the default projection of our final coadd is the @code{TAN} (short for ``tangential'') projection, which is more formally known as the @url{https://en.wikipedia.org/wiki/Gnomonic_projection, Gnomonic projection}.
This is the most commonly used projection in optical astronomy.
Now that we know the final projection, we can do this conversion using Table's column arithmetic operator @option{eq-j2000-from-flat} like below:
@verbatim
$ pointingcat=build/pointing.txt
$ pointingonsky=build/pointing-on-sky.fits
$ asttable $pointingcat --output=$pointingonsky \
-c'arith RA set-r \
DEC set-d \
r meanvalue set-ref-r \
d meanvalue set-ref-d \
r d ref-r ref-d TAN eq-j2000-from-flat' \
--colmetadata=1,RA,deg,"Right ascension" \
--colmetadata=2,Dec,deg,"Declination"
$ astscript-fits-view build/pointing-on-sky.fits
@end verbatim
Here is a break-down of the first command above: to do the flat-to-sky conversion, we need a reference point (where the two are equal).
We have used the mean RA and mean Dec (through the @code{meanvalue} operator in Arithmetic) as our reference point (placed in the @code{ref-r} and @code{ref-d} variables).
After calling the @option{eq-j2000-from-flat} operator, we have just added metadata to the two columns.
To confirm that this operator did the job correctly, after the second command above, repeat the same experiment as before with TOPCAT (where you viewed the pointing positions on a flat and spherical coordinate system).
You will see that indeed, on the sphere you have a `+' shape, but on the flat plot, it looks stretched.
@cartouche
@noindent
@strong{Script update 1:} you should now add the @code{pointingonsky} definition and the @code{asttable} command above into the script of @ref{Script with pointing simulation steps so far}.
They should be placed before the call to @code{astscript-pointing-simulate}.
Also, in the call to @code{astscript-pointing-simulate}, @code{$pointingcat} should be replaced with @code{$pointingonsky} (so it doesn't use the flat RA, Dec pointings).
@end cartouche
After implementing this change in your script and running it, open @file{deep.fits} and you will see that the widths of both the horizontal and vertical regions are much more similar.
The top of the vertical overlap is slightly wider than the bottom, but that is something you cannot fix by pointing alone (your camera's field of view is fixed on the sky!).
It can be corrected by slightly rotating some of the exposures, but that will result in different PSFs from one exposure to another; and this can cause more important problems for your final science.
@cartouche
@noindent
@strong{Plotting the spherical RA and Dec in your papers:} The inverse of the @code{eq-j2000-from-flat} operator above is the @code{eq-j2000-to-flat}.
@code{eq-j2000-to-flat} can be used when you want to plot a set of points with spherical RA and Dec in a paper.
When the minimum and maximum RA or Dec differ by more than half a degree, you will clearly see the difference.
For more, see the description of these operators in @ref{Column arithmetic}.
@end cartouche
Try to slightly increase @code{step_arcmin} to make the cross-like region with 4 exposures as thin as possible.
For example, set it to @code{step_arcmin=42}.
When you open @file{deep.fits}, you will see that the depth across this image is almost contiguous (which is another positive factor!).
Try increasing it to 43 arc minutes to see that the central cross will become almost fully NaN in @file{deep.fits} (which is bad!).
You will notice that the vertical region of 4 exposure depth is thinner in the bottom than on the top.
This is due to the RA/Dec change above, but across the width of the image.
We therefore cannot change this by just changing the positions of the pointings; we would need to rotate some of the exposures if we want it removed.
But rotation is not yet implemented in this script.
You can construct any complex pointing pattern (with more than 5 points and in any shape) based on the logic and reasoning above to help extract the most science from the valuable telescope time that you will be getting.
Since the output is a FITS file, you can easily download another FITS file of your target and open it in DS9 together with the coadd produced by this simulation (``lock''ing the ``WCS'') to make sure that the deep parts correspond to the area of interest for your science case.
Factors like the optimal exposure time are also critical for the final result@footnote{The exposure time will determine the signal-to-noise ratio at the single-exposure level.}, but they are beyond the scope of this tutorial.
One relevant factor however is the effect of vignetting: the pixels on the outer extremes of the field of view that are not exposed to light and should be removed from your final coadd.
They affect your pointing pattern: by decreasing your usable area, they act like a larger spacing between your points, causing shallow crosses similar to those you saw when you set @code{step_arcmin} to 43 arc minutes.
In @ref{Accounting for non-exposed pixels}, we will show how this can be done within the same testing framework that we have used here.
@node Accounting for non-exposed pixels, , Pointings that account for sky curvature, Pointing pattern design
@subsection Accounting for non-exposed pixels
@cindex Baffle
@cindex Vignetting
@cindex Bad pixels
At the end of @ref{Pointings that account for sky curvature} we were able to maximize the region of same depth in our coadd.
But we noticed that issues like strong @url{https://en.wikipedia.org/wiki/Vignetting,vignetting} can create discontinuity in our final coadded data product.
In this section, we'll review the steps to account for such effects.
Generally, the full area of a detector is usually not used in the final coadd.
Vignetting is one cause; it can also be due to other problems.
For example due to baffles in the optical path (to block stray light), or large regions of bad (unusable or ``dead'') pixels that may be in any place on the detector@footnote{For an example of bad pixels over the detector, see Figures 4 and 6 of @url{https://www.stsci.edu/files/live/sites/www/files/home/hst/instrumentation/wfc3/documentation/instrument-science-reports-isrs/_documents/2019/WFC3-2019-03.pdf, Instrument Science Report WFC3 2019-03} by the Space Telescope Science Institute.}.
Without accounting for these pixels that do not receive any light, the deep area we measured in the sections above will be over-estimated.
In this sub-section, let's review the necessary additions to account for such artifacts.
Therefore, before continuing, please make sure that you have already read and applied the steps of the previous sections (this sub-section builds upon them).
@cindex Hook (programming)
Vignetting strongly depends on the optical design of the instrument you are using.
It can be a constant number of pixels on all the edges of the detector, or it can have a more complex shape.
For example, on cameras that have multiple detectors in the field of view, the regions to exclude on each detector can be very different and will not be symmetric!
Therefore, within Gnuastro's @command{astscript-pointing-simulate} script there is no parameter for pre-defined vignetting shapes.
Instead, you should define a mask that you can apply on each exposure through the provided hook (@option{--hook-warp-before}; recall that we previously used another hook in @ref{Larger steps sizes for better calibration}).
Through the mask, you are free to set any vignetted or bad pixel to NaN (thus ignoring them in the coadd), applying it in any way that best suits your instrument and detector.
The mask image should be the same size as the reference image, and only contain two values: 0 or 1.
Pixels in each exposure that have a value of 1 in the mask will be set to NaN before the coadding process and will not contribute to the final coadd.
Ideally, you can use the master flat field image of the previous reductions to create this mask: any pixel that has a low sensitivity in the master flat (for any reason) can be set to 1, and the rest of the pixels to 0.
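For example, if you have such a master flat (normalized to 1), a minimal sketch with Arithmetic's @code{lt} (less-than) operator could look like the command below; here, @file{master-flat.fits} is a hypothetical file name and 0.8 is just an illustrative sensitivity threshold:
@example
## Hypothetical: set pixels with sensitivity below 0.8 to 1 (masked).
$ astarithmetic master-flat.fits 0.8 lt --output=build/mask.fits
@end example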
Let's build a simple mask by assuming that we only have strong vignetting that is affecting the outer 30 arc seconds of the individual exposures.
To mask the outer edges of an image we can use Gnuastro's Arithmetic program; and in particular, the @code{indexonly} operator.
To learn more about this operator, see @ref{Size and position operators}.
But before doing that, we need to convert this angular distance to pixels on the detector.
In @ref{Pointing pattern design}, we used an undersampled version of the input image, so we should do this conversion on that image:
@example
$ margin_arcsec=30
$ margin_pix=$(astfits input/ref.fits --pixelscale --quiet \
| awk '@{print int('$margin_arcsec'/($1*3600))@}')
$ echo $margin_pix
5
@end example
To build the mask, we can now follow the recipe under ``Image: masking margins'' of the @code{index} operator in Arithmetic (for a full description of what this command is doing@footnote{By learning how this command works, you can customize it.
For example, to mask different widths along each separate edge: it often happens that the left/right or top/bottom edges are affected differently by vignetting.}, see @ref{Size and position operators}).
Finally, in the last command, let's look at the mask image with the ``red'' color map of DS9 (which shows the thin 1-valued border pixels to be masked more clearly).
@example
$ width=$(astfits input/ref.fits --keyvalue=NAXIS1 -q)
$ height=$(astfits input/ref.fits --keyvalue=NAXIS2 -q)
$ astarithmetic input/ref.fits indexonly set-i \
$width uint16 set-w \
$height uint16 set-h \
$margin_pix uint16 set-m \
i w % uint16 set-X \
i w / uint16 set-Y \
X m lt X w m - gt or \
Y m lt Y h m - gt or \
or --output=build/mask.fits
$ astscript-fits-view build/mask.fits --ds9extra="-cmap red"
@end example
We are now ready to run the main pointing simulate script.
With the command below, we will use the @option{--hook-warp-before} to apply this mask on the image of each exposure just before warping.
The concept of this hook is very similar to that of @option{--hook-warp-after} in @ref{Larger steps sizes for better calibration}.
As the name suggests, this hook is applied ``before'' the warping.
The input to the command given to this hook should be called with @code{$EXPOSURE} and the output should be called with @code{$TOWARP}.
With the second command, let's compare the two outputs:
@example
$ astscript-pointing-simulate build/pointing-on-sky.fits \
--output=build/coadd-with-trim.fits --img=input/ref.fits \
--center=$center_ra,$center_dec --width=2 \
--hook-warp-before='astarithmetic $EXPOSURE build/mask.fits \
nan where -g1 -o$TOWARP'
$ astscript-fits-view build/coadd.fits build/coadd-with-trim.fits
@end example
As expected, due to the smaller area of the detector that is exposed to photons, the regions with 4 exposures have become much thinner, and at the bottom, the 4-exposure region has been removed entirely.
To have contiguous depth in the deeper region, use this new call in your script and decrease the step size to @code{step_arcmin=41}.
You can use the same command on a mask that is created in any way and as realistic as possible.
More generically, you can use the before and after hooks for any other operation; for example, to insert objects from a catalog using @ref{MakeProfiles}, or to add noise as we did in @ref{Larger steps sizes for better calibration}.
Therefore it is also good to add the mask and its application to your script.
This should be pretty easy by now (following @ref{Script with pointing simulation steps so far} and the ``Script update 1'' box of @ref{Pointings that account for sky curvature}); so we will leave this as an exercise.
@node Moire pattern in coadding and its correction, Clipping outliers, Pointing pattern design, Tutorials
@section Moir@'e pattern in coadding and its correction
@cindex Moir@'e pattern or fringes
After warping some images with the default mode of Warp (see @ref{Align pixels with WCS considering distortions}) you may notice that the background noise is no longer flat.
Some regions will be smoother and some will be sharper; depending on the orientation and distortion of the input/output pixel grids.
This is due to the @url{https://en.wikipedia.org/wiki/Moir%C3%A9_pattern, Moir@'e pattern}, which is especially noticeable/significant when two slightly different grids are super-imposed.
With the commands below, we'll download a single exposure image from the @url{https://www.j-plus.es,J-PLUS survey} and run Warp (on an @mymath{8\times8} arcmin@mymath{^2} region, to speed up the demos here).
Finally, we'll open the image to visually see the artificial Moir@'e pattern on the warped image.
@example
## Download the image (73.7 MB, containing a 9216x9232 pixel image)
$ jplusdr2=http://archive.cefca.es/catalogues/vo/siap/jplus-dr2/reduced
$ wget $jplusdr2/get_fits?id=771463 -Ojplus-exp1.fits.fz
## Align a small part of it with the sky coordinates.
$ astwarp jplus-exp1.fits.fz --center=107.62920,39.72472 \
--width=8/60 -ojplus-e1.fits
## Open the aligned region with DS9
$ astscript-fits-view jplus-e1.fits
@end example
In the opened DS9 window, you can see the Moir@'e pattern as wave-like patterns in the noise: some parts of the noise are more smooth and some parts are more sharp.
Right in the center of the image is a blob of sharp noise.
Warp has the @option{--checkmaxfrac} option for direct inspection of the Moir@'e pattern (described with the other options in @ref{Align pixels with WCS considering distortions}).
When run with this option, an extra HDU (called @code{MAX-FRAC}) will be added to the output.
The image in this HDU has the same size as the output.
However, each output pixel will contain the largest (maximum) fraction of area that it covered over the input pixel grid.
So if an output pixel has a value of 0.9, this shows that it covered @mymath{90\%} of an input pixel.
Let's run Warp with @option{--checkmaxfrac} and see the output (after DS9 opens, in the ``Cube'' window, flip between the first and second HDUs):
@example
$ astwarp jplus-exp1.fits.fz --center=107.62920,39.72472 \
--width=8/60 -ojplus-e1.fits --checkmaxfrac
$ astscript-fits-view jplus-e1.fits
@end example
By comparing the first and second HDUs/extensions, you will clearly see that the regions with a sharp noise pattern fall exactly on parts of the @code{MAX-FRAC} extension with values larger than 0.5.
In other words, output pixels where one input pixel contributed more than half of their value.
As this fraction increases, the sharpness also increases because a single input pixel's value dominates the value of the output pixel.
On the other hand, when this value is small, we see that many input pixels contribute to that output pixel.
Since many input pixels contribute to an output pixel, it acts like a convolution, hence that output pixel becomes smoother (see @ref{Spatial domain convolution}).
Let's have a look at the distribution of the @code{MAX-FRAC} pixel values:
@example
$ aststatistics jplus-e1.fits -hMAX-FRAC
Statistics (GNU Astronomy Utilities) @value{VERSION}
-------
Input: jplus-e1.fits (hdu: MAX-FRAC)
-------
Number of elements: 744769
Minimum: 0.250213461
Maximum: 0.9987495374
Mode: 0.5034223567
Mode quantile: 0.3773819498
Median: 0.5520805544
Mean: 0.5693956458
Standard deviation: 0.1554693738
-------
Histogram:
| ***
| **********
| *****************
| ************************
| *******************************
| **************************************
| *********************************************
| ****************************************************
| ***********************************************************
| ******************************************************************
|**********************************************************************
|----------------------------------------------------------------------
@end example
The smallest value is 0.25 (=1/4), showing that 4 input pixels contributed to that output pixel's value; while the maximum is almost 1.0, showing that a single input pixel defined the output pixel value.
You can also see that the most probable value (the mode) is 0.5, and that the distribution is positively skewed.
@cindex Pixel scale
@cindex @code{CDELT}
This is a well-known problem in astronomical imaging and professional photography.
If you only have a single image (that is already taken!), you can undersample the input: set the angular size of the output pixels to be larger than the input.
This will decrease the resolution of your image, but will ensure that pixel-mixing will always happen.
In the example below we are setting the output pixel scale (which is known as @code{CDELT} in the FITS standard) to @mymath{1/0.5=2} times the input's.
In other words, each output pixel edge will cover double the input pixel's edge on the sky, and the output's number of pixels in each dimension will be half of the previous output's.
@example
$ cdelt=$(astfits jplus-exp1.fits.fz --pixelscale -q \
| awk '@{print $1@}')
$ astwarp jplus-exp1.fits.fz --center=107.62920,39.72472 \
--width=8/60 -ojplus-e1.fits --cdelt=$cdelt/0.5 \
--checkmaxfrac
@end example
In the first extension, you can hardly see any Moir@'e pattern in the noise.
When you go to the next (@code{MAX-FRAC}) extension, you will see that almost all the pixels have a value of 1.
Of course, decreasing the resolution by half is a little too drastic.
Depending on your image, you may be able to reach a sufficiently good result without such a drastic degrading of the input image.
For example, if you want an output pixel scale that is just 1.5 times larger than the input, you can divide the original coordinate-delta (or ``cdelt'') by @mymath{1/1.5=0.6666} and try again.
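For example (this is the same command as before; only the denominator of @option{--cdelt} changes):
@example
$ astwarp jplus-exp1.fits.fz --center=107.62920,39.72472 \
          --width=8/60 -ojplus-e1.fits --cdelt=$cdelt/0.6666 \
          --checkmaxfrac
@end example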
In the @code{MAX-FRAC} extension, you will see that the range of pixel values is now between 0.56 and 1.0 (recall that originally, this was between 0.25 and 1.0).
This shows that the pixels are more similarly mixed and in fact, when you look at the actual warped image, you can hardly distinguish any Moir@'e pattern in the noise.
@cindex Coadding
@cindex Pointing
@cindex Coaddition
However, deep astronomical data are usually built by several exposures (images), not a single one.
Each image is also taken by (slightly) shifting the telescope compared to the previous exposure.
This shift is known as ``dithering'' or a ``pointing pattern'', see @ref{Pointing pattern design}.
We do this for many reasons (for example tracking errors in the telescope, high background values, removing the effect of bad pixels or those affected by cosmic rays, robust flat pattern measurement, etc.@footnote{E.g., @url{https://www.stsci.edu/hst/instrumentation/wfc3/proposing/dithering-strategies}}).
One of those ``etc.'' reasons is to correct the Moir@'e pattern in the final coadded deep image.
Since the Moir@'e pattern is fixed to the pixel grid of the image, slightly shifting the telescope will cause the pattern to fall on different parts of the sky.
Therefore, when we later coadd the separate exposures into a deep image, the Moir@'e pattern will be decreased there.
However, dithering has possible drawbacks depending on the scientific goal; for example, when observing time-variable phenomena, cutting the exposure into several shorter ones may not be feasible.
If this is not the case for you (for example in galaxy evolution), continue with the rest of this section.
Because we have multiple exposures that are slightly (sub-pixel) shifted, we can also increase the spatial resolution of the output.
For example, let's set the output coordinate-delta (@option{--cdelt}, or pixel scale) to be 1/2 of the input.
In other words, the number of pixels in each dimension of the output is double that of the first Warp command of this section:
@example
$ astwarp jplus-exp1.fits.fz --center=107.62920,39.72472 \
--width=8/60 -ojplus-e1.fits --cdelt=$cdelt/2 \
--checkmaxfrac
$ aststatistics jplus-e1.fits -hMAX-FRAC --minimum --maximum
6.26360438764095e-02 2.50680270139128e-01
$ astscript-fits-view jplus-e1.fits
@end example
From the last command, you see that like the previous change in @option{--cdelt}, the range of @code{MAX-FRAC} has decreased.
However, when you look at the warped image and the @code{MAX-FRAC} image with the last command, you still visually see the Moir@'e pattern in the noise (although it has significantly decreased compared to the original resolution).
It is still present because the oversampling factor of 2 is an exact multiple (each input pixel's edge covers exactly 2 output pixel edges).
Let's try increasing the resolution (oversampling) by a factor of 1.25 (which is not an exact multiple):
@example
$ astwarp jplus-exp1.fits.fz --center=107.62920,39.72472 \
--width=8/60 -ojplus-e1.fits --cdelt=$cdelt/1.25 \
--checkmaxfrac
$ astscript-fits-view jplus-e1.fits
@end example
You don't see any Moir@'e pattern in the noise any more, but when you look at the @code{MAX-FRAC} extension, you see it is very different from the ones you had seen before.
In the previous @code{MAX-FRAC} image, you could see large blobs of similar values.
But here, you see that the variation is almost on the scale of a single pixel, and the difference from one pixel to the next is not significant.
This is why you don't see any Moir@'e pattern in the warped image.
In J-PLUS, each part of the sky was observed with a three-point pointing pattern (very small shifts in each pointing).
Let's download the other two exposures and warp the same region of the sky to the same pixel grid (using the @option{--gridfile} feature).
Then, let's open all three warped images in one DS9 instance:
@example
$ wget $jplusdr2/get_fits?id=771465 -Ojplus-exp2.fits.fz
$ wget $jplusdr2/get_fits?id=771467 -Ojplus-exp3.fits.fz
$ astwarp jplus-exp2.fits.fz --gridfile jplus-e1.fits \
-o jplus-e2.fits --checkmaxfrac
$ astwarp jplus-exp3.fits.fz --gridfile jplus-e1.fits \
-o jplus-e3.fits --checkmaxfrac
$ astscript-fits-view jplus-e*.fits
@end example
@noindent
In the three warped images, you don't see any Moir@'e pattern; so far so good.
Now, take the following steps:
@enumerate
@item
In the small ``Cube'' window, click the ``Next'' button so you see the @code{MAX-FRAC} extension/HDU.
@item
Click on the ``Frame'' button (in the top row of buttons just on top of the image), and select the ``Single'' button in the bottom row.
@item
Open the ``Zoom'' menu (not button), and select ``Zoom 16''.
@item
Press the @key{TAB} key to flip through each exposure.
@item
Focus your eyes on the pixels with the largest value (white colored pixels), while pressing @key{TAB} to flip between the exposures.
You will see that in each exposure they cover different pixels (nicely getting averaged out after coadding).
@end enumerate
The exercise above shows that the Moir@'e pattern (that had already decreased significantly) will be further decreased after we coadd the images.
So let's coadd these three images with the commands below.
First, we need to remove the sky-level from each image using @ref{NoiseChisel}; then we'll coadd the @code{INPUT-NO-SKY} extensions using a clipped mean (to reject outliers robustly, see @ref{Clipping outliers}).
@example
$ for i in $(seq 3); do \
astnoisechisel jplus-e$i.fits -ojplus-nc$i.fits; \
done
$ astarithmetic jplus-nc*.fits 3 5 0.2 sigclip-mean swap free \
-gINPUT-NO-SKY -ojplus-coadd.fits
$ astscript-fits-view jplus-nc*.fits jplus-coadd.fits
@end example
@noindent
After opening the individual exposures and the final coadd with the last command, take the following steps to see the comparisons properly:
@enumerate
@item
Click on the coadd image so it is selected.
@item
Go to the ``Frame'' menu, then the ``Lock'' item, then activate ``Scale and Limits''.
@item
Scroll your mouse or touchpad to zoom into the image.
@end enumerate
@noindent
You clearly see that the coadded image is deeper and that there is no Moir@'e pattern, while you have slightly @emph{improved} the spatial resolution of the output compared to the input.
For optimal results, the oversampling should be determined by the dithering pattern of the observation:
For example if you only have two dither points, you want the pixels with maximum value in the @code{MAX-FRAC} image of one exposure to fall on those with a minimum value in the other exposure.
Ideally, many more dither points should be chosen when you are planning your observation (not just for the Moir@'e pattern, but also for all the other reasons mentioned above).
Based on the dithering pattern, you want to select the increased resolution such that the maximum @code{MAX-FRAC} values fall on every different pixel of the output grid for each exposure.
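As a quick way to experiment with this choice, a sketch like the following (using the same files and @code{cdelt} variable as above; the candidate factors here are arbitrary choices) prints the @code{MAX-FRAC} range for several oversampling factors:
@example
$ for f in 1.25 1.5 1.75; do \
    astwarp jplus-exp1.fits.fz --center=107.62920,39.72472 \
            --width=8/60 --cdelt=$cdelt/$f --checkmaxfrac \
            -ojplus-f$f.fits; \
    echo "factor $f:"; \
    aststatistics jplus-f$f.fits -hMAX-FRAC --minimum --maximum; \
  done
@end example

@noindent
A narrower @code{MAX-FRAC} range means the input pixels are mixed more uniformly in a single exposure; the final choice should also account for the dither pattern as described above.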
Note that this discussion concerns small shifts between pointings (dithers), not large ones (like offsets); see @ref{Pointing pattern design}.
@node Clipping outliers, , Moire pattern in coadding and its correction, Tutorials
@section Clipping outliers
@cindex Outlier
@cindex Clipping of outliers
Outliers occur often in data sets.
For example, cosmic rays in astronomical imaging: the image of your target galaxy can be affected by a cosmic ray in one of the five exposures you took in one night.
As a result, when you compare the measured magnitude of your target galaxy in all the exposures, you will get measurements like this (all in magnitudes): 19.8, 20.1, 20.5, 17.0, 19.9 (all fluctuating around magnitude 20, except the much brighter 17th-magnitude measurement).
Normally, you would simply take the mean of these measurements to estimate the magnitude of your target with more precision.
However, the 17th magnitude measurement above is clearly wrong and will significantly affect the mean: without it, the mean magnitude is 20.07, but with it, the mean is 19.46:
@example
$ echo " 19.8 20.1 20.5 17 19.9" \
| tr ' ' '\n' \
| aststatistics --mean
1.94600000000000e+01
$ echo " 19.8 20.1 20.5 19.9" \
| tr ' ' '\n' \
| aststatistics --mean
2.00750000000000e+01
@end example
This difference of 0.61 magnitudes (or roughly 1.75 times in flux) is significant (for the definition of magnitudes in astronomy, see @ref{Brightness flux magnitude}).
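If you want to confirm this flux ratio on the command line, Arithmetic can act as a simple calculator (a quick sketch; the printed precision may differ):
@example
$ astarithmetic 10 0.61 2.5 / pow --quiet
@end example

@noindent
This computes @mymath{10^{0.61/2.5}\approx1.75}.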
In the simple example above, you can visually identify the ``outlier'' and manually remove it.
But in most common situations you will not be so lucky!
For example, when you want to coadd the five images of the five exposures above: each image has @mymath{4000\times4000} (or 16 million!) pixels, so identifying the outliers by hand is not possible in a reasonable time (an average human's lifetime!).
This tutorial reviews the effect of outliers and different available ways to remove them.
In particular, we will be looking at coadding of multiple datasets and collapsing one dataset along one of its dimensions.
But the concepts and methods are applicable to any analysis that is affected by outliers.
@menu
* Building inputs and analysis without clipping:: Building a dataset for demonstration below.
* Sigma clipping:: Standard deviation (STD) clipping.
* MAD clipping:: Median Absolute Deviation (MAD) clipping.
* Contiguous outliers:: Two clips with holes filled in the middle.
@end menu
@node Building inputs and analysis without clipping, Sigma clipping, Clipping outliers, Clipping outliers
@subsection Building inputs and analysis without clipping
As described in @ref{Clipping outliers}, the goal of this tutorial is to demonstrate the effects of outliers and show how to ``clip'' them from basic statistics measurements.
This is best done on an actual dataset (rather than pure theory).
In this section we will build nine noisy images with the script below, such that one of the images has a circle in the middle.
We will then coadd the 9 images into one final image based on different statistical measurements: the mean, median, standard deviation (STD), median absolute deviation (MAD) and number of inputs used in each pixel.
We will then analyze the resulting coadds to demonstrate the problem with outliers.
Put the script below into a plain-text file (assuming it is called @file{script.sh}), and run it with @command{bash ./script.sh}.
For more on writing and good practices in shell scripts, see @ref{Writing scripts to automate the steps}.
After the script finishes, we will call DS9 (with the commands given after the script) to visualize the output coadded images mentioned above.
@verbatim
# Constants
list=""
sigma=10
number=9
radius=30
width=201
bdir=build
profsum=3e5
background=10
random_seed=1699270427
# Clipping parameters (will be set later when we start clipping).
# clip_multiple: 3 for sigma; 4.5 for MAD
# clip_tolerance: 0.1 for sigma; 0.01 for MAD
clip_operator=""
clip_multiple=""
clip_tolerance=""
# Stop if there is any error.
set -e
# If the build directory does not exist, build it.
if ! [ -d $bdir ]; then mkdir $bdir; fi
# The final image (with largest number) will contain the outlier:
# we'll put a flat circle in the center of the image as the outlier
# structure.
outlier=$bdir/in-$number.fits
nn=$bdir/$number-no-noise.fits
export GSL_RNG_SEED=$random_seed
center=$(echo $width | awk '{print int($1/2)+1}')
echo "1 $center $center 5 $radius 0 0 1 $profsum 1" \
| astmkprof --mode=img --mergedsize=$width,$width \
--oversample=1 --output=$nn --mcolissum
astarithmetic $nn $background + $sigma mknoise-sigma \
--envseed -o$outlier
# Build pure noise and add elements to the list of images to coadd.
list=$outlier
numnoise=$(echo $number | awk '{print $1-1}')
for i in $(seq 1 $numnoise); do
img="$bdir/in-$i.fits"
if ! [ -f $img ]; then
export GSL_RNG_SEED=$(echo $random_seed | awk '{print $1+'$i'}')
astarithmetic $width $width 2 makenew float32 $background + \
$sigma mknoise-sigma --envseed --output=$img
fi
list="$list $img"
done
# Coadd the images.
for op in mean median std mad; do
if [ x"$clip_operator" = x ]; then # No clipping.
out=$bdir/coadd-$op.fits
astarithmetic $list $number $op -g1 --output=$out
else # With clipping.
operator=$clip_operator-$op
out=$bdir/coadd-$operator.fits
astarithmetic $list $number $clip_multiple $clip_tolerance \
$operator -g1 --writeall --output=$out
fi
done
# Collapse the first and last image along the 2nd dimension.
for i in 1 $number; do
if [ x"$clip_operator" = x ]; then # No clipping.
out=$bdir/collapsed-$i.fits
astarithmetic $bdir/in-$i.fits 2 collapse-median counter \
--writeall --output=$out
else # With clipping.
out=$bdir/collapsed-$clip_operator-$i.fits
astarithmetic $bdir/in-$i.fits $clip_multiple $clip_tolerance \
2 collapse-$clip_operator-median counter \
--writeall --output=$out
fi
done
@end verbatim
@noindent
After the script finishes, you can see the generated input images with the first command below.
The second command shows the coadded images with the different statistics:
@example
$ astscript-fits-view build/in-*.fits --ds9extra="-lock scalelimits yes"
$ astscript-fits-view build/coadd-*.fits
@end example
Color-blind readers may not clearly see the issue in the opened images with the default color bar.
In this case, please choose the ``color'' menu at the top of DS9 and select ``gray'' or any other color map that makes the noisy circle most visible.
The effect of an outlier on the different measurements above can be visually seen (and quantitatively measured) through the visibility of the circle (that was only present in one image, of nine).
Let's look at them one by one (from the one that is most affected to the least):
@table @file
@item std.fits
The standard deviation (third image in DS9) is the statistic most strongly affected by an outlier.
This is so strong that the edge of the circle is also clearly visible!
The standard deviation is calculated by first finding the mean, and estimating the difference of each element from the mean.
Those differences are then squared and summed; the sum is divided by the number of elements, and finally the square root is taken.
It is the squaring that amplifies the effect of the single outlier, as you see here.
@item mean.fits
The mean (first image in DS9) is also affected by the outlier in such a way that the circle footprint is clearly visible.
This is because the nine images have the same importance in the combination with a simple mean.
Therefore, the outlier value pushes the result to higher values, and the circle footprint is imprinted in the output.
@item median.fits
The median (second image in DS9) is also affected by the outlier; although much less significantly than the standard deviation or mean.
At first sight the circle may not be too visible!
To see it more clearly, click on the ``Analysis'' menu in DS9 and then the ``smooth'' item.
After smoothing, you will see how the single outlier has leaked into the median coadd.
Intuitively, we would think that since the median is calculated from the middle element after sorting, the outlier goes to the end and won't affect the result.
However, this is not the case as we see here: with 9 elements, the ``central'' element is the 5th (counting from 1; after sorting).
Since the pixels covered by the circle only have 8 pure-noise elements, the ``true'' median should have been the average of the 4th and 5th elements (after sorting).
By definition, the 5th element is always larger than the mean of the 4th and 5th (because the 4th element after sorting has a smaller value than the 5th element).
Therefore, using the 5th element (after sorting), we are systematically choosing higher noise values in regions that are covered by the circle!
With larger datasets, the difference between the central elements will be smaller.
However, the precision in the outlier-free elements will also improve, so the outlier's relative effect remains.
A detailed analysis of the effect of a single outlier on the median, as a function of the number of inputs, can be done as an exercise; but in general, as this argument shows (and as the toy demonstration after this table makes concrete), the median is not immune to outliers; especially when you care about low signal-to-noise regimes (as we do in astronomy: low surface brightness science).
@item mad.fits
The median absolute deviation (fourth image in DS9) is affected by outliers in a similar fashion to the median.
@end table
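To make the median's bias concrete, here is a toy demonstration with the same @command{echo}-to-@command{aststatistics} pattern used at the start of this tutorial (a sketch: the values 1 to 8 play the role of the sorted noise values and 100 is the outlier):
@example
$ echo "1 2 3 4 5 6 7 8" | tr ' ' '\n' \
      | aststatistics --median
$ echo "1 2 3 4 5 6 7 8 100" | tr ' ' '\n' \
      | aststatistics --median
@end example

@noindent
The first command reports the ``true'' median of the eight noise values (between the 4th and 5th elements; the exact handling of even-sized inputs may differ between implementations), while the second reports 5: the outlier never enters the middle of the sorted values, but it still pushes the reported median up.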
The example above included a single outlier.
But we are not usually that lucky: there are usually more outliers!
For example, the last @code{for} loop in the script above collapsed @file{in-1.fits} (that was pure noise, without the circle) and @file{in-9.fits} (with the circle) along their second dimension (the vertical).
The output of collapsing has one less dimension; in this case, producing a 1D table (with the same number of rows as the image's horizontal axis).
Collapsing was done by taking the median along all the pixels in the vertical dimension.
To easily plot the output afterwards, we have also used the @code{counter} operator.
With the command below, you can open both tables and compare them:
@example
$ astscript-fits-view build/collapsed-*.fits
@end example
After TOPCAT has opened, select @file{collapsed-1.fits} in the ``Table List'' side-bar.
In the ``Graphics'' menu, select ``Plane plot'' and you will see all the values fluctuating around 10 (with a maximum/minimum around @mymath{\pm2}).
Afterwards, click on the ``Layers'' menu of the new window (with a plot) and click on ``Add position control''.
At the bottom of the window (where the scroll bar in front of ``Table'' is empty), select @file{collapsed-9.fits}.
In the regions where the circle did not cover any part of the column, the two match nicely (the noise level is the same).
However, you see that the regions that were partly covered by the outlying circle gradually get more affected as the width of the circle in that column increases (the full diameter of the circle was in the middle of the image).
This shows how the median is biased by outliers as their number increases.
To see the problem more prominently, use the @code{collapse-mean} operator instead of the median.
The reason that the mean is more strongly affected by the outlier is exactly the same as above for the coadding of the input images.
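For example, with commands like the following (a sketch adapted from the collapsing loop of the script; the output name is an arbitrary choice):
@example
$ astarithmetic build/in-9.fits 2 collapse-mean counter \
                --writeall --output=build/collapsed-mean-9.fits
$ astscript-fits-view build/collapsed-mean-9.fits
@end example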
In the subsections below, we will describe some of the available ways (in Gnuastro) to reject the effect of these outliers (and have better coadds or collapses).
But the methodology is not limited to these types of data and can be generically applied; unless specified explicitly.
@node Sigma clipping, MAD clipping, Building inputs and analysis without clipping, Clipping outliers
@subsection Sigma clipping
Let's assume that you have pure noise (centered on zero) with a clear @url{https://en.wikipedia.org/wiki/Normal_distribution,Gaussian distribution} (see @ref{Photon counting noise}).
Now let's assume you add very bright objects (signal) on the image which have a very sharp boundary.
By a sharp boundary, we mean that there is a clear cutoff (from the noise) at the pixels the objects finish.
In other words, at their boundaries, the objects do not fade away into the noise.
In optical astronomical imaging, cosmic rays (when they collide at a near-normal incidence angle) are a prime example of such outliers.
The tracks they leave behind in the image are perfectly immune to the blurring caused by the atmosphere on images of stars or galaxies and they have a very high signal-to-noise ratio.
They are also very energetic and so their borders are usually clearly separated from the surrounding noise.
See Figure 15 in Akhlaghi and Ichikawa, @url{https://arxiv.org/abs/1505.01664,2015}.
In such a case, when you plot the histogram (see @ref{Histogram and Cumulative Frequency Plot}) of the distribution, the pixels relating to those objects will be clearly separate from pixels that belong to parts of the image that did not have any signal (were just noise).
In the cumulative frequency plot, after a steady rise (due to the noise), you would observe a long flat region where, for a certain range of the dynamic range (horizontal axis), there is no increase in the index (vertical axis).
In the previous section (@ref{Building inputs and analysis without clipping}) we created one such dataset (@file{in-9.fits}).
With the command below, let's have a look at its histogram and cumulative frequency plot (in simple ASCII format; we are decreasing the default number of bins with @option{--numasciibins} to show them easily within the width of the print version of this manual; feel free to change this).
@example
$ aststatistics build/in-9.fits --asciihist --asciicfp \
--numasciibins=65
ASCII Histogram:
Number: 40401
Y: (linear: 0 to 4191)
X: (linear: -31.9714 -- 150.323, in 65 bins)
| **
| ****
| ******
| ******
| ********
| ********
| **********
| ************
| **************
| ****************** ******
|******************************* *************************
|-----------------------------------------------------------------
ASCII Cumulative frequency plot:
Y: (linear: 0 to 40401)
X: (linear: -31.9714 -- 150.323, in 65 bins)
| ***************
| **********************************************
| ***********************************************
| *************************************************
| **************************************************
| ***************************************************
| ****************************************************
| *****************************************************
| ******************************************************
| ********************************************************
|*****************************************************************
|-----------------------------------------------------------------
@end example
@cindex Bimodal histogram
We see a clear @url{https://en.wikipedia.org/w/index.php?title=Bimodal, bimodal} distribution in the histogram.
Such outliers can significantly bias any single measurement over the whole dataset.
For example let's compare the median, mean and standard deviation of the image above with @file{1.fits}:
@example
$ aststatistics build/in-1.fits --median --mean --std
9.90529778313248e+00 9.96143102101206e+00 1.00137568561776e+01
$ aststatistics build/in-9.fits --median --mean --std
1.09305819367634e+01 1.74470443173776e+01 2.88895986970341e+01
@end example
The effect of the outliers is obvious in all three measures: the median has become 1.10 times larger, the mean 1.75 times and the standard deviation about 2.88 times!
The differing effect of outliers in different statistics was already discussed in @ref{Building inputs and analysis without clipping}; also see @ref{Quantifying signal in a tile}.
@mymath{\sigma}-clipping is one commonly used way to remove/clip the effect of such very strong outliers in measures like the above (although it is not the most robust method; continue reading through the next sections of this tutorial).
@mymath{\sigma}-clipping is defined as the very simple iteration below.
In each iteration, the number of input values used might decrease.
When the outliers are as strong as above, the outliers will be removed through this iteration.
@enumerate
@item
Calculate the standard deviation (@mymath{\sigma}) and median (@mymath{m}) of a distribution.
The median is used because, as shown above, the mean is too significantly affected by the presence of outliers.
@item
Remove all points that are smaller than @mymath{m-\alpha\sigma} or larger than @mymath{m+\alpha\sigma}.
@item
Go back to step 1, unless the selected exit criterion is reached.
There are commonly two types of exit criteria (to stop the @mymath{\sigma}-clipping iteration).
Within Gnuastro's programs that use sigma-clipping, the exit criterion is the second value to the @option{--sclipparams} option (the first value is the @mymath{\alpha} above):
@itemize
@item
When a certain number of iterations has taken place (the exit criterion is an integer, larger than 1).
For example @option{--sclipparams=5,3} means that the @mymath{5\sigma} clipping will stop after 3 clips.
@item
When the newly measured standard deviation is within a certain tolerance level of the previous iteration's (the exit criterion is a floating point number less than 1.0).
The tolerance level is defined by:
@dispmath{\sigma_{old}-\sigma_{new} \over \sigma_{new}}
In each clipping, the dispersion in the distribution is either less or equal.
So @mymath{\sigma_{old}\geq\sigma_{new}}.
For example, @option{--sclipparams=5,0.2} means that the @mymath{5\sigma} clipping will stop when the relative difference between the old and new standard deviations (as defined above) drops below 0.2.
@end itemize
@end enumerate
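For example, to use the iteration-count exit criterion explicitly (a sketch; both values of @option{--sclipparams} here are arbitrary choices to experiment with):
@example
$ aststatistics build/in-9.fits --sigmaclip --sclipparams=3,2
@end example

@noindent
This does @mymath{3\sigma} clipping and stops after exactly 2 clips, independent of how much the standard deviation changed.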
Let's see the algorithm in practice with the @option{--sigmaclip} option of Gnuastro's Statistics program (using the default configuration of @mymath{3\sigma} clipping and tolerance of 0.1):
@example
$ aststatistics build/in-9.fits --sigmaclip
Statistics (GNU Astronomy Utilities) @value{VERSION}
-------
Input: build/in-9.fits (hdu: 1)
-------
3-sigma clipping steps until relative change in STD is less than 0.1:
round number median STD
1 40401 1.09306e+01 2.88896e+01
2 37660 1.00306e+01 1.07153e+01
3 37539 1.00080e+01 9.93741e+00
-------
Statistics (after clipping):
Number of input elements: 40401
Number of clips: 2
Final number of elements: 37539
Median: 1.000803e+01
Mean: 1.001822e+01
Standard deviation: 9.937410e+00
Median Absolute Deviation: 6.772760e+00
@end example
After the basic information about the input and settings, the Statistics program has printed the information for each round (iteration) of clipping.
Initially, there were 40401 elements (the image is @mymath{201\times201} pixels).
After the first round of clipping, only 37660 elements remained; and because the relative change in standard deviation was larger than the tolerance level, a third clipping was done.
The change in standard deviation after the third clip (in relation to the second) was smaller than the tolerance level, so the exit criterion was activated and the clipping finished with 37539 elements.
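You can confirm the final tolerance value from the numbers in the table above (a quick sketch using Arithmetic as a calculator; the printed precision may differ):
@example
$ astarithmetic 1.07153e1 9.93741 - 9.93741 / --quiet
@end example

@noindent
The result is roughly 0.078, which is below the 0.1 tolerance, hence the termination after the third round.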
In the end, we see that the final median, mean and standard deviation are very similar to those of the data without any outlier (@file{build/in-1.fits} in the example above).
Therefore, through clipping we were able to remove the second ``outlier'' distribution from the bimodal histogram above (because it was so nicely separated from the main noise distribution).
The example above provided a single statistic from a single dataset.
Other scenarios where sigma-clipping becomes necessary are coadding and collapsing (that was the main goal of the script in @ref{Building inputs and analysis without clipping}).
To generate @mymath{\sigma}-clipped coadds and collapsed tables, you just need to change the values of the three variables of the script (shown below).
After making this change in your favorite text editor, have a look at the outputs.
By the way, if you have still not read (and understood) the commands in that script, this is a good time to do it so the steps below do not appear as a black box to you (for more on writing shell scripts, see @ref{Writing scripts to automate the steps}).
@example
$ grep ^clip_ script.sh
clip_operator=sigclip # These variables will be used more
clip_multiple=3 # effectively with the clipping
clip_tolerance=0.1 # operators of the next sections.
$ bash ./script.sh
$ astscript-fits-view build/coadd-std.fits \
build/coadd-sigclip-std.fits \
build/coadd-*mean.fits \
build/coadd-*median.fits \
--ds9extra="-tile grid layout 2 3 -scale minmax"
@end example
You will see 6 images arranged in two columns: the first column is the normal coadd (without @mymath{\sigma}-clipping) and the second column is the @mymath{\sigma}-clipped coadd of the same statistic (first row: standard deviation, second row: mean, third row: median).
It is clear that the @mymath{\sigma}-clipped (right column in DS9) results have improved in all three measures (compared to the left column).
This was achieved by clipping/removing outliers.
To see how many input images were used in each pixel of the clipped coadd, you should look into the second HDU of any clipping output which shows the number of inputs that were used for each pixel:
@example
$ astscript-fits-view build/coadd-sigclip-median.fits \
--ds9extra="-scale minmax"
@end example
In the ``Cube'' window of opened DS9, click on the ``Next'' button.
The pixels in this image only have two values: 8 or 9.
Over the footprint of the circle, most pixels have a value of 8: only 8 inputs were used for these (one of the inputs was clipped out).
In the other regions of the image, you see that the pixels almost consistently have a value of 9 (except for some noisy pixels here and there).
It is the ``holes'' (with value 9) within the footprint of the circle that keep the circle visible in the final coadd (as we saw in the two-column DS9 view before).
Spoiler alert: in a later section of this tutorial (@ref{Contiguous outliers}) you will see how we fix this problem.
But please be patient and continue reading and running the commands for now.
So far, @mymath{\sigma}-clipping seems to have performed nicely.
However, there are important caveats to @mymath{\sigma}-clipping that are listed in the box below and further elaborated (with examples) afterwards.
@cartouche
@noindent
@strong{Caveats of @mymath{\sigma}-clipping}: continue this section to visually see the effect of both these caveats:
@itemize
@item
The standard deviation is itself heavily influenced by the presence of outliers (as we have shown above).
Therefore a sufficiently small number of outliers can cause an over-estimation of the standard deviation.
This over-estimation can be strong enough to keep the outliers themselves from getting clipped!
@item
When the outliers do not constitute a clearly distinct distribution like the example here, @mymath{\sigma}-clipping will not be able to separate them (see the bimodal histogram above for situations that @mymath{\sigma}-clipping is useful).
@end itemize
@end cartouche
To demonstrate the caveats above, let's decrease the brightness (total sum of values) in the circle by decreasing the value of the @code{profsum} variable in the script:
@example
$ grep ^profsum script.sh
profsum=1e5
$ bash ./script.sh
@end example
First, let's have a look at the new circle in noise with the first command below.
With the second command, let's view its pixel value histogram (recall that previously, the circle had a clearly separate distribution):
@example
$ astscript-fits-view build/in-9.fits
$ aststatistics build/in-9.fits --asciihist --numasciibins=65
ASCII Histogram:
Number: 40401
Y: (linear: 0 to 2654)
X: (linear: -31.9714 -- 79.4266, in 65 bins)
| **
| *****
| *********
| **********
| *************
| **************
| *****************
| *******************
| ***********************
| ****************************************
|*****************************************************************
|-----------------------------------------------------------------
@end example
We see that even though the circle is still clearly visible in the noise in DS9, we no longer have a bimodal histogram; the circle's pixels have blended into the noise, and merely cause a skewness in the (otherwise symmetric) noise distribution.
Let's try running the @option{--sigmaclip} option as above:
@example
$ aststatistics build/in-9.fits --sigmaclip
Statistics (GNU Astronomy Utilities) @value{VERSION}
-------
Input: build/in-9.fits (hdu: 1)
-------
3-sigma clipping steps until relative change in STD is less than 0.1:
round number median STD
1 40401 1.09295e+01 1.34784e+01
2 39618 1.06762e+01 1.19852e+01
3 39126 1.05265e+01 1.12983e+01
-------
Statistics (after clipping):
Number of input elements: 40401
Number of clips: 2
Final number of elements: 39126
Median: 1.052652e+01
Mean: 1.114819e+01
Standard deviation: 1.129831e+01
Median Absolute Deviation: 7.106166e+00
@end example
We see that the median, mean and standard deviation are overestimated (each worse than when the circle was brighter!).
Let's look at the @mymath{\sigma}-clipping coadd:
@example
$ astscript-fits-view build/coadd-std.fits \
build/coadd-sigclip-std.fits \
build/coadd-*mean.fits \
build/coadd-*median.fits \
--ds9extra="-tile grid layout 2 3 -scale minmax"
@end example
Compared to the previous run (where the outlying circle was brighter), we see that @mymath{\sigma}-clipping is now less successful in removing the outlying circle from the coadds; or in the single value measurements.
To see the reason, we can have a look at the numbers image (note that here, we are using @option{-h2} to only see the numbers image):
@example
$ astscript-fits-view build/coadd-sigclip-median.fits -h2 \
--ds9extra="-scale minmax"
@end example
Unlike before (where the density of pixels with 8 images was very high over the circle's footprint), the circle is barely visible in the numbers image!
There is only a very weak clustering of pixels with a value of 8 over the circle's footprint.
This has happened because the outliers have biased the standard deviation itself to a level that includes them with this multiple of the standard deviation.
To gauge if @mymath{\sigma}-clipping will be useful for your dataset, you should inspect the bimodality of its histogram like we did above.
But you cannot do this manually in every case (such as the coadding above, which involved more than forty thousand separate @mymath{\sigma}-clippings: one for every output pixel)!
Clipping outliers should instead be based on a measure of scatter/dispersion that is more robust (less affected by outliers) than the standard deviation.
Therefore, in Gnuastro we also have median absolute deviation (MAD) clipping which is described in the next section (@ref{MAD clipping}).
@node MAD clipping, Contiguous outliers, Sigma clipping, Clipping outliers
@subsection MAD clipping
@cindex Median absolute deviation (MAD)
@cindex MAD (median absolute deviation)
When clipping outliers, it is important that the used measure of dispersion is itself not strongly affected by the outliers.
Previously (in @ref{Sigma clipping}), we saw that the standard deviation is not a good measure of dispersion because of its strong dependency on outliers.
In this section, we'll introduce clipping operators that are based on the @url{https://en.wikipedia.org/wiki/Median_absolute_deviation, median absolute deviation} (MAD).
The median absolute deviation is defined as the median of the absolute differences from the median (so the MAD requires taking two rounds of medians).
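As a toy demonstration of this definition (a sketch; the median of these five values is 3, so the absolute differences from it are 2, 1, 0, 1 and 97, and the median of those differences is 1):
@example
$ echo "1 2 3 4 100" | tr ' ' '\n' \
      | aststatistics --median --mad
@end example

@noindent
Note how the single extreme value (100) has no effect on the MAD.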
As mathematically derived in the Wikipedia page above, for a pure Gaussian distribution, the median absolute deviation will be roughly @mymath{0.67449\sigma}.
We can confirm this numerically from the images with pure noise that we created previously in @ref{Building inputs and analysis without clipping}.
With the first command below we can see the raw standard deviation and median absolute deviation values and the second command shows their division:
@example
$ aststatistics build/in-1.fits --std --mad
1.00137568561776e+01 6.74662296703343e+00
$ aststatistics build/in-1.fits --std --mad | awk '@{print $2/$1@}'
0.673735
@end example
The algorithm of MAD-clipping is identical to @mymath{\sigma}-clipping, except that instead of @mymath{\sigma}, it uses the median absolute deviation (also see the next paragraph).
Since the median absolute deviation is smaller than the standard deviation by roughly 0.67, if you regularly use @mymath{3\sigma} there, you should use @mymath{(3/0.67)\rm{MAD}=(4.48)\rm{MAD}} when doing MAD-clipping.
The usual tolerance should also be changed due to the differing (discrete) nature of the median absolute deviation (based on sorted differences) in relation to the standard deviation (based on the sum of squared differences, which is smoother).
A tolerance of 0.01 is better suited to the termination criteria of MAD-clipping.
Another difference of the MAD-clipping algorithm in Gnuastro with @mymath{\sigma}-clipping is a special condition when the MAD becomes zero (will be more common in integer datasets).
This can happen when the median value is repeated in the input dataset.
In such cases, we will find the two elements before and after the median that have a different value than the median; it will then use their difference as the MAD@footnote{For more specialized conditions in Gnuastro's implementation of MAD-clipping, see the heavily commented code of @code{MAD_ALTERNATIVE} in @file{lib/statistics.c} of Gnuastro's source.}.
To demonstrate the steps in practice, let's assume you have the original script in @ref{Building inputs and analysis without clipping} with the changes shown in the first command below; with the second command we'll execute the script.
@example
$ grep '^clip_\|^profsum' script.sh
profsum=1e5
clip_operator=madclip
clip_multiple=4.5
clip_tolerance=0.01
$ bash ./script.sh
@end example
@noindent
Let's start by applying MAD-clipping on the image with the bright circle:
@example
$ aststatistics build/in-9.fits --madclip
Statistics (GNU Astronomy Utilities) @value{VERSION}
-------
Input: build/in-9.fits (hdu: 1)
-------
4.5-MAD clipping steps until relative change in MAD
(median absolute deviation) is less than 0.01:
round number median MAD
1 40401 1.09295e+01 7.38609e+00
2 38812 1.04313e+01 7.04036e+00
3 38567 1.03497e+01 6.98680e+00
-------
Statistics (after clipping):
Number of input elements: 40401
Number of clips: 2
Final number of elements: 38567
Median: 1.034968e+01
Mean: 1.070246e+01
Standard deviation: 1.063998e+01
Median Absolute Deviation: 6.986797e+00
@end example
We see that the median, mean and standard deviation after MAD-clipping are much better than after @mymath{\sigma}-clipping (see @ref{Sigma clipping}): the median is now 10.3 (was 10.5 with @mymath{\sigma}-clipping), the mean is 10.7 (was 11.1) and the standard deviation is 10.6 (was 11.3).
Let's compare the MAD-clipped coadds with the results of the previous section.
Since we want the images shown in a certain order, we'll first construct the list of images (with a @code{for} loop that fills the @code{imgs} variable).
Note that this assumes you have run and carefully read/understood all the commands in the previous sections (@ref{Building inputs and analysis without clipping} and @ref{Sigma clipping}).
@example
$ imgs=""
$ p=build/coadd # 'p' is short for "prefix"
$ for m in std mean median mad; do \
imgs="$imgs $p-$m.fits $p-sigclip-$m.fits $p-madclip-$m.fits"; \
done
$ astscript-fits-view $imgs --ds9extra="-tile grid layout 3 4"
@end example
The first column shows the non-clipped coadds for each statistic (generated in @ref{Building inputs and analysis without clipping}), the second column are @mymath{\sigma}-clipped coadds (generated in @ref{Sigma clipping}), and the third column shows the newly created MAD-clipped coadds.
We see that the circle is much weaker in the MAD-clipped coadds than in the @mymath{\sigma}-clipped coadds in all rows (different statistics).
Let's confirm this with the numbers images of the two clipping methods:
@example
$ astscript-fits-view -g2 \
          build/coadd-sigclip-median.fits \
          build/coadd-madclip-median.fits \
          --ds9extra="-scale limits 1 9 -lock scalelimits yes"
@end example
In the numbers image of the MAD-clipped coadd, you see the circle much more clearly.
However, you also see that in the regions outside the circle, many random pixels have also been coadded with fewer than 9 input images!
This is a caveat of MAD clipping and is expected: by nature MAD clipping is much more ``noisy''.
With the first command below, let's have a look at the statistics of the numbers image of the MAD-clipping.
With the second, let's see how many pixels used fewer than 5 input images:
@example
$ aststatistics build/coadd-madclip-median.fits -h2 \
--numasciibins=60
Statistics (GNU Astronomy Utilities) @value{VERSION}
-------
Input: build/coadd-madclip-median.fits (hdu: 2)
-------
Number of elements: 40401
Minimum: 2
Maximum: 9
Median: 9
Mean: 8.500284646
Standard deviation: 1.1275244
-------
| *
| *
| *
| *
| *
| *
| *
| *
| * *
| * * *
|* * * * * * * *
|------------------------------------------------------------
$ aststatistics build/coadd-madclip-median.fits -h2 \
--lessthan=5 --number
686
@end example
Almost 700 pixels were coadded with fewer than 5 inputs!
The large number of random pixels that have been clipped is not good and can make it hard to understand the noise statistics of the coadd.
Ultimately, even MAD-clipping is not perfect and even though the circle is weaker, we still see the circle in all four cases, even with the MAD-clipped median (more clearly: after smoothing/blocking).
The reason is similar to what was described for @mymath{\sigma}-clipping (using the original @code{profsum=3e5}): the ``holes'' in the numbers image.
Because the circle's pixel values are not too distant from the noise, and because of the noisy nature of the MAD, some of its elements do not get clipped, and their coadded value becomes systematically higher than the rest of the image.
Fortunately all is not lost!
In Gnuastro, we have a fix for such contiguous outliers that is described fully in the next section (@ref{Contiguous outliers}).
@node Contiguous outliers, , MAD clipping, Clipping outliers
@subsection Contiguous outliers
When the source of the outlier covers more than one element, and its flux is close to the noise level, not all of its elements will be clipped: because of noise, some of its elements will remain un-clipped and will thus affect the output.
Examples of this were created and thoroughly discussed in previous sections with @mymath{\sigma}-clipping and MAD-clipping (see @ref{Sigma clipping} and @ref{MAD clipping}).
@mymath{\sigma}-clipping had good purity (very few non-outliers were clipped) but at the cost of bad completeness (many outliers remained).
MAD-clipping was the opposite: it masked many outliers (good completeness), but at the cost of clipping many pixels that shouldn't have been clipped (bad purity).
Fortunately there is a good way to benefit from the best of both worlds.
Recall that in the numbers image of the MAD-clipping output, the wrongly clipped pixels were randomly distributed and barely connected.
On the other hand, those that covered the circle were nicely connected, with unclipped pixels scattered within it.
Therefore, using their spatial distribution, we can improve the completeness (not have any ``holes'' within the masked circle) and purity (remove the false clips).
This is done through the @code{madclip-maskfilled} operator:
@enumerate
@item
MAD-clipping is applied (@mymath{\sigma}-clipping is also possible, but less effective).
@item
A binary image is created for each input: any outlying pixel of each input is set to 1 (foreground); the rest are set to 0 (background).
Mathematical morphology operators are then used in preparation for filling the holes (to close the boundary of the contiguous outlier):
@itemize
@item
For 2D images (where each pixel has 8 neighbors) the foreground pixels are dilated with a ``connectivity'' of 1 (only the nearest neighbors: 4-connectivity in a 2D image).
@item
For 1D arrays (where each element only has two neighbors), the foreground is eroded.
This is necessary because in the next step (where the holes are filled), two pixels that were clipped purely due to noise, but have a large distance between them, could otherwise wrongly mask a very large range of the input data.
@end itemize
@item
Any background pixel that is fully surrounded by the foreground (or a ``hole'') is filled (given a value of 1: becoming a foreground pixel).
@item
One step of 8-connected opening (an erosion, followed by a dilation) is applied to remove (set to background) any non-contiguous foreground pixel of each input.
@item
All the foreground pixels of the binary images are set to NaN in the respective input data (that the binary image corresponds to).
@end enumerate
Let's have a look at the output of this process with the first command below.
Note that because @code{madclip-maskfilled} returns its (now masked) input operands back onto the stack, we need to call Arithmetic with @option{--writeall} (which will produce a multi-HDU output file).
With the second, open the output in DS9:
@example
$ astarithmetic build/in-*.fits 9 4.5 0.01 madclip-maskfilled \
-g1 --writeall --output=inputs-masked.fits
$ astscript-fits-view inputs-masked.fits
@end example
In the small ``Cube'' window, you will see that 9 HDUs are present in @file{inputs-masked.fits}.
Click on the ``Next'' button to see each input.
When you get to the last (9th) HDU, you will notice that the circle has been masked there (it is filled with NaN values).
Now that all the contiguous outlier(s) of the inputs are masked, we can use more pure coadding operators (like @mymath{\sigma}-clipping) to remove any strong, single-pixel outlier:
@example
$ astarithmetic build/in-*.fits 9 4.5 0.01 madclip-maskfilled \
9 5 0.1 sigclip-mean \
-g1 --writeall --output=coadd-good.fits
$ astscript-fits-view coadd-good.fits --ds9scale=minmax
@end example
You see a clean, noisy coadd in the first HDU (note that we used the @mymath{\sigma}-clipped mean here, which was strongly affected by outliers before masking, as we saw in @ref{Sigma clipping}).
When you go to the next HDU, you see that over the circle only 8 images were used and that there are no ``holes'' there.
But the operator that was most affected by outliers was the standard deviation.
To test it, repeat the first command above with @code{sigclip-std} instead of @code{sigclip-mean} (as shown below) and have a look at the output: again, you don't see any footprint of the circle.
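For convenience, the substituted command would be something like this (writing to a new output name, which is an arbitrary choice, so the previous coadd is not overwritten):
@example
$ astarithmetic build/in-*.fits 9 4.5 0.01 madclip-maskfilled \
                9 5 0.1 sigclip-std \
                -g1 --writeall --output=coadd-good-std.fits
$ astscript-fits-view coadd-good-std.fits --ds9scale=minmax
@end example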
Of course, if the potential outlier signal is even weaker, there are some solutions: use more inputs if you can (to decrease the noise), or decrease the multiple of the MAD in the @code{madclip-maskfilled} call above; the latter will decrease your purity, but to some level this may be fine (it depends on your usage of the coadd).
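For example, lowering the multiple from 4.5 to 3 in the same command would look like this (a sketch; the output name is arbitrary, and whether the extra clipping is acceptable depends on your science case):
@example
$ astarithmetic build/in-*.fits 9 3 0.01 madclip-maskfilled \
                9 5 0.1 sigclip-mean \
                -g1 --writeall --output=coadd-lowmult.fits
@end example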
@c Before we finish, recall that the original script of @ref{Building inputs and analysis without clipping} also generated collapsed columns.
@c Let's have a look at those outputs:
@c
@c @example
@c $ astscript-fits-view build/collapsed-madclip-9.fits \
@c build/collapsed-sigclip-9.fits
@c @end example
@c
@c If you plot both tables together in one plot, you will see that the peak is slightly weaker in the MAD-clipped output, but the outlying circle has been able to significantly bias both in the central pixel columns (that it is present in).
@c The 4.5 multiple of MAD-clipping (corresponding to 3 sigma), was therefore not successful in removing the many outlying pixels due to the circle in the central pixels of the image.
@c
@c This is a relatively high threshold and was used because for the images, we only had 9 elements in each clipping for every pixel.
@c But for the collapsing, we have many more pixels in each vertical direction of the image (201 pixels).
@c Let's decrease the threshold to 3 and calculate the collapsed mean after MAD-clipping, once with filled re-clipping and once without it:
@c
@c @example
@c $ for m in mean number; do \
@c for clip in madclip madclip-fill; do \
@c astarithmetic build/in-9.fits 3 0.01 2 collapse-$clip-$m \
@c counter --writeall -ocollapse-$clip-$m.fits; \
@c done; \
@c done
@c @end example
@c
@c The two loops above created four tables.
@c First, with the command below, let's look at the two measured mean values (one with filling and the other without it):
@c
@c @example
@c $ astscript-fits-view collapse-*-mean.fits
@c @end example
@c
@c In the table without filled re-clipping, you see a small shift in the center of the image (around 100 in the horizontal axis).
@c Let's have a look at the final number of pixels used in each clipping:
@c
@c @example
@c $ astscript-fits-view collapse-*-number.fits
@c @end example
@c
@c The difference is now clearly visible when you plot both in one ``Plane plot'' window.
@c In the filled re-clipping case, we see a clear dip in the number of pixels that very nicely corresponds to the number of pixels associated to the circle.
@c But the dip is much more noisy in the simple MAD-clipping.
@node Installation, Common program behavior, Tutorials, Top
@chapter Installation
@c This link is put here because the `Quick start' section of the first
@c chapter is not the most eye-catching part of the manual and some users
@c were seen to follow this ``Installation'' chapter title in search of the
@c tarball and fast instructions.
@cindex Installation
The latest released version of Gnuastro source code is always available at the following URL:
@url{http://ftpmirror.gnu.org/gnuastro/gnuastro-latest.tar.gz}
@noindent
@ref{Quick start} describes the commands necessary to configure, build, and install Gnuastro on your system.
This chapter will be useful in cases where the simple procedure above is not sufficient; for example, when your system lacks a mandatory/optional dependency (in other words, you cannot pass the @command{$ ./configure} step), when you want greater customization, when you want to build and install Gnuastro from other points in its history, or when you want a higher level of control over the installation.
Thus if you were happy with downloading the tarball and following @ref{Quick start}, then you can safely ignore this chapter and come back to it in the future if you need more customization.
@ref{Dependencies} describes the mandatory, optional and bootstrapping dependencies of Gnuastro.
Only the first group are required/mandatory when you are building Gnuastro from a tarball (see @ref{Release tarball}); they are very basic and low-level tools used in most astronomical software, so you might already have them installed; if not, they are very easy to install as described for each.
@ref{Downloading the source} discusses the two methods you can obtain the source code: as a tarball (a significant snapshot in Gnuastro's history), or the full history@footnote{@ref{Bootstrapping dependencies} are required if you clone the full history.}.
The latter allows you to build Gnuastro at any random point in its history (for example, to get bug fixes or new features that are not released as a tarball yet).
The building and installation of Gnuastro is heavily customizable, to learn more about them, see @ref{Build and install}.
This section is essentially a thorough explanation of the steps in @ref{Quick start}.
It discusses ways you can influence the building and installation.
If you encounter any problems in the installation process, it is probably already explained in @ref{Known issues}.
@ref{Other useful software} discusses the installation and usage of some other free software that is not directly required by Gnuastro but might be useful in conjunction with it.
@menu
* Dependencies:: Necessary packages for Gnuastro.
* Downloading the source:: Ways to download the source code.
* Build and install:: Configure, build and install Gnuastro.
@end menu
@node Dependencies, Downloading the source, Installation, Installation
@section Dependencies
A minimal set of dependencies are mandatory for building Gnuastro from the standard tarball release.
If they are not present you cannot pass Gnuastro's configuration step.
The mandatory dependencies are therefore very basic (low-level) tools which are easy to obtain, build and install; see @ref{Mandatory dependencies} for a full discussion.
If you have the packages of @ref{Optional dependencies}, Gnuastro will have additional functionality (for example, converting FITS images to JPEG or PDF).
If you are installing from a tarball as explained in @ref{Quick start}, you can stop reading after this section.
If you are cloning the version controlled source (see @ref{Version controlled source}), an additional bootstrapping step is required before configuration and its dependencies are explained in @ref{Bootstrapping dependencies}.
Your operating system's package manager is an easy and convenient way to download and install the dependencies that are already pre-built for your operating system.
In @ref{Dependencies from package managers}, we will list some common operating system package manager commands to install the optional and mandatory dependencies.
@menu
* Mandatory dependencies:: Gnuastro will not install without these.
* Optional dependencies:: Adding more functionality.
* Bootstrapping dependencies:: If you have the version controlled source.
* Dependencies from package managers:: Installing from OS package managers.
@end menu
@node Mandatory dependencies, Optional dependencies, Dependencies, Dependencies
@subsection Mandatory dependencies
@cindex Dependencies, Gnuastro
@cindex GNU build system
The mandatory Gnuastro dependencies are very basic and low-level tools.
They all follow the same basic GNU based build system (like that shown in @ref{Quick start}), so even if you do not have them, installing them should be pretty straightforward.
In this section we explain each program and any specific note that might be necessary in the installation.
@menu
* GNU Scientific Library:: Installing GSL.
* CFITSIO:: C interface to the FITS standard.
* WCSLIB:: C interface to the WCS standard of FITS.
@end menu
@node GNU Scientific Library, CFITSIO, Mandatory dependencies, Mandatory dependencies
@subsubsection GNU Scientific Library
@cindex GNU Scientific Library
The @url{http://www.gnu.org/software/gsl/, GNU Scientific Library}, or GSL, is a large collection of functions that are very useful in scientific applications, for example, integration, random number generation, and Fast Fourier Transform among many others.
To download and install GSL from source, you can run the following commands.
@example
$ wget https://ftp.gnu.org/gnu/gsl/gsl-latest.tar.gz
$ tar -xf gsl-latest.tar.gz
$ cd gsl-X.X # Replace X.X with version number.
$ ./configure CFLAGS="$CFLAGS -g0 -O3"
$ make -j8 # Replace 8 with no. CPU threads.
$ make check
$ sudo make install
@end example
@node CFITSIO, WCSLIB, GNU Scientific Library, Mandatory dependencies
@subsubsection CFITSIO
@cindex CFITSIO
@cindex FITS standard
@url{http://heasarc.gsfc.nasa.gov/fitsio/, CFITSIO} is the closest you can get to the pixels in a FITS image while remaining faithful to the @url{http://fits.gsfc.nasa.gov/fits_standard.html, FITS standard}.
It is written by William Pence, the principal author of the FITS standard@footnote{Pence, W.D. et al. Definition of the Flexible Image Transport System (FITS), version 3.0. (2010) Astronomy and Astrophysics, Volume 524, id.A42, 40 pp.}, and is regularly updated.
It thus effectively sets the definitions for all other software packages that use FITS images.
@vindex --enable-reentrant
@cindex Reentrancy, multiple file opening
@cindex Multiple file opening, reentrancy
Some GNU/Linux distributions have CFITSIO in their package managers; if it is available and updated, you can use it.
One problem that might occur is that the distribution may not have configured CFITSIO with the @option{--enable-reentrant} option.
This option allows CFITSIO to open a file in multiple threads simultaneously, and can thus provide great speed improvements.
If CFITSIO was not configured with this option, any program which needs this capability will warn you and abort when you ask for multiple threads (see @ref{Multi-threaded operations}).
To install CFITSIO from source, we strongly recommend that you have a look through Chapter 2 (Creating the CFITSIO library) of the CFITSIO manual and understand the options you can pass to @command{$ ./configure} (there are not too many).
This is a very basic package for most astronomical software and it is best that you configure it nicely with your system.
Once you download the source and unpack it, the following configure script should be enough for most purposes.
Do not forget to read chapter two of the manual though; for example, the second option is only for 64-bit systems.
The manual also explains how to check if it has been installed correctly.
CFITSIO comes with two executable files called @command{fpack} and @command{funpack}.
From their manual: they ``are standalone programs for compressing and uncompressing images and tables that are stored in the FITS (Flexible Image Transport System) data format.
They are analogous to the gzip and gunzip compression programs except that they are optimized for the types of astronomical images that are often stored in FITS format''.
The commands below will compile and install them on your system along with CFITSIO.
They are not essential for Gnuastro, since they are just wrappers for functions within CFITSIO, but they can come in handy.
The @command{make utils} command is only available for versions above 3.39; it will build these executable files along with several other executable test files, which are deleted in the commands below before the installation (otherwise the test files would also be installed).
The commands necessary to download the source, decompress, build and install CFITSIO from source are described below.
@example
$ urlbase=http://heasarc.gsfc.nasa.gov/FTP/software/fitsio/c
$ wget $urlbase/cfitsio_latest.tar.gz
$ tar -xf cfitsio_latest.tar.gz
$ cd cfitsio-X.XX # Replace X.XX with version
$ ./configure --prefix=/usr/local --enable-sse2 --enable-reentrant \
CFLAGS="$CFLAGS -g0 -O3"
$ make
$ make utils
$ ./testprog > testprog.lis # See below if this has an error
$ diff testprog.lis testprog.out # Should have no output
$ cmp testprog.fit testprog.std # Should have no output
$ rm cookbook fitscopy imcopy smem speed testprog
$ sudo make install
@end example
In the @code{./testprog > testprog.lis} step, you may confront an error, complaining that it cannot find @file{libcfitsio.so.AAA} (where @code{AAA} is an integer).
This is the library that you just built and have not yet installed.
But unfortunately some versions of CFITSIO do not account for this on some operating systems.
To fix the problem, you need to tell your OS to also look into the current CFITSIO build directory with the first command below; afterwards, the problematic command (second below) should run properly.
@example
$ export LD_LIBRARY_PATH="$(pwd):$LD_LIBRARY_PATH"
$ ./testprog > testprog.lis
@end example
Recall that the modification above is ONLY NECESSARY FOR THIS STEP.
@emph{Do not} put the @code{LD_LIBRARY_PATH} modification command in a permanent place (like your bash startup file).
After installing CFITSIO, close your terminal and continue working on a new terminal (so @code{LD_LIBRARY_PATH} has its default value).
For more on @code{LD_LIBRARY_PATH}, see @ref{Installation directory}.
@node WCSLIB, , CFITSIO, Mandatory dependencies
@subsubsection WCSLIB
@cindex WCS
@cindex WCSLIB
@cindex World Coordinate System
@url{http://www.atnf.csiro.au/people/mcalabre/WCS/, WCSLIB} is written and maintained by one of the authors of the World Coordinate System (WCS) definition in the @url{http://fits.gsfc.nasa.gov/fits_standard.html, FITS standard}@footnote{Greisen E.W., Calabretta M.R. (2002) Representation of world coordinates in FITS.
Astronomy and Astrophysics, 395, 1061-1075.}, Mark Calabretta.
It may already be built and ready in your distribution's package management system.
However, here the installation from source is explained; for the advantages of installing from source, please see @ref{Mandatory dependencies}.
To install WCSLIB you will need to have CFITSIO already installed, see @ref{CFITSIO}.
@vindex --without-pgplot
WCSLIB also has plotting capabilities which use PGPLOT (a plotting library for C).
If you want to use those capabilities in WCSLIB, @ref{PGPLOT} provides the PGPLOT installation instructions.
However, PGPLOT is old@footnote{As of early June 2016, its most recent version was uploaded in February 2001.}, so its installation is not easy; there are also many great modern WCS plotting tools (mostly written in Python).
Hence, if you will not be using those plotting functions in WCSLIB, you can configure it with the @option{--without-pgplot} option as shown below.
If you have the cURL library@footnote{@url{https://curl.haxx.se}} on your system and you installed CFITSIO version 3.42 or later, you will also need to link with the cURL library at configure time (through the @code{-lcurl} option as shown below).
CFITSIO uses the cURL library for its HTTPS (or HTTP Secure@footnote{@url{https://en.wikipedia.org/wiki/HTTPS}}) support and if it is present on your system, CFITSIO will depend on it.
Therefore, if the @command{./configure} command below fails (because you do not have the cURL library), then remove this option and rerun it.
To download, configure, build, check and install WCSLIB from source, you can follow the steps below.
@example
## Download and unpack the source tarball
$ wget ftp://ftp.atnf.csiro.au/pub/software/wcslib/wcslib.tar.bz2
$ tar -xf wcslib.tar.bz2
## In the `cd' command, replace `X.X' with version number.
$ cd wcslib-X.X
## If `./configure' fails, remove `-lcurl' and run again.
$ ./configure LIBS="-pthread -lcurl -lm" --without-pgplot \
              --disable-fortran CFLAGS="$CFLAGS -g0 -O3"
$ make
$ make check
$ sudo make install
@end example
@node Optional dependencies, Bootstrapping dependencies, Mandatory dependencies, Dependencies
@subsection Optional dependencies
The libraries listed here are only used for very specific applications, therefore they are optional and Gnuastro can be built without them (with only those specific features disabled).
Since these are pretty low-level tools, they are not too hard to install from source, but you can also use your operating system's package manager to easily install all of them.
For more, see @ref{Dependencies from package managers}.
@cindex GPL Ghostscript
If the @command{./configure} script cannot find any of these optional dependencies, it will notify you of the operation(s) you cannot do due to not having them.
If you continue the build and request an operation that uses a missing library, Gnuastro's programs will warn that the optional library was missing at build-time and abort.
Since Gnuastro was built without that library, installing the library afterwards will not help.
The only way is to rebuild Gnuastro from scratch (after the library has been installed).
However, for program dependencies (like cURL or Ghostscript) things are easier: you can also install them after building Gnuastro.
This is because libraries are used to build the internal structure of Gnuastro's executables, while a program dependency is only called by Gnuastro's programs at run-time and has no effect on their internal structure.
So if a dependency program becomes available later, it will be used next time it is requested.
@table @asis
@item GNU Libtool
@cindex GNU Libtool
Libtool is a program that simplifies the management of the libraries needed to build an executable (a program).
GNU Libtool has some added functionality compared to other implementations.
If GNU Libtool is not present on your system at configuration time, a warning will be printed and @ref{BuildProgram} will not be built or installed.
The configure script will look into your search path (@code{PATH}) for GNU Libtool through the following executable names: @command{libtool} (acceptable only if it is the GNU implementation) or @command{glibtool}.
See @ref{Installation directory} for more on @code{PATH}.
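To check which implementation a @command{libtool} in your @code{PATH} belongs to, the first line of its version output is usually enough (a sketch; @command{glibtool} only exists on some systems):

@example
$ libtool --version | head -1   # Should mention ``GNU libtool''.
$ command -v glibtool           # Alternative name on some systems.
@end example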
GNU Libtool (the binary/executable file) is a low-level program that is probably already present on your system, and if not, is available in your operating system's package manager@footnote{Note that we want the binary/executable Libtool program which can be run on the command-line.
In Debian-based operating systems, which separate various parts of a package, you want @code{libtool-bin}; the @code{libtool} package does not contain the executable program.}.
If you want to install GNU Libtool's latest version from source, please visit its @url{https://www.gnu.org/software/libtool/, web page}.
Gnuastro's tarball is shipped with an internal implementation of GNU Libtool.
Even if you have GNU Libtool, Gnuastro's internal implementation is used for the building and installation of Gnuastro.
As a result, you can still build, install and use Gnuastro even if you do not have GNU Libtool installed on your system.
However, this internal Libtool does not get installed.
Therefore, after Gnuastro's installation, if you want to use @ref{BuildProgram} to compile and link your own C source code which uses the @ref{Gnuastro library}, you need to have GNU Libtool available on your system (independent of Gnuastro).
See @ref{Review of library fundamentals} to learn more about libraries.
@item GNU Make extension headers
@cindex GNU Make
GNU Make is a workflow management system that can be used to run a series of commands in a specific order, and in parallel if you want.
GNU Make offers special features to extend it with custom functions within a dynamic library.
They are defined in the @file{gnumake.h} header.
If @file{gnumake.h} can be found on your system at configuration time, Gnuastro will build a custom library that GNU Make can use for extended functionality in (astronomical) data analysis scenarios.
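To quickly see whether @file{gnumake.h} is visible to your compiler, a minimal test like the one below may help (a sketch, assuming GCC; the header is commonly packaged with GNU Make's development files):

@example
$ printf '#include <gnumake.h>\nint main(void)@{return 0;@}\n' > t.c
$ gcc -fsyntax-only t.c && echo "gnumake.h was found"
@end example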
@item libgit2
@cindex Git
@pindex libgit2
@cindex Version control systems
Git is one of the most common version control systems (see @ref{Version controlled source}).
When @file{libgit2} is present, and Gnuastro's programs are run within a version controlled directory, outputs will contain the version number of the working directory's repository for future reproducibility.
See the @code{COMMIT} header keyword in @ref{Output FITS files} for a discussion.
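For example, after running a Gnuastro program inside a version-controlled directory, you can look for the keyword in the output's header with Gnuastro's Fits program (@file{out.fits} and the HDU are hypothetical):

@example
$ astfits out.fits -h1 | grep COMMIT
@end example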
@item libjpeg
@pindex libjpeg
@cindex JPEG format
libjpeg is only used by ConvertType to read from and write to JPEG images, see @ref{Recognized file formats}.
@url{http://www.ijg.org/, libjpeg} is a very basic library that provides tools to read and write JPEG images; most graphic programs and libraries on Unix-like systems use it.
Therefore you most probably already have it installed.
@url{http://libjpeg-turbo.virtualgl.org/, libjpeg-turbo} is an alternative to libjpeg.
It uses Single Instruction, Multiple Data (SIMD) instructions for ARM-based systems, which significantly decrease the processing time of JPEG compression and decompression algorithms.
@item libtiff
@pindex libtiff
@cindex TIFF format
libtiff is used by ConvertType and the libraries to read TIFF images, see @ref{Recognized file formats}.
@url{http://www.simplesystems.org/libtiff/, libtiff} is a very basic library that provides tools to read and write TIFF images; most graphic programs and libraries on Unix-like operating systems use it.
Therefore, even if you do not have it installed, it should be easily available in your package manager.
@item cURL
@cindex cURL (downloading tool)
cURL's executable (@command{curl}) is called by @ref{Query} for submitting queries to remote datasets and retrieving the results.
It is not necessary for the build of Gnuastro from source (only a warning will be printed if it cannot be found at configure time), so if you do not have it at build-time there is no problem.
Just be sure to have it when you run @command{astquery}, otherwise you'll get an error about not finding @command{curl}.
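For example, you can confirm that @command{curl} is available on your @code{PATH} before running a query:

@example
$ command -v curl   # Prints its location if it is available.
@end example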
@item GPL Ghostscript
@cindex GPL Ghostscript
GPL Ghostscript's executable (@command{gs}) is called by ConvertType to compile a PDF file from a source PostScript file, see @ref{ConvertType}.
Since only this executable is called at run-time, its headers (and libraries) are not needed.
@item Python3 with Numpy
@cindex Numpy
@cindex Python3
Python is a high-level programming language and Numpy is the most commonly used Python library for multi-dimensional arrays and matrices.
If you configure Gnuastro with @option{--with-python} @emph{and} version 3 of Python is available with a corresponding Numpy Library, Gnuastro's library will be built with some Python-related helper functions.
Python wrappers for Gnuastro's library (for example, `pyGnuastro') can use these functions when being built from source.
For more on Gnuastro's Python helper functions, see @ref{Python interface}.
@cindex PyPI
This Python interface is only relevant if you want to build the Python wrappers (like `pyGnuastro') from source.
If you install the Gnuastro Python wrapper from a pre-built repository like PyPI, this feature of your Gnuastro library won't be used.
Pre-built wrappers contain the full Gnuastro library that they need within them (you do not even need to have Gnuastro installed at all!).
@cartouche
@noindent
@strong{Can't find the Python3 and Numpy of a virtual environment:} make sure to set the @code{$PYTHON} variable to point to the @code{python3} command of the virtual environment before running @code{./configure}.
Note that you do not need to activate the virtual environment; just point @code{PYTHON} to its Python3 executable, as in the example below:
@example
$ python3 -m venv test-env # Setting up the virtual env.
$ export PYTHON="$(pwd)/test-env/bin/python3"
$ ./configure # Gnuastro's configure script.
@end example
@end cartouche
@item SAO DS9
SAO DS9 (@command{ds9}) is a visualization tool for FITS images.
Gnuastro's @command{astscript-fits-view} program calls DS9 to visualize FITS images.
We have a full appendix on it and how to install it in @ref{SAO DS9}.
Since it is a run-time dependency, it can be installed at any later time (after building and installing Gnuastro).
@item TOPCAT
TOPCAT (@command{topcat}) is a visualization tool for astronomical tables (most commonly: plotting).
Gnuastro's @command{astscript-fits-view} program calls TOPCAT to visualize tables.
We have a full appendix on it and how to install it in @ref{TOPCAT}.
Since it is a run-time dependency, it can be installed at any later time (after building and installing Gnuastro).
@end table
@node Bootstrapping dependencies, Dependencies from package managers, Optional dependencies, Dependencies
@subsection Bootstrapping dependencies
Bootstrapping is only necessary if you have decided to obtain the full version controlled history of Gnuastro, see @ref{Version controlled source} and @ref{Bootstrapping}.
Using the version controlled source enables you to always be up to date with the most recent development work of Gnuastro (bug fixes, new functionalities, improved algorithms, etc.).
If you have downloaded a tarball (see @ref{Downloading the source}), then you can ignore this subsection.
To successfully run the bootstrapping process, there are some additional dependencies to those discussed in the previous subsections.
These are low-level tools that are used by a large collection of programs on Unix-like operating systems, so they are most probably already available on your system.
If they are not already installed, you should be able to easily find them in any GNU/Linux distribution package management system (@command{apt-get}, @command{yum}, @command{pacman}, etc.).
The short names in parentheses in @command{typewriter} font after each package name can be used to search for them in your package manager.
For the GNU Portability Library, GNU Autoconf Archive and @TeX{} Live, it is recommended to use the instructions here, not your operating system's package manager.
@table @asis
@item GNU Portability Library (Gnulib)
@cindex GNU C library
@cindex Gnulib: GNU Portability Library
@cindex GNU Portability Library (Gnulib)
To ensure portability for a wider range of operating systems (those that do not include the GNU C Library, glibc), Gnuastro depends on the GNU Portability Library, or Gnulib.
Gnulib keeps a copy of all the functions in glibc, implemented (as much as possible) to be portable to other operating systems.
The @file{bootstrap} script can automatically clone Gnulib (as a @file{gnulib/} directory inside Gnuastro), however, as described in @ref{Bootstrapping} this is not recommended.
The recommended way to bootstrap Gnuastro is to first clone Gnulib and the Autoconf archives (see below) into a local directory outside of Gnuastro.
Let's call it @file{DEVDIR}@footnote{If you are not a developer in Gnulib or Autoconf archives, @file{DEVDIR} can be a directory that you do not backup.
In this way the large number of files in these projects will not slow down your backup process or take bandwidth (if you backup to a remote server).} (which you can set to any directory; preferentially where you keep your other development projects).
Currently in Gnuastro, both Gnulib and Autoconf archives have to be cloned in the same top directory@footnote{If you already have the Autoconf archives in a separate directory, or cannot clone it in the same directory as Gnulib, or you have it with another directory name (not @file{autoconf-archive/}), you can follow this short step.
Set @file{AUTOCONFARCHIVES} to your desired address.
Then define a symbolic link in @file{DEVDIR} with the following command so Gnuastro's bootstrap script can find it:@*@command{$ ln -s $AUTOCONFARCHIVES $DEVDIR/autoconf-archive}.} like the case here@footnote{If your internet connection is active, but Git complains about the network, it might be due to your network setup not recognizing the git protocol.
In that case use the following URL for the HTTP protocol instead (for Autoconf archives, replace the name): @command{http://git.sv.gnu.org/r/gnulib.git}}:
@example
$ DEVDIR=/home/yourname/Development ## Select any location.
$ mkdir $DEVDIR ## If it doesn't exist!
$ cd $DEVDIR
$ git clone https://git.sv.gnu.org/git/gnulib.git
$ git clone https://git.sv.gnu.org/git/autoconf-archive.git
@end example
Gnulib is a source-based dependency of Gnuastro's bootstrapping process, so simply having it on your computer is enough; there is no need to install it, and thus nothing to check.
@noindent
You now have the full version controlled source of these two repositories in separate directories.
Both these packages are regularly updated, so every once in a while, you can run @command{$ git pull} within them to get any possible updates.
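For example (assuming the @file{DEVDIR} defined above), both can be updated with:

@example
$ cd $DEVDIR/gnulib           && git pull
$ cd $DEVDIR/autoconf-archive && git pull
@end example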
@item GNU Automake (@command{automake})
@cindex GNU Automake
GNU Automake will build the @file{Makefile.in} files in each sub-directory using the (hand-written) @file{Makefile.am} files.
The @file{Makefile.in}s are subsequently used to generate the @file{Makefile}s when the user runs @command{./configure} before building.
To check that you have a working GNU Automake in your system, you can try this command:
@example
$ automake --version
@end example
@item GNU Autoconf (@command{autoconf})
@cindex GNU Autoconf
GNU Autoconf will build the @file{configure} script using the configurations we have defined (hand-written) in @file{configure.ac}.
To check that you have a working GNU Autoconf in your system, you can try this command:
@example
$ autoconf --version
@end example
@item GNU Autoconf Archive
@cindex GNU Autoconf Archive
These are a large collection of tests that can be called to run at @command{./configure} time.
See the explanation under GNU Portability Library (Gnulib) above for instructions on obtaining it and keeping it up to date.
GNU Autoconf Archive is a source-based dependency of Gnuastro's bootstrapping process, so simply having it on your computer is enough; there is no need to install it, and thus nothing to check.
Just do not forget that it has to be in the same directory as Gnulib (described above).
@item GNU Texinfo (@command{texinfo})
@cindex GNU Texinfo
GNU Texinfo is the tool that formats this manual into the various output formats.
To bootstrap Gnuastro you need all of Texinfo's command-line programs.
However, some operating systems package them separately, for example, in Fedora, @command{makeinfo} is packaged in the @command{texinfo-tex} package.
To check that you have a working GNU Texinfo in your system, you can try this command:
@example
$ makeinfo --version
@end example
@item GNU Libtool (@command{libtool})
@cindex GNU Libtool
GNU Libtool is in charge of building all the libraries in Gnuastro.
The libraries contain functions that are used by more than one program and are installed for use in other programs.
They are thus put in a separate directory (@file{lib/}).
To check that you have a working GNU Libtool in your system, you can try this command (and from the output, make sure it is GNU's Libtool):
@example
$ libtool --version
@end example
@item GNU help2man (@command{help2man})
@cindex GNU help2man
GNU help2man is used to convert the output of the @option{--help} option
(@ref{--help}) to the traditional Man page (@ref{Man pages}).
To check that you have a working GNU Help2man in your system, you can try this command:
@example
$ help2man --version
@end example
@item @LaTeX{} and some @TeX{} packages
@cindex @LaTeX{}
@cindex @TeX{} Live
Some of the figures in this book are built by @LaTeX{} (using the PGF/TikZ package).
The @LaTeX{} source for those figures is version controlled for easy maintenance, not the actual figures.
So the @file{./bootstrap} script will run @LaTeX{} to build the figures.
The best way to install @LaTeX{} and all the necessary packages is through @url{https://www.tug.org/texlive/, @TeX{} Live}, which is a package manager for @TeX{}-related tools that is independent of any operating system.
It is thus preferred to the @TeX{} Live versions distributed by your operating system.
To install @TeX{} Live, first download its installer, unpack it, enter the unpacked directory (whose name includes the date it was generated!), and run the installer:
@example
$ baseurl=http://mirrors.rit.edu/CTAN/systems/texlive/tlnet
$ wget $baseurl/install-tl-unx.tar.gz
$ tar -xf install-tl-unx.tar.gz
$ cd install-tl-20*
$ ./install-tl
@end example
The output of the last command above is an interactive shell that will allow you to customize the installation.
There are two important parts which you need to edit:
@table @asis
@item Basic scheme
By default the full package repository will be downloaded and installed (around 9 Gigabytes!) which can take @emph{very} long to download and to update later.
However, most packages are not needed by everyone!
So it is easier, faster and better to install only the ``Basic scheme'' (consisting of only the most basic @TeX{} and @LaTeX{} packages, which is almost 300 Megabytes).
To do this, from the top interactive installer environment press the following keys:
@enumerate
@item
`@key{S}' to select the installation scheme.
@item
`@key{d}' to select the basic scheme.
@item
`@key{R}' to return to the main menu.
@end enumerate
@item Installation path
By default, @TeX{} Live will try to install in a system-wide location, which requires root permissions.
A better solution is to install @TeX{} Live in a user-accessible directory, to avoid the need to become root when installing or updating packages.
This also helps on servers where you do not have root access at all.
To install it in another directory, take the following steps from the main menu:
@enumerate
@item
`@key{D}' to enter the directory settings.
@item
`@key{1}' to select the ``main tree'' (the base of all the other directories by default).
@item
You will be prompted to enter a new directory.
To get the directory name, let's open a new/different terminal.
Run the following commands to make and get the absolute location of the directory (no problem if the first command says that @file{$HOME/.local} already exists).
If you already have a different place to host custom-built software, you can modify this step accordingly.
@example
$ mkdir $HOME/.local; mkdir $HOME/.local/texlive
$ cd $HOME/.local/texlive
$ pwd
@end example
Copy the result of the last command above, come back to the terminal with the @TeX{} Live interactive installation and paste the directory there.
Afterwards, you can close the temporary terminal.
@item
`@key{R}' to return to the main menu.
@end enumerate
@end table
After the customizations above are implemented, simply press the `@key{I}' key to start the installation of @TeX{} Live.
After the installation finishes, be sure to set the environment variables as suggested at the end of the installer's output.
For more on how to insert the given directories for the given environment PATHs, see @ref{Installation directory}.
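As a sketch of what that setup might look like (the year and architecture directory names below are hypothetical placeholders; use the ones printed by your installer):

@example
## Hypothetical paths; use those suggested by the installer.
$ export PATH="$PATH:$HOME/.local/texlive/20XX/bin/x86_64-linux"
@end example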
After the installation and setup of the PATHs are complete, you need to install all the necessary @TeX{} packages for a successful Gnuastro bootstrap.
To do that, run the command below (if it complains about not finding @code{tlmgr}, there is a problem in the PATHs above).
@example
$ tlmgr install epsf jknapltx caption biblatex biber \
                logreq xstring xkeyval pgf xcolor \
                pgfplots times rsfs ps2eps epspdf
@end example
Generally (outside of Gnuastro's development), any time you need a package you do not have@footnote{After running @TeX{} or @LaTeX{}, you might get a warning complaining about a @file{missingfile}.
Run `@command{tlmgr info missingfile}' to see the package(s) containing that file, which you can then install.}, simply install it with the @code{tlmgr} command above.
It is very similar to how you install software from your operating system's package manager.
@item ImageMagick (@command{imagemagick})
@cindex ImageMagick
ImageMagick is a wonderful and robust program for image manipulation on the command-line.
The @file{bootstrap} script uses it to convert the book images into the formats necessary for the book's various outputs.
Since ImageMagick version 7, it is necessary to edit the policy file (@file{/etc/ImageMagick-7/policy.xml}) so that it contains the following line (it may be present but commented out; in that case, un-comment it):
@example
<policy domain="coder" rights="read|write" pattern="@{PS,PDF,XPS@}"/>
@end example
If the following line is present, it is also necessary to comment/remove it.
@example
<policy domain="delegate" rights="none" pattern="gs" />
@end example
To learn more about the ImageMagick security policy please see: @url{https://imagemagick.org/script/security-policy.php}.
To check that you have a working ImageMagick in your system, you can try this command:
@example
$ convert --version
@end example
@end table
@node Dependencies from package managers, , Bootstrapping dependencies, Dependencies
@subsection Dependencies from package managers
@cindex Package managers
@cindex Source code building
@cindex Building from source
@cindex Compiling from source
@cindex Source code compilation
@cindex Distributions, GNU/Linux
The most basic way to install a package on your system is to build it from source yourself.
Alternatively, you can use your operating system's package manager to download pre-compiled files and install them.
The latter choice is easier and faster.
However, we recommend that you build the @ref{Mandatory dependencies} yourself from source (all necessary commands and links are given in the respective section).
Here are some basic reasons behind this recommendation.
@enumerate
@item
Your operating system's pre-built software might not be the most recent release.
For example, Gnuastro itself is also packaged in some package managers.
For the list see: @url{https://repology.org/project/gnuastro/versions}.
You will notice that Gnuastro's version in some operating systems is more than 10 versions old!
It is the same for all the dependencies of Gnuastro.
@item
For each package, Gnuastro might perform better with (or require) certain configuration options that your distribution's package managers did not enable for you.
If present, these configuration options are explained during the installation of each in the sections below (for example, in @ref{CFITSIO}).
When the proper configuration has not been set, the programs should complain and inform you.
@item
For libraries, package managers might separate the binary file from the header files, which can cause confusion; see @ref{Known issues}.
@item
As with any other tool, the science you derive from Gnuastro's tools highly depends on these lower-level dependencies, so generally it is much better to have a close connection with them.
By reading their manuals, installing them and staying up to date with changes/bugs in them, your scientific results and understanding (of what is going on, and thus how you interpret your scientific results) will also correspondingly improve.
@end enumerate
Depending on your package manager, you can use any of the following commands to install the mandatory and optional dependencies.
If your package manager is not included in the list below, please send us the respective command, so we add it.
For better archivability and compression ratios, Gnuastro's recommended tarball compression format is that of the @url{http://lzip.nongnu.org/lzip.html, Lzip} program, see @ref{Release tarball}.
Therefore, the package manager commands below also contain Lzip.
@table @asis
@item @command{apt-get} (Debian-based OSs: Debian, Ubuntu, Linux Mint, etc.)
@cindex Debian
@cindex Ubuntu
@cindex Linux Mint
@cindex @command{apt-get}
@cindex Advanced Packaging Tool (APT, Debian)
@url{https://en.wikipedia.org/wiki/Debian,Debian} is one of the oldest
GNU/Linux
distributions@footnote{@url{https://en.wikipedia.org/wiki/List_of_Linux_distributions#Debian-based}}.
It thus has a very extended user community and a robust internal structure and standards.
All of it is free software and based on the work of volunteers around the world.
Many distributions are thus derived from it, for example, Ubuntu and Linux Mint.
This arguably makes Debian-based OSs the largest, and most used, class of GNU/Linux distributions.
All of them use Debian's Advanced Packaging Tool (APT, for example, @command{apt-get}) for managing packages.
@table @asis
@item Development features (Ubuntu or derivatives)
By default, a newly installed Ubuntu does not contain the low-level tools that are necessary for building software from source.
Therefore, if you are using Ubuntu, please run the following command.
@example
$ sudo apt-get install gcc make zlib1g-dev lzip
@end example
@item Mandatory dependencies
Without these, Gnuastro cannot be built; they are necessary for input/output and low-level mathematics (see @ref{Mandatory dependencies})!
@example
$ sudo apt-get install libgsl-dev libcfitsio-dev \
                       wcslib-dev
@end example
@item Optional dependencies
If present, these libraries can be used in Gnuastro's build for extra features, see @ref{Optional dependencies}.
@example
$ sudo apt-get install ghostscript libtool-bin \
                       libjpeg-dev libtiff-dev \
                       libgit2-dev curl
@end example
@item Programs to view FITS images or tables
These are not used in Gnuastro's build.
They can just help in viewing the inputs/outputs independently of Gnuastro!
@example
$ sudo apt-get install saods9 topcat
@end example
@end table
@noindent
Gnuastro is @url{https://tracker.debian.org/pkg/gnuastro,packaged} in Debian (and thus some of its derivative operating systems).
Just make sure it is the most recent version.
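For example, before deciding, you can check which version your Debian-based OS would install and compare it with the latest release:

@example
$ apt-cache policy gnuastro
@end example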
@item @command{dnf}
@itemx @command{yum} (Red Hat-based OSs: Red Hat, Fedora, CentOS, Scientific Linux, etc.)
@cindex RHEL
@cindex Fedora
@cindex CentOS
@cindex Red Hat
@cindex @command{dnf}
@cindex @command{yum}
@cindex Scientific Linux
@url{https://en.wikipedia.org/wiki/Red_Hat,Red Hat Enterprise Linux} (RHEL) is released by Red Hat Inc.
RHEL requires paid subscriptions for use of its binaries and support.
But since it is free software, many other teams use its code to spin off their own distributions based on RHEL.
Red Hat-based GNU/Linux distributions initially used the ``Yellowdog Updater, Modified'' (YUM) package manager, which has since been replaced by ``Dandified yum'' (DNF).
If the latter is not available on your system, you can use @command{yum} instead of @command{dnf} in the command below.
@table @asis
@item Mandatory dependencies
Without these, Gnuastro cannot be built; they are necessary for input/output and low-level mathematics (see @ref{Mandatory dependencies})!
@example
$ sudo dnf install gsl-devel cfitsio-devel \
                   wcslib-devel
@end example
@item Optional dependencies
If present, these libraries can be used in Gnuastro's build for extra features, see @ref{Optional dependencies}.
@example
$ sudo dnf install ghostscript libtool \
                   libjpeg-devel libtiff-devel \
                   libgit2-devel lzip curl
@end example
@item Programs to view FITS images or tables
These are not used in Gnuastro's build.
They can just help in viewing the inputs/outputs independently of Gnuastro!
@example
$ sudo dnf install saods9 topcat
@end example
@end table
@item @command{brew} (macOS)
@cindex macOS
@cindex Homebrew
@cindex MacPorts
@cindex @command{brew}
@url{https://en.wikipedia.org/wiki/MacOS,macOS} is the operating system used on Apple devices.
macOS does not come with a package manager pre-installed, but several widely used, third-party package managers exist, such as Homebrew or MacPorts.
Both are free software.
Currently we have only tested Gnuastro's installation with Homebrew as described below.
If not already installed, first obtain Homebrew by following the instructions at @url{https://brew.sh}.
@table @asis
@item Mandatory dependencies
Without these, Gnuastro cannot be built; they are necessary for input/output and low-level mathematics (see @ref{Mandatory dependencies})!
Homebrew manages packages in different `taps'.
To install WCSLIB via Homebrew you will need to @command{tap} into @command{brewsci/science} first (the tap may change in the future, but can be found by calling @command{brew search wcslib}).
@example
$ brew tap brewsci/science
$ brew install wcslib gsl cfitsio
@end example
@item Optional dependencies
If present, these libraries can be used in Gnuastro's build for extra features, see @ref{Optional dependencies}.
@example
$ brew install ghostscript libtool libjpeg \
               libtiff libgit2 curl lzip
@end example
@item Programs to view FITS images or tables
These are not used in Gnuastro's build.
They can just help in viewing the inputs/outputs independently of Gnuastro!
@example
$ brew install saoimageds9 topcat
@end example
@end table
@item @command{pacman} (Arch Linux)
@cindex Arch GNU/Linux
@cindex @command{pacman}
@url{https://en.wikipedia.org/wiki/Arch_Linux,Arch Linux} is a smaller GNU/Linux distribution, which follows the KISS principle (``keep it simple, stupid'') as a general guideline.
It ``focuses on elegance, code correctness, minimalism and simplicity, and expects the user to be willing to make some effort to understand the system's operation''.
Arch GNU/Linux uses ``Package manager'' (Pacman) to manage its packages/components.
@table @asis
@item Mandatory dependencies
Without these, Gnuastro cannot be built; they are necessary for input/output and low-level mathematics (see @ref{Mandatory dependencies})!
@example
$ sudo pacman -S gsl cfitsio wcslib
@end example
@item Optional dependencies
If present, these libraries can be used in Gnuastro's build for extra features, see @ref{Optional dependencies}.
@example
$ sudo pacman -S ghostscript libtool libjpeg \
                 libtiff libgit2 curl lzip
@end example
@item Programs to view FITS images or tables
These are not used in Gnuastro's build.
They can just help in viewing the inputs/outputs independently of Gnuastro!
SAO DS9 and TOPCAT are not available in the standard Arch GNU/Linux repositories.
However, installing and using both is very easy from their own web pages, as described in @ref{SAO DS9} and @ref{TOPCAT}.
@end table
@item @command{zypper} (openSUSE and SUSE Linux Enterprise Server)
@cindex openSUSE
@cindex SUSE Linux Enterprise Server
@cindex @command{zypper}, OpenSUSE package manager
SUSE Linux Enterprise Server@footnote{@url{https://www.suse.com/products/server}} (SLES) is the commercial offering, which shares code and tools with the community-developed openSUSE.
Many additional packages are offered in the Build Service@footnote{@url{https://build.opensuse.org}}.
openSUSE and SLES use @command{zypper} (command-line) and YaST (GUI) for managing repositories and packages.
@table @asis
@item Configuration
When building Gnuastro, run the configure script with the following @code{CPPFLAGS} environment variable:
@example
$ ./configure CPPFLAGS="-I/usr/include/cfitsio"
@end example
@item Mandatory dependencies
Without these, Gnuastro cannot be built; they are necessary for input/output and low-level mathematics (see @ref{Mandatory dependencies})!
@example
$ sudo zypper install gsl-devel cfitsio-devel \
                      wcslib-devel
@end example
@item Optional dependencies
If present, these libraries can be used in Gnuastro's build for extra features, see @ref{Optional dependencies}.
@example
$ sudo zypper install ghostscript_any libtool \
                      pkgconfig libcurl-devel \
                      libgit2-devel \
                      libjpeg62-devel \
                      libtiff-devel curl
@end example
@item Programs to view FITS images or tables
These are not used in Gnuastro's build.
They can just help in viewing the inputs/outputs independently of Gnuastro!
@example
$ sudo zypper install ds9 topcat
@end example
@end table
@c Gnuastro is @url{https://software.opensuse.org/package/gnuastro,packaged}
@c in @command{zypper}. Just make sure it is the most recent version.
@end table
Usually, when libraries are installed by operating system package managers, there should be no problems when configuring and building other programs from source (that depend on the libraries: Gnuastro in this case).
However, in some special conditions, problems may pop up during the configuration, building, or checking/running of any of Gnuastro's programs.
The most common of such problems and their solution are discussed below.
@cartouche
@noindent
@strong{Not finding library during configuration:} If a library is installed, but during Gnuastro's @command{configure} step the library is not found, then configure Gnuastro like the command below (correcting @file{/path/to/lib}).
For more, see @ref{Known issues} and @ref{Installation directory}.
@example
$ ./configure LDFLAGS="-L/path/to/lib"
@end example
@end cartouche
@cartouche
@noindent
@strong{Not finding header (.h) files while building:} If a library is installed, but during Gnuastro's @command{make} step, the library's header (file with a @file{.h} suffix) is not found, then configure Gnuastro like the command below (correcting @file{/path/to/include}).
For more, see @ref{Known issues} and @ref{Installation directory}.
@example
$ ./configure CPPFLAGS="-I/path/to/include"
@end example
@end cartouche
@cartouche
@noindent
@strong{Gnuastro's programs do not run during check or after install:}
If a library is installed, but the programs do not run due to linking problems, set the @code{LD_LIBRARY_PATH} variable like below (assuming Gnuastro is installed in @file{/path/to/installed}).
For more, see @ref{Known issues} and @ref{Installation directory}.
@example
$ export LD_LIBRARY_PATH="$LD_LIBRARY_PATH:/path/to/installed/lib"
@end example
@end cartouche
@node Downloading the source, Build and install, Dependencies, Installation
@section Downloading the source
Gnuastro's source code can be downloaded in two ways.
As a tarball, ready to be configured and installed on your system (as described in @ref{Quick start}), see @ref{Release tarball}.
If you want official releases of stable versions, this is the best, easiest and most common option.
Alternatively, you can clone the version controlled history of Gnuastro, run one extra bootstrapping step and then follow the same steps as the tarball.
This will give you access to all the most recent work that will be included in the next release along with the full project history.
The process is thoroughly introduced in @ref{Version controlled source}.
@menu
* Release tarball:: Download a stable official release.
* Version controlled source:: Get and use the version controlled source.
@end menu
@node Release tarball, Version controlled source, Downloading the source, Downloading the source
@subsection Release tarball
A release tarball (commonly compressed) is the most common way of obtaining free and open source software.
A tarball is a snapshot of one particular moment in the Gnuastro development history along with all the necessary files to configure, build, and install Gnuastro easily (see @ref{Quick start}).
It is very straightforward and needs the least set of dependencies (see @ref{Mandatory dependencies}).
Gnuastro has tarballs for official stable releases and pre-releases for testing.
See @ref{Version numbering} for more on the two types of releases and the formats of the version numbers.
The URLs for each type of release are given below.
@table @asis
@item Official stable releases (@url{http://ftp.gnu.org/gnu/gnuastro}):
This URL hosts the official stable releases of Gnuastro.
Always use the most recent version (see @ref{Version numbering}).
Clicking on the ``Last modified'' title of the second column will sort the files by their date, which you can also use to find the latest version.
It is recommended to use a mirror to download these tarballs, please visit @url{http://ftpmirror.gnu.org/gnuastro/} and see below.
@item Pre-release tarballs (@url{http://alpha.gnu.org/gnu/gnuastro}):
This URL contains unofficial pre-release versions of Gnuastro.
The pre-release versions of Gnuastro here are for enthusiasts to try out before an official release.
If there are problems or bugs, the testers will inform the developers so they can be fixed before the next official release.
See @ref{Version numbering} to understand how the version numbers here are formatted.
If you want to remain even more up-to-date with the developing activities, please clone the version controlled source as described in @ref{Version controlled source}.
@end table
@cindex Gzip
@cindex Lzip
Gnuastro's official/stable tarball is released with two formats: Gzip (with suffix @file{.tar.gz}) and Lzip (with suffix @file{.tar.lz}).
The pre-release tarballs (after version 0.3) are released only as an Lzip tarball.
Gzip is a very well-known and widely used compression program created by GNU and available in most systems.
However, Lzip provides a better compression ratio and more robust archival capacity.
For example, Gnuastro 0.3's tarball was 2.9MB and 4.3MB with Lzip and Gzip respectively, see the @url{http://www.nongnu.org/lzip/lzip.html, Lzip web page} for more.
Lzip might not be pre-installed on your operating system; if so, installing it from your operating system's package manager or from source is very easy and fast (it is a very small program).
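For example, once Lzip is present, a recent GNU Tar can unpack an Lzip tarball directly, or you can pipe through @command{lzip} (replace @file{X.X} with the version you downloaded):

@example
$ tar -xf gnuastro-X.X.tar.lz                # GNU Tar with Lzip.
$ lzip -cd gnuastro-X.X.tar.lz | tar -xf -   # Any Tar implementation.
@end example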
The GNU FTP server is mirrored (has backups) in various locations on the globe (@url{http://www.gnu.org/order/ftp.html}).
You can use the mirror closest to your location for a faster download.
Note that only some mirrors keep track of the pre-release (alpha) tarballs.
Also note that if you want to download immediately after an announcement (see @ref{Announcements}), the mirrors might need some time to synchronize with the main GNU FTP server.
@node Version controlled source, , Release tarball, Downloading the source
@subsection Version controlled source
@cindex Git
@cindex Version control
The publicly distributed Gnuastro tarball (for example, @file{gnuastro-X.X.tar.gz}) does not contain the revision history; it is only a snapshot of the source code at one significant instant of Gnuastro's history (specified by the version number, see @ref{Version numbering}), ready to be configured and built.
For successful development, the revision history of the code can be very useful to track when something was added or changed; some updates that are not yet officially released might also be in it.
We use Git for the version control of Gnuastro.
For those who are not familiar with it, we recommend the @url{https://git-scm.com/book/en, ProGit book}.
The whole book is publicly available for online reading and downloading and does a wonderful job at explaining the concepts and best practices.
Let's assume you want to keep Gnuastro in the @file{TOPGNUASTRO} directory (can be any directory, change the value below).
The full version controlled history of Gnuastro can be cloned in @file{TOPGNUASTRO/gnuastro} by running the following commands@footnote{If your internet connection is active, but Git complains about the network, it might be due to your network setup not recognizing the Git protocol.
In that case use the following URL which uses the HTTP protocol instead: @command{http://git.sv.gnu.org/r/gnuastro.git}}:
@example
$ TOPGNUASTRO=/home/yourname/Research/projects/
$ cd $TOPGNUASTRO
$ git clone git://git.sv.gnu.org/gnuastro.git
@end example
@noindent
The @file{$TOPGNUASTRO/gnuastro} directory will contain hand-written (version controlled) source code for Gnuastro's programs, libraries, this book and the tests.
All are divided into sub-directories with standard and very descriptive names.
The version controlled files in the top cloned directory are either mainly in capital letters (for example, @file{THANKS} and @file{README}) or mainly written in lowercase (for example, @file{configure.ac} and @file{Makefile.am}).
The former are non-programming, standard writing for human readers containing high-level information about the whole package.
The latter are instructions to customize the GNU build system for Gnuastro.
For more on Gnuastro's source code structure, please see @ref{Developing}.
We will not go any deeper here.
The cloned Gnuastro source cannot immediately be configured, compiled, or installed since it only contains hand-written files, not automatically generated or imported files which do all the hard work of the build process.
See @ref{Bootstrapping} for the process of generating and importing those files (it is not too hard!).
Once you have bootstrapped Gnuastro, you can run the standard procedures (in @ref{Quick start}).
Very soon after you have cloned it, Gnuastro's main @file{master} branch will be updated on the main repository (since the developers are actively working on Gnuastro); for the best practices in keeping your local history in sync with the main repository, see @ref{Synchronizing}.
@menu
* Bootstrapping:: Adding all the automatically generated files.
* Synchronizing:: Keep your local clone up to date.
@end menu
@node Bootstrapping, Synchronizing, Version controlled source, Version controlled source
@subsubsection Bootstrapping
@cindex Bootstrapping
@cindex GNU Autoconf Archive
@cindex Gnulib: GNU Portability Library
@cindex GNU Portability Library (Gnulib)
@cindex Automatically created build files
@noindent
The version controlled source lacks the files that Gnuastro developers have not written themselves, or that are automatically generated.
These automatically generated files are included in the distributed tarball for each distribution (for example, @file{gnuastro-X.X.tar.gz}, see @ref{Version numbering}) and make it easy to immediately configure, build, and install Gnuastro.
However from the perspective of version control, they are just bloatware and sources of confusion (since they are not changed by Gnuastro developers).
The process of automatically building and importing necessary files into the cloned directory is known as @emph{bootstrapping}.
After bootstrapping is done you are ready to follow the default GNU build steps that you normally run on the tarball (@command{./configure && make} for example, described more in @ref{Quick start}).
Some known issues may occur during the bootstrapping process; to see how to fix them, please see @ref{Known issues}.
All the instructions for an automatic bootstrapping are available in @file{bootstrap} and configured using @file{bootstrap.conf}.
@file{bootstrap} and @file{COPYING} (which contains the software copyright notice) are the only files not written by Gnuastro developers but under version control to enable simple bootstrapping and legal information on usage immediately after cloning.
@file{bootstrap} is maintained by the GNU Portability Library (Gnulib) and is an identical copy, so do not make any changes to this file since it will be replaced whenever Gnulib releases an update.
Make all your changes in @file{bootstrap.conf}.
The bootstrapping process has its own separate set of dependencies, the full list is given in @ref{Bootstrapping dependencies}.
They are generally very low-level and used by a very large set of commonly used programs, so they are probably already installed on your system.
The simplest way to bootstrap Gnuastro is to simply run the bootstrap script within your cloned Gnuastro directory as shown below.
However, please read the next paragraph before doing so (see @ref{Version controlled source} for @file{TOPGNUASTRO}).
@example
$ cd TOPGNUASTRO/gnuastro
$ ./bootstrap # Requires internet connection
@end example
Without any options, @file{bootstrap} will clone Gnulib within your cloned Gnuastro directory (@file{TOPGNUASTRO/gnuastro/gnulib}) and download the necessary Autoconf archives macros.
So if you run bootstrap like this, you will need an internet connection every time you decide to bootstrap.
Also, Gnulib is a large package and cloning it can be slow.
It will also keep the full Gnulib repository within your Gnuastro repository, so if another one of your projects also needs Gnulib, and you insist on running bootstrap like this, you will have two copies.
In case you regularly backup your important files, Gnulib will also slow down the backup process.
Therefore while the simple invocation above can be used with no problem, it is not recommended.
To do better, see the next paragraph.
The recommended way to get these two packages is thoroughly discussed in @ref{Bootstrapping dependencies} (in short: clone them in the separate @file{DEVDIR/} directory).
The following commands will take you into the cloned Gnuastro directory and run the @file{bootstrap} script, while telling it to copy some files instead of making symbolic links (with the @option{--copy} option; this is not mandatory@footnote{The @option{--copy} option is recommended because some backup systems might do strange things with symbolic links.}) and where to look for Gnulib (with the @option{--gnulib-srcdir} option).
Please note that the address given to @option{--gnulib-srcdir} has to be an absolute address (so do not use @file{~} or @file{../} for example).
@example
$ cd $TOPGNUASTRO/gnuastro
$ ./bootstrap --copy --gnulib-srcdir=$DEVDIR/gnulib
@end example
@cindex GNU Texinfo
@cindex GNU Libtool
@cindex GNU Autoconf
@cindex GNU Automake
@cindex GNU C library
@cindex GNU build system
Since Gnulib and Autoconf archives are now available in your local directories, you do not need an internet connection every time you decide to remove all un-tracked files and redo the bootstrap (see box below).
You can also use the same command on any other project that uses Gnulib.
All the necessary GNU C library functions, Autoconf macros and Automake inputs are now available along with the book figures.
The standard GNU build system (@ref{Quick start}) will do the rest of the job.
@cartouche
@noindent
@strong{Undoing the bootstrap:}
During the development, it might happen that you want to remove all the automatically generated and imported files.
In other words, you might want to reverse the bootstrap process.
Fortunately Git has a good program for this job: @command{git clean}.
Run the following command and every file that is not version controlled will be removed.
@example
git clean -fxd
@end example
@noindent
It is best to commit any recent change before running this command.
You might have created new files since the last commit and if they have not been committed, they will all be gone forever (using @command{rm}).
To get a list of the non-version-controlled files instead of deleting them, add the @option{-n} option to @command{git clean}, so it becomes @option{-fxdn}.
@end cartouche
Besides the @file{bootstrap} and @file{bootstrap.conf}, the @file{bootstrapped/} directory and @file{README-hacking} file are also related to the bootstrapping process.
The former hosts all the imported (bootstrapped) directories.
Thus, in the version controlled source, it only contains a @file{README} file, but in the distributed tarball it also contains sub-directories filled with all bootstrapped files.
@file{README-hacking} contains a summary of the bootstrapping process discussed in this section.
It is a necessary reference when you have not built this book yet.
It is thus not distributed in the Gnuastro tarball.
@node Synchronizing, , Bootstrapping, Version controlled source
@subsubsection Synchronizing
The bootstrapping script (see @ref{Bootstrapping}) is not regularly needed: you mainly need it after you have cloned Gnuastro (once) and whenever you want to re-import the files from Gnulib, or Autoconf archives@footnote{@url{https://savannah.gnu.org/task/index.php?13993} is defined for you to check if significant (for Gnuastro) updates are made in these repositories, since the last time you pulled from them.} (not too common).
However, Gnuastro developers are constantly working on Gnuastro and are pushing their changes to the official repository.
Therefore, your local Gnuastro clone will soon become outdated.
Gnuastro has two mailing lists dedicated to its developing activities (see @ref{Developing mailing lists}).
Subscribing to them can help you decide when to synchronize with the official repository.
To pull all the most recent work in Gnuastro, run the following command from the top Gnuastro directory.
If you do not already have a built system, ignore @command{make distclean}.
The separate steps are described in detail afterwards.
@example
$ make distclean && git pull && autoreconf -f
@end example
@noindent
You can also run the commands separately:
@example
$ make distclean
$ git pull
$ autoreconf -f
@end example
@cindex GNU Autoconf
@cindex Mailing list: info-gnuastro
@cindex @code{info-gnuastro@@gnu.org}
If Gnuastro was already built in this directory, you do not want some outputs from the previous version being mixed with outputs from the newly pulled work.
Therefore, the first step is to clean/delete all the built files with @command{make distclean}.
Fortunately the GNU build system allows the separation of source and built files (in separate directories).
This is a great feature to keep your source directory clean and you can use it to avoid the cleaning step.
Gnuastro comes with a script with some useful options for this job.
It is useful if you regularly pull recent changes, see @ref{Separate build and source directories}.
After the pull, we must re-configure Gnuastro with @command{autoreconf -f} (part of GNU Autoconf).
It will update the @file{./configure} script and all the @file{Makefile.in}@footnote{In the GNU build system, @command{./configure} will use the @file{Makefile.in} files to create the necessary @file{Makefile} files that are later read by @command{make} to build the package.} files based on the hand-written configurations (in @file{configure.ac} and the @file{Makefile.am} files).
After running @command{autoreconf -f}, a warning about @code{TEXI2DVI} might show up; you can ignore it.
The most important reason for rebuilding Gnuastro's build system is to generate/update the version number for your updated Gnuastro snapshot.
This generated version number will include the commit information (see @ref{Version numbering}).
The version number is included in nearly all outputs of Gnuastro's programs, therefore it is vital for reproducing an old result.
As a summary, be sure to run `@command{autoreconf -f}' after every change in the Git history.
This includes synchronization with the main server or even a commit you have made yourself.
If you would like to see what has changed since you last synchronized your local clone, you can take the following steps instead of the simple command above (do not type anything after @code{#}):
@example
$ git checkout master # Confirm if you are on master.
$ git fetch origin # Fetch all new commits from server.
$ git log master..origin/master # See all the new commit messages.
$ git merge origin/master # Update your master branch.
$ autoreconf -f # Update the build system.
@end example
@noindent
By default @command{git log} prints the most recent commit first, add the @option{--reverse} option to see the changes chronologically.
To see exactly what has been changed in the source code along with the commit message, add a @option{-p} option to the @command{git log}.
If you want to make changes in the code, have a look at @ref{Developing} to get started easily.
Be sure to commit your changes in a separate branch (keep your @code{master} branch to follow the official repository) and re-run @command{autoreconf -f} after the commit.
If you intend to send your work to us, you can safely use your commit since it will be ultimately recorded in Gnuastro's official history.
If not, please upload your separate branch to a public hosting service, for example, @url{https://codeberg.org, Codeberg}, and link to it in your report/paper.
Alternatively, run @command{make distcheck} and upload the output @file{gnuastro-X.X.X.XXXX.tar.gz} to a publicly accessible web page so your results can be considered scientific (reproducible) later.
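As a rough sketch of the branching workflow described above (the branch name and commit message are hypothetical):

@example
$ git checkout -b my-fix          # Separate branch for your work.
$ git commit -a -m "Fix a typo"   # Record the change.
$ autoreconf -f                   # Update the version number.
@end example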
@node Build and install, , Downloading the source, Installation
@section Build and install
This section is basically a longer explanation of the sequence of commands given in @ref{Quick start}.
You can safely skip this section if you did not have any problems during the @ref{Quick start} steps and all of the following hold: you want all of Gnuastro's programs installed on your system; you do not want to change the executable names during or after installation; you have root access to install the programs in the default system-wide directory; and the Letter paper size of the printed book is fine for you.
In short, if everything is working and you do not feel like going into the details, you can skip ahead.
If you have any of the above concerns, or you want to understand the details for better control over your build and install, read on.
The dependencies which you will need prior to configuring, building and installing Gnuastro are explained in @ref{Dependencies}.
The first three steps in @ref{Quick start} need no extra explanation, so we will skip them and start with an explanation of Gnuastro specific configuration options and a discussion on the installation directory in @ref{Configuring}, followed by some smaller subsections: @ref{Tests}, @ref{A4 print book}, and @ref{Known issues} which explains the solutions to known problems you might encounter in the installation steps and ways you can solve them.
@menu
* Configuring:: Configure Gnuastro
* Separate build and source directories:: Keeping derivate/build files separate.
* Tests:: Run tests to see if it is working.
* A4 print book:: Customize the print book.
* Known issues:: Issues you might encounter.
@end menu
@node Configuring, Separate build and source directories, Build and install, Build and install
@subsection Configuring
@pindex ./configure
@cindex Configuring
The @command{$ ./configure} step is the most important step in the build and install process.
All the required packages, libraries, headers and environment variables are checked in this step.
The behaviors of @command{make} and @command{make install} can also be set through command-line options to this command.
@cindex Configure options
@cindex Customizing installation
@cindex Installation, customizing
The configure script accepts various arguments and options which enable the final user to highly customize whatever she is building.
The options to configure are generally very similar to normal program options explained in @ref{Arguments and options}.
Similar to all GNU programs, you can get a full list of the options along with a short explanation by running
@example
$ ./configure --help
@end example
@noindent
@cindex GNU Autoconf
A complete explanation is also included in the @file{INSTALL} file.
Note that this file was written by the authors of GNU Autoconf (which builds the @file{configure} script), therefore it is common for all programs which use the @command{$ ./configure} script for building and installing, not just Gnuastro.
Here we only discuss the cases where you do not have superuser access to the system, or where you want to change the executable names.
But before that, we review the options to configure that are particular to Gnuastro.
@menu
* Gnuastro configure options:: Configure options particular to Gnuastro.
* Installation directory:: Specify the directory to install.
* Executable names:: Changing executable names.
* Configure and build in RAM:: For minimal use of HDD or SSD, and clean source.
@end menu
@node Gnuastro configure options, Installation directory, Configuring, Configuring
@subsubsection Gnuastro configure options
@cindex @command{./configure} options
@cindex Configure options particular to Gnuastro
Most of the options to configure (which have to do with building) are similar for every program which uses this script.
Here the options that are particular to Gnuastro are discussed.
The next topics explain the usage of other configure options which can be applied to any program using the GNU build system (through the configure script).
@vtable @option
@item --enable-debug
@cindex Valgrind
@cindex Debugging
@cindex GNU Debugger
Compile/build Gnuastro with debugging information, no optimization and without shared libraries.
In order to allow more efficient programs when using Gnuastro (after the installation), by default Gnuastro is built with third-level optimization (a very high level) and no debugging information.
By default, libraries are also built for static @emph{and} shared linking (see @ref{Linking}).
However, when there are crashes or unexpected behavior, these three features can hinder the process of localizing the problem.
This configuration option is identical to manually calling the configuration script with @code{CFLAGS="-g -O0" --disable-shared}.
In the (rare) situations where you need to do your debugging on the shared libraries, do not use this option.
Instead run the configure script by explicitly setting @code{CFLAGS} like this:
@example
$ ./configure CFLAGS="-g -O0"
@end example
@item --enable-check-with-valgrind
@cindex Valgrind
Do the @command{make check} tests through Valgrind.
Therefore, if any crashes or memory-related issues (segmentation faults in particular) occur in the tests, the output of Valgrind will also be put in the @file{tests/test-suite.log} file without having to manually modify the check scripts.
This option will also activate Gnuastro's debug mode (see the @option{--enable-debug} configure-time option described above).
Valgrind is free software.
It is a program for easy checking of memory-related issues in programs.
It runs a program within its own controlled environment and can thus identify the exact line-number in the program's source where a memory-related issue occurs.
However, it can significantly slow-down the tests.
So this option is only useful when a segmentation fault is found during @command{make check}.
@item --enable-progname
Only build and install @file{progname} along with any other program that is enabled in this fashion.
@file{progname} is the name of the executable without the @file{ast} prefix, for example, @file{crop} for Crop (with the executable name @file{astcrop}).
Note that by default all the programs will be installed.
This option (and the @option{--disable-progname} options) are only relevant when you do not want to install all the programs.
Therefore, if this option is called for any of the programs in Gnuastro, any program which is not explicitly enabled will not be built or installed.
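For example, with the hypothetical call below, only Crop and Arithmetic (and no other program) would be built and installed:
@example
$ ./configure --enable-crop --enable-arithmetic
@end example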
@item --disable-progname
@itemx --enable-progname=no
Do not build or install the program named @file{progname}.
This is very similar to @option{--enable-progname}, but will build and install all the other programs except this one.
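For example, the hypothetical call below would build and install every program except Convolve:
@example
$ ./configure --disable-convolve
@end example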
@cartouche
@noindent
@strong{Note:} If some programs are enabled and some are disabled, it is equivalent to simply enabling those that were enabled.
Listing the disabled programs is redundant.
@end cartouche
@item --enable-gnulibcheck
@cindex GNU C library
@cindex Gnulib: GNU Portability Library
@cindex GNU Portability Library (Gnulib)
Enable checks on the GNU Portability Library (Gnulib).
Gnulib is used by Gnuastro to enable users of non-GNU based operating systems (that do not use GNU C library or glibc) to compile and use the advanced features that this library provides.
We make extensive use of such functions.
If you give this option to @command{$ ./configure}, when you run @command{$ make check}, first the functions in Gnulib will be tested, then the Gnuastro executables.
If your operating system does not support glibc, or has an older version of it, and you have problems in the build process (@command{$ make}), you can give this flag to configure to see if the problem is caused by Gnulib not supporting your operating system, or by Gnuastro itself; see @ref{Known issues}.
@item --disable-guide-message
@itemx --enable-guide-message=no
Do not print a guiding message during the GNU Build process of @ref{Quick start}.
By default, after each step, a message is printed guiding the user what the next command should be.
Therefore, after @command{./configure}, it will suggest running @command{make}.
After @command{make}, it will suggest running @command{make check} and so on.
If Gnuastro is configured with this option, for example
@example
$ ./configure --disable-guide-message
@end example
then these messages will not be printed after any step (as in most programs).
For people who are not yet fully accustomed to this build system, these guidelines can be very useful and encouraging.
However, if you find those messages annoying, use this option.
@item --without-libgit2
@cindex Git
@pindex libgit2
@cindex Version control systems
Build Gnuastro without libgit2 (for including Git commit hashes in output files), see @ref{Optional dependencies}.
libgit2 is an optional dependency; with this option, Gnuastro will ignore any libgit2 that may already be on the system.
@item --without-libjpeg
@pindex libjpeg
@cindex JPEG format
Build Gnuastro without libjpeg (for reading/writing to JPEG files), see @ref{Optional dependencies}.
libjpeg is an optional dependency; with this option, Gnuastro will ignore any libjpeg that may already be on the system.
@item --without-libtiff
@pindex libtiff
@cindex TIFF format
Build Gnuastro without libtiff (for reading/writing to TIFF files), see @ref{Optional dependencies}.
libtiff is an optional dependency; with this option, Gnuastro will ignore any libtiff that may already be on the system.
@item --with-python
@cindex PyPI
@cindex Python
Build the Python interface within Gnuastro's dynamic library.
This interface can be used for easy communication with Python wrappers (for example, the pyGnuastro package).
When you install the pyGnuastro package from PyPI, the correct configuration of the Gnuastro Library is already packaged with it (with the Python interface) and that is independent of your Gnuastro installation.
The Python interface is only necessary if you want to build pyGnuastro from source (which is only necessary for developers).
Therefore it has to be explicitly activated at configure time with this option.
For more on the interface functions, see @ref{Python interface}.
@end vtable
The tests of some programs might depend on the outputs of the tests of other programs.
For example, MakeProfiles is one of the first programs to be tested when you run @command{$ make check}.
MakeProfiles' test outputs (FITS images) are inputs to many other programs (which in turn provide inputs for other programs).
Therefore, if you do not build MakeProfiles, for example, the tests of many of the other programs will be skipped.
To avoid this, in one run, you can build all the programs and run the tests, but not install.
If everything is working correctly, you can run configure again with only the programs you want.
This time, do not run the tests; directly install after building.
@node Installation directory, Executable names, Gnuastro configure options, Configuring
@subsubsection Installation directory
@vindex --prefix
@cindex Superuser, not possible
@cindex Root access, not possible
@cindex No access to superuser install
@cindex Install with no superuser access
One of the most commonly used options to @file{./configure} is @option{--prefix}; it is used to define the directory that will host all the installed files (or the ``prefix'' in their final absolute file name).
A common example is when you are using a server and do not have administrator or root access.
In this scenario, if you do not use the @option{--prefix} option, you will not be able to install the built files at all (the default prefix is only writable by the administrator).
However, once you prepare your startup file to look into the proper place (as discussed thoroughly below), you will be able to easily use this option and benefit from any software you want to install, without having to ask the system administrators, or being forced to use a possibly older version of a software that is already installed on the server.
The most basic way to run an executable is to explicitly write its full file name (including all the directory information) and run it.
One example is running the configuration script with the @command{$ ./configure} command (see @ref{Quick start}).
By giving a specific directory (the current directory or @file{./}), we are explicitly telling the shell to look in the current directory for an executable file named `@file{configure}'.
Directly specifying the directory is thus useful for executables in the current (or nearby) directories.
However, when the program (an executable file) is to be used a lot, specifying all those directories will become a significant burden.
For example, the @file{ls} executable lists the contents in a given directory and it is (usually) installed in the @file{/usr/bin/} directory by the operating system maintainers.
Therefore, if using the full address was the only way to access an executable, each time you wanted a listing of a directory, you would have to run the following command (which is very inconvenient, both in writing and in remembering the various directories).
@example
$ /usr/bin/ls
@end example
@cindex Shell variables
@cindex Environment variables
To address this problem, we have the @file{PATH} environment variable.
To understand it better, we will start with a short introduction to the shell variables.
Shell variable values are basically treated as strings of characters.
For example, it does not matter if the value is a name (string of @emph{alphabetic} characters), or a number (string of @emph{numeric} characters), or both.
You can define a variable and a value for it by running
@example
$ myvariable1=a_test_value
$ myvariable2="a test value"
@end example
@noindent
As you see above, if the value contains white space characters, you have to put the whole value (including white space characters) in double quotes (@key{"}).
You can see the value it represents by running
@example
$ echo $myvariable1
$ echo $myvariable2
@end example
@noindent
@cindex Environment
@cindex Environment variables
If a variable has no value or it was not defined, the last command will only print an empty line.
A variable defined like this will be known as long as this shell or terminal is running.
Other terminals will have no idea it existed.
The main advantage of shell variables is that if they are exported@footnote{By running @command{$ export myvariable=a_test_value} instead of the simpler case in the text}, subsequent programs that are run within that shell can access their value.
So by changing their value, you can change the ``environment'' of a program which uses them.
The shell variables which are accessed by programs are therefore known as ``environment variables''@footnote{You can use shell variables for other actions too, for example, to temporarily keep some names or run loops on some files.}.
You can see the full list of exported variables that your shell recognizes by running:
@example
$ printenv
@end example
@cindex @file{HOME}
@cindex @file{HOME/.local/}
@cindex Environment variable, @code{HOME}
@file{HOME} is one commonly used environment variable; it holds the top directory of the user that is logged in.
Try finding it in the command above.
It is used so often that the shell has a special expansion (alternative) for it: `@file{~}'.
Whenever you see file names starting with the tilde sign, it actually represents the value to the @file{HOME} environment variable, so @file{~/doc} is the same as @file{$HOME/doc}.
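You can easily confirm this expansion yourself: the two commands below should print the same directory.
@example
$ echo $HOME
$ echo ~
@end example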
@vindex PATH
@pindex ./configure
@cindex Setting @code{PATH}
@cindex Default executable search directory
@cindex Search directory for executables
Another of the most commonly used environment variables is @file{PATH}; it is a list of directories to search for executable names.
Its value is a list of directories (separated by a colon, or `@key{:}').
When the address of the executable is not explicitly given (like @file{./configure} above), the system will look for the executable in the directories specified by @file{PATH}.
If you have a computer nearby, try running the following command to see which directories your system will look into when it is searching for executable (binary) files; one example output is printed here (notice how @file{/usr/bin}, from the @file{ls} example above, is one of the directories in @command{PATH}):
@example
$ echo $PATH
/usr/local/sbin:/usr/local/bin:/usr/bin
@end example
By default @file{PATH} usually contains system-wide directories, which are readable (but not writable) by all users, like the above example.
Therefore if you do not have root (or administrator) access, you need to add another directory to @file{PATH} which you actually have write access to.
The standard directory where you can keep installed files (not just executables) for your own user is the @file{~/.local/} directory.
The names of hidden files start with a `@key{.}' (dot), so this directory will not show up in your common command-line listings, or on the graphical user interface.
You can use any other directory, but this is the most recognized.
The top installation directory will be used to keep all the package's components: programs (executables), libraries, include (header) files, shared data (like manuals), or configuration files (see @ref{Review of library fundamentals} for a thorough introduction to headers and linking).
So it commonly has some of the following sub-directories for each class of installed components respectively: @file{bin/}, @file{lib/}, @file{include/}, @file{man/}, @file{share/}, @file{etc/}.
Since the @file{PATH} variable is only used for executables, you can add the @file{~/.local/bin} directory (which keeps the executables/programs or more generally, ``binary'' files) to @file{PATH} with the following command.
As defined below, first the existing value of @file{PATH} is used, then your given directory is added to its end and the combined value is put back in @file{PATH} (run `@command{$ echo $PATH}' afterwards to check if it was added).
@example
$ PATH=$PATH:~/.local/bin
@end example
@cindex GNU Bash
@cindex Startup scripts
@cindex Scripts, startup
Any executable that you installed in @file{~/.local/bin} will now be usable without having to remember and write its full address.
However, as soon as you leave/close your current terminal session, this modified @file{PATH} variable will be forgotten.
Adding the directories which contain executables to the @file{PATH} environment variable each time you start a terminal is also very inconvenient and prone to errors.
Fortunately, there are standard `startup files' defined by your shell precisely for this (and other) purposes.
There is a special startup file for every significant starting step:
@table @asis
@cindex GNU Bash
@item @file{/etc/profile} and everything in @file{/etc/profile.d/}
These startup scripts are called when your whole system starts (for example, after you turn on your computer).
Therefore you need administrator or root privileges to access or modify them.
@item @file{~/.bash_profile}
If you are using (GNU) Bash as your shell, the commands in this file are run when you log in to your account @emph{through Bash}, most commonly when you log in through the virtual console (where there is no graphic user interface).
@item @file{~/.bashrc}
If you are using (GNU) Bash as your shell, the commands here will be run each time you start a terminal and are already logged in.
For example, when you open your terminal emulator in the graphic user interface.
@end table
For security reasons, it is highly recommended to type your @file{HOME} directory value by hand in startup files instead of using variables.
So in the following, let's assume your user name is `@file{name}' (so @file{~} may be replaced with @file{/home/name}).
To add @file{~/.local/bin} to your @file{PATH} automatically on any startup file, you have to ``export'' the new value of @command{PATH} in the startup file that is most relevant to you by adding this line:
@example
export PATH=$PATH:/home/name/.local/bin
@end example
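For example, assuming you use Bash and that @file{~/.bashrc} is the most relevant startup file for you, the command below will append that line to it (as above, replace @file{name} with your user name):
@example
$ echo 'export PATH=$PATH:/home/name/.local/bin' >> ~/.bashrc
@end example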
@cindex GNU build system
@cindex Install directory
@cindex Directory, install
Now that you know your system will look into @file{~/.local/bin} for executables, you can tell Gnuastro's configure script to install everything in the top @file{~/.local} directory using the @option{--prefix} option.
When you subsequently run @command{$ make install}, all the install-able files will be put in their respective directory under @file{~/.local/} (the executables in @file{~/.local/bin}, the compiled library files in @file{~/.local/lib}, the library header files in @file{~/.local/include} and so on, to learn more about these different files, please see @ref{Review of library fundamentals}).
Note that tilde (`@key{~}') expansion will not happen if you put a `@key{=}' between @option{--prefix} and @file{~/.local}@footnote{If you insist on using `@key{=}', you can use @option{--prefix=$HOME/.local}.}, so we have avoided the @key{=} character here which is optional in GNU-style options, see @ref{Options}.
@example
$ ./configure --prefix ~/.local
@end example
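After you subsequently run @command{$ make install}, you can confirm that your shell finds the newly installed executables with a command like the one below (a hypothetical check, assuming Crop was installed and @file{~/.local/bin} is in your @command{PATH}):
@example
$ which astcrop
/home/name/.local/bin/astcrop
@end example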
@cindex @file{MANPATH}
@cindex @file{INFOPATH}
@cindex @file{LD_LIBRARY_PATH}
@cindex Library search directory
@cindex Default library search directory
You can install everything (including libraries like GSL, CFITSIO, or WCSLIB which are Gnuastro's mandatory dependencies, see @ref{Mandatory dependencies}) locally by configuring them as above.
However, recall that @command{PATH} is only for executable files, not libraries, and that libraries can also depend on other libraries.
For example, WCSLIB depends on CFITSIO and Gnuastro needs both.
Therefore, when you install a library in a non-recognized directory, you have to guide the programs that depend on it to the necessary library and header file directories.
To do that, you have to define the @command{LDFLAGS} and @command{CPPFLAGS} environment variables respectively.
This can be done while calling @file{./configure} as shown below:
@example
$ ./configure LDFLAGS=-L/home/name/.local/lib \
CPPFLAGS=-I/home/name/.local/include \
--prefix ~/.local
@end example
It can be annoying/buggy to do this when configuring every software that depends on such libraries.
Hence, you can define these two variables in the most relevant startup file (discussed above).
The convention on using these variables does not include a colon to separate values (as @command{PATH}-like variables do).
They use white space characters and each value is prefixed with a compiler option@footnote{These variables are ultimately used as options while building the programs.
Therefore every value has to be an option name followed by a value, as discussed in @ref{Options}.}.
Note the @option{-L} and @option{-I} above (see @ref{Options}): for @option{-I}, see @ref{Headers}; for @option{-L}, see @ref{Linking}.
Therefore we have to keep the value in double quotation signs to preserve the white space characters, and add the following two lines to the startup file of your choice:
@example
export LDFLAGS="$LDFLAGS -L/home/name/.local/lib"
export CPPFLAGS="$CPPFLAGS -I/home/name/.local/include"
@end example
@cindex Dynamic libraries
Dynamic libraries are linked to the executable every time you run a program that depends on them (see @ref{Linking} to fully understand this important concept).
Hence dynamic libraries also require a special path variable called @command{LD_LIBRARY_PATH} (same formatting as @command{PATH}).
To use programs that depend on these libraries, you need to add @file{~/.local/lib} to your @command{LD_LIBRARY_PATH} environment variable by adding the following line to the relevant start-up file:
@example
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/home/name/.local/lib
@end example
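On GNU/Linux systems you can check which shared libraries an executable will actually load with the @command{ldd} command; for example, assuming Gnuastro's Table program was installed under @file{~/.local}:
@example
$ ldd ~/.local/bin/asttable
@end example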
If you also want to access the Info (see @ref{Info}) and man pages (see @ref{Man pages}) documentations add @file{~/.local/share/info} and @file{~/.local/share/man} to your @command{INFOPATH}@footnote{Info has the following convention: ``If the value of @command{INFOPATH} ends with a colon [or it is not defined] ..., the initial list of directories is constructed by appending the build-time default to the value of @command{INFOPATH}.''
So when installing in a non-standard directory and if @command{INFOPATH} was not initially defined, add a colon to the end of @command{INFOPATH} as shown below.
Otherwise Info will not be able to find system-wide installed documentation:
@*@command{echo 'export INFOPATH=$INFOPATH:/home/name/.local/share/info:' >> ~/.bashrc}@*
Note that this is only an internal convention of Info: do not use it for other @command{*PATH} variables.} and @command{MANPATH} environment variables respectively.
@cindex Search directory order
@cindex Order in search directory
A final note is that order matters in the directories that are searched for all the variables discussed above.
In the examples above, the new directory was added after the system specified directories.
So if the program, library or manuals are found in the system wide directories, the user directory is no longer searched.
If you want to search your local installation first, put the new directory before the already existing list, like the example below.
@example
export LD_LIBRARY_PATH=/home/name/.local/lib:$LD_LIBRARY_PATH
@end example
@noindent
This is good when a library, for example, CFITSIO, is already present on the system, but the system-wide install was not configured with the correct configuration flags (see @ref{CFITSIO}), or you want to use a newer version and you do not have administrator or root access to update it on the whole system/server.
If you update @file{LD_LIBRARY_PATH} by placing @file{~/.local/lib} first (like above), the linker will first find the CFITSIO you installed for yourself and link with it.
It thus will never reach the system-wide installation.
There are important security problems with searching your local installations first: important system-wide executables and libraries (executables like @command{ls} and @command{cp}, or libraries like the C library) can be replaced by non-secure versions with the same file names placed in the customized directory (@file{~/.local} in this example).
So if you choose to search in your customized directory first, please @emph{be sure} to keep it clean from executables or libraries with the same names as important system programs or libraries.
@cartouche
@noindent
@strong{Summary:} When you are using a server which does not give you administrator/root access AND you would like to give priority to your own built programs and libraries, not the version that is (possibly already) present on the server, add these lines to your startup file.
See above for which startup file is best for your case and for a detailed explanation on each.
Do not forget to replace `@file{/YOUR-HOME-DIR}' with your home directory (for example, `@file{/home/your-id}'):
@example
export PATH="/YOUR-HOME-DIR/.local/bin:$PATH"
export LDFLAGS="-L/YOUR-HOME-DIR/.local/lib $LDFLAGS"
export MANPATH="/YOUR-HOME-DIR/.local/share/man/:$MANPATH"
export CPPFLAGS="-I/YOUR-HOME-DIR/.local/include $CPPFLAGS"
export INFOPATH="/YOUR-HOME-DIR/.local/share/info/:$INFOPATH"
export LD_LIBRARY_PATH="/YOUR-HOME-DIR/.local/lib:$LD_LIBRARY_PATH"
@end example
@noindent
Afterwards, you just need to add an extra @option{--prefix=/YOUR-HOME-DIR/.local} to the @file{./configure} command of the software that you intend to install.
Everything else will be the same as a standard build and install, see @ref{Quick start}.
@end cartouche
@node Executable names, Configure and build in RAM, Installation directory, Configuring
@subsubsection Executable names
@cindex Executable names
@cindex Names of executables
At first sight, the names of the executables for each program might seem to be uncommonly long, for example, @command{astnoisechisel} or @command{astcrop}.
We could have chosen terse (and cryptic) names like most programs do.
We chose this complete naming convention (something like the commands in @TeX{}) so you do not have to spend too much time remembering what the name of a specific program was.
Such complete names also enable you to easily search for the programs.
@cindex Shell auto-complete
@cindex Auto-complete in the shell
To facilitate typing the names in, we suggest using the shell auto-complete.
With this facility you can find the executable you want very easily.
It is very similar to file name completion in the shell.
For example, simply by typing the letters below (where @key{[TAB]} stands for the Tab key on your keyboard)
@example
$ ast[TAB][TAB]
@end example
@noindent
you will get the list of all the available executables that start with @command{ast} in your @command{PATH} environment variable directories.
So, all the Gnuastro executables installed on your system will be listed.
Typing the next letter of the specific program you want, along with a Tab, will limit this list until you get to your desired program.
@cindex Names, customize
@cindex Customize executable names
In case all of this does not convince you and you still want to type short names, some suggestions are given below.
You should have in mind though, that if you are writing a shell script that you might want to pass on to others, it is best to use the standard name because other users might not have adopted the same customization.
The long names also serve as a form of documentation in such scripts.
A similar reasoning can be given for option names in scripts: it is good practice to always use the long formats of the options in shell scripts, see @ref{Options}.
@cindex Symbolic link
The simplest solution is making a symbolic link to the actual executable.
For example, let's assume you want to type @file{ic} to run Crop instead of @file{astcrop}.
Assuming you installed Gnuastro executables in @file{/usr/local/bin} (default) you can do this simply by running the following command as root:
@example
# ln -s /usr/local/bin/astcrop /usr/local/bin/ic
@end example
@noindent
In case you update Gnuastro and a new version of Crop is installed, the
default executable name is the same, so your custom symbolic link still
works.
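If you do not have root access, you can instead make the link in your own @file{~/.local/bin} directory (assuming it is in your @command{PATH}, see @ref{Installation directory}):
@example
$ ln -s /usr/local/bin/astcrop ~/.local/bin/ic
@end example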
@vindex --program-prefix
@vindex --program-suffix
@vindex --program-transform-name
The installed executable names can also be set using options to @command{$ ./configure}, see @ref{Configuring}.
GNU Autoconf (which configures Gnuastro for your particular system), allows the builder to change the name of programs with the three options @option{--program-prefix}, @option{--program-suffix} and @option{--program-transform-name}.
The first two are for adding a fixed prefix or suffix to all the programs that will be installed.
This will actually make all the names longer! You can use it to add version numbers to the program names in order to simultaneously have two executable versions of a program.
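For example, with the hypothetical call below, every program would be installed with a @file{-new} suffix (so Crop's executable would be @file{astcrop-new}):
@example
$ ./configure --program-suffix=-new
@end example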
@cindex SED, stream editor
@cindex Stream editor, SED
The third configure option allows you to set the executable name at install time using the SED program.
SED is a very useful `stream editor'.
There are various resources on the internet to use it effectively.
However, we should caution that using these configure options will change the actual executable name of the installed program, and on every re-install (an update, for example) you have to add this option again to keep the custom executable name.
Also note that the documentation or configuration files do not change from their standard names either.
@cindex Removing @file{ast} from executables
For example, let's assume that typing @file{ast} on every invocation of every program is really annoying you! You can remove this prefix from all the executables at configure time by adding this option:
@example
$ ./configure --program-transform-name='s/^ast//'
@end example
@node Configure and build in RAM, , Executable names, Configuring
@subsubsection Configure and build in RAM
@cindex File I/O
@cindex Input/Output, file
Gnuastro's configure and build process (the GNU build system) involves the creation, reading, and modification of a large number of files (input/output, or I/O).
Therefore file I/O issues can directly affect the work of developers who need to configure and build Gnuastro numerous times.
Some of these issues are listed below:
@itemize
@cindex HDD
@cindex SSD
@item
I/O will cause wear and tear on both the HDDs (mechanical failures) and
SSDs (decreasing the lifetime).
@cindex Backup
@item
Having the built files mixed with the source files can greatly affect backing up (synchronization) of source files (since it involves the management of a large number of small files that are regularly changed).
Backup software can of course be configured to ignore the built files and directories.
However, since the built files are mixed with the source files and can have a large variety, this will require a high level of customization.
@end itemize
@cindex tmpfs file system
@cindex file systems, tmpfs
One solution to address both these problems is to use the @url{https://en.wikipedia.org/wiki/Tmpfs, tmpfs file system}.
Any file in tmpfs is actually stored in the RAM (and possibly SWAP), not on HDDs or SSDs.
The RAM is built for extensive and fast I/O.
Therefore the large number of file I/Os associated with configuring and building will not harm the HDDs or SSDs.
Due to the volatile nature of RAM, files in the tmpfs file-system will be permanently lost after a power-off.
Since all configured and built files are derivative files (not files that have been directly written by hand), this is not a problem; the feature can even be considered an automatic cleanup.
@cindex Linux kernel
@cindex GNU C library
@cindex GNU build system
The modern GNU C library (and thus the Linux kernel) defines the @file{/dev/shm} directory for this purpose in the RAM (POSIX shared memory).
To build in it, you can use the GNU build system's ability to build in a separate directory (not necessarily in the source directory) as shown below.
Just set @file{SRCDIR} as the address of Gnuastro's top source directory (for example, where there is the unpacked tarball).
@example
$ SRCDIR=/home/username/gnuastro
$ mkdir /dev/shm/tmp-gnuastro-build
$ cd /dev/shm/tmp-gnuastro-build
$ $SRCDIR/configure --srcdir=$SRCDIR
$ make
@end example
Gnuastro comes with a script to simplify this process of configuring and building in a different directory (a ``clean'' build), for more see @ref{Separate build and source directories}.
@node Separate build and source directories, Tests, Configuring, Build and install
@subsection Separate build and source directories
The simple steps of @ref{Quick start} will mix the source and built files.
This can cause inconvenience for developers or enthusiasts following the most recent work (see @ref{Version controlled source}).
The current section is mainly focused on this latter group of Gnuastro users.
If you just install Gnuastro on major releases (following @ref{Announcements}), you can safely ignore this section.
@cindex GNU build system
When it is necessary to keep the source (which is under version control), but not the derivative (built) files (after checking or installing), the best solution is to keep the source and the built files in separate directories.
One application of this is already discussed in @ref{Configure and build in RAM}.
To facilitate this process of configuring and building in a separate directory, Gnuastro comes with the @file{developer-build} script.
It is available in the top source directory and is @emph{not} installed.
It will make a directory under a given top-level directory (given to @option{--top-build-dir}) and build Gnuastro there.
It thus keeps the source completely separated from the built files.
For easy access to the built files, it also makes a symbolic link called @file{build} in the top source directory, pointing to the build directory.
When running the developer-build script without any options in Gnuastro's top source directory, default values will be used for its configuration.
As with Gnuastro's programs, you can inspect the default values with @option{-P} (or @option{--printparams}, the output just looks a little different here).
The default top-level build directory is @file{/dev/shm}: the shared memory directory in RAM on GNU/Linux systems as described in @ref{Configure and build in RAM}.
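For example, you can inspect these default values by running the script from Gnuastro's top source directory:
@example
$ ./developer-build -P
@end example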
@cindex Debug
Besides these, it also has some features to facilitate the job of developers or bleeding edge users like the @option{--debug} option to do a fast build, with debug information, no optimization, and no shared libraries.
Here is the full list of options you can feed to this script to configure its operations.
@cartouche
@noindent
@strong{Not all of Gnuastro's common program behavior is usable here:}
@file{developer-build} is just a non-installed script with a very limited scope as described above.
It thus does not have all the common option behaviors or configuration files for example.
@end cartouche
@cartouche
@noindent
@strong{White space between option and value:} @file{developer-build}
does not accept an @key{=} sign between the options and their values.
It also needs at least one character between the option and its value.
Therefore @option{-n 4} or @option{--numthreads 4} are acceptable, while @option{-n4}, @option{-n=4}, or @option{--numthreads=4} are not.
Finally multiple short option names cannot be merged: for example, you can say @option{-c -n 4}, but unlike Gnuastro's programs, @option{-cn4} is not acceptable.
@end cartouche
@cartouche
@noindent
@strong{Reusable for other packages:} This script can be used in any software which is configured and built using the GNU Build System.
Just copy it in the top source directory of that software and run it from there.
@end cartouche
@cartouche
@noindent
@strong{Example usage:} See @ref{Forking tutorial} for an example usage of this script in some scenarios.
@end cartouche
@table @option
@item -b STR
@itemx --top-build-dir STR
The top build directory, in which a directory for this build will be made.
If this option is not called, the top build directory is @file{/dev/shm} (only available in GNU/Linux operating systems, see @ref{Configure and build in RAM}).
@item -V
@itemx --version
Print the version string of Gnuastro that will be used in the build.
This string will be appended to the directory name containing the built files.
@item -a
@itemx --autoreconf
Run @command{autoreconf -f} before building the package.
In Gnuastro, this is necessary when a new commit has been made to the project history.
In Gnuastro's build system, the Git description will be used as the version, see @ref{Version numbering} and @ref{Synchronizing}.
@item -c
@itemx --clean
@cindex GNU Autoreconf
Delete the contents of the build directory (clean it) before starting the configuration and building of this run.
This is useful when you have recently pulled changes from the main Git repository, or committed a change yourself and ran @command{autoreconf -f}, see @ref{Synchronizing}.
After running GNU Autoconf, the version will be updated and you need to do a clean build.
@item -d
@itemx --debug
@cindex Valgrind
@cindex GNU Debugger (GDB)
Build with debugging flags (for example, to use in GNU Debugger, also known as GDB, or Valgrind), disable optimization and also the building of shared libraries.
This is similar to running the configure script as shown below:
@example
$ ./configure --enable-debug
@end example
Besides all the debugging advantages of building with this option, it will also significantly speed up the build (at the cost of slower built programs).
So when you are testing something small or working on the build system itself, it will be much faster to test your work with this option.
@item -v
@itemx --valgrind
@cindex Valgrind
Build all @command{make check} tests within Valgrind.
For more, see the description of @option{--enable-check-with-valgrind} in @ref{Gnuastro configure options}.
@item -j INT
@itemx --jobs INT
The maximum number of threads/jobs for Make to build at any moment.
As the name suggests (Make has an identical option), the number given to this option is directly passed on to any call of Make with its @option{-j} option.
@item -C
@itemx --check
After finishing the build, also run @command{make check}.
By default, @command{make check} is not run because the developer usually has their own checks to work on (for example, defined in @file{tests/during-dev.sh}).
@item -i
@itemx --install
After finishing the build, also run @command{make install}.
@item -D
@itemx --dist
Run @code{make dist-lzip pdf} to build a distribution tarball (in @file{.tar.lz} format) and a PDF manual.
This can be useful for archiving, or sending to colleagues who do not use Git for an easy build and manual.
@item -u STR
@itemx --upload STR
Activate the @option{--dist} (@option{-D}) option, then use secure copy (@command{scp}, part of the SSH tools) to copy the tarball and PDF to the @file{src} and @file{pdf} sub-directories of the specified server and its directory (value to this option).
For example, @command{--upload my-server:dir} will copy the tarball into @file{dir/src}, and the PDF manual into @file{dir/pdf}, on the @code{my-server} server.
It will then make a symbolic link called @file{gnuastro-latest.tar.lz} in the top server directory, pointing to the uploaded tarball.
@item -p STR
@itemx --publish STR
Clean, bootstrap, build, check and upload the checked tarball and PDF of the book to the server and directory given as @code{STR}.
This option is just a wrapper for @option{--autoreconf --clean --debug --check --upload STR}.
@option{--debug} is added because it will greatly speed up the build.
@option{--debug} will have no effect on the produced tarball (people who later download will be building with the default optimized, and non-debug mode).
This option is good when you have made a commit and are ready to publish it on your server (if nothing crashes).
Recall that if any of the previous steps fail the script aborts.
@item -I
@itemx --install-archive
Short for @option{--autoreconf --clean --check --install --dist}.
This is useful when you actually want to install the commit you just made (if the build and checks succeed).
It will also produce a distribution tarball and PDF manual for easy access to the installed tarball on your system at a later time.
Ideally, Gnuastro's Git version history makes it easy for a prepared system to revert back to a different point in history.
But Gnuastro's version-controlled source also needs bootstrapping, and your collaborators might (and usually do!) find it too much of a burden to do the bootstrapping themselves.
So it is convenient to have a tarball and PDF manual of the version you have installed (and are using in your research) handily available.
@item -h
@itemx --help
@itemx -P
@itemx --printparams
Print a description of this script along with all the options and their
current values.
@end table
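As an example of combining the options above, the call below will bootstrap the version-controlled source, clean any previous build, do a fast debug build and run the tests (all within the default @file{/dev/shm} top build directory):
@example
$ ./developer-build -a -c -d -C
@end example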
@node Tests, A4 print book, Separate build and source directories, Build and install
@subsection Tests
@cindex @command{make check}
@cindex @file{mock.fits}
@cindex Tests, running
@cindex Checking tests
After successfully building (compiling) the programs with the @command{$ make} command, you can check them before installing.
To run the tests, run
@example
$ make check
@end example
For every program some tests are designed to check some possible operations.
Running the command above will run those tests and give you a final report.
If everything is OK and you have built all the programs, all the tests should pass.
In case any of the tests fail, please have a look at @ref{Known issues} and if that still does not fix your problem, look at the @file{./tests/test-suite.log} file to see if the source of the error is something particular to your system or more general.
If you feel it is general, please contact us because it might be a bug.
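For example, from the top build directory, you can view this log with:
@example
$ less tests/test-suite.log
@end example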
Note that the tests of some programs depend on the outputs of other program's tests, so if you have not installed them they might be skipped or fail.
Prior to releasing every distribution all these tests are checked.
If you have a reasonably modern terminal, the outputs of the successful tests will be colored green and the failed ones will be colored red.
These scripts can also act as a good set of examples for you to see how the programs are run.
All the tests are in the @file{tests/} directory.
The tests for each program are shell scripts (ending with @file{.sh}) in a sub-directory of this directory with the same name as the program.
See @ref{Test scripts} for more detailed information about these scripts in case you want to inspect them.
@node A4 print book, Known issues, Tests, Build and install
@subsection A4 print book
@cindex A4 print book
@cindex Modifying print book
@cindex A4 paper size
@cindex US letter paper size
@cindex Paper size, A4
@cindex Paper size, US letter
The default print version of this book is provided in the letter paper size.
If you would like to have the print version of this book on paper and you are living in a country which uses A4, then you can rebuild the book.
The great thing about the GNU build system is that the book source code which is in Texinfo is also distributed with the program source code, enabling you to do such customization (hacking).
@cindex GNU Texinfo
In order to change the paper size, you will need to have GNU Texinfo installed.
Open @file{doc/gnuastro.texi} with any text editor.
This is the source file that created this book.
In the first few lines you will see this line:
@example
@@c@@afourpaper
@end example
@noindent
In Texinfo, a line is commented with @code{@@c}.
Therefore, un-comment this line by deleting the first two characters such that it changes to:
@example
@@afourpaper
@end example
@noindent
Save the file and close it.
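If you prefer the command-line, a SED command like the one below makes the same edit in place (assuming the commented line appears exactly as shown above):
@example
$ sed -i -e 's/^@@c@@afourpaper/@@afourpaper/' doc/gnuastro.texi
@end example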
You can now run the following command
@example
$ make pdf
@end example
@noindent
and the new PDF book will be available in @file{SRCdir/doc/gnuastro.pdf}.
By changing the @command{pdf} in @command{$ make pdf} to @command{ps} or @command{dvi} you can have the book in those formats.
Note that you can do this for any book that is in Texinfo format; if it does not have the @code{@@afourpaper} line, you can add it close to the top of the Texinfo source file.
@node Known issues, , A4 print book, Build and install
@subsection Known issues
Depending on your operating system and the version of the compiler you are using, you might confront some known problems during the configuration (@command{$ ./configure}), compilation (@command{$ make}) and tests (@command{$ make check}).
Here, their solutions are discussed.
@itemize
@cindex Configuration, not finding library
@cindex Development packages
@item
@command{$ ./configure}: @emph{Configure complains about not finding a library even though you have installed it.}
The possible solution is based on how you installed the package:
@itemize
@item
From your distribution's package manager.
Most probably this is because your distribution has separated the header files of a library from the library parts.
Please also install the `development' packages for those libraries too.
Just add a @file{-dev} or @file{-devel} to the end of the package name and re-run the package manager.
This will not happen if you install the libraries from source.
When installed from source, the headers are also installed.
@item
@cindex @command{LDFLAGS}
From source.
Then your linker is not looking where you installed the library.
If you followed the instructions in this chapter, all the libraries will be installed in @file{/usr/local/lib}.
So you have to tell your linker to look in this directory.
To do so, configure Gnuastro like this:
@example
$ ./configure LDFLAGS="-L/usr/local/lib"
@end example
If you want to use the libraries for your other programming projects, then
export this environment variable in a start-up script similar to the case
for @file{LD_LIBRARY_PATH} explained below, also see @ref{Installation
directory}.
@end itemize
@item
@vindex --enable-gnulibcheck
@cindex Gnulib: GNU Portability Library
@cindex GNU Portability Library (Gnulib)
@command{$ make}: @emph{Complains about an unknown function on a non-GNU based operating system.}
In this case, please run @command{$ ./configure} with the @option{--enable-gnulibcheck} option to see if the problem is from the GNU Portability Library (Gnulib) not supporting your system or if there is a problem in Gnuastro, see @ref{Gnuastro configure options}.
If the problem is not in Gnulib and after all its tests you get the same complaint from @command{make}, then please contact us at @file{bug-gnuastro@@gnu.org}.
The cause is probably that a function we have used is not supported by your operating system and we did not include it along with the source tarball.
If the function is available in Gnulib, it can be fixed immediately.
@item
@cindex @command{CPPFLAGS}
@command{$ make}: @emph{Cannot find the headers (.h files) of installed libraries.}
Your C preprocessor (CPP) is not looking in the right place.
To fix this, configure Gnuastro with an additional @code{CPPFLAGS} like below (assuming the library is installed in @file{/usr/local/include}):
@example
$ ./configure CPPFLAGS="-I/usr/local/include"
@end example
If you want to use the libraries for your other programming projects, then export this environment variable in a start-up script similar to the case for @file{LD_LIBRARY_PATH} explained below, also see @ref{Installation directory}.
@cindex Tests, only one passes
@cindex @file{LD_LIBRARY_PATH}
@item
@command{$ make check}: @emph{Only the first couple of tests pass, all the rest fail or get skipped.} It is highly likely that when searching for shared libraries, your system does not look into the @file{/usr/local/lib} directory (or wherever you installed Gnuastro or its dependencies).
To make sure it is added to the list of directories, add the following line to your @file{~/.bashrc} file and restart your terminal.
Do not forget to change @file{/usr/local/lib} if the libraries are installed in other (non-standard) directories.
@example
export LD_LIBRARY_PATH="$LD_LIBRARY_PATH:/usr/local/lib"
@end example
You can also add more directories by using a colon `@code{:}' to separate them.
See @ref{Installation directory} and @ref{Linking} to learn more on the @code{PATH} variables and dynamic linking respectively.
@cindex GPL Ghostscript
@item
@command{$ make check}: @emph{The tests relying on external programs (for example, @file{fitstopdf.sh}) fail.} This is probably because the version of the external program is too old for the tests we have performed.
Please update the program to a more recent version.
For example, to create a PDF image, you will need GPL Ghostscript; we have successfully tested it on version 9.15, while older versions might cause a failure in the test result.
@item
@cindex @TeX{}
@cindex GNU Texinfo
@command{$ make pdf}: @emph{The PDF book cannot be made.}
To make a PDF book, you need to have the GNU Texinfo program (like any program, the more recent the better).
A working @TeX{} program is also necessary, which you can get from TeX Live@footnote{@url{https://www.tug.org/texlive/}}.
@item
@cindex GNU Libtool
After @code{make check}: do not copy the programs' executables to another (for example, the installation) directory manually (using @command{cp}, or @command{mv} for example).
In the default configuration@footnote{If you configure Gnuastro with the @option{--disable-shared} option, then the libraries will be statically linked to the programs and this problem will not exist, see @ref{Linking}.}, the program binaries need to link with Gnuastro's shared library which is also built and installed with the programs.
Therefore, to run successfully before and after installation, linking modifications need to be made by GNU Libtool at installation time.
@command{make install} does this internally, but a simple copy might give linking errors when you run it.
If you need to copy the executables, you can do so after installation.
@cindex Tests, error in converting images
@item
@command{$ make} (when bootstrapping): After you have bootstrapped Gnuastro from the version-controlled source, you may confront the following (or a similar) error when converting images (for more on bootstrapping, see @ref{Bootstrapping}):
@example
@code{convert: attempt to perform an operation not allowed by the
security policy `gs' @ error/delegate.c/ExternalDelegateCommand/378.}
@end example
This error is a known issue@footnote{@url{https://wiki.archlinux.org/title/ImageMagick}} with @code{ImageMagick} security policies in some operating systems.
In short, ImageMagick uses Ghostscript for PDF, EPS, PS and XPS parsing.
However, because some security vulnerabilities have been found in Ghostscript@footnote{@url{https://security.archlinux.org/package/ghostscript}}, by default, ImageMagick may be compiled without the Ghostscript library.
In such cases, if allowed, ImageMagick will fall back to the external @command{gs} command instead of the library.
But this fallback may be disabled with the following (or similar) lines in @code{/etc/ImageMagick-7/policy.xml} (anything related to PDF, PS, or Ghostscript).
@example
<policy domain="delegate" rights="none" pattern="gs" />
<policy domain="module" rights="none" pattern="@{PS,PDF,XPS@}" />
@end example
To fix this problem, simply comment such lines (by placing a @code{<!--} before each statement/line and @code{-->} at the end of that statement/line).
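@noindent
After commenting, those lines would look like this (a sketch; the exact contents of your policy file may differ):
@example
<!-- <policy domain="delegate" rights="none" pattern="gs" /> -->
<!-- <policy domain="module" rights="none" pattern="@{PS,PDF,XPS@}" /> -->
@end example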
@item
@command{$ make dist}: @emph{Complains about "Numeric user ID too large" and aborts.} This is a problem related to the way that GNU Automake calls GNU Tar on some operating systems.
For more information see bug 65578@footnote{@url{https://savannah.gnu.org/bugs/index.php?65578}}.
Until this bug is fixed, you can build the tarball by following the steps below:
@example
## Generate the build directory; ready to be packaged.
$ ./developer-build -a -c -d -C
## Go to the build directory:
$ cd build
@end example
Open and edit the @file{Makefile} to add @code{--format=posix} to the definition of @code{am__tar}.
So, the line
@example
am__tar = $$@{TAR-tar@} chof - "$$tardir"
@end example
becomes
@example
am__tar = $$@{TAR-tar@} --format=posix chof - "$$tardir"
@end example
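Alternatively, a SED command like the one below makes the same edit without opening an editor (assuming @code{chof -} only occurs in the @code{am__tar} definition):
@example
$ sed -i -e 's/chof -/--format=posix chof -/' Makefile
@end example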
Finally, create the tarball:
@example
$ make dist
@end example
@end itemize
@noindent
If your problem was not listed above, please file a bug report (@ref{Report a bug}).
@node Common program behavior, Data containers, Installation, Top
@chapter Common program behavior
All the programs in Gnuastro share a set of common behavior mainly to do with user interaction to facilitate their usage and development.
This includes how to feed input datasets into the programs, how to configure them, specifying the outputs, numerical data types, treating columns of information in tables, etc.
This chapter is devoted to describing this common behavior in all programs.
Because the behaviors discussed here are common to several programs, they are not repeated in each program's description.
In @ref{Command-line}, a very general description of running the programs on the command-line is discussed, like the difference between arguments and options, as well as options that are common/shared between all programs.
None of Gnuastro's programs keep any internal configuration value (values for their different operational steps); they read their configuration primarily from the command-line, then from specific files for directory, user, or system-wide settings.
Using these configuration files can greatly help reproducible and robust usage of Gnuastro, see @ref{Configuration files} for more.
It is not possible to always have the different options and configurations of each program on the top of your head.
It is very natural to forget the options of a program, their current default values, or how it should be run and what it did.
Gnuastro's programs have multiple ways to help you refresh your memory at multiple levels (just an option name, a short description, or fast access to the relevant section of the manual).
See @ref{Getting help} for more on benefiting from this very convenient feature.
Many of the programs use the multi-threaded character of modern CPUs, in @ref{Multi-threaded operations} we will discuss how you can configure this behavior, along with some tips on making best use of them.
In @ref{Numeric data types}, we will review the various types to store numbers in your datasets: setting the proper type for the usage context@footnote{For example, if the values in your dataset can only be integers between 0 and 65000, store them in an unsigned 16-bit type, not a 64-bit floating point type (which is the default in most systems).
It takes four times less space and is much faster to process.} can greatly improve the file size and also the speed of reading, writing or processing them.
We will then look into the recognized table formats in @ref{Tables} and how large datasets are broken into tiles, or mesh grid in @ref{Tessellation}.
Finally, we will take a look at the behavior regarding output files: @ref{Automatic output} describes how the programs set a default name for their output when you do not give one explicitly (using @option{--output}).
When the output is a FITS file, all the programs also store some very useful information in the header that is discussed in @ref{Output FITS files}.
@menu
* Command-line:: How to use the command-line.
* Configuration files:: Values for unspecified variables.
* Getting help:: Getting more information on the go.
* Multi-threaded operations:: How threads are managed in Gnuastro.
* Numeric data types:: Different types and how to specify them.
* Memory management:: How memory is allocated (in RAM or HDD/SSD).
* Tables:: Recognized table formats.
* Tessellation:: Tile the dataset into non-overlapping bins.
* Automatic output:: About automatic output names.
* Output FITS files:: Common properties when outputs are FITS.
* Numeric locale:: Decimal point printed like 0.5 instead of 0,5.
@end menu
@node Command-line, Configuration files, Common program behavior, Common program behavior
@section Command-line
Gnuastro's programs are customized through the standard Unix-like command-line environment and GNU style command-line options.
Both are very common in many Unix-like operating system programs.
In @ref{Arguments and options} we will start with the difference between arguments and options and elaborate on the GNU style of options.
Afterwards, in @ref{Common options}, we will go into the detailed list of all the options that are common to all the programs in Gnuastro.
@menu
* Arguments and options:: Different ways to specify inputs and configuration.
* Common options:: Options that are shared between all programs.
* Shell TAB completion:: Customized TAB completion in Gnuastro.
* Standard input:: Using output of another program as input.
* Shell tips:: Useful tips and tricks for program usage.
@end menu
@node Arguments and options, Common options, Command-line, Command-line
@subsection Arguments and options
@cindex Shell
@cindex Options to programs
@cindex Command-line options
@cindex Arguments to programs
@cindex Command-line arguments
When you type a command on the command-line, it is passed onto the shell (a generic name for the program that manages the command-line) as a string of characters.
As an example, see the ``Invoking ProgramName'' sections in this manual for some examples of commands with each program, like @ref{Invoking asttable}, @ref{Invoking astfits}, or @ref{Invoking aststatistics}.
The shell then breaks up your string into separate @emph{tokens} or @emph{words} using any @emph{metacharacters} (like white-space, tab, @command{|}, @command{>} or @command{;}) that are in the string.
On the command-line, the first thing you usually enter is the name of the program you want to run.
After that, you can specify two types of tokens: @emph{arguments} and @emph{options}.
In the GNU-style, arguments are those tokens that are not preceded by any hyphens (@command{-}, see @ref{Arguments}).
Here is one example:
@example
$ astcrop --center=53.162551,-27.789676 -w10/3600 --mode=wcs udf.fits
@end example
In the example above, we are running @ref{Crop} to crop a region of width 10 arc-seconds centered at the given RA and Dec from the input Hubble Ultra-Deep Field (UDF) FITS image.
Here, the argument is @file{udf.fits}.
Arguments are most commonly the input file names containing your data.
Options start with one or two hyphens, followed by an identifier for the option (the option's name, for example, @option{--center}, @option{-w}, @option{--mode} in the example above) and its value (anything after the option name, or the optional @key{=} character).
Through options you can configure how the program runs (interprets the data you provided).
@vindex --help
@vindex --usage
@cindex Mandatory arguments
Arguments can be mandatory or optional and unlike options, they do not have any identifiers.
Hence, when there are multiple arguments, their order might also matter (for example, in @command{cp}, which is used for copying one file to another location).
The outputs of @option{--usage} and @option{--help} show which arguments are optional and which are mandatory, see @ref{--usage}.
As their name suggests, @emph{options} can be considered to be optional and most of the time, you do not have to worry about what order you specify them in.
When the order does matter, or the option can be invoked multiple times, it is explicitly mentioned in the ``Invoking ProgramName'' section of each program (this is a very important aspect of an option).
@cindex Metacharacters on the command-line
In case your arguments or option values contain any of the shell's metacharacters, you have to quote them.
If there is only one such character, you can use a backslash (@command{\}) before it.
If there are multiple, it might be easier to simply put your whole argument or option value inside of double quotes (@command{"}).
In such cases, everything inside the double quotes will be seen as one token or word.
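For example, the sketch below shows both forms for a hypothetical file whose name contains shell metacharacters (a space and parentheses):

@example
$ astfits my\ image.fits           ## One metacharacter: backslash.
$ astfits "my image (udf).fits"    ## Several: double quotes.
@end example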
@cindex HDU
@cindex Header data unit
For example, let's say you want to specify the header data unit (HDU) of your FITS file using a complex expression like `@command{3; images(exposure > 100)}'.
If you simply add these after the @option{--hdu} (@option{-h}) option, the programs in Gnuastro will read the value to the HDU option as `@command{3}' and run.
Then, the shell will attempt to run a separate command `@command{images(exposure > 100)}' and complain about a syntax error.
This is because the semicolon (@command{;}) is an `end of command' character in the shell.
To solve this problem you can simply put double quotes around the whole string you want to pass to @option{--hdu} as seen below:
@example
$ astcrop --hdu="3; images(exposure > 100)" image.fits
@end example
@menu
* Arguments:: For specifying the main input files/operations.
* Options:: For configuring the behavior of the program.
@end menu
@node Arguments, Options, Arguments and options, Arguments and options
@subsubsection Arguments
In Gnuastro, arguments are almost exclusively used as the input data file names.
Please consult the first few paragraphs of the ``Invoking ProgramName'' section for each program for a description of what it expects as input, how many arguments, or input datasets, it accepts, and in what order.
Everything particular about how a program treats arguments is explained under the ``Invoking ProgramName'' section for that program.
@cindex Filename suffix
@cindex Suffix (filename)
@cindex FITS filename suffixes
Generally, if there is a standard file name suffix for a particular format, that filename extension is checked to identify the format.
In astronomy (and thus Gnuastro), FITS is the preferred format for inputs and outputs, so the focus here and throughout this book is on FITS.
However, other formats are also accepted in special cases, for example, @ref{ConvertType} also accepts JPEG or TIFF inputs, and writes JPEG, EPS or PDF files.
The recognized suffixes for these formats are listed there.
The list below shows the recognized suffixes for FITS data files in Gnuastro's programs.
However, in some scenarios FITS writers may not append a suffix to the file, or use a non-recognized suffix (not in the list below).
Therefore if a FITS file is expected, but it does not have any of these suffixes, Gnuastro programs will look into the contents of the file and if it does conform with the FITS standard, the file will be used.
Just note that checking about 5 characters at the end of a name string is much more efficient than opening and checking the contents of a file, so it is generally recommended to have a recognized FITS suffix.
@itemize
@item
@file{.fits}: The standard file name ending of a FITS image.
@item
@file{.fit}: Alternative (3 character) FITS suffix.
@item
@file{.fits.Z}: A FITS image compressed with @command{compress}.
@item
@file{.fits.gz}: A FITS image compressed with GNU zip (gzip).
@item
@file{.fits.fz}: A FITS image compressed with @command{fpack}.
@item
@file{.imh}: IRAF format image file.
@end itemize
Throughout this book and in the command-line outputs, whenever we want to generalize all such astronomical data formats with a text placeholder, we will use @file{ASTRdata} and assume that the extension is also part of this name.
Any file ending with these names is directly passed on to CFITSIO to read.
Therefore you do not necessarily have to have these files on your computer; they can also be located on an FTP or HTTP server (see the CFITSIO manual for more information).
CFITSIO has its own error reporting techniques; if your input file(s) cannot be opened or read, those errors will be printed prior to the final error by Gnuastro.
@node Options, , Arguments, Arguments and options
@subsubsection Options
@cindex GNU style options
@cindex Options, GNU style
@cindex Options, short (@option{-}) and long (@option{--})
In all GNU/Linux applications, command-line options allow you to configure the behavior of a program for each particular execution on a particular input dataset.
A single option can be called in two ways: @emph{long} or @emph{short}.
All options in Gnuastro accept the long format, which has two hyphens and can have many characters (for example, @option{--hdu}).
Short options only have one hyphen (@key{-}) followed by one character (for example, @option{-h}).
You can see some examples in the list of options in @ref{Common options} or those for each program's ``Invoking ProgramName'' section.
Both formats are shown for the options that support both.
First the short format is shown, then the long.
Usually, the short options are handy when you are writing on the command-line and want to save keystrokes and time.
The long options are good for shell scripts, where you are not usually rushing.
Long options provide a level of documentation, since they are more descriptive and less cryptic.
Usually after a few months of not running a program, the short options will be forgotten and reading your previously written script will not be easy.
@cindex On/Off options
@cindex Options, on/off
Some options need to be given a value if they are called and some do not.
You can think of the latter type of options as on/off options.
These two types of options can be distinguished using the output of the @option{--help} and @option{--usage} options, which are common to all GNU software, see @ref{Getting help}.
In Gnuastro we use the following strings to specify when the option needs a value and what format that value should be in.
More specific tests will be done in the program and if the values are out of range (for example, negative when the program only wants a positive value), an error will be reported.
@vtable @option
@item INT
The value is read as an integer.
@item FLT
The value is read as a float.
There are generally two types, depending on the context.
If they are for fractions, they will have to be less than or equal to unity.
@item STR
The value is read as a string of characters.
For example, column names in a table, or HDU names in a multi-extension FITS file.
Other examples include human-readable settings by some programs like the @option{--domain} option of the Convolve program that can be either @code{spatial} or @code{frequency} (to specify the type of convolution, see @ref{Convolve}).
@item FITS @r{or} FITS/TXT
The value should be a file (most commonly FITS).
In many cases, other formats may also be accepted (for example, input tables can be FITS or plain-text, see @ref{Recognized table formats}).
@end vtable
@noindent
@cindex Values to options
@cindex Option values
To specify a value in the short format, simply put the value after the option.
Note that since the short options are only one character long, you do not have to type anything between the option and its value.
For the long option you either need white space or an @option{=} sign, for example, @option{-h2}, @option{-h 2}, @option{--hdu 2} or @option{--hdu=2} are all equivalent.
The short format of on/off options (those that do not need values) can be concatenated for example, these two hypothetical sequences of options are equivalent: @option{-a -b -c4} and @option{-abc4}.
As an example, consider the following command to run Crop:
@example
$ astcrop -Dr3 --wwidth 3 catalog.txt --deccol=4 ASTRdata
@end example
@noindent
The @command{$} is the shell prompt, @command{astcrop} is the program name.
There are two arguments (@command{catalog.txt} and @command{ASTRdata}) and four options, two of them given in short format (@option{-D}, @option{-r}) and two in long format (@option{--wwidth} and @option{--deccol}).
Three of them require a value and one (@option{-D}) is an on/off option.
@vindex --printparams
@cindex Options, abbreviation
@cindex Long option abbreviation
If an abbreviation is unique between all the options of a program, the long option names can be abbreviated.
For example, instead of typing @option{--printparams}, typing @option{--print} or maybe even @option{--pri} will be enough; if there are conflicts, the program will warn you and show you the alternatives.
Finally, if you want the argument parser to stop parsing arguments beyond a certain point, you can use two dashes: @option{--}.
No text on the command-line beyond these two dashes will be parsed.
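For example, a file name starting with a hyphen would normally be mistaken for an option; everything after @option{--} is parsed as an argument (a sketch, assuming such a file exists):

@example
$ astfits -- -strange-name.fits
@end example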
@cindex Repeated options
@cindex Options, repeated
Gnuastro has two types of options with values, those that only take a single value are the most common type.
If these options are repeated or called more than once on the command-line, the value of the last time it was called will be assigned to it.
This is very useful when you are testing/experimenting.
Let's say you want to make a small modification to one option value.
You can simply type the option with a new value in the end of the command and see how the script works.
If you are satisfied with the change, you can remove the original option for human readability.
If the change was not satisfactory, you can remove the one you just added and not worry about forgetting the original value.
Without this capability, you would have to memorize or save the original value somewhere else, run the command and then change the value again, which is not at all convenient and can potentially cause many bugs.
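For example, in the sketch below, the first @option{--hdu} value is ignored; the program only sees @option{--hdu=2}:

@example
$ astfits image.fits --hdu=1 --hdu=2
@end example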
On the other hand, some options can be called multiple times in one run of a program and can thus take multiple values (for example, see the @option{--column} option in @ref{Invoking asttable}).
In these cases, the order of stored values is the same order that you specified on the command-line.
@cindex Configuration files
@cindex Default option values
Gnuastro's programs do not keep any internal default values, so some options are mandatory and if they do not have a value, the program will complain and abort.
Most programs have many such options and typing them by hand on every call is impractical.
To facilitate the user experience, after parsing the command-line, Gnuastro's programs read special configuration files to get the necessary values for the options you have not specified on the command-line.
These configuration files are fully described in @ref{Configuration files}.
@cartouche
@noindent
@cindex Tilde expansion as option values
@strong{CAUTION:} In specifying a file address, if you want to use the shell's tilde expansion (@command{~}) to specify your home directory, leave at least one space between the option name and your value.
For example, use @command{-o ~/test}, @command{--output ~/test} or @command{--output= ~/test}.
Calling them with @command{-o~/test} or @command{--output=~/test} will disable shell expansion.
@end cartouche
@cartouche
@noindent
@strong{CAUTION:} If you forget to specify a value for an option which requires one, and that option is the last one, Gnuastro will warn you.
But if it is in the middle of the command, it will take the text of the next option or argument as the value which can cause undefined behavior.
@end cartouche
@cartouche
@noindent
@cindex Counting from zero.
@strong{NOTE:} In some contexts Gnuastro's counting starts from 0 and in others 1.
You can assume by default that counting starts from 1, if it starts from 0 for a special option, it will be explicitly mentioned.
@end cartouche
@node Common options, Shell TAB completion, Arguments and options, Command-line
@subsection Common options
@cindex Options common to all programs
@cindex Gnuastro common options
To facilitate the job of users and developers, all the programs in Gnuastro share a set of basic command-line options that are common to many of the programs.
The full list is classified as @ref{Input output options}, @ref{Processing options}, and @ref{Operating mode options}.
In some programs, some of the options are irrelevant, but still recognized (you will not get an unrecognized option error, but the value is not used).
Unless otherwise mentioned, these options are identical between all programs.
@menu
* Input output options:: Common input/output options.
* Processing options:: Options for common processing steps.
* Operating mode options:: Common operating mode options.
@end menu
@node Input output options, Processing options, Common options, Common options
@subsubsection Input/Output options
These options concern the inputs and outputs of the various programs.
@vtable @option
@cindex Timeout
@cindex Standard input
@item --stdintimeout
Number of micro-seconds to wait for writing/typing in the @emph{first line} of standard input from the command-line (see @ref{Standard input}).
This is only relevant for programs that also accept input from the standard input, @emph{and} you want to manually write/type the contents on the terminal.
When the standard input is already connected to a pipe (output of another program), there will not be any waiting (hence no timeout, thus making this option redundant).
If the first line-break (for example, with the @key{ENTER} key) is not provided before the timeout, the program will abort with an error that no input was given.
Note that this time interval is @emph{only} for the first line that you type.
Once the first line is given, the program will assume that more data will come and accept the rest of your inputs without any time limit.
You need to specify the ending of the standard input, for example, by pressing @key{CTRL-D} after a new line.
Note that any input you write/type into a program on the command-line with Standard input will be discarded (lost) once the program is finished.
It is only recoverable manually from your command-line (where you actually typed) as long as the terminal is open.
So only use this feature when you are sure that you do not need the dataset (or have a copy of it somewhere else).
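For example, to give yourself 10 seconds for typing the first line (a sketch; @command{asttable} is one of the programs that read the standard input):

@example
$ asttable --stdintimeout=10000000
@end example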
@cindex HDU
@cindex Header data unit
@item -h STR/INT
@itemx --hdu=STR/INT
The name or number of the desired Header Data Unit, or HDU, in the FITS image.
A FITS file can store multiple HDUs or extensions, each with either an image or a table or nothing at all (only a header).
Note that counting of the extensions starts from 0 (zero), not 1 (one).
Counting from 0 is forced on us by CFITSIO which directly reads the value you give with this option (see @ref{CFITSIO}).
When specifying the name, case is not important so @command{IMAGE}, @command{image} or @command{ImAgE} are equivalent.
CFITSIO has many capabilities to help you find the extension you want, far beyond the simple extension number and name.
See CFITSIO manual's ``HDU Location Specification'' section for a very complete explanation with several examples.
A @code{#} is appended to the string you specify for the HDU@footnote{With the @code{#} character, CFITSIO will only read the desired HDU into your memory, not all the existing HDUs in the FITS file.} and the result is put in square brackets and appended to the FITS file name before calling CFITSIO to read the contents of the HDU for all the programs in Gnuastro.
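For example, the two sketches below select an HDU by number (counting from 0) and by name:

@example
$ astfits image.fits --hdu=2          ## Third HDU of the file.
$ asttable table.fits --hdu=OBJECTS   ## Hypothetical HDU name.
@end example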
@cartouche
@noindent
@strong{Default HDU is HDU number 1 (counting from 0):} by default, Gnuastro’s programs assume that their (main/first) input is in HDU number 1 (counting from zero).
So if you don’t specify the HDU number, the program will read the input from this HDU.
For programs that can take multiple FITS datasets as input (like @ref{Arithmetic}) this default HDU applies to the first input; you still need to call @option{--hdu} for the other inputs.
Generally, all Gnuastro's programs write their outputs in HDU number 1 (HDU 0 is reserved for metadata like the configuration parameters that the program was run with).
For more on this, see @ref{Fits}.
@end cartouche
@item -s STR
@itemx --searchin=STR
Where to match/search for columns when the column identifier was not a number, see @ref{Selecting table columns}.
The acceptable values are @command{name}, @command{unit}, or @command{comment}.
This option is only relevant for programs that take table columns as input.
@item -I
@itemx --ignorecase
Ignore case while matching/searching column meta-data (in the field specified by the @option{--searchin}).
The FITS standard suggests treating the column names as case-insensitive, which is strongly recommended here also, but is not enforced.
This option is only relevant for programs that take table columns as input.
This option is not relevant to @ref{BuildProgram}, hence in that program the short option @option{-I} is used for include directories, not to ignore case.
@item -o STR
@itemx --output=STR
The name of the output file or directory. With this option the automatic output names explained in @ref{Automatic output} are ignored.
@item -T STR
@itemx --type=STR
The data type of the output depending on the program context.
This option is not applicable to some programs like @ref{Fits} and will be ignored by them.
The different acceptable values to this option are fully described in @ref{Numeric data types}.
@item -D
@itemx --dontdelete
By default, if the output file already exists, Gnuastro's programs will silently delete it and put their own outputs in its place.
When this option is activated, if the output file already exists, the programs will not delete it, will warn you, and will abort.
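For example, assuming @file{out.fits} was created by the first command below, the second will not overwrite it; it will warn and abort (a sketch):

@example
$ asttable cat.fits --output=out.fits
$ asttable cat.fits --output=out.fits --dontdelete
@end example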
@item -K
@itemx --keepinputdir
In automatic output names, do not remove the directory information of the input file names.
As explained in @ref{Automatic output}, if no output name is specified (with @option{--output}), then the output name will be made in the existing directory based on your input's file name (ignoring the directory of the input).
If you call this option, the directory information of the input will be kept and the automatically generated output name will be in the same directory as the input (usually with a suffix added).
Note that this is only relevant if you are running the program in a different directory than the input data.
@item -t STR
@itemx --tableformat=STR
The output table's type.
This option is only relevant when the output is a table and its format cannot be deduced from its filename.
For example, if a name ending in @file{.fits} was given to @option{--output}, then the program knows you want a FITS table.
But there are two types of FITS tables: FITS ASCII, and FITS binary.
Thus, with this option, the program is able to identify which type you want.
The currently recognized values to this option are:
@table @command
@item txt
A plain text table with white-space characters between the columns (see
@ref{Gnuastro text table format}).
@item fits-ascii
A FITS ASCII table (see @ref{Recognized table formats}).
@item fits-binary
A FITS binary table (see @ref{Recognized table formats}).
@end table
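For example, the sketch below writes the same table once as a FITS ASCII table and once as a FITS binary table:

@example
$ asttable cat.fits --output=ascii.fits --tableformat=fits-ascii
$ asttable cat.fits --output=bin.fits --tableformat=fits-binary
@end example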
@item --wcslinearmatrix=STR
Select the linear transformation matrix of the output's WCS.
This option only takes two values: @code{pc} (for the @code{PCi_j} formalism) and @code{cd} (for @code{CDi_j}).
For more on the different formalisms, please see Section 8.1 of the FITS standard@footnote{@url{https://fits.gsfc.nasa.gov/standard40/fits_standard40aa-le.pdf}}, version 4.0.

@cindex @code{CDELT}
In short, in the @code{PCi_j} formalism, we only keep the linear rotation matrix in these keywords and put the scaling factor (or the pixel scale in astronomical imaging) in the @code{CDELTi} keywords.
In the @code{CDi_j} formalism, we blend the scaling and the rotation into a single matrix and keep that matrix in these FITS keywords.
By default, Gnuastro uses the @code{PCi_j} formalism, because it greatly helps the human readability of the raw keywords and is also the default mode of WCSLIB.
However, in some circumstances it may be necessary to have the keywords in the CD format; for example, when you need to feed the outputs into other software that do not follow the full FITS standard and only recognize the @code{CDi_j} formalism.
@item --outfitsnoconfig
Do not write any of the program's metadata (option values or versions and dates) into the 0-th HDU of the output FITS file, see @ref{Output FITS files}.
@item --outfitsnodate
Do not write the @code{DATE} or @code{DATEUTC} keywords into the 0-th HDU of the output FITS file, see @ref{Output FITS files}.
@item --outfitsnocommit
Do not write the @code{COMMIT} keyword into the 0-th HDU of the output FITS file, see @ref{Output FITS files}.
@item --outfitsnoversions
Do not write the versions of any dependency software into the 0-th HDU of the output FITS file, see @ref{Output FITS files}.
@end vtable
@node Processing options, Operating mode options, Input output options, Common options
@subsubsection Processing options
Some processing steps are common to several programs, so they are defined as common options to all programs.
Note that this class of common options is thus necessarily less common between all the programs than those described in @ref{Input output options} or @ref{Operating mode options}.
Also, if they are irrelevant for a program, these options will not be displayed in the @option{--help} output of the program.
@table @option
@item --minmapsize=INT
The minimum size (in bytes) to memory-map a processing/internal array as a file (on the non-volatile HDD/SSD), and not use the system's RAM.
Before using this option, please read @ref{Memory management}.
By default processing arrays will only be memory-mapped to a file when the RAM is full.
With this option, you can force the memory-mapping, even when there is enough RAM.
To ensure this default behavior, the pre-defined value to this option is an extremely large value (larger than any existing RAM).
Please note that using a non-volatile file (on the HDD/SSD) instead of RAM can significantly increase the program's running time, especially on HDDs (where read/write is slower).
Also, note that the number of memory-mapped files that your kernel can support is limited.
So when this option is necessary, it is best to give it values larger than 1 megabyte (@option{--minmapsize=1000000}).
You can then decrease it for a specific program's invocation on a large input after you see memory issues arise (for example, an error, or the program not aborting and fully consuming your memory).
If you see randomly named files remaining in the directory where the memory-mapped files were created when the program finishes normally, please send us a bug report so we can address the problem, see @ref{Report a bug}.
@cartouche
@noindent
@strong{Limited number of memory-mapped files:} The operating system kernels usually support a limited number of memory-mapped files.
Therefore never set @code{--minmapsize} to zero or a small number of bytes (which would create too many memory-mapped files).
If the kernel capacity is exceeded, the program will crash.
@end cartouche
@item --quietmmap
Do not print any message when an array is stored in non-volatile memory
(HDD/SSD) and not RAM, see the description of @option{--minmapsize} (above)
for more.
@item -Z INT[,INT[,...]]
@itemx --tilesize=INT[,INT[,...]]
The size of regular tiles for tessellation, see @ref{Tessellation}.
For each dimension an integer length (in units of data-elements or pixels) is necessary.
If the number of input dimensions is different from the number of values given to this option, the program will stop with an error.
Values must be separated by commas (@key{,}) and can also be fractions (for example, @code{4/2}).
If they are fractions, the result must be an integer, otherwise an error will be printed.
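For example, both sketches below request tiles of @mymath{50\times30} pixels on a 2D image (the second uses fractions):

@example
$ astnoisechisel image.fits --tilesize=50,30
$ astnoisechisel image.fits --tilesize=100/2,60/2
@end example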
@item -M INT[,INT[,...]]
@itemx --numchannels=INT[,INT[,...]]
The number of channels for larger input tessellation, see @ref{Tessellation}.
The number and types of acceptable values are similar to @option{--tilesize}.
The only difference is that instead of a length, the integer values given to this option represent the @emph{number} of channels, not their size.
@item -F FLT
@itemx --remainderfrac=FLT
The fraction of remainder size along all dimensions to add to the first tile.
See @ref{Tessellation} for a complete description.
This option is only relevant if @option{--tilesize} is not exactly divisible by the input dataset's size in a dimension.
If the remainder size is larger than this fraction (compared to @option{--tilesize}), then the remainder size will be added with one regular tile size and divided between two tiles at the start and end of the given dimension.
@item --workoverch
Ignore the channel borders for the high-level job of the given application.
As a result, while the channel borders are respected in defining the small tiles (such that no tile will cross a channel border), the higher-level program operation will ignore them, see @ref{Tessellation}.
@item --checktiles
Make a FITS file with the same dimensions as the input but each pixel is replaced with the ID of the tile that it is associated with.
Note that the tile IDs start from 0.
See @ref{Tessellation} for more on Tiling an image in Gnuastro.
@item --oneelempertile
When showing the tile values (for example, with @option{--checktiles}, or when the program's output is tessellated) only use one element for each tile.
This can be useful when only the relative values given to each tile compared to the rest are important or need to be checked.
Since the tiles usually have a large number of pixels within them, the output will be much smaller, and so easier to read, write, store, or send.
Note that when the full input size in any dimension is not exactly divisible by the given @option{--tilesize} in that dimension, the edge tile(s) will have different sizes (in units of the input's size), see @option{--remainderfrac}.
But with this option, all displayed values are going to have the (same) size of one data-element.
Hence, in such cases, the image proportions are going to be slightly different with this option.
If your input image is not exactly divisible by the tile size and you want one value per tile for some higher-level processing, all is not lost though.
You can see how many pixels were within each tile (for example, to weight the values or discard some for later processing) with Gnuastro's Statistics (see @ref{Statistics}) as shown below.
The output FITS file is going to have two extensions, one with the median calculated on each tile and one with the number of elements that each tile covers.
You can then use the @code{where} operator in @ref{Arithmetic} to set the values of all tiles that do not have the regular area to a blank value.
@example
$ aststatistics --median --number --ontile input.fits \
--oneelempertile --output=o.fits
$ REGULAR_AREA=1600 # Check second extension of `o.fits'.
$ astarithmetic o.fits o.fits $REGULAR_AREA ne nan where \
-h1 -h2
@end example
Note that if @file{input.fits} also has blank values, then the median on
tiles with blank values will also be ignored with the command above (which
is desirable).
@item --interponlyblank
When values are to be interpolated, only change the values of the blank
elements, keep the non-blank elements untouched.
@item --interpmetric=STR
@cindex Radial metric
@cindex Taxicab metric
@cindex Manhattan metric
@cindex Metric: Manhattan, Taxicab, Radial
The metric to use for finding nearest neighbors.
Currently it only accepts the Manhattan (or taxicab) metric with @code{manhattan}, or the radial metric with @code{radial}.
The Manhattan distance between two points is defined with @mymath{|\Delta{x}|+|\Delta{y}|}.
The Manhattan metric has the advantage of being fast to calculate, but at the expense of being less accurate.
The radial distance is the standard definition of distance in a Euclidean space: @mymath{\sqrt{\Delta{x}^2+\Delta{y}^2}}.
It is accurate, but the multiplication and square root can slow down the processing.
@item --interpnumngb=INT
The number of nearby non-blank neighbors to use for interpolation.
@end table
@node Operating mode options, , Processing options, Common options
@subsubsection Operating mode options
Another group of options that are common to all the programs in Gnuastro are those to do with the general operation of the programs.
The explanations for those that are not limited to Gnuastro, but are common to all GNU programs, start with (GNU option).
@vtable @option
@item --
(GNU option) Stop parsing the command-line.
This option can be useful in scripts or when using the shell history.
Suppose you have a long list of options, and want to see if removing some of them (to read from configuration files, see @ref{Configuration files}) can give a better result.
If the ones you want to remove are the last ones on the command-line, you do not have to delete them; you can just add @option{--} before them, and if you do not get what you want, you can remove the @option{--} and get the same initial result.
@item --usage
(GNU option) Only print the options and arguments and abort.
This is very useful when you know what the options do, and have just forgotten their long/short identifiers, see @ref{--usage}.
@item -?
@itemx --help
(GNU option) Print all options with an explanation and abort.
Adding this option will print all the options in their short and long formats, also displaying which ones need a value if they are called (with an @option{=} after the long format followed by a string specifying the format, see @ref{Options}).
A short explanation is also given for what the option is for.
The program will quit immediately after the message is printed and will not do any form of processing, see @ref{--help}.
@item -V
@itemx --version
(GNU option) Print a short message, showing the full name, version, copyright information and program authors and abort.
On the first line, it will print the official name (not executable name) and version number of the program.
Following this is a blank line and the copyright information.
The program will not run.
@item -q
@itemx --quiet
Do not report steps.
All the programs in Gnuastro that have multiple major steps will report their steps for you to follow while they are operating.
If you do not want to see these reports, you can call this option and only error/warning messages will be printed.
If the steps are done very fast (depending on the properties of your input) disabling these reports will also decrease running time.
@item --cite
Print all necessary information to cite and acknowledge Gnuastro in your published papers.
With this option, the programs will print the Bib@TeX{} entry to include in your paper for Gnuastro in general, and the particular program's paper (if that program comes with a separate paper).
It will also print the necessary acknowledgment statement to add in the respective section of your paper and it will abort.
For a more complete explanation, please see @ref{Acknowledgments and short history}.
Citations and acknowledgments are vital for the continued work on Gnuastro.
Gnuastro started, and is continued, based on separate research projects.
So if you find any of the tools offered in Gnuastro to be useful in your research, please use the output of this command to cite and acknowledge the program (and Gnuastro) in your research paper.
Thank you.
Gnuastro is still new; there is no separate paper devoted only to Gnuastro yet.
Therefore, currently the paper to cite for Gnuastro is the NoiseChisel paper, which is the first published paper introducing Gnuastro to the astronomical community.
Upon reaching a certain point, a paper completely devoted to describing Gnuastro's many functionalities will be published, see @ref{GNU Astronomy Utilities 1.0}.
@item -P
@itemx --printparams
With this option, Gnuastro's programs will read your command-line options and all the configuration files.
If there is no problem (like a missing parameter or a value in the wrong format or range), then immediately before actually running, the programs will print the full list of option names, values and descriptions, sorted and grouped by context, and abort.
They will also report the version number, the date they were configured on your system and the time they were reported.
As an example, you can give your full command-line options and even the input and output file names and finally just add @option{-P} to check if all the parameters are correctly set.
If everything is OK, you can just run the same command (easily retrieved from the shell history, with the top arrow key) and simply remove the last two characters that showed this option.
No program will actually start its processing when this option is called.
The otherwise mandatory arguments for each program (for example, input image or catalog files) are no longer required when you call this option.
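For example, repeating the Crop command of @ref{Arguments and options} with @option{-P} will only print the final option values; no cropping is done:

@example
$ astcrop --center=53.162551,-27.789676 -w10/3600 --mode=wcs \
          udf.fits -P
@end example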
@item --config=STR
Parse @option{STR} as a configuration file name, immediately when this option is confronted (see @ref{Configuration files}).
The @option{--config} option can be called multiple times in one run of any Gnuastro program on the command-line or in the configuration files.
In any case, it will be immediately read (before parsing the rest of the options on the command-line, or lines in a configuration file).
If the given file does not exist or cannot be read for any reason, the program will print a warning and continue its processing.
The warning can be suppressed with @option{--quiet}.
Note that by definition, options on the command-line still take precedence over those in any configuration file, including the file(s) given to this option if they are called before it.
Also see @option{--lastconfig} and @option{--onlyversion} on how this option can be used for reproducible results.
You can use @option{--checkconfig} (below) to check/confirm the parsing of configuration files.
@item --checkconfig
Print options and their values, within the command-line or configuration files, as they are parsed (see @ref{Configuration file precedence}).
If an option has already been set, or is ignored by the program, this option will also inform you with special values like @code{--ALREADY-SET--}.
Only options that are parsed after this option are printed, so to see the parsing of all input options, it is recommended to put this option immediately after the program name before any other options.
@cindex Debug
This is a very good option to confirm where the value of each option has been defined in scenarios where there are multiple configuration files (for debugging).
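For example, note how @option{--checkconfig} comes immediately after the program name in this sketch (so the parsing of all later options is reported):

@example
$ astnoisechisel --checkconfig image.fits --tilesize=20,20
@end example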
@item --config-prefix=STR
Accept option names in configuration files that start with the given prefix.
Since order matters when reading custom configuration files, this option should be called @strong{before} the @option{--config} option(s) that contain options with the given prefix.
This option does not affect the options within configuration files that have the standard name (without a prefix).
This gives unique features to Gnuastro's configuration files, especially in large pipelines.
Let's demonstrate this with the simple scenario below.
You have multiple configuration files for different instances of one program (let's assume @file{nc-a.conf} and @file{nc-b.conf}).
At the same time, you want to load all the option names/values into your shell as environment variables (for example with @code{source}).
This happens when you want to use the options as shell variables in other parts of your pipeline.
If the two configuration files have different values for the same option (as shown below), and you don't use @code{--config-prefix}, the shell will over-write the common option values between the configuration files.
But thanks to @code{--config-prefix}, you can give a different prefix for the different instances of the same option in different configuration files.
@example
$ cat nc-a.conf
a_tilesize=20,20
$ cat nc-b.conf
b_tilesize=40,40
## Load configuration files as shell scripts (to define the
## option name and values as shell variables with values).
## Just note that 'source' only takes one file at a time.
$ for c in nc-*.conf; do source $c; done
$ astnoisechisel img.fits \
--config=nc-a.conf --config-prefix=a_
$ echo "NoiseChisel run with --tilesize=$a_tilesize"
$ astnoisechisel img.fits \
--config=nc-b.conf --config-prefix=b_
$ echo "NoiseChisel run with --tilesize=$b_tilesize"
@end example
@item -S
@itemx --setdirconf
Update the current directory configuration file for the Gnuastro program and quit.
The full set of command-line and configuration file options will be parsed and options with a value will be written in the current directory configuration file for this program (see @ref{Configuration files}).
If the configuration file or its directory does not exist, it will be created.
If a configuration file exists, it will be replaced (after it, and all other configuration files, have been read).
In any case, the program will not run.
This is the recommended method@footnote{Alternatively, you can use your favorite text editor.} to edit/set the configuration file for all future calls to Gnuastro's programs.
It will internally check if your values are in the correct range and type and save them according to the configuration file format, see @ref{Configuration file format}.
So if there are unreasonable values to some options, the program will notify you and abort before writing the final configuration file.
When this option is called, the otherwise mandatory arguments, for
example input image or catalog file(s), are no longer mandatory (since
the program will not run).
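For example, the sketch below saves a tile size in the current directory's configuration file for NoiseChisel and quits; later calls to @command{astnoisechisel} in this directory will use it as the default:

@example
$ astnoisechisel --tilesize=20,20 --setdirconf
@end example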
@item -U
@itemx --setusrconf
Update the user configuration file and quit (see @ref{Configuration files}).
See explanation under @option{--setdirconf} for more details.
@item --lastconfig
This is the last configuration file that must be read.
When this option is confronted in any stage of reading the options (on the command-line or in a configuration file), no other configuration file will be parsed, see @ref{Configuration file precedence} and @ref{Current directory and User wide}.
Like all on/off options, on the command-line, this option does not take any values.
But in a configuration file, it takes the values of @option{0} or @option{1}, see @ref{Configuration file format}.
If it is present in a configuration file with a value of @option{0}, then all later occurrences of this option will be ignored.
@item --onlyversion=STR
Only run the program if Gnuastro's version is exactly equal to @option{STR} (see @ref{Version numbering}).
Note that it is not compared as a number, but as a string of characters; so @option{0}, @option{0.0} and @option{0.00} are different.
If the running Gnuastro version is different, then this option will report an error and abort as soon as it is confronted on the command-line or in a configuration file.
If the running Gnuastro version is the same as @option{STR}, then the program will run as if this option was not called.
This is useful if you want your results to be exactly reproducible and not mistakenly run with an updated/newer or older version of the program.
Besides internal algorithmic/behavior changes in programs, the existence of options or their names might change between versions (especially in these earlier versions of Gnuastro).
Hence, when using this option (probably in a script or in a configuration file), be sure to call it before other options.
The benefit is that, when the version differs, the other options will not be parsed and you, or your collaborators/users, will not get errors saying an option in your configuration does not exist in the running version of the program.
Here is one example of how this option can be used in conjunction with the @option{--lastconfig} option.
Let's assume that you were satisfied with the results of this command: @command{astnoisechisel image.fits --snquant=0.95} (along with various options set in various configuration files).
You can save the state of NoiseChisel and reproduce that exact result on @file{image.fits} later by following these steps (the extra spaces, and @key{\}, are only for easy readability, if you want to try it out, only one space between each token is enough).
@example
$ echo "onlyversion X.XX" > reproducible.conf
$ echo "lastconfig 1" >> reproducible.conf
$ astnoisechisel image.fits --snquant=0.95 -P \
>> reproducible.conf
@end example
@option{--onlyversion} was available from Gnuastro 0.0, so putting it immediately at the start of a configuration file will ensure that later, you (or others using a different version) will not get a non-recognized option error in case an option was added/removed.
@option{--lastconfig} will inform the installed NoiseChisel to not parse any other configuration files.
This is done because we do not want the user-wide or system-wide option values to affect our results.
Finally, with the third command, which has a @option{-P} (short for @option{--printparams}), NoiseChisel will print all the option values visible to it (in all the configuration files) and the shell will append them to @file{reproducible.conf}.
Hence, you do not have to worry about remembering the (possibly) different options in the different configuration files.
Afterwards, if you run NoiseChisel as shown below (telling it to read this configuration file with the @option{--config} option), you can be sure that it will either abort with an error (for a version mismatch) or produce exactly the same result that you got before.
@example
$ astnoisechisel --config=reproducible.conf
@end example
@item --log
Some programs can generate extra information about their outputs in a log file.
When this option is called in those programs, the log file will also be printed.
If the program does not generate a log file, this option is ignored.
@cartouche
@noindent
@strong{@option{--log} is not thread-safe}: The log file usually has a fixed name.
Therefore if two simultaneous calls (with @option{--log}) of a program are made in the same directory, the program will try to write to the same file.
This will cause problems like an unreasonable log file, undefined behavior, or a crash.
@end cartouche
@cindex CPU threads, set number
@cindex Number of CPU threads to use
@item -N INT
@itemx --numthreads=INT
Use @option{INT} CPU threads when running a Gnuastro program (see @ref{Multi-threaded operations}).
If the value is zero (@code{0}), or this option is not given on the command-line or any configuration file, the value will be determined at run-time: the maximum number of threads available to the system when you run a Gnuastro program.
Note that multi-threaded programming is only relevant to some programs.
In others, this option will be ignored.
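For example, the sketch below limits NoiseChisel to 4 threads, no matter how many the system provides:

@example
$ astnoisechisel image.fits --numthreads=4
@end example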
@end vtable
@node Shell TAB completion, Standard input, Common options, Command-line
@subsection Shell TAB completion (highly customized)
@cartouche
@noindent
@strong{Under development:} Gnuastro's TAB completion in Bash already greatly improves usage of Gnuastro on the command-line, but it is still under development and not yet complete.
If you are interested in trying it out, please go ahead and activate it (as described below), we encourage this.
But please have in mind that there are known issues@footnote{@url{http://savannah.gnu.org/bugs/index.php?group=gnuastro&category_id=128}} and you may find new issues.
If you do, please get in touch with us as described in @ref{Report a bug}.
TAB completion is currently only implemented in the following programs: Arithmetic, BuildProgram, ConvertType, Convolve, CosmicCalculator, Crop, Fits and Table.
For progress on this task, please see Task 15799@footnote{@url{https://savannah.gnu.org/task/?15799}}.
@end cartouche
@cindex Bash auto-complete
@cindex Completion in the shell
@cindex Bash programmable completion
@cindex Autocomplete (in the shell/Bash)
Bash provides a built-in feature called @emph{programmable completion}@footnote{@url{https://www.gnu.org/software/bash/manual/html_node/Programmable-Completion.html}} to help increase interactive workflow efficiency and minimize the number of keystrokes @emph{and} the need to memorize things.
It is also known as TAB completion, bash completion, auto-completion, or word completion.
Completion is activated by pressing @key{[TAB]} while you are typing a command.
For file arguments this is the default behavior already and you have probably used it a lot with any command-line program.
Besides this simple/default mode, Bash also enables a high level of customization features for its completion.
These features have been extensively used in Gnuastro to improve your work efficiency@footnote{To learn how Gnuastro implements TAB completion in Bash, see @ref{Bash programmable completion}.}.
For example, if you are running @code{asttable} (which only accepts files containing a table), and you press @key{[TAB]}, it will only suggest files containing tables.
As another example, if an option needs image HDUs within a FITS file, pressing @key{[TAB]} will only suggest the image HDUs (and not other possibly existing HDUs that contain tables, or just metadata).
Just note that the file name has to be already given on the command-line before reaching such options (that look into the contents of a file).
But TAB completion is not limited to file types or contents.
Arguments/Options that take certain fixed string values will directly suggest those strings with TAB, and completely ignore the file structure (for example, spectral line names in @ref{Invoking astcosmiccal})!
As another example, the @option{--numthreads} option (to specify the number of threads to use by the program) will find the number of available threads on the system, and suggest the possible numbers with a TAB!
To activate Gnuastro's custom TAB completion in Bash, you need to put the following line in one of your Bash startup files (for example, @file{~/.bashrc}).
If you installed Gnuastro using the steps of @ref{Quick start}, you should have already done this (the command just after @command{sudo make install}).
For a list of (and discussion on) Bash startup files and installation directories see @ref{Installation directory}.
Of course, if Gnuastro was installed in a custom location, replace the `@file{/usr/local}' part of the line below with the value that was given to @option{--prefix} during Gnuastro's configuration@footnote{In case you do not know the installation directory of Gnuastro on your system, you can find out with this command: @code{which astfits | sed -e"s|/bin/astfits||"}}.
@example
# Enable Gnuastro's TAB completion
source /usr/local/share/gnuastro/completion.bash
@end example
After adding the line above in a Bash startup file, TAB completion will always be activated in any new terminal.
To see if it has been activated, try it out with @command{asttable [TAB][TAB]} and @command{astarithmetic [TAB][TAB]} in a directory that contains tables and images.
The first will only suggest the files with a table, and the second, only those with an image.
@cartouche
@noindent
@strong{TAB completion only works with long option names:}
As described above, short options are much more complex to generalize, therefore TAB completion is only available for long options.
But do not worry!
TAB completion also involves option names, so if you just type @option{--a[TAB][TAB]}, you will get the list of options that start with an @option{--a}.
Therefore, as a side-effect of TAB completion, your commands will be far more human-readable with minimal keystrokes.
@end cartouche
@node Standard input, Shell tips, Shell TAB completion, Command-line
@subsection Standard input
@cindex Standard input
@cindex Stream: standard input
The most common way to feed the primary/first input dataset into a program is to give its filename as an argument (discussed in @ref{Arguments}).
When you want to run a series of programs in sequence, this means that you have to keep the output of each program in a separate file and re-type that file's name in the next command.
This can be very slow and frustrating (for example, by mis-typing a file's name).
@cindex Standard output stream
@cindex Stream: standard output
To solve the problem, the founders of Unix defined pipes to directly feed the output of one program (its ``Standard output'' stream) into the ``standard input'' of a next program.
This removes the need to make temporary files between separate processes and became one of the best demonstrations of the Unix-way, or Unix philosophy.
Every program has three streams identifying where it reads/writes non-file inputs/outputs: @emph{Standard input}, @emph{Standard output}, and @emph{Standard error}.
When a program is called alone, all three are directed to the terminal that you are using.
If it needs an input, it will prompt you for one and you can type it in.
Or, it prints its results in the terminal for you to see.
For example, say you have a FITS table/catalog containing the B and V band magnitudes (@code{MAG_B} and @code{MAG_V} columns) of a selection of galaxies along with many other columns.
If you want to see only these two columns in your terminal, you can use Gnuastro's @ref{Table} program as shown below:
@example
$ asttable cat.fits -cMAG_B,MAG_V
@end example
Through the Unix pipe mechanism, when the shell confronts the pipe character (@key{|}), it connects the standard output of the program before the pipe, to the standard input of the program after it.
So it is literally a ``pipe'': everything that you would see printed by the first program on the command (without any pipe), is now passed to the second program (and not seen by you).
@cindex AWK
@cindex GNU AWK
To continue the previous example, let's say you want to see the B-V color.
To do this, you can pipe Table's output to AWK (a wonderful tool for processing things like plain text tables):
@example
$ asttable cat.fits -cMAG_B,MAG_V | awk '@{print $1-$2@}'
@end example
But understanding the distribution by visually seeing all the numbers under each other is not too useful! You can therefore feed this single-column information into @ref{Statistics} to get a general feeling of the distribution with the same command:
@example
$ asttable cat.fits -cMAG_B,MAG_V | awk '@{print $1-$2@}' | aststatistics
@end example
Gnuastro's programs that accept input from standard input, only look into the Standard input stream if there is no first argument.
In other words, arguments take precedence over Standard input.
When no argument is provided, the programs check if the standard input stream is already full or not (output from another program is waiting to be used).
If data is present in the standard input stream, it is used.
When the standard input is empty, the program will wait @option{--stdintimeout} micro-seconds for you to manually enter the first line (ending with a new-line character, or the @key{ENTER} key, see @ref{Input output options}).
If it detects the first line in this time, there is no more time limit, and you can manually write/type all the lines for as long as it takes.
To inform the program that Standard input has finished, press @key{CTRL-D} after a new line.
If the program does not catch the first line before the time-out finishes, it will abort with an error saying that no input was provided.
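For example, the first sketch below pipes a small plain-text table into Table (no waiting or timeout is involved), while the second makes Table wait for you to type the rows manually:

@example
$ printf "1 2\n3 4\n" | asttable -c2
$ asttable -c2        ## Type rows, then CTRL-D on a new line.
@end example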
@cartouche
@noindent
@strong{Manual input in Standard input is discarded:}
Be careful that when you manually fill the Standard input, the data will be discarded once the program finishes and reproducing the result will be impossible.
Therefore this form of providing input is only good for temporary tests.
@end cartouche
@cartouche
@noindent
@strong{Standard input currently only for plain text:}
Currently Standard input only works for plain text inputs like the example above.
We will later also allow FITS files to be given to the programs through the standard input.
@end cartouche
@node Shell tips, , Standard input, Command-line
@subsection Shell tips
Gnuastro's programs are primarily meant to be run on the command-line shell environment.
In this section, we will review some useful tips and tricks that can be helpful in the pipelines that you run.
@menu
* Separate shell variables for multiple outputs:: When you get values from one command.
* Truncating start of long string FITS keyword values:: When the end of the string matters.
@end menu
@node Separate shell variables for multiple outputs, Truncating start of long string FITS keyword values, Shell tips, Shell tips
@subsubsection Separate shell variables for multiple outputs
Sometimes your commands print multiple values and you want to use them as different shell variables.
Let's describe the problem (shown in the box below) with an example (that you can reproduce without any external data).
With the commands below, we'll first make a noisy (@mymath{\sigma=5}) image (@mymath{100\times100} pixels) using @ref{Arithmetic}.
Then, we'll measure@footnote{The actual printed values by @command{aststatistics} may slightly differ for you.
This is because of a different random number generator seed used in @command{astarithmetic}.
To get an exactly reproducible result, see @ref{Generating random numbers}.} its mean and standard deviation using @ref{Statistics}.
@example
$ astarithmetic 100 100 2 makenew 5 mknoise-sigma -oimg.fits
$ aststatistics img.fits --mean --std
-3.10938611484039e-03 4.99607077069093e+00
@end example
@cartouche
@noindent
@strong{THE PROBLEM:} you want the first number printed above to be stored in a shell variable called @code{my_mean} and the second number to be stored as the @code{my_std} shell variable (you are free to choose any name!).
@end cartouche
@noindent
The first thing that may come to mind is to run Statistics two times, and write the output into separate variables like below:
@example
$ my_std=$(aststatistics img.fits --std) ## NOT SOLUTION! ##
$ my_mean=$(aststatistics img.fits --mean) ## NOT SOLUTION! ##
@end example
@cindex Global warming
@cindex Carbon footprint
But this is not a good solution because as @file{img.fits} becomes larger (more pixels), the time it takes for Statistics to simply load the data into memory can be significant.
This will slow down your pipeline and besides wasting your time, it contributes to global warming (by spending energy on an unnecessary action; take this seriously because your pipeline may scale up to involve thousands of large datasets)!
Furthermore, besides loading of the input data, Statistics (and Gnuastro in general) is designed to do multiple measurements in one pass over the data as much as possible (to further decrease Gnuastro's carbon footprint).
So when given @option{--mean --std}, it will measure both in one pass over the pixels (not two passes!).
In other words, in this case, you get the two measurements for the cost of one.
How do you separate the values from the first @command{aststatistics} command above?
One ugly way is to write the two-number output string into a single shell variable and then separate, or tokenize, the string with two subsequent commands like below:
@c Note that the comments aren't aligned in the Texinfo source because of
@c the '@' characters before the braces of AWK. In the output, they are
@c aligned.
@example
$ meanstd=$(aststatistics img.fits --mean --std) ## NOT SOLUTION! ##
$ my_mean=$(echo $meanstd | awk '@{print $1@}') ## NOT SOLUTION! ##
$ my_std=$(echo $meanstd | awk '@{print $2@}') ## NOT SOLUTION! ##
@end example
@cartouche
@noindent
@cindex Evaluate string as command (@command{eval})
@cindex @command{eval} to evaluate string as command
@strong{SOLUTION:} The solution is to formatted-print (@command{printf}) the numbers as shell variable definitions in a string, and evaluate (@command{eval}) that string as a command:
@example
$ eval "$(aststatistics img.fits --mean --std \
| xargs printf "my_mean=%s; my_std=%s")"
@end example
@end cartouche
@noindent
Let's review the solution (in more detail):
@enumerate
@item
@cindex Standard input
@cindex @command{xargs} (extended arguments)
We pipe the output into @command{xargs}@footnote{For more on @command{xargs}, see @url{https://en.wikipedia.org/wiki/Xargs}.
It will take the standard input (from the pipe in this scenario) and put it as arguments of the next program (@command{printf} in this scenario).
In other words, it is good for programs that don't take input from standard input (@command{printf} in this case; but also includes others like @command{cp}, @command{rm}, or @command{echo}).} (extended arguments) which puts the two numbers it gets from the pipe, as arguments for @command{printf} (formatted print; because @command{printf} doesn't take input from pipes).
@item
Within the @command{printf} call, we write each value after a variable name and equal-sign, and in between them we put a @key{;} (as if it were a shell command).
The @code{%s} tells @command{printf} to print each input as a string (not to interpret it as a number and lose precision).
Here is the output of this phase:
@example
$ aststatistics img.fits --mean --std \
| xargs printf "my_mean=%s; my_std=%s"
my_mean=-3.10938611484039e-03; my_std=4.99607077069093e+00
@end example
@item
But the output above is a string! To evaluate this string as a command, we give it to @command{eval}, as shown above.
@end enumerate
@noindent
After the solution above, you will have the two @code{my_mean} and @code{my_std} variables to use separately in your pipeline:
@example
$ echo $my_mean
-3.10938611484039e-03
$ echo $my_std
4.99607077069093e+00
@end example
@cindex Zsh shell
@cindex Dash shell
@cindex Portable script
This @command{eval}-based solution has been tested in GNU Bash, Dash and Zsh, and it works nicely in all of them (it is ``portable'').
This is because the constructs used here are pretty low-level (and widely available).
For example usages of this technique, see the following sections: @ref{Extracting a single spectrum and plotting it} and @ref{Synthetic narrow-band images}.
@node Truncating start of long string FITS keyword values, , Separate shell variables for multiple outputs, Shell tips
@subsubsection Truncating start of long string FITS keyword values
When you want to put a string (not a number; for example, a file name) into a keyword value, CFITSIO will truncate the end of the string if it is longer than 68 characters.
The number 68 is the maximum allowable string keyword length in the FITS standard@footnote{In the FITS standard, the full length of a keyword (including its name) is 80 characters.
The keyword name occupies 8 characters, which is followed by an @key{=} (1 character).
For strings, we need one SPACE after the @key{=}, and the string should be enclosed in two single quotes.
Accounting for all of these, we get @mymath{80-8-1-1-2=68} available characters.}.
A robust way to solve this problem is to break the keyword into multiple keywords and continue the file name there.
However, especially when dealing with file names, it is usually the last few characters that you want to preserve (the first ones are usually just basic operating system locations).
Below, you can see the three necessary commands to optionally (when the length is too long) truncate such long strings in GNU Bash.
When truncation is necessary, to inform the reader that the value has been truncated, we'll put `@code{...}' at the start of the string.
@verbatim
$ fname="/a/very/long/file/location"
$ if [ ${#fname} -gt 68 ]; then value="...${fname: -65}"; \
else value=$fname; \
fi
$ astfits image.fits --write=KEYNAME,"$value"
@end verbatim
@noindent
These are the core Bash constructs that are used above:
@table @code
@item $@{#fname@}
Returns the length of the value given to the @code{fname} variable.
@item $@{fname: -65@}
Returns the last 65 characters in the value of the @code{fname} variable.
@end table
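@noindent
For example, you can test these two constructs interactively in the shell (the string below is just for illustration):
@example
$ fname="/a/very/long/file/location"
$ echo $@{#fname@}
26
$ echo "$@{fname: -5@}"
ation
@end example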
@node Configuration files, Getting help, Command-line, Common program behavior
@section Configuration files
@cindex @file{etc}
@cindex Configuration files
@cindex Necessary parameters
@cindex Default option values
@cindex File system Hierarchy Standard
Each program needs a certain number of parameters to run.
Supplying all the necessary parameters each time you run the program is very frustrating and prone to errors.
Therefore all the programs read the values for the necessary options you have not given in the command-line from one of several plain text files (which you can view and edit with any text editor).
These files are known as configuration files and are usually kept in a directory named @file{etc/} according to the file system hierarchy
standard@footnote{@url{http://en.wikipedia.org/wiki/Filesystem_Hierarchy_Standard}}.
@vindex --output
@vindex --numthreads
@cindex CPU threads, number
@cindex Internal default value
@cindex Number of CPU threads to use
The thing to have in mind is that none of the programs in Gnuastro keep any internal default value.
All the values must either be stored in one of the configuration files or explicitly called in the command-line.
In case the necessary parameters are not given through any of these methods, the program will print a missing option error and abort.
The only exception to this is @option{--numthreads}, whose default value is determined at run-time using the number of threads available to your system, see @ref{Multi-threaded operations}.
Of course, you can still provide a default value for the number of threads at any of the levels below, but if you do not, the program will not abort.
Also note that through automatic output name generation, the value to the @option{--output} option is also not mandatory on the command-line or in the configuration files for all programs which do not rely on that value as an input@footnote{One example of a program that uses the value given to @option{--output} as an input is ConvertType: that value specifies the type of the output, see @ref{Invoking astconvertt}.}, see @ref{Automatic output}.
@menu
* Configuration file format:: ASCII format of configuration file.
* Configuration file precedence:: Precedence of configuration files.
* Current directory and User wide:: Local and user configuration files.
* System wide:: System wide configuration files.
@end menu
@node Configuration file format, Configuration file precedence, Configuration files, Configuration files
@subsection Configuration file format
@cindex Configuration file suffix
The configuration files for each program have the standard program executable name with a `@file{.conf}' suffix.
When you download the source code, you can find them in the same directory as the source code of each program, see @ref{Program source}.
@cindex White space character
@cindex Configuration file format
Any line in the configuration file whose first non-white character is a @key{#} is considered to be a comment and is ignored.
An empty line is also similarly ignored.
The long name of the option should be used as an identifier.
The option name and option value should be separated by any number of `white-space' characters (space, tab or vertical tab) or an equal (@key{=}).
By default several space characters are used.
If the value of an option has space characters (most commonly for the @option{hdu} option), then the full value can be enclosed in double quotation signs (@key{"}, similar to the example in @ref{Arguments and options}).
If it is an option without a value in the @option{--help} output (on/off option, see @ref{Options}), then the value should be @option{1} if it is to be `on' and @option{0} otherwise.
In each non-commented and non-blank line, any text after the first two words (option identifier and value) is ignored.
If an option identifier is not recognized in the configuration file, the name of the file, the line number of the unrecognized option, and the unrecognized identifier name will be reported and the program will abort.
If a parameter is repeated more than once in the configuration files, accepts only one value, and is not set on the command-line, then only the first value will be used; the rest will be ignored.
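@noindent
For example, a hypothetical @file{astprogname.conf} that follows the rules above could look like this (the option names here are only illustrative):
@example
# Example configuration file (option names are illustrative).

# Input:
hdu          1        # Text after the first two words is ignored.

# Operating mode:
quiet        1        # On/off option: 1 is `on'.
@end example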
@cindex Writing configuration files
@cindex Automatic configuration file writing
@cindex Configuration files, writing
You can create or edit any of these directories and configuration files yourself, using any text editor.
However, it is recommended to use the @option{--setdirconf} and @option{--setusrconf} options to set default values for the current directory or this user, see @ref{Operating mode options}.
With these options, the values you give will be checked before writing in the configuration file.
They will also print a set of commented lines to guide the reader, classify the options based on their context, and write them in a logical order to be more understandable.
@node Configuration file precedence, Current directory and User wide, Configuration file format, Configuration files
@subsection Configuration file precedence
@cindex Configuration file precedence
@cindex Configuration file directories
@cindex Precedence, configuration files
The option values in all the programs of Gnuastro will be filled in the following order.
If an option only takes one value which is given in an earlier step, any value for that option in a later step will be ignored.
Note that if the @option{lastconfig} option is specified in any step below, no other configuration files will be parsed (see @ref{Operating mode options}).
@enumerate
@item
Command-line options, for a particular run of ProgramName.
@item
@file{.gnuastro/astprogname.conf} is parsed by ProgramName in the current directory.
@item
@file{.gnuastro/gnuastro.conf} is parsed by all Gnuastro programs in the current directory.
@item
@file{$HOME/.local/etc/gnuastro/astprogname.conf} is parsed by ProgramName in the user's home directory (see @ref{Current directory and User wide}).
@item
@file{$HOME/.local/etc/gnuastro/gnuastro.conf} is parsed by all Gnuastro programs in the user's home directory (see @ref{Current directory and User wide}).
@item
@file{prefix/etc/gnuastro/astprogname.conf} is parsed by ProgramName in the system-wide installation directory (see @ref{System wide} for @file{prefix}).
@item
@file{prefix/etc/gnuastro/gnuastro.conf} is parsed by all Gnuastro programs in the system-wide installation directory (see @ref{System wide} for @file{prefix}).
@end enumerate
The basic idea behind setting this progressive state of checking for parameter values is that separate users of a computer or separate folders in a user's file system might need different values for some parameters.
@cartouche
@noindent
@strong{Checking the order:}
You can confirm/check the order of parsing configuration files using the @option{--checkconfig} option with any Gnuastro program, see @ref{Operating mode options}.
Just be sure to place this option immediately after the program name, before any other option.
@end cartouche
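@noindent
For example, with a command like the one below (the program and input name are arbitrary), each configuration file will be reported as it is parsed, along with any option values that are taken from it:
@example
$ astfits --checkconfig image.fits
@end example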
As you see above, there can also be a configuration file containing the common options in all the programs: @file{gnuastro.conf} (see @ref{Common options}).
If options specific to one program are specified in this file, there will be unrecognized option errors, or unexpected behavior if the option has different behavior in another program.
On the other hand, there is no problem with @file{astprogname.conf} containing common options@footnote{As an example, the @option{--setdirconf} and @option{--setusrconf} options will also write the common options they have read in their produced @file{astprogname.conf}.}.
@cartouche
@noindent
@strong{Manipulating the order:} You can manipulate this order or add new files with the following two options which are fully described in
@ref{Operating mode options}:
@table @option
@item --config
Allows you to define any file to be parsed as a configuration file on the command-line or within any other configuration file.
Recall that the file given to @option{--config} is parsed immediately when this option is confronted (on the command-line or in a configuration file).
@item --lastconfig
Allows you to stop the parsing of subsequent configuration files.
Note that if this option is given in a configuration file, it will be fully read, so its position in the configuration does not matter (unlike @option{--config}).
@end table
@end cartouche
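@noindent
For example, to have a custom configuration file (at a hypothetical location) parsed immediately after the command-line options of one run, you can use a command like this:
@example
$ astnoisechisel image.fits --config=/path/to/custom.conf
@end example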
One example of benefiting from these configuration files can be this: raw telescope images usually have their main image extension in the second FITS extension, while processed FITS images usually only have one extension.
If your system-wide default input extension is 0 (the first), then when you want to work with the former group of data you have to explicitly mention it to the programs every time.
With this progressive state of default values to check, you can set different default values for the different directories that you would like to run Gnuastro in for your different purposes, so you will not have to worry about this issue any more.
The same can be said about the @file{gnuastro.conf} files: by specifying a behavior in this single file, all Gnuastro programs in the respective directory, user, or system-wide steps will behave similarly.
For example, to keep the input's directory when no specific output is given (see @ref{Automatic output}), or to not delete an existing file if it has the same name as a given output (see @ref{Input output options}).
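For example, to make HDU number 1 (the second extension, counting from 0) the default input HDU for NoiseChisel whenever it is run in the current directory, a command like the one below should be enough (a sketch; see @ref{Operating mode options} for @option{--setdirconf}):
@example
$ astnoisechisel --hdu=1 --setdirconf
@end example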
@node Current directory and User wide, System wide, Configuration file precedence, Configuration files
@subsection Current directory and User wide
@cindex @file{$HOME}
@cindex @file{./.gnuastro/}
@cindex @file{$HOME/.local/etc/gnuastro}
For the current (local) and user-wide directories, the configuration files are stored in the hidden sub-directories named @file{.gnuastro/} and @file{$HOME/.local/etc/gnuastro/} respectively.
Unless you have changed it, the @file{$HOME} environment variable should point to your home directory.
You can check it by running @command{$ echo $HOME}.
Each time you run any of the programs in Gnuastro, this environment variable is read to locate your user-wide configuration files.
So if you suddenly see that your home configuration files are not being read, you (or some other program) have probably changed the value of this environment variable.
@vindex --setdirconf
@vindex --setusrconf
Although it might cause confusion like the above, this dependence on the @file{HOME} environment variable enables you to temporarily use a different directory as your home directory.
This can come in handy in complicated situations.
To set the user or current directory configuration files based on your command-line input, you can use the @option{--setdirconf} or @option{--setusrconf}, see @ref{Operating mode options}.
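For example, in Bash-like shells you can prefix a single command with a temporary value for @code{HOME} (the directory below is hypothetical); only that run will then look for its user-wide configuration files under @file{/tmp/fake-home/}:
@example
$ HOME=/tmp/fake-home astfits image.fits
@end example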
@node System wide, , Current directory and User wide, Configuration files
@subsection System wide
@cindex @file{prefix/etc/gnuastro/}
@cindex System wide configuration files
@cindex Configuration files, system wide
When Gnuastro is installed, the configuration files that are shipped with the distribution are copied into the (possibly system wide) @file{prefix/etc/gnuastro} directory.
For more details on @file{prefix}, see @ref{Installation directory} (by default it is: @file{/usr/local}).
This directory is the final place (with the lowest priority) that the programs in Gnuastro will check to retrieve parameter values.
If you remove an option and its value from the system wide configuration files, you either have to specify it in more immediate configuration files or set it each time in the command-line.
Recall that none of the programs in Gnuastro keep any internal default values and will abort if they do not find a value for the necessary parameters (except the number of threads and output file name).
So even if you do not expect to use an option frequently, it is safe to keep it available in this system-wide configuration file.
Note that in case you install Gnuastro from your distribution's repositories, @file{prefix} will either be set to @file{/} (the root directory) or @file{/usr}, so you can find the system wide configuration variables in @file{/etc/gnuastro/} or @file{/usr/etc/gnuastro/}.
The prefix of @file{/usr/local/} is conventionally used for programs you install from source by yourself as in @ref{Quick start}.
@node Getting help, Multi-threaded operations, Configuration files, Common program behavior
@section Getting help
@cindex Help
@cindex Book formats
@cindex Remembering options
@cindex Convenient book formats
Probably the first time you read this book, it is either in the PDF or HTML formats.
These two formats are very convenient for when you are only reading, not when you are actually working.
Later on, when you start to use the programs and you are deep in the middle of your work, some of the details will inevitably be forgotten.
Going to find the PDF file (printed or digital) or the HTML web page is a major distraction.
@cindex Online help
@cindex Command-line help
GNU software has a unique set of tools for aiding your memory on the command-line (where you are working), depending on how much of it you need to remember.
In the past, such command-line help was known as ``online'' help, because they were literally provided to you `on' the command `line'.
However, nowadays the word ``online'' refers to something on the internet, so that term will not be used.
With this type of help, you can resume your exciting research without taking your hands off the keyboard.
@cindex Installed help methods
Another major advantage of such command-line based help routines is that they are installed with the software on your computer, so they are always in sync with the executable you are running.
Three of them are actually part of the executable.
You do not have to worry about the version of the book or program.
If you rely on external help (a PDF in your personal print or digital archive or HTML from the official web page) you have to check to see if their versions fit with your installed program.
If you only need to remember the short or long names of the options, @option{--usage} is advised.
If it is what the options do, then @option{--help} is a great tool.
Man pages are also provided for those who are used to this older system of documentation.
This full book is also available to you on the command-line in Info format.
If none of these seems to resolve the problems, there is a mailing list which enables you to get in touch with experienced Gnuastro users.
In the subsections below each of these methods are reviewed.
@menu
* --usage:: View option names and value formats.
* --help:: List all options with description.
* Man pages:: Man pages generated from --help.
* Info:: View complete book in terminal.
* help-gnuastro mailing list:: Contacting experienced users.
@end menu
@node --usage, --help, Getting help, Getting help
@subsection @option{--usage}
@vindex --usage
@cindex Usage pattern
@cindex Mandatory arguments
@cindex Optional and mandatory tokens
If you give this option, the program will not run.
It will only print a very concise message showing the options and arguments.
Everything within square brackets (@option{[]}) is optional.
For example, the first and last two lines of Crop's @option{--usage} output are shown below:
@example
$ astcrop --usage
Usage: astcrop [-Do?IPqSVW] [-d INT] [-h INT] [-r INT] [-w INT]
[-x INT] [-y INT] [-c INT] [-p STR] [-N INT] [--deccol=INT]
....
[--setusrconf] [--usage] [--version] [--wcsmode]
[ASCIIcatalog] FITSimage(s).fits
@end example
There are no explanations on the options, just their short and long names shown separately.
After the program name, the short format of all the options that do not require a value (on/off options) is displayed.
Those that do require a value then follow in separate brackets, each displaying the format of the input they want, see @ref{Options}.
Since all options are optional, they are shown in square brackets, but arguments can also be optional.
In this example, a catalog name is optional and is only required in some modes.
This is a standard method of displaying optional arguments for all GNU software.
@node --help, Man pages, --usage, Getting help
@subsection @option{--help}
@vindex --help
If the command-line includes this option, the program will not be run.
It will print a complete list of all available options along with a short explanation.
The options are also grouped by their context.
Within each context, the options are sorted alphabetically.
Since the options are shown in detail afterwards, the first line of the @option{--help} output shows the arguments and if they are optional or not, similar to @ref{--usage}.
In the @option{--help} output of all programs in Gnuastro, the options for each program are classified based on context.
The first two contexts are always options to do with the input and output respectively.
For example, input image extensions or supplementary input files for the inputs.
The last class of options is also fixed in all of Gnuastro; it shows the operating mode options.
Most of these options are already explained in @ref{Operating mode options}.
@cindex Long outputs
@cindex Redirection of output
@cindex Command-line, long outputs
The help message will sometimes be longer than the vertical size of your terminal.
If you are using a graphical user interface terminal emulator, you can scroll the terminal with your mouse, but we promised no mouse distractions! So here are some suggestions:
@itemize
@item
@cindex Scroll command-line
@cindex Command-line scroll
@cindex @key{Shift + PageUP} and @key{Shift + PageDown}
@key{Shift + PageUP} to scroll up and @key{Shift + PageDown} to scroll down.
For most help output this should be enough.
The problem is that it is limited by the number of lines that your terminal keeps in memory and that you cannot scroll by lines, only by whole screens.
@item
@cindex Pipe
@cindex @command{less}
Pipe to @command{less}.
A pipe is a form of shell re-direction.
The @command{less} tool in Unix-like systems was made exactly for such outputs of any length.
You can pipe (@command{|}) the output of any program that is longer than the screen to it and then you can scroll through (up and down) with its many tools.
For example:
@example
$ astnoisechisel --help | less
@end example
@noindent
Once you have gone through the text, you can quit @command{less} by pressing the @key{q} key.
@item
@cindex Save output to file
@cindex Redirection of output
Redirect to a file.
This is a less convenient way, because you will then have to open the file in a text editor!
You can do this with the shell redirection tool (@command{>}):
@example
$ astnoisechisel --help > filename.txt
@end example
@end itemize
@cindex GNU Grep
@cindex Searching text
@cindex Command-line searching text
In case you have a special keyword you are looking for in the help, you do not have to go through the full list.
GNU Grep is made for this job.
For example, if you only want the list of options whose @option{--help} output contains the word ``axis'' in Crop, you can run the following command:
@example
$ astcrop --help | grep axis
@end example
@cindex @code{ARGP_HELP_FMT}
@cindex Argp argument parser
@cindex Customize @option{--help} output
@cindex @option{--help} output customization
If the output of this option does not fit nicely within the confines of your terminal, GNU does enable you to customize its output through the @code{ARGP_HELP_FMT} environment variable, in which you can set various parameters that specify the formatting of the help messages.
For example, if your terminal is wider than 70 columns (say 100) and you feel there is too much empty space between the long options and the short explanations, you can change these formats by giving values to this environment variable before running the program with the @option{--help} option.
You can define this environment variable in this manner:
@example
$ export ARGP_HELP_FMT=rmargin=100,opt-doc-col=20
@end example
@cindex @file{.bashrc}
This will affect all GNU programs using the GNU C library's @file{argp.h} facilities as long as the environment variable is defined.
You can see the full list of these formatting parameters in the ``Argp User Customization'' part of the GNU C library manual.
If you are more comfortable reading the @option{--help} outputs of all GNU software in your customized format, you can add your customization (similar to the line above, without the @command{$} sign) to your @file{~/.bashrc} file.
This is a standard option for all GNU software.
@node Man pages, Info, --help, Getting help
@subsection Man pages
@cindex Man pages
Man pages were the Unix method of providing command-line documentation for a program.
With GNU Info (see @ref{Info}), the use of this method of documentation is highly discouraged.
This is because Info provides a much easier environment to navigate and read.
However, some operating systems require a man page for packages that are installed and some people are still used to this method of command-line help.
So the programs in Gnuastro also have Man pages which are automatically generated from the outputs of @option{--version} and @option{--help} using the GNU help2man program.
So if you run
@example
$ man programname
@end example
@noindent
you will be provided with a man page listing the options in the
standard manner.
@node Info, help-gnuastro mailing list, Man pages, Getting help
@subsection Info
@cindex GNU Info
@cindex Command-line, viewing full book
Info is the standard documentation format for all GNU software.
It is a very useful command-line document viewing format, fully equipped with links between the various pages and menus and search capabilities.
As explained before, the best thing about it is that it is available for you the moment you need to refresh your memory on any command-line tool in the middle of your work without having to take your hands off the keyboard.
This complete book is available in Info format and can be accessed from anywhere on the command-line.
To open the Info format of any installed programs or library on your system which has an Info format book, you can simply run the command below (change @command{executablename} to the executable name of the program or library):
@example
$ info executablename
@end example
@noindent
@cindex Learning GNU Info
@cindex GNU software documentation
In case you are not already familiar with it, run @command{$ info info}.
It does a fantastic job of explaining all its capabilities itself.
It is very short and you will become sufficiently fluent in about half an hour.
Since all GNU software documentation is also provided in Info, your whole GNU/Linux life will significantly improve.
@cindex GNU Emacs
@cindex GNU C library
Once you've become an efficient navigator in Info, you can go to any part of this book or any other GNU software or library manual, no matter how long it is, in a matter of seconds.
It also blends nicely with GNU Emacs (a text editor), so you can search manuals while you are writing your document or programs without taking your hands off the keyboard; this is most useful for libraries like the GNU C library.
To be able to access all the Info manuals installed in your GNU/Linux within Emacs, type @key{Ctrl-H + i}.
To see this whole book from the beginning in Info, you can run
@example
$ info gnuastro
@end example
@noindent
If you run Info with the particular program executable name, for
example @file{astcrop} or @file{astnoisechisel}:
@example
$ info astprogramname
@end example
@noindent
you will be taken to the section titled ``Invoking ProgramName'' which explains the inputs and outputs along with the command-line options for that program.
Finally, if you run Info with the official program name, for example, Crop or NoiseChisel:
@example
$ info ProgramName
@end example
@noindent
you will be taken to the top section which introduces the program.
Note that in all cases, Info is not case sensitive.
@node help-gnuastro mailing list, , Info, Getting help
@subsection help-gnuastro mailing list
@cindex help-gnuastro mailing list
@cindex Mailing list: help-gnuastro
Gnuastro maintains the help-gnuastro mailing list for users to ask any questions related to Gnuastro.
The experienced Gnuastro users and some of its developers are subscribed to this mailing list and your email will be sent to them immediately.
However, when contacting this mailing list, please bear in mind that its members are possibly very busy and might not be able to answer immediately.
@cindex Mailing list archives
@cindex @code{help-gnuastro@@gnu.org}
To ask a question from this mailing list, send a mail to @code{help-gnuastro@@gnu.org}.
Anyone can view the mailing list archives at @url{http://lists.gnu.org/archive/html/help-gnuastro/}.
It is best that before sending a mail, you search the archives to see if anyone has asked a question similar to yours.
If you want to make a suggestion or report a bug, please do not send a mail to this mailing list.
We have other mailing lists and tools for those purposes, see @ref{Report a bug} or @ref{Suggest new feature}.
@node Multi-threaded operations, Numeric data types, Getting help, Common program behavior
@section Multi-threaded operations
@pindex nproc
@cindex pthread
@cindex CPU threads
@cindex GNU Coreutils
@cindex Using CPU threads
@cindex CPU, using all threads
@cindex Multi-threaded programs
@cindex Using multiple CPU cores
@cindex Simultaneous multithreading
Some of the programs benefit significantly when you use all the threads your computer's CPU has to offer to your operating system.
The number of threads available can be larger than the number of physical (hardware) cores in the CPU (also known as Simultaneous multithreading).
For example, in Intel's CPUs (those that implement its Hyper-threading technology) the number of threads is usually double the number of physical cores in your CPU.
On a GNU/Linux system, the number of available threads can be found with the @command{nproc} command (part of GNU Coreutils).
@vindex --numthreads
@cindex Number of threads available
@cindex Available number of threads
@cindex Internally stored option value
Gnuastro's programs can find the number of threads available to your system internally at run-time (when you execute the program).
However, if a value is given to the @option{--numthreads} option, the given number will be used, see @ref{Operating mode options} and @ref{Configuration files} for ways to use this option.
Thus @option{--numthreads} is the only common option in Gnuastro's programs with a value that does not have to be specified anywhere on the command-line or in the configuration files.
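For example, to limit a single run of NoiseChisel to 4 threads (without changing any configuration file), you can run:
@example
$ astnoisechisel image.fits --numthreads=4
@end example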
@menu
* A note on threads:: Caution and suggestion on using threads.
* How to run simultaneous operations:: How to run things simultaneously.
@end menu
@node A note on threads, How to run simultaneous operations, Multi-threaded operations, Multi-threaded operations
@subsection A note on threads
@cindex Using multiple threads
@cindex Best use of CPU threads
@cindex Efficient use of CPU threads
Spinning off threads is not necessarily the most efficient way to run an application.
Creating a new thread is not a cheap operation for the operating system.
It is most useful when the input data are fixed and you want the same operation to be done on parts of it.
For example, one input image to Crop and multiple crops from various parts of it.
In this fashion, the image is loaded into memory once, all the crops are divided between the number of threads internally and each thread cuts out those parts which are assigned to it from the same image.
On the other hand, if you have multiple images and you want to crop the same region(s) out of all of them, it is much more efficient to set @option{--numthreads=1} (so no threads spin off) and run Crop multiple times simultaneously, see @ref{How to run simultaneous operations}.
@cindex Wall-clock time
You can check the boost in speed by running a program on one of the data sets with the maximum number of threads, and a second time (with everything else the same) using only one thread.
You will notice that the wall-clock time (reported by most programs at their end) of the former is longer than the latter divided by the number of physical CPU cores (not threads) available to your operating system.
Asymptotically these two times can become equal (although most of the time they are not).
So limiting each program to a single thread and running independent instances on the number of available threads will be more efficient.
@cindex System Cache
@cindex Cache, system
Note that the operating system keeps a cache of recently processed data, so usually, the second time you process an identical data set (independent of the number of threads used), you will get faster results.
In order to make an unbiased comparison, you have to first clean the system's cache with the following command between the two runs.
@example
$ sync; echo 3 | sudo tee /proc/sys/vm/drop_caches
@end example
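@noindent
For example, an unbiased comparison might look like the sketch below (using the shell's @command{time} keyword; the program and input name are arbitrary):
@example
$ time astnoisechisel image.fits
$ sync; echo 3 | sudo tee /proc/sys/vm/drop_caches
$ time astnoisechisel image.fits --numthreads=1
@end example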
@cartouche
@noindent
@strong{SUMMARY: Should I use multiple threads?} Depends:
@itemize
@item
If you only have @strong{one} data set (image in most cases!), then yes, the more threads you use (with a maximum of the number of threads available to your OS) the faster you will get your results.
@item
If you want to run the same operation on @strong{multiple} data sets, it is best to set the number of threads to 1 and use Make, or GNU Parallel, as explained in @ref{How to run simultaneous operations}.
@end itemize
@end cartouche
@node How to run simultaneous operations, , A note on threads, Multi-threaded operations
@subsection How to run simultaneous operations
There are two@footnote{A third way would be to open multiple terminal emulator windows in your GUI, type the commands separately on each and press @key{Enter} once on each terminal, but this is far too frustrating, tedious and prone to errors.
It's therefore not a realistic solution when tens, hundreds or thousands of operations (your research targets, multiplied by the operations you do on each) are to be done.} approaches to simultaneously execute a program: using GNU Parallel or Make (GNU Make is the most common implementation).
The first is very useful when you only want to do one job multiple times and want to get back to your work without actually keeping the command you ran.
The second is usually for more important operations, with lots of dependencies between the different products (for example, a full scientific research).
@table @asis
@item GNU Parallel
@cindex GNU Parallel
When you only want to run multiple instances of a command on different threads and get on with the rest of your work, the best method is to use GNU Parallel.
Surprisingly, GNU Parallel is one of the few GNU packages that has no Info documentation, only a man page, see @ref{Info}.
So, to see the documentation after installing it, please run
@example
$ man parallel
@end example
@noindent
As an example, let's assume we want to crop a region fixed on the pixels (500, 600) with the default width from all the FITS images in the @file{./data} directory ending with @file{sci.fits} to the current directory.
To do this, you can run:
@example
$ parallel astcrop --numthreads=1 --xc=500 --yc=600 ::: \
./data/*sci.fits
@end example
@noindent
GNU Parallel can help in many more situations; this is one of the simplest. See the man page for lots of other examples.
For absolute beginners: the backslash (@command{\}) is only a line breaker to fit nicely in the page.
If you type the whole command in one line, you should remove it.
@item Make
@cindex Make
Make is a program for building ``targets'' (e.g., files) using ``recipes'' (a set of operations) when their known ``prerequisites'' (other files) have been updated.
It elegantly allows you to define dependency structures for building your final output and updating it efficiently when the inputs change.
It is the most common infrastructure for building software today.
Scientific research methodology is very similar to software development: you start by testing a hypothesis on a small sample of objects/targets with a simple set of steps.
As you are able to get promising results, you improve the method and use it on a larger, more general, sample.
In the process, you will confront many issues that have to be corrected (bugs in software development jargon).
Make is a wonderful tool to manage this style of development.
Besides the raw data analysis pipeline, Make has been used for producing reproducible papers; for example, see @url{https://gitlab.com/makhlaghi/NoiseChisel-paper, the reproduction pipeline} of the paper introducing @ref{NoiseChisel} (one of Gnuastro's programs).
In fact, the NoiseChisel paper's Make-based workflow was the foundation of a parallel project called @url{http://maneage.org,Maneage} (@emph{Man}aging data lin@emph{eage}), which is described more fully in Akhlaghi et al. (@url{https://arxiv.org/abs/2006.03018, 2021}).
Therefore, it is a very useful tool for complex scientific workflows.
@cindex GNU Make
GNU Make@footnote{@url{https://www.gnu.org/software/make/}} is the most common implementation, which (similar to nearly all GNU programs) comes with a wonderful manual@footnote{@url{https://www.gnu.org/software/make/manual/}}.
Make is very basic and simple, and thus the manual is short (the most important parts are in the first roughly 100 pages) and easy to read/understand.
Make comes with a @option{--jobs} (@option{-j}) option which allows you to specify the maximum number of jobs that can be done simultaneously.
For example, if you have 8 threads available to your operating system, you can run:
@example
$ make -j8
@end example
With this command, Make will process your @file{Makefile} and create all the targets (can be thousands of FITS images for example) simultaneously on 8 threads, while fully respecting their dependencies (only building a file/target when its prerequisites are successfully built).
Make is thus strongly recommended for managing scientific research where robustness, archiving, reproducibility and speed@footnote{Besides its multi-threaded capabilities, Make will only rebuild those targets that depend on a change you have made, not the whole work.
For example, if you have set the prerequisites properly, you can easily test the changing of a parameter on your paper's results without having to re-do everything (which is much faster).
This allows you to be much more productive in easily checking various ideas/assumptions of the different stages of your research and thus produce a more robust result for your exciting science.} are important.
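As a minimal sketch of this logic (all file and directory names below are hypothetical), the @file{Makefile} below creates one crop for every @file{sci.fits} image in @file{data/}; running @command{make -j8} will then build up to 8 crops simultaneously:
@example
# Inputs and the crops built from them (names illustrative).
inputs := $(wildcard data/*sci.fits)
crops  := $(notdir $(inputs:sci.fits=crop.fits))

all: $(crops)

# Recipe lines must start with a TAB character.
%crop.fits: data/%sci.fits
        astcrop --numthreads=1 --xc=500 --yc=600 \
                --output=$@@ $<
@end example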
@end table
@node Numeric data types, Memory management, Multi-threaded operations, Common program behavior
@section Numeric data types
@cindex Bit
@cindex Type
At the lowest level, the computer stores everything in terms of @code{1} or @code{0}.
For example, each program in Gnuastro, or each astronomical image you take with the telescope, is actually a string of millions of these zeros and ones.
The space required to keep a zero or one is the smallest unit of storage, and is known as a @emph{bit}.
However, understanding and manipulating this string of bits is extremely hard for most people.
Therefore, different standards are defined to package the bits into separate @emph{type}s with a fixed interpretation of the bits in each package.
@cindex Byte
@cindex Signed integer
@cindex Unsigned integer
@cindex Integer, Signed
To store numbers, the most basic standard/type is for integers (@mymath{..., -2, -1, 0, 1, 2, ...}).
The common integer types are 8, 16, 32, and 64 bits wide (more bits will give larger limits).
Each bit corresponds to a power of 2 and they are summed to create the final number.
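For example, in an 8-bit unsigned integer, the bit pattern @code{00101101} is read as @mymath{2^5+2^3+2^2+2^0=32+8+4+1=45}.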
In the integer types, for each width there are two standards for reading the bits: signed and unsigned.
In the `signed' convention, one bit is reserved for the sign (stating that the integer is positive or negative).
The `unsigned' integers use that bit in the actual number and thus contain only positive numbers (starting from zero).
Therefore, at the same number of bits, signed and unsigned integers can represent the same number of distinct values, but the positive limit of the @code{unsigned} types is double that of their @code{signed} counterparts with the same width (at the expense of not having negative numbers).
When the context of your work does not involve negative numbers (for example, counting, where negative is not defined), it is best to use the @code{unsigned} types.
For the full numerical range of all integer types, see below.
Another standard for converting a given number of bits to numbers is the floating point standard; it can @emph{approximately} store any real number with a given precision.
There are two common floating point types: 32-bit and 64-bit, for single and double precision floating point numbers respectively.
The former is sufficient for data with less than 8 significant decimal digits (most astronomical data), while the latter is good for less than 16 significant decimal digits.
The representation of real numbers as bits is much more complex than integers.
If you are interested to learn more about it, you can start with the @url{https://en.wikipedia.org/wiki/Floating_point, Wikipedia article}.
Practically, you can use Gnuastro's Arithmetic program to convert/change the type of an image/data-cube (see @ref{Arithmetic}), or Gnuastro Table program to convert a table column's data type (see @ref{Column arithmetic}).
Conversion of a dataset's type is necessary in some contexts.
For example, the program/library that you intend to feed the data into may only accept floating point values, while you have an integer image/column.
Another situation where conversion can be helpful is when you know that your data only has values that fit within @code{int8} or @code{uint16}, but it is currently stored in the @code{float64} type.
The important thing to consider is that operations involving wider, floating point, or signed types can be significantly slower than smaller-width, integer, or unsigned types respectively.
Note that besides speed, a wider type also requires much more storage space (by 4 or 8 times).
Therefore, when you confront such situations that can be optimized and want to store/archive/transfer the data, it is best to use the most efficient type.
For example, if your dataset (image or table column) only has positive integers less than 65535, store it as an unsigned 16-bit integer for faster processing, faster transfer, and less storage space.
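For example, a conversion like the one above may be done with Arithmetic's type conversion operators, as in this sketch (the file names are arbitrary):
@example
$ astarithmetic large-float64.fits uint16 --output=small.fits
@end example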
The short and long names for the recognized numeric data types in Gnuastro are listed below.
Both short and long names can be used when you want to specify a type.
For example, as a value to the common option @option{--type} (see @ref{Input output options}), or in the information comment lines of @ref{Gnuastro text table format}.
The ranges listed below are inclusive.
@table @code
@item u8
@itemx uint8
8-bit unsigned integers, range:@*
@mymath{[0\rm{\ to\ }2^8-1]} or @mymath{[0\rm{\ to\ }255]}.
@item i8
@itemx int8
8-bit signed integers, range:@*
@mymath{[-2^7\rm{\ to\ }2^7-1]} or @mymath{[-128\rm{\ to\ }127]}.
@item u16
@itemx uint16
16-bit unsigned integers, range:@*
@mymath{[0\rm{\ to\ }2^{16}-1]} or @mymath{[0\rm{\ to\ }65535]}.
@item i16
@itemx int16
16-bit signed integers, range:@* @mymath{[-2^{15}\rm{\ to\ }2^{15}-1]} or
@mymath{[-32768\rm{\ to\ }32767]}.
@item u32
@itemx uint32
32-bit unsigned integers, range:@* @mymath{[0\rm{\ to\ }2^{32}-1]} or
@mymath{[0\rm{\ to\ }4294967295]}.
@item i32
@itemx int32
32-bit signed integers, range:@* @mymath{[-2^{31}\rm{\ to\ }2^{31}-1]} or
@mymath{[-2147483648\rm{\ to\ }2147483647]}.
@item u64
@itemx uint64
64-bit unsigned integers, range:@* @mymath{[0\rm{\ to\ }2^{64}-1]} or
@mymath{[0\rm{\ to\ }18446744073709551615]}.
@item i64
@itemx int64
64-bit signed integers, range:@* @mymath{[-2^{63}\rm{\ to\ }2^{63}-1]} or
@mymath{[-9223372036854775808\rm{\ to\ }9223372036854775807]}.
@item f32
@itemx float32
32-bit (single-precision) floating point types.
The maximum (minimum is its negative) possible value is @mymath{3.402823\times10^{38}}.
Single-precision floating points can accurately represent a floating point number up to @mymath{\sim7.2} significant decimals.
Given the heavy noise in astronomical data, this is usually more than sufficient for storing results.
For more, see @ref{Printing floating point numbers}.
@item f64
@itemx float64
64-bit (double-precision) floating point types.
The maximum (minimum is its negative) possible value is @mymath{\sim10^{308}}.
Double-precision floating points can accurately represent a floating point number up to @mymath{\sim15.9} significant decimals.
This is usually good for processing (mixing) the data internally, for example, a sum of single precision data (and later storing the result as @code{float32}).
For more, see @ref{Printing floating point numbers}.
@end table
@cartouche
@noindent
@strong{Some file formats do not recognize all types.} For example, the FITS standard (see @ref{Fits}) does not define @code{uint64} in binary tables or images.
When a type is not acceptable for output into a given file format, the respective Gnuastro program or library will let you know and abort.
On the command-line, you can convert the numerical type of an image, or table column into another type with @ref{Arithmetic} or @ref{Table} respectively.
If you are writing your own program, you can use the @code{gal_data_copy_to_new_type()} function in Gnuastro's library, see @ref{Copying datasets}.
@end cartouche
@node Memory management, Tables, Numeric data types, Common program behavior
@section Memory management
@cindex Memory management
@cindex Non-volatile memory
@cindex Memory, non-volatile
In this section we will review how Gnuastro manages your input data in your system's memory.
Knowing this can help you optimize your usage (in speed and memory consumption) when the data volume is large and approaches, or exceeds, your available RAM (usually in various calls to multiple programs simultaneously).
But before diving into the details, let's have a short basic introduction to memory in general and in particular the types of memory most relevant to this discussion.
Input datasets (that are later fed into programs for analysis) are commonly first stored in @emph{non-volatile memory}.
This is a type of memory that does not need a constant power supply to keep the data and is therefore primarily aimed for long-term storage, like HDDs or SSDs.
So data in this type of storage is preserved when you turn off your computer.
But by its nature, non-volatile memory is much slower, in reading or writing, than the speed at which CPUs can process data.
Thus relying on this type of memory alone would create a bad bottleneck in the input/output (I/O) phase of any processing.
@cindex RAM
@cindex Volatile memory
@cindex Memory, volatile
The first step to decrease this bottleneck is to have a faster storage space, albeit with a much more limited storage volume.
For this type of storage, computers have a Random Access Memory (or RAM).
RAM is classified as a @emph{volatile memory} because it needs a constant flow of electricity to keep the information.
In other words, the moment power is cut-off, all the stored information in your RAM is gone (hence the ``volatile'' name).
But thanks to that constant supply of power, it can access any random address with equal (and very high!) speed.
Hence, the general/simplistic way that programs deal with memory is the following (this is general to almost all programs, not just Gnuastro's):
@enumerate
@item
Load/copy the input data from the non-volatile memory into RAM.
@item
Use the copy of the data in RAM as input for all the internal processing, as well as for the intermediate data that are necessary during the processing.
@item
Finally, when the analysis is complete, write the final output data back into non-volatile memory, and free/delete all the used space in the RAM (the initial copy and all the intermediate data).
@end enumerate
Usually the RAM is most important for the data of the intermediate steps (that you never see as a user of a program!).
When the input dataset(s) to a program are small (compared to the available space in your system's RAM at the moment it is run) Gnuastro's programs and libraries follow the standard series of steps above.
The only exception is that deleting the intermediate data is not only done at the end of the program.
As soon as an intermediate dataset is no longer necessary for the next internal steps, the space it occupied is deleted/freed.
This allows Gnuastro programs to minimize their usage of your system's RAM over the full running time.
The situation gets complicated when the datasets are large (compared to your available RAM when the program is run).
For example, assume a dataset is half the size of your system's available RAM, and the program's internal analysis needs three or more intermediately processed copies of it at one moment of its analysis: there will not be enough RAM to keep those higher-level intermediate data.
In such cases, programs that do not do any memory management will crash.
But fortunately, Gnuastro's programs do have a memory management plan for such situations.
@cindex Memory-mapped file
When the necessary amount of space for an intermediate dataset cannot be allocated in the RAM, Gnuastro's programs will not use the RAM at all.
They will use the ``memory-mapped file'' concept in modern operating systems to create a randomly-named file in your non-volatile memory and use that instead of the RAM.
That file will have the exact size (in bytes) of that intermediate dataset.
Any time the program needs that intermediate dataset, the operating system will directly go to that file, and bypass your RAM.
As soon as that file is no longer necessary for the analysis, it will be deleted.
But as mentioned above, non-volatile memory has much slower I/O speed than the RAM.
Hence in such situations, the programs will become noticeably slower (sometimes by factors of 10 times slower, depending on your non-volatile memory speed).
Because of the drop in I/O speed (and thus the speed of your running program), the moment that any to-be-allocated dataset is memory-mapped, Gnuastro's programs and libraries will notify you with a descriptive statement like below (can happen in any phase of their analysis).
It shows the location of the memory-mapped file, and its size, complemented with a small description of the cause, a pointer to this section of the book for more information on how to deal with it (if necessary), and how to suppress it.
@example
astarithmetic: ./gnuastro_mmap/Fu7Dhs: temporary memory-mapped file
(XXXXXXXXXXX bytes) created for intermediate data that is not stored
in RAM (see the "Memory management" section of Gnuastro's manual for
optimizing your project's memory management, and thus speed). To
disable this warning, please use the option '--quietmmap'
@end example
@noindent
Finally, when the intermediate dataset is no longer necessary, the program will automatically delete it and notify you with a statement like this:
@example
astarithmetic: ./gnuastro_mmap/Fu7Dhs: deleted
@end example
@noindent
To disable these messages, you can run the program with @code{--quietmmap}, or set the @code{quietmmap} variable in the allocating library function to be non-zero.
An important component of these messages is the name of the memory-mapped file.
Knowing the location of the file is important if the program crashes for any reason: internally (for example, a parameter is given wrongly) or externally (for example, you mistakenly kill the running job).
In the event of a crash, the memory-mapped files will not be deleted and you have to delete them manually: they are usually large and can quickly fill up your storage if they accumulate over successive crashes.
This brings us to managing the memory-mapped files in your non-volatile memory.
In other words: knowing where they are saved, or intentionally placing them in different places of your file system, or deleting them when necessary.
As the examples above show, memory-mapped files are stored in a sub-directory of the running directory called @file{gnuastro_mmap}.
If this directory does not exist, Gnuastro will automatically create it when memory mapping becomes necessary.
Alternatively, it may happen that the @file{gnuastro_mmap} sub-directory exists and is not writable, or it cannot be created.
In such cases, the memory-mapped file for each dataset will be created in the running directory with a @file{gnuastro_mmap_} prefix.
Therefore, one easy way to delete all memory-mapped files in case of a crash is to delete everything within the sub-directory (first command below), or all files starting with this prefix (second command):
@example
rm -f gnuastro_mmap/*
rm -f gnuastro_mmap_*
@end example
A much more common issue when dealing with memory-mapped files is their location.
For example, you may be running a program in a partition that is hosted by an HDD.
But you also have another partition on an SSD (which has much faster I/O).
So you want your memory-mapped files to be created in the SSD to speed up your processing.
In this scenario, you want your project source directory to only contain your plain-text scripts, and your project's built products (even the temporary memory-mapped files) to be created in a different location, because they are large and their I/O speed becomes important.
To host the memory-mapped files in another location (with fast I/O), you can make @file{gnuastro_mmap} a symbolic link to it.
For example, let's assume you want your memory-mapped files to be stored in @file{/path/to/dir/for/mmap}.
All you have to do is to run the following command before your Gnuastro analysis command(s).
@example
ln -s /path/to/dir/for/mmap gnuastro_mmap
@end example
The programs will delete a memory-mapped file when it is no longer needed, but they will not delete the @file{gnuastro_mmap} directory that hosts them.
So if your project involves many Gnuastro programs (possibly called in parallel) and you want your memory-mapped files to be in a different location, you just have to make the symbolic link above once at the start, and all the programs will use it if necessary.
Another memory-management scenario that may happen is this: you do not want a Gnuastro program to allocate internal datasets in the RAM at all.
For example, the speed of your Gnuastro-related project does not matter at that moment, and you have higher-priority jobs that are being run at the same time which need to have RAM available.
In such cases, you can use the @option{--minmapsize} option that is available in all Gnuastro programs (see @ref{Processing options}).
Any intermediate dataset that has a size larger than the value of this option will be memory-mapped, even if there is space available in your RAM.
For example, if you want any dataset larger than 100 megabytes to be memory-mapped, use @option{--minmapsize=100000000} (8 zeros!).
@cindex Linux kernel
@cindex Kernel, Linux
You should not set the value of @option{--minmapsize} to be too small, otherwise even small intermediate values (that are usually very numerous) in the program will be memory-mapped.
However the kernel can only host a limited number of memory-mapped files at every moment (by all running programs combined).
For example, in the default@footnote{If you need to host more memory-mapped files at one moment, you need to build your own customized Linux kernel.} Linux kernel on GNU/Linux operating systems this limit is roughly 64000.
If the total number of memory-mapped files exceeds this number, all the programs using them will crash.
Gnuastro's programs will warn you if your given value is too small and may cause a problem later.
Actually, the default behavior of Gnuastro's programs (to only use memory-mapped files when there is not enough RAM) is a side-effect of @option{--minmapsize}.
The pre-defined value of this option, in the lowest-level Gnuastro configuration file (the installed @file{gnuastro.conf} described in @ref{Configuration file precedence}), is an extremely large value: larger than the largest possible available RAM.
You can check it by running any Gnuastro program with the @option{-P} option.
Because no dataset will be larger than this, by default the programs will first attempt to use the RAM for temporary storage.
But if writing in the RAM fails (for any reason, mainly due to lack of available space), then a memory-mapped file will be created.
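For instance, the sketch below (using Statistics, but any Gnuastro program should behave similarly) prints the installed default value, so you can confirm it is indeed extremely large:
@example
$ aststatistics -P | grep minmapsize
@end example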
@node Tables, Tessellation, Memory management, Common program behavior
@section Tables
``A table is a collection of related data held in a structured format within a database.
It consists of columns, and rows.'' (from Wikipedia).
Each column in the table contains the values of one property and each row is a collection of properties (columns) for one target object.
For example, let's assume you have just run MakeCatalog (see @ref{MakeCatalog}) on an image to measure some properties for the labeled regions (which might be detected galaxies for example) in the image.
For each labeled region (detected galaxy), there will be a @emph{row} which groups its measured properties as @emph{columns}, one column for each property.
One such property is the object's magnitude (derived from the sum of the pixel values with that label); another is its center, which can be defined as the light-weighted average position of those pixels.
Many such properties can be derived from the raw pixel values and their position, see @ref{Invoking astmkcatalog} for a long list.
As a summary, for each labeled region (or, galaxy) we have one @emph{row} and for each measured property we have one @emph{column}.
This high-level structure is usually the first step for higher-level analysis, for example, finding the stellar mass or photometric redshift from magnitudes in multiple colors.
Thus, tables are not just the outputs of programs; in fact, it is much more common for tables to be the inputs of programs.
For example, to make a mock galaxy image, you need to feed the properties of each galaxy into @ref{MakeProfiles} for it to do the inverse of the process above and make a simulated image from a catalog, see @ref{Sufi simulates a detection}.
In other cases, you can feed a table into @ref{Crop} and it will crop out regions centered on the positions within the table, see @ref{Reddest clumps cutouts and parallelization}.
So, to end this relatively long introduction: tables play a very important role in astronomy and, generally, in all branches of data analysis.
In @ref{Recognized table formats} the currently recognized table formats in Gnuastro are discussed.
You can use any of these tables as input or ask for them to be built as output.
The most common type of table format is a simple plain-text file, with each row on one line and columns separated by white-space characters; this format is easy to read/write by eye/hand.
To give it the full functionality of more specific table types like the FITS tables, Gnuastro has a special convention which you can use to give each column a name, type, unit, and comments, while still being readable by other plain text table readers.
This convention is described in @ref{Gnuastro text table format}.
When tables are input to a program, the program reading it needs to know which column(s) it should use for its desired purposes.
Gnuastro's programs all follow a similar convention for how you can select columns in a table.
It is thoroughly discussed in @ref{Selecting table columns}.
@menu
* Recognized table formats:: Table formats that are recognized in Gnuastro.
* Gnuastro text table format:: Gnuastro's convention for plain text tables.
* Selecting table columns:: Identify/select certain columns from a table.
@end menu
@node Recognized table formats, Gnuastro text table format, Tables, Tables
@subsection Recognized table formats
The list of table formats that Gnuastro can currently read from and write to are described below.
Each has its own advantages and disadvantages, so a short review of each format is also provided to help you make the best choice based on how you want to define your input tables or later use your output tables.
@table @asis
@item Plain text table
This is the most basic and simplest way to create, view, or edit the table by hand on a text editor.
The other formats described below are less eye-friendly and have a more formal structure (for easier computer readability).
It is fully described in @ref{Gnuastro text table format}.
@cindex FITS Tables
@cindex Tables FITS
@cindex ASCII table, FITS
@item FITS ASCII tables
The FITS ASCII table extension is fully in ASCII encoding and thus easily readable on any text editor (assuming it is the only extension in the FITS file).
If the FITS file also contains binary extensions (for example, an image or binary table extension), then there will be many hard-to-print characters.
The FITS ASCII format does not have new line characters to separate rows.
In the FITS ASCII table standard, each row is defined as a fixed number of characters (the value of the @code{NAXIS1} keyword), so to visually inspect it properly, you would have to adjust your text editor's width to this value.
All columns start at given character positions and have a fixed width (number of characters).
Numbers in a FITS ASCII table are stored in ASCII format; they are not in the binary format that the CPU uses.
Hence, they can take more space in memory, lose their precision, and take longer to read into memory.
If you are dealing with integer type columns (see @ref{Numeric data types}), another issue with FITS ASCII tables is that the type information for the column will be lost (there is only one integer type in FITS ASCII tables).
One problem with the binary format, on the other hand, is that it is not portable: different CPUs/compilers have different standards for translating the zeros and ones.
But since ASCII characters are defined on a byte and are well recognized, they are better for portability across those various systems.
Gnuastro's plain text table format described below is much more portable and easier to read/write/interpret by humans manually.
Generally, as the name implies, this format is useful for when your table mainly contains ASCII columns (for example, file names, or descriptions).
They can be useful when you need to include columns with structured ASCII information along with other extensions in one FITS file.
In such cases, you can also consider header keywords (see @ref{Fits}).
@cindex Binary table, FITS
@item FITS binary tables
The FITS binary table is the FITS standard's solution to the issues discussed with keeping numbers in ASCII format as described under the FITS ASCII table title above.
Only columns defined as a string type (a string of ASCII characters) are readable in a text editor.
The portability problem with binary formats discussed above is mostly solved thanks to the portability of CFITSIO (see @ref{CFITSIO}) and the very long history of the FITS format which has been widely used since the 1970s.
In the case of most numbers, storing them in binary format is more memory efficient than ASCII format.
For example, to store @code{-25.72034} in ASCII format, you need 9 bytes/characters.
But if you keep this same number (to the approximate precision possible) as a 4-byte (32-bit) floating point number, you can keep/transmit it with less than half the amount of memory.
When catalogs contain thousands/millions of rows in tens/hundreds of columns, this can lead to significant improvements in memory/band-width usage.
Moreover, since the CPU does its operations in the binary formats, reading the table in and writing it out is also much faster than an ASCII table.
When you are dealing with integer numbers, the saving can be even better: for example, if you know all of the values in a column are positive and less than @code{255}, you can use the @code{unsigned char} type, which only takes one byte! If they are between @code{-128} and @code{127}, then you can use the (signed) @code{char} type.
So if you are thoughtful about the limits of your integer columns, you can greatly reduce the size of your file and also the speed at which it is read/written.
This can be very useful when sharing your results with collaborators or publishing them.
To decrease the file size even more, you can give your output a name ending in @file{.fits.gz}, so it is also compressed after creation.
Just note that compression/decompressing is CPU intensive and can slow down the writing/reading of the file.
Fortunately the FITS Binary table format also accepts ASCII strings as column types (along with the various numerical types).
So your dataset can also contain non-numerical columns.
@end table
@node Gnuastro text table format, Selecting table columns, Recognized table formats, Tables
@subsection Gnuastro text table format
Plain text files are the most generic, portable, and easiest way to (manually) create, (visually) inspect, or (manually) edit a table.
In this format, the ending of a row is defined by the new-line character (a line on a text editor).
So when you view it on a text editor, every row will occupy one line.
The delimiters (or characters separating the columns) are white space characters (space, horizontal tab, vertical tab) and a comma (@key{,}).
The only further requirement is that all rows/lines must have the same number of columns.
The columns do not have to be exactly under each other and the rows can be arbitrarily long with different lengths.
For example, the following contents in a file would be interpreted as a table with 4 columns and 2 rows, with each element interpreted as a 64-bit floating point type (see @ref{Numeric data types}).
@example
1 2.234948 128 39.8923e8
2 , 4.454 792 72.98348e7
@end example
However, the example above has no other information about the columns (it is just raw data, with no meta-data).
To use this table, you have to remember what the numbers in each column represent.
Also, when you want to select columns, you have to count their position within the table.
This can become frustrating and prone to serious errors (getting the columns wrong in your scientific project!), especially as the number of columns increases.
It is also bad for sending to a colleague, because they will find it hard to remember/use the columns properly.
To solve these problems in Gnuastro's programs/libraries you are not limited to using the column's number, see @ref{Selecting table columns}.
If the columns have names, units, or comments you can also select your columns based on searches/matches in these fields, for example, see @ref{Table}.
Also, without meta-data, you cannot guide the program reading the table on how to interpret the numbers.
As an example, the first and third columns above could be read as integer types: the first column might be an ID and the third might be the number of pixels an object occupies in an image.
So there is no need to read these two columns as a 64-bit floating point type (which takes more memory, and is slower).
In the bare-minimum example above, you also cannot use strings of characters, for example, the names of filters, or some other identifier that includes non-numerical characters.
In the absence of any information, only numbers can be read robustly.
Assuming we read columns with non-numerical characters as strings, there would still be the problem that a string might contain a space (or any other delimiter) character in some rows.
In that case, each `word' in the string would be interpreted as a separate column and the program would abort with an error that the rows do not have the same number of columns.
To correct for these limitations, Gnuastro defines the following convention for storing the table meta-data along with the raw data in one plain text file.
The format is primarily designed for ease of reading/writing by eye/fingers, but is also structured enough to be read by a program.
When the first non-white character in a line is @key{#}, or there are no non-white characters in it, then the line will not be considered as a row of data in the table (this is a pretty standard convention in many programs, and higher level languages).
In the first case (when the first character of the line is @key{#}), the line is interpreted as a @emph{comment}.
If the comment line starts with `@code{# Column N:}', then it is assumed to contain information about column @code{N} (a number, counting from 1).
Comment lines that do not start with this pattern are ignored and you can use them to include any further information you want to store with the table in the text file.
The most generic column information comment line has the following format:
@example
# Column N: NAME [UNIT, TYPE(NUM), BLANK] COMMENT
@end example
@cindex NaN
@noindent
Any sequence of characters between `@key{:}' and `@key{[}' will be interpreted as the column name (so it can contain anything except the `@key{[}' character).
Anything between the `@key{]}' and the end of the line is defined as a comment.
Within the brackets, anything before the first `@key{,}' is the units (physical units, for example, km/s, or erg/s), anything before the second `@key{,}' is the short type identifier (see below, and @ref{Numeric data types}).
If the type identifier is not recognized, the default 64-bit floating point type will be used.
The type identifier can optionally be followed by an integer within parenthesis.
If the parenthesis is present and the integer is larger than 1, the column is assumed to be a ``vector column'' (which can have multiple values, for more see @ref{Vector columns}).
Finally (still within the brackets), any non-white characters after the second `@key{,}' are interpreted as the blank value for that column (see @ref{Blank pixels}).
The blank value can either be in the same type as the column (for example, @code{-99} for a signed integer column), or any string (for example, @code{NaN} in that same column).
In both cases, the values will be stored in memory as Gnuastro's fixed blank values for each type.
For floating point types, Gnuastro's internal blank value is IEEE NaN (Not-a-Number).
For signed integers, it is the smallest possible value and for unsigned integers, it is the largest possible value.
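For example, the hypothetical comment line below (following the format above) defines column 4 as a three-element vector of 32-bit floating points, with @code{NaN} as its blank value (the column name and unit here are chosen only for illustration):
@example
# Column 4: APER_FLUX [uJy, f32(3), NaN] Flux in three apertures
@end example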
When a formatting problem occurs, when the column was already given meta-data in a previous comment, or when the column number is larger than the actual number of columns in the table (counted from the non-commented, non-empty lines), the comment information line will be ignored.
When a comment information line can be used, the leading and trailing white space characters will be stripped from all of the elements.
For example, in this line:
@example
# Column 5: column name [km/s, f32,-99] Redshift as speed
@end example
The @code{NAME} field will be `@code{column name}' and the @code{TYPE} field will be `@code{f32}'.
Note how the white space characters before and after the strings are stripped, but those in the middle remain.
Also, white space characters between the fields are not mandatory; hence, in the example above (where there is no space after the second comma), the @code{BLANK} field will still be given the value of `@code{-99}'.
Except for the column number (@code{N}), the rest of the fields are optional.
Also, the column information comments do not have to be in order.
In other words, the information for column @mymath{N+m} (@mymath{m>0}) can be given in a line before column @mymath{N}.
Furthermore, you do not have to specify information for all columns.
Those columns that do not have this information will be interpreted with the default settings (like the case above: values are double precision floating point, and the column has no name, unit, or comment).
So these lines are all acceptable for any table (the first one, with nothing but the column number is redundant):
@example
# Column 5:
# Column 1: ID [,i8] The Clump ID.
# Column 3: mag_f160w [AB mag, f32] Magnitude from the F160W filter
@end example
@noindent
The data type of the column should be specified with one of the following values:
@itemize
@item
For a numeric column, you can use any of the numeric types (and their
recognized identifiers) described in @ref{Numeric data types}.
@item
`@code{strN}': for strings.
The @code{N} value identifies the length of the string (how many characters it has).
The start of the string on each row is the first non-delimiter character of the column that has the string type.
The next @code{N} characters will be interpreted as a string and all leading and trailing white space will be removed.
If the next column's characters are closer than @code{N} characters to the start of the string column in that line/row, they will be considered part of the string column.
If there is a new-line character before the ending of the space given to the string column (in other words, the string column is the last column), then reading of the string will stop, even if the @code{N} characters are not complete yet.
See @file{tests/table/table.txt} for one example.
Therefore, the only time you have to pay attention to the positioning and spaces given to the string column is when it is not the last column in the table.
The only limitation in this format is that trailing and leading white space characters will be removed from the columns that are read.
In most cases, this is the desired behavior, but if trailing and leading white-spaces are critically important to your analysis, define your own starting and ending characters and remove them after the table has been read.
For example, in the sample table below, the two `@key{|}' characters (which are arbitrary) will remain in the value of the second column and you can remove them manually later.
If only one of the leading or trailing white spaces is important for your work, you can only use one of the `@key{|}'s.
@example
# Column 1: ID [label, u8]
# Column 2: Notes [no unit, str50]
1 leading and trailing white space is ignored here 2.3442e10
2 | but they will be preserved here | 8.2964e11
@end example
@end itemize
Note that the FITS binary table standard does not define the @code{unsigned int} and @code{unsigned long} types, so if you want to convert your tables to FITS binary tables, use other types.
Also, note that in the FITS ASCII table, there is only one integer type (@code{long}).
So if you convert a Gnuastro plain text table to a FITS ASCII table with the @ref{Table} program, the type information for integers will be lost.
Conversely, if integer types are important for you, you have to manually set them when reading a FITS ASCII table (for example, with the Table program when reading/converting into a file, or with the @file{gnuastro/table.h} library functions when reading into memory).
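Putting the pieces of this convention together, a small hypothetical table (all names, units, and values below are chosen only for illustration) could look like this:
@example
# Column 1: ID [,u8] Object identifier
# Column 2: NAME [no unit, str10] Target name
# Column 3: RA [deg, f64] Right ascension
# Column 4: MAG [AB mag, f32, -99] Measured magnitude
1 target-a    102.338441 22.831
2 target-b    102.351672 -99
@end example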
@node Selecting table columns, , Gnuastro text table format, Tables
@subsection Selecting table columns
At the lowest level, the only defining aspect of a column in a table is its number, or position.
But selecting columns purely by number is not very convenient and, especially when the tables are large, it can be very frustrating and prone to errors.
Hence, table file formats (for example, see @ref{Recognized table formats}) have ways to store additional information about the columns (meta-data).
Some of the most common pieces of information about each column are its @emph{name}, the @emph{units} of data in it, and a @emph{comment} for longer/informal description of the column's data.
To facilitate research with Gnuastro, you can select columns by matching, or searching in these three fields, besides the low-level column number.
To view the full list of information on the columns in the table, you can use the Table program (see @ref{Table}) with the command below (replace @file{table-file} with your table's file name; if it is a FITS file, you may also need to specify the HDU/extension which contains the table):
@example
$ asttable --information table-file
@end example
Gnuastro's programs need the columns for different purposes, for example, in Crop, you specify the columns containing the central coordinates of the crop centers with the @option{--coordcol} option (see @ref{Crop options}).
On the other hand, in MakeProfiles, to specify the column containing the profile position angles, you must use the @option{--pcol} option (see @ref{MakeProfiles catalog}).
Thus, there can be no unified common option name to select columns for all programs (different columns have different purposes).
However, when the program expects a column for a specific context, the option names end in the @option{col} suffix like the examples above.
These options accept values in integer (column number), or string (metadata match/search) format.
If the value can be parsed as a positive integer, it will be seen as the low-level column number.
Note that column counting starts from 1, so if you ask for column 0, the respective program will abort with an error.
When the value cannot be interpreted as an integer number, it will be seen as a string of characters which will be used to match/search in the table's meta-data.
The meta-data field which the value will be compared with can be selected through the @option{--searchin} option, see @ref{Input output options}.
@option{--searchin} can take three values: @code{name}, @code{unit}, @code{comment}.
The matching will be done following this convention:
@itemize
@item
If the value is enclosed in two slashes (for example, @command{-x/RA_/}, or @option{--coordcol=/RA_/}, see @ref{Crop options}), then it is assumed to be a regular expression with the same convention as GNU AWK.
GNU AWK has a very well written @url{https://www.gnu.org/software/gawk/manual/html_node/Regexp.html, chapter} describing regular expressions, so we will not continue discussing them here.
Regular expressions are a very powerful tool in matching text and useful in many contexts.
We thus strongly encourage reviewing this chapter for greatly improving the quality of your work in many cases, not just for searching column meta-data in Gnuastro.
@item
When the string is not enclosed between `@key{/}'s, any column that exactly matches the given value in the given field will be selected.
@end itemize
Note that in both cases, you can ignore the case of alphabetic characters with the @option{--ignorecase} option, see @ref{Input output options}.
Also, in both cases, multiple columns may be selected with one call to this function.
In this case, the order of the selected columns (with one call) will be the same order as they appear in the table.
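As a hypothetical demonstration (assuming a @file{cat.fits} table with columns named @code{RA}, @code{DEC}, @code{MAG_F125W} and @code{MAG_F160W}), the first command below selects two columns by exact name, while the second uses a regular expression to select both magnitude columns at once:
@example
$ asttable cat.fits -cRA,DEC
$ asttable cat.fits -c/^MAG_/
@end example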
@node Tessellation, Automatic output, Tables, Common program behavior
@section Tessellation
It is sometimes necessary to classify the elements in a dataset (for example, pixels in an image) into a grid of individual, non-overlapping tiles.
For example, when background sky gradients are present in an image, you can define a tile grid over the image.
When the tile sizes are set properly, the background's variation over each tile will be negligible, allowing you to measure (and subtract) it.
In other cases (for example, spatial domain convolution in Gnuastro, see @ref{Convolve}), it might simply be for speed of processing: each tile can be processed independently on a separate CPU thread.
In the arts and mathematics, this process is formally known as @url{https://en.wikipedia.org/wiki/Tessellation, tessellation}.
The size of the regular tiles (in units of data-elements, or pixels in an image) can be defined with the @option{--tilesize} option.
It takes multiple numbers (separated by a comma) which will be the length along the respective dimension (in FORTRAN/FITS dimension order).
Divisions are also acceptable, but must result in an integer.
For example, @option{--tilesize=30,40} can be used for an image (a 2D dataset).
The regular tile size along the first FITS axis (horizontal when viewed in SAO DS9) will be 30 pixels and along the second it will be 40 pixels.
Ideally, @option{--tilesize} should be selected such that all tiles in the image have exactly the same size.
In other words, that the dataset length in each dimension is divisible by the tile size in that dimension.
However, this is not always possible: the dataset can be any size and every pixel in it is valuable.
In such cases, Gnuastro will look at the significance of the remainder length.
If it is not significant (for example, one or two pixels), it will just increase the size of the first tile in the respective dimension and allow the rest of the tiles to have the required size.
When the remainder is significant (for example, only one pixel less than the tile size along that dimension), the remainder will be added to one regular tile's size and the large tile will be cut in half and put at the two ends of the grid/tessellation.
In this way, all the tiles in the central regions of the dataset will have the regular tile sizes and the tiles on the edge will be slightly larger/smaller depending on the remainder significance.
The fraction which defines the remainder significance along all dimensions can be set through @option{--remainderfrac}.
The best tile size is directly related to the spatial scale of the quantity you want to study (for example, the gradient over the image).
In practice we assume that the gradient is not present over each tile.
So if there is a strong gradient (for example, in long wavelength ground based images) or the image is of a crowded area where there is not too much blank area, you have to choose a smaller tile size.
A larger mesh will give more pixels and so the scatter in the results will be less (better statistics).
@cindex CCD
@cindex Amplifier
@cindex Bias current
@cindex Subaru Telescope
@cindex Hyper Suprime-Cam
@cindex Hubble Space Telescope (HST)
For raw image processing, a single tessellation/grid is not sufficient.
Raw images are the unprocessed outputs of the camera detectors.
Modern detectors usually have multiple readout channels each with its own amplifier.
For example, the Hubble Space Telescope Advanced Camera for Surveys (ACS) has four amplifiers over its full detector area dividing the square field of view to four smaller squares.
Ground-based image detectors are not exempt; for example, each CCD of Subaru Telescope's Hyper Suprime-Cam camera (which has 104 CCDs) has four amplifiers, but they have the same height as the CCD and divide its width into four parts.
@cindex Channel
The bias current on each amplifier is different, and initial bias subtraction is not perfect.
So even after subtracting the measured bias current, you can usually still identify the boundaries of different amplifiers by eye.
See Figure 11(a) in Akhlaghi and Ichikawa (2015) for an example.
This results in the final reduced data having non-uniform, amplifier-shaped regions with higher or lower background flux values.
Such systematic biases will then propagate to all subsequent measurements we do on the data (for example, photometry and subsequent stellar mass and star formation rate measurements in the case of galaxies).
Therefore, an accurate analysis requires a two-layer tessellation: the top layer contains larger tiles, each covering one amplifier channel.
For clarity, we will call these larger tiles ``channels''.
The number of channels along each dimension is defined through the @option{--numchannels} option.
Each channel is then covered by its own individual smaller tessellation (with tile sizes determined by the @option{--tilesize} option).
This will allow independent analysis of two adjacent pixels from different channels if necessary.
If the image is processed or the detector only has one amplifier, you can set the number of channels in both dimensions to 1.
The final tessellation can be inspected on the image with the @option{--checktiles} option that is available to all programs which use tessellation for localized operations.
When this option is called, a FITS file with a @file{_tiled.fits} suffix will be created along with the outputs, see @ref{Automatic output}.
Each pixel in this image has the number of the tile that covers it.
If the number of channels in any dimension is larger than unity, you will notice that the tile IDs are defined such that the first channel is covered first, then the second, and so on.
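For example, a minimal sketch (on a hypothetical @file{image.fits}, with NoiseChisel standing in for any program that supports tessellation) that defines a 2-by-2 channel grid of 30-by-40-pixel tiles and writes the tile-ID check image:
@example
$ astnoisechisel image.fits --tilesize=30,40 \
                 --numchannels=2,2 --checktiles
@end example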
For the full list of processing-related common options (including tessellation options), please see @ref{Processing options}.
@node Automatic output, Output FITS files, Tessellation, Common program behavior
@section Automatic output
@cindex Standard input
@cindex Automatic output file names
@cindex Output file names, automatic
@cindex Setting output file names automatically
All the programs in Gnuastro are designed such that specifying an output file or directory (based on the program context) is optional.
When no output name is explicitly given (with @option{--output}, see @ref{Input output options}), the programs will automatically set an output name based on the input name(s) and what the program does.
For example, when you use ConvertType to save a FITS image named @file{dataset.fits} to a JPEG image and do not specify a name for it, the JPEG output file will be named @file{dataset.jpg}.
When the input is from the standard input (for example, a pipe, see @ref{Standard input}), and @option{--output} is not given, the output name will be the program's name (for example, @file{converttype.jpg}).
@vindex --keepinputdir
Another very important part of the automatic output generation is that all the directory information of the input file name is stripped off of it.
This feature can be disabled with the @option{--keepinputdir} option, see @ref{Input output options}.
It is the default behavior because astronomical data are usually very large and specially organized, with their own particular file names.
In some cases, the user might not have write permissions in those directories@footnote{In fact, even if the data is stored on your own computer, it is advised to only grant write permissions to the super user or root.
This way, you will not accidentally delete or modify your valuable data!}.
Let's assume that we are working on a report and want to process the FITS images from two projects (ABC and DEF), which are stored in the sub-directories named @file{ABCproject/} and @file{DEFproject/} of our top data directory (@file{/mnt/data}).
The following shell commands show how one image from the former is first converted to a JPEG image through ConvertType and then the objects from an image in the latter project are detected using NoiseChisel.
The text after the @command{#} sign is a comment (not typed!).
@example
$ pwd # Current location
/home/usrname/research/report
$ ls # List directory contents
ABC01.jpg
$ ls /mnt/data/ABCproject # Archive 1
ABC01.fits ABC02.fits ABC03.fits
$ ls /mnt/data/DEFproject # Archive 2
DEF01.fits DEF02.fits DEF03.fits
$ astconvertt /mnt/data/ABCproject/ABC02.fits --output=jpg # Prog 1
$ ls
ABC01.jpg ABC02.jpg
$ astnoisechisel /mnt/data/DEFproject/DEF01.fits # Prog 2
$ ls
ABC01.jpg ABC02.jpg DEF01_detected.fits
@end example
@node Output FITS files, Numeric locale, Automatic output, Common program behavior
@section Output FITS files
@cindex FITS
@cindex Output FITS headers
@cindex CFITSIO version on outputs
The output of many of Gnuastro's programs is (or can be) a FITS file.
The FITS format has many useful features for storing scientific datasets (cubes, images, and tables) along with robust features for archivability.
For more on this standard, please see @ref{Fits}.
As a community convention described in @ref{Fits}, the first extension of all FITS files produced by Gnuastro's programs only contains the meta-data that is intended for the file's extension(s).
For a Gnuastro program, this generic meta-data (that is stored as FITS keyword records) is its configuration when it produced this dataset: file name(s) of input(s) and option names, values and comments.
You can use the @option{--outfitsnoconfig} option to stop the programs from writing these keywords into the first extension of their output.
When the configuration is too trivial (for example, in the @ref{Table} program, where it may only consist of the input file name), no meta-data is written in this extension.
FITS keywords have the following limitations in regard to generic option names and values:
@itemize
@item
If a keyword (option name) is longer than 8 characters, the first word in the record (80 character line) is @code{HIERARCH} which is followed by the keyword name.
@item
Values can be at most 75 characters, but for strings, this changes to 73 (because of the two extra @key{'} characters that are necessary).
However, if the value is a file name, containing slash (@key{/}) characters to separate directories, Gnuastro will break the value into multiple keywords.
@item
Keyword names are not case-sensitive, so they are all stored in capital letters.
Hence, if you want to use Grep to inspect these keywords, use its @option{-i} option, like the example below.
@example
$ astfits image_detected.fits -h0 | grep -i snquant
@end example
@end itemize
The keywords above are classified (separated by an empty line and title) as a group titled ``ProgramName configuration''.
This meta-data extension also contains a final group of keywords to keep the basic date and version information of Gnuastro, its dependencies and the pipeline that is using Gnuastro (if it is under version control); they are listed below.
@table @command
@item DATE
The creation time of the FITS file.
This date is written directly by CFITSIO and is in UT format.
While the date can be useful metadata in most scenarios, it does have a caveat: even when everything else in your output is the same between multiple runs, the date will be different!
If exact reproducibility is important for you, this can be annoying.
To stop any Gnuastro program from writing the @code{DATE} keyword, you can use the @option{--outfitsnodate} option (see @ref{Input output options}).
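For example, a hypothetical sketch (file names are only for illustration) of a run whose output should not vary between identical executions because of the creation date:
@example
$ astarithmetic in.fits 2 x --outfitsnodate -o out.fits
@end example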
@item DATEUTC
If the date in the @code{DATE} keyword is in @url{https://en.wikipedia.org/wiki/Coordinated_Universal_Time, UTC}, this keyword will have a value of 1; otherwise, it will have a value of 0.
If @code{DATE} is not written, this is also ignored.
@item COMMIT
Git's commit description from the running directory of Gnuastro's programs.
If the running directory is not version controlled or @file{libgit2} is not installed (see @ref{Optional dependencies}) then this keyword will not be present.
The printed value is equivalent to the output of the following command:
@example
git describe --dirty --always
@end example
If the running directory contains non-committed work, then the stored value will have a `@command{-dirty}' suffix.
This can be very helpful to let you know that the data is not ready to be shared with collaborators or submitted to a journal.
You should only share results that are produced after all your work is committed (safely stored in the version controlled history and thus reproducible).
At first sight, version control appears to be mainly a tool for software developers.
However, progress in scientific research is almost identical to progress in software development: first you have a rough idea that starts with a handful of easy steps.
But as the first results appear to be promising, you will have to extend, or generalize, it to make it more robust and work in all the situations your research covers, not just your first test samples.
Slowly you will find wrong assumptions or bad implementations that need to be fixed (`bugs' in software development parlance).
Finally, when you submit the research to your collaborators or a journal, many comments and suggestions will come in, and you have to address them.
Software developers have created version control systems precisely for this kind of activity.
Each significant moment in the project's history is called a ``commit'', see @ref{Version controlled source}.
A snapshot of the project at each ``commit'' is safely stored away, so you can revert to it at a later time, or check changes/progress.
This way, you can be sure that your work is reproducible and track the progress and history.
With version control, experimentation in the project's analysis is greatly facilitated, since you can easily revert if a brainstormed test procedure fails.
One important feature of version control is that the research result (FITS image, table, report or paper) can be stamped with the unique commit information that produced it.
This information will enable you to exactly reproduce that same result later, even if you have made changes/progress.
For one example of a research paper's reproduction pipeline, please see the @url{https://gitlab.com/makhlaghi/NoiseChisel-paper, reproduction pipeline} of Akhlaghi and Ichikawa @url{https://arxiv.org/abs/1505.01664,2015} describing @ref{NoiseChisel}.
In case you don't want the @code{COMMIT} keyword in the first extension of your output FITS file, you can use the @option{--outfitsnocommit} option (see @ref{Input output options}).
@item CFITSIO
The version of CFITSIO used (see @ref{CFITSIO}).
This can be disabled with @option{--outfitsnoversions} (see @ref{Input output options}).
@item WCSLIB
The version of WCSLIB used (see @ref{WCSLIB}).
Note that older versions of WCSLIB do not report the version internally.
So this is only available if you are using more recent WCSLIB versions.
This can be disabled with @option{--outfitsnoversions} (see @ref{Input output options}).
@item GSL
The version of GNU Scientific Library that was used, see @ref{GNU Scientific Library}.
This can be disabled with @option{--outfitsnoversions} (see @ref{Input output options}).
@item GNUASTRO
The version of Gnuastro used (see @ref{Version numbering}).
This can be disabled with @option{--outfitsnoversions} (see @ref{Input output options}).
@end table
@node Numeric locale, , Output FITS files, Common program behavior
@section Numeric locale
@cindex Locale
@cindex @code{LC_ALL}
@cindex @code{LC_NUMERIC}
@cindex Decimal separator
@cindex Language of command-line
If your @url{https://en.wikipedia.org/wiki/Locale_(computer_software), system locale} is not English, it may happen that the `.' is not used as the decimal separator of basic command-line tools for input or output.
For example, in Spanish and some other languages, the decimal separator (the symbol used to separate the integer and fractional parts of a number) is a comma.
Therefore, in such systems, some programs may print @mymath{0.5} as `@code{0,5}' (instead of `@code{0.5}').
This mainly happens because some core operating system tools, like @command{awk} or @command{seq}, depend on the locale.
This can cause problems for other programs (like those in Gnuastro that expect a `@key{.}' as the decimal separator).
To see the effect, please try the commands below.
The first one will print @mymath{0.5} in your default locale's format.
The second set will use the Spanish locale for printing numbers (which will put a comma between the 0 and the 5).
The third will use the English (US) locale for printing numbers (which will put a point between the 0 and the 5).
@example
$ seq 0.5 1
$ export LC_NUMERIC=es_ES.utf8
$ seq 0.5 1
$ export LC_NUMERIC=en_US.utf8
$ seq 0.5 1
@end example
@noindent
With the simple command below, you can check your current locale environment variables for specifying the formats of various things like date, time, monetary, telephone, numbers, etc.
You can change any of these, by simply giving different values to the respective variable like above.
For a more complete explanation on each variable, see @url{https://www.baeldung.com/linux/locale-environment-variables}.
@example
$ locale
@end example
To avoid these kinds of locale-specific problems (for example, another program not being able to read `@code{0,5}' as half of unity), you can change the locale by giving the value of @code{C} to the @code{LC_NUMERIC} environment variable (or the lower-level/generic @code{LC_ALL}).
You will notice that @code{C} is not a human-language and country identifier like @code{en_US}, it is the programming locale, which is well recognized by programmers in all countries and is available on all Unix-like operating systems (others may not be pre-defined and may need installation).
You can set the @code{LC_NUMERIC} only for a single command (the first one below: simply defining the variable in the same line), or all commands within the running session (the second command below, or ``exporting'' it to all subsequent commands):
@example
## Change the numeric locale, only for this 'seq' command.
$ LC_NUMERIC=C seq 0.5 1
## Change the locale to the standard, for all commands after it.
$ export LC_NUMERIC=C
@end example
If you want to change it generally for all future sessions, you can put the second command in your shell's startup file.
For more on startup files, please see @ref{Installation directory}.
@node Data containers, Data manipulation, Common program behavior, Top
@chapter Data containers
@cindex File operations
@cindex Operations on files
@cindex General file operations
The most low-level and basic property of a dataset is how it is stored.
To process, archive and transmit the data, you need a container to store it first.
From the start of the computer age, different formats have been defined to store data, optimized for particular applications.
One format/container can never be useful for all applications: the storage defines the application and vice-versa.
In astronomy, the Flexible Image Transport System (FITS) standard has become the most common format of data storage and transmission.
It has many useful features, for example, multiple sub-containers (also known as extensions or header data units, HDUs) within one file, or support for tables as well as images.
Each HDU can store an independent dataset and its corresponding meta-data.
Therefore, Gnuastro has one program (see @ref{Fits}) specifically designed to manipulate FITS HDUs and the meta-data (header keywords) in each HDU.
Your astronomical research does not just involve data analysis (where the FITS format is very useful).
For example, you want to demonstrate your raw and processed FITS images or spectra as figures within slides, reports, or papers.
The FITS format is not defined for such applications.
Thus, Gnuastro also comes with the ConvertType program (see @ref{ConvertType}) which can be used to convert a FITS image to and from (where possible) other formats like plain text and JPEG (which allow two way conversion), along with EPS and PDF (which can only be created from FITS, not the other way round).
Finally, the FITS format is not just for images, it can also store tables.
Binary tables in particular can be very efficient in storing catalogs that have more than a few tens of columns and rows.
However, unlike images (where all elements/pixels have one data type), tables contain multiple columns and each column can have different properties: independent data types (see @ref{Numeric data types}) and meta-data.
In practice, each column can be viewed as a separate container that is grouped with others in the table.
The only shared property of the columns in a table is thus the number of elements they contain.
To allow easy inspection/manipulation of table columns, Gnuastro has the Table program (see @ref{Table}).
It can be used to select certain table columns in a FITS table and see them as a human readable output on the command-line, or to save them into another plain text or FITS table.
@menu
* Fits:: View and manipulate extensions and keywords.
* ConvertType:: Convert data to various formats.
* Table:: Read and write FITS tables to/from plain text.
* Query:: Import data from external databases.
@end menu
@node Fits, ConvertType, Data containers, Data containers
@section Fits
@cindex Vatican library
The ``Flexible Image Transport System'', or FITS, is by far the most common data container format in astronomy and in constant use since the 1970s.
Archiving (future usage, simplicity) has been one of the primary design principles of this format.
In the last few decades it has proved so useful and robust that the Vatican Library has also chosen FITS for its ``long-term digital preservation'' project@footnote{@url{https://www.vaticanlibrary.va/home.php?pag=progettodigit}}.
@cindex IAU, international astronomical union
Although the full name of the standard suggests that it is only for images, it also contains complete and robust features for tables.
It started off in the 1970s and was formally published as a standard in 1981; it was adopted by the International Astronomical Union (IAU) in 1982, and an IAU working group to maintain its future was defined in 1988.
The FITS 2.0 and 3.0 standards were approved in 2000 and 2008 respectively, and the 4.0 draft has also been released recently, please see the @url{https://fits.gsfc.nasa.gov/fits_standard.html, FITS standard document web page} for the full text of all versions.
Also see the @url{https://doi.org/10.1051/0004-6361/201015362, FITS 3.0 standard paper} for a nice introduction and history along with the full standard.
@cindex Meta-data
Many common image formats, for example, a JPEG, only have one image/dataset per file, however one great advantage of the FITS standard is that it allows you to keep multiple datasets (images or tables along with their separate meta-data) in one file.
In the FITS standard, each data + metadata is known as an extension, or more formally a header data unit or HDU.
The HDUs in a file can be completely independent: you can have multiple images of different dimensions/sizes or tables as separate extensions in one file.
However, while the standard does not impose any constraints on the relation between the datasets, it is strongly encouraged to group data that are contextually related with each other in one file.
For example, an image and the table/catalog of objects and their measured properties in that image.
Other examples can be images of one patch of sky in different colors (filters), or one raw telescope image along with its calibration data (tables or images).
As discussed above, the extensions in a FITS file can be completely independent.
To keep some information (meta-data) about the group of extensions in the FITS file, the community has adopted the following convention: put no data in the first extension, so it contains only meta-data.
This extension can thus be used to store meta-data regarding the whole file (the grouping of extensions).
Subsequent extensions may contain data along with their own separate meta-data.
All of Gnuastro's programs also follow this convention: the main output dataset(s) are placed in the second (or later) extension(s).
The first extension contains no data; the program's configuration (input file name(s), along with all its option values) is stored as its meta-data, see @ref{Output FITS files}.
The meta-data contain information about the data, for example, which region of the sky an image corresponds to, the units of the data, what telescope, camera, and filter the data were taken with, its observation date, or the software that produced it and its configuration.
Without the meta-data, the raw dataset is practically just a collection of numbers and really hard to understand, or connect with the real world (other datasets).
It is thus strongly encouraged to supplement your data (at any level of processing) with as much meta-data about your processing/science as possible.
The meta-data of a FITS file is in ASCII format, which can be easily viewed or edited with a text editor or on the command-line.
Each meta-data element (known as a keyword generally) is composed of a name, value, units and comments (the last two are optional).
For example, below you can see three FITS meta-data keywords for specifying the world coordinate system (WCS, or its location in the sky) of a dataset:
@example
LATPOLE = -27.805089 / [deg] Native latitude of celestial pole
RADESYS = 'FK5' / Equatorial coordinate system
EQUINOX = 2000.0 / [yr] Equinox of equatorial coordinates
@end example
However, there are some limitations which discourage viewing/editing the keywords with text editors.
For example, there is a fixed length of 80 characters for each keyword (its name, value, units and comments) and there are no new-line characters, so on a text editor all the keywords are seen in one line.
Also, the meta-data keywords are immediately followed by the data, which are commonly in binary format and will show up as strange-looking characters in a text editor, significantly slowing it down.
Gnuastro's Fits program was designed to allow easy manipulation of FITS extensions and meta-data keywords on the command-line while conforming fully with the FITS standard.
For example, you can copy or cut (copy and remove) HDUs/extensions from one FITS file to another, or completely delete them.
It also has features to delete, add, or edit meta-data keywords within one HDU.
@menu
* Invoking astfits:: Arguments and options to Fits.
@end menu
@node Invoking astfits, , Fits, Fits
@subsection Invoking Fits
Fits can print or manipulate the HDUs (extensions) of a FITS file, or the meta-data keywords in a given HDU.
The executable name is @file{astfits} with the following general template:
@example
$ astfits [OPTION...] ASTRdata
@end example
@noindent
One line examples:
@example
## View general information about every extension:
$ astfits image.fits
## Print the header keywords in the second HDU (counting from 0):
$ astfits image.fits -h1
## Only print header keywords that contain `NAXIS':
$ astfits image.fits -h1 | grep NAXIS
## Only print the WCS standard PC matrix elements
$ astfits image.fits -h1 | grep 'PC._.'
## Copy a HDU from input.fits to out.fits:
$ astfits input.fits --copy=hdu-name --output=out.fits
## Update the OLDKEY keyword value to 153.034:
$ astfits --update=OLDKEY,153.034,"Old keyword comment" input.fits
## Delete one COMMENT keyword and add a new one:
$ astfits --delete=COMMENT --comment="Anything you like ;-)." input.fits
## Write two new keywords with different values and comments:
$ astfits --write=MYKEY1,20.00,"An example keyword" \
          --write=MYKEY2,fd input.fits
## Inspect individual pixel area taken based on its WCS (in degree^2).
## Then convert the area to arcsec^2 with the Arithmetic program.
$ astfits input.fits --pixelareaonwcs -o pixarea.fits
$ astarithmetic pixarea.fits 3600 3600 x x -o pixarea_arcsec2.fits
@end example
@cindex HDU
@cindex HEALPix
When no action is requested (and only a file name is given), Fits will print a list of information about the extension(s) in the file.
This information includes the HDU number, HDU name (@code{EXTNAME} keyword), type of data (see @ref{Numeric data types}), and the number of data elements it contains (size along each dimension for images; rows and columns for tables).
Optionally, a comment column is printed for special situations (like a 2D HEALPix grid that is usually stored as a 1D dataset/table).
You can use this to get a general idea of the contents of the FITS file and what HDU to use for further processing, either with the Fits program or any other Gnuastro program.
Here is one example of information about a FITS file with four extensions: the first extension has no data; it is a purely meta-data HDU (commonly used to keep meta-data about the whole file, or the grouping of extensions, see @ref{Fits}).
The second extension is an image with name @code{IMAGE} and single precision floating point type (@code{float32}, see @ref{Numeric data types}), it has 4287 pixels along its first (horizontal) axis and 4286 pixels along its second (vertical) axis.
The third extension is also an image, with name @code{MASK}.
It is in 2-byte integer format (@code{int16}), which is commonly used to keep information about pixels (for example, to identify which ones were saturated, or which ones had cosmic rays and so on); note how it has the same size as the @code{IMAGE} extension.
The fourth extension is a binary table called @code{CATALOG}, which has 12371 rows and 5 columns (it probably contains information about the sources in the image).
@example
GNU Astronomy Utilities X.X
Run on Day Month DD HH:MM:SS YYYY
-----
HDU (extension) information: `image.fits'.
Column 1: Index (counting from 0).
Column 2: Name (`EXTNAME' in FITS standard).
Column 3: Image data type or `table' format (ASCII or binary).
Column 4: Size of data in HDU.
-----
0 n/a uint8 0
1 IMAGE float32 4287x4286
2 MASK int16 4287x4286
3 CATALOG table_binary 12371x5
@end example
If a specific HDU is identified on the command-line with the @option{--hdu} (or @option{-h} option) and no operation requested, then the full list of header keywords in that HDU will be printed (as if the @option{--printallkeys} was called, see below).
It is important to remember that this only occurs when @option{--hdu} is given on the command-line.
The @option{--hdu} value given in a configuration file will only be used when a specific operation on keywords is requested.
Therefore as described in the paragraphs above, when no explicit call to the @option{--hdu} option is made on the command-line and no operation is requested (on the command-line or configuration files), the basic information of each HDU/extension is printed.
The operating mode and input/output options to Fits are similar to the other programs and fully described in @ref{Common options}.
The options particular to Fits can be divided into three groups:
1) those related to modifying HDUs or extensions (see @ref{HDU information and manipulation}),
2) those related to viewing/modifying meta-data keywords (see @ref{Keyword inspection and manipulation}), and
3) those related to creating meta-images, where each pixel shows a value for a specific property of the image (see @ref{Pixel information images}).
These three classes of options cannot be called together in one run: in any instance of Fits, you can either work on the extensions, work on the meta-data keywords, or create meta-images where each pixel carries a particular piece of information about the image itself.
@menu
* HDU information and manipulation:: Learn about the HDUs and move them.
* Keyword inspection and manipulation:: Manipulate metadata keywords in a HDU.
* Pixel information images:: Pixel values contain information on the pixels.
@end menu
@node HDU information and manipulation, Keyword inspection and manipulation, Invoking astfits, Invoking astfits
@subsubsection HDU information and manipulation
Each FITS file header data unit, or HDU (also known as an extension) is an independent dataset (data + meta-data).
Multiple HDUs can be stored in one FITS file, see @ref{Fits}.
The general HDU-related options to the Fits program are listed below in two general classes:
the first group focuses on HDU information, while the second focuses on manipulating (moving or deleting) the HDUs.
The options below print information about the given HDU on the command-line.
Thus they cannot be called together in one command (each has its own independent output).
@table @option
@item -n
@itemx --numhdus
Print the number of extensions/HDUs in the given file.
Note that this option must be called alone and will only print a single number.
It is thus useful in scripts, for example, when you need to check the number of extensions in a FITS file.
For a complete list of basic meta-data on the extensions in a FITS file, do not use any of the options in this section or in @ref{Keyword inspection and manipulation}.
For more, see @ref{Invoking astfits}.
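For example, a small hypothetical scripting sketch (the file name is only for illustration) that stores the number of extensions in a shell variable:
@example
$ nhdu=$(astfits image.fits --numhdus)
$ echo "image.fits contains $nhdu HDUs"
@end example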
@item --hastablehdu
Print @code{1} (on standard output) if at least one table HDU (ASCII or binary) exists in the FITS file.
Otherwise (when no table HDU exists in the file), print @code{0}.
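For example, a hypothetical shell sketch (the file name is only for illustration) that only prints a message when the file contains at least one table HDU:
@example
$ [ "$(astfits input.fits --hastablehdu)" = 1 ] \
  && echo "input.fits contains a table"
@end example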
@item --listtablehdus
Print the names or numbers (when a name does not exist, counting from zero) of HDUs that contain a table (ASCII or Binary) on standard output, one per line.
Otherwise (when no table HDU exists in the file) nothing will be printed.
@item --hasimagehdu
Print @code{1} (on standard output) if at least one image HDU exists in the FITS file.
Otherwise (when no image HDU exists in the file), print @code{0}.
In the FITS standard, any array with any dimensions is called an ``image'', therefore this option includes 1, 3 and 4 dimensional arrays too.
However, an image HDU with zero dimensions (which is usually the first extension and only contains metadata) is not counted here.
@item --listimagehdus
Print the names or numbers (when a name does not exist, counting from zero) of HDUs that contain an image on standard output, one per line.
Otherwise (when no image HDU exists in the file) nothing will be printed.
In the FITS standard, any array with any dimensions is called an ``image'', therefore this option includes 1, 3 and 4 dimensional arrays too.
However, an image HDU with zero dimensions (which is usually the first extension and only contains metadata) is not counted here.
@item --listallhdus
Print the names or numbers (when a name does not exist, counting from zero) of all HDUs within the input file on the standard output, one per line.
@item --pixelscale
Print the HDU's pixel-scale (change in world coordinate for one pixel along each dimension) and pixel area or voxel volume.
Without the @option{--quiet} option, the output of @option{--pixelscale} has multiple lines and explanations, thus being more human-friendly.
It prints the file/HDU name, number of dimensions, and the units along with the actual pixel scales.
Also, when any of the units are in degrees, the pixel scales and area/volume are also printed in units of arc-seconds.
For 3D datasets, the pixel area (on each 2D slice of the 3D cube) is printed as well as the voxel volume.
If you only want the pixel area of a 2D image in units of arcsec@mymath{^2} you can use @option{--pixelareaarcsec2} described below.
However, in scripts (that are to be run automatically), this human-friendly format is annoying, so when called with the @option{--quiet} option, only the pixel-scale value(s) along each dimension is(are) printed in one line.
These numbers are followed by the pixel area (in the raw WCS units).
For 3D datasets, this will be area on each 2D slice.
Finally, for 3D datasets, a final number (the voxel volume) is printed.
As a summary, in @option{--quiet} mode, three numbers are printed for 2D datasets, and five numbers for 3D datasets.
If the dataset has more than 3 dimensions, only the pixel-scale values are printed (no area or volume will be printed).
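For example, assuming a hypothetical 2D image @file{image.fits}, the commands below show both output modes and extract the pixel area (the third number in @option{--quiet} mode for a 2D image) into a shell variable:
@example
# Human-friendly output (multiple lines, with explanations).
$ astfits image.fits --pixelscale

# Machine-friendly output; keep only the pixel area.
$ area=$(astfits image.fits --pixelscale --quiet \
                 | awk '@{print $3@}')
@end example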
@item --pixelareaarcsec2
Print the HDU's pixel area in units of arcsec@mymath{^2}.
This option only works on 2D images that have WCS coordinates in units of degrees.
For lower-level information about the pixel scale in each dimension, see @option{--pixelscale} (described above).
@item --skycoverage
@cindex Image's sky coverage
@cindex Coverage of image over sky
Print the rectangular area (or 3D cube) covered by the given image/data-cube HDU over the Sky in the WCS units.
The covered area is reported in two ways:
1) the center and full width in each dimension,
2) the minimum and maximum sky coordinates in each dimension.
This option is thus useful when you want to get a general feeling of a new image/dataset, or to prepare the inputs for querying external databases in the region of the image (for example, with @ref{Query}).
If run without the @option{--quiet} option, the values are given with a human-friendly description.
For example, here is the output of this option on an image taken near the star Castor:
@example
$ astfits castor.fits --skycoverage
Input file: castor.fits (hdu: 1)
Sky coverage by center and (full) width:
Center: 113.9149075 31.93759664
Width: 2.41762045 2.67945253
Sky coverage by range along dimensions:
RA 112.7235592 115.1411797
DEC 30.59262123 33.27207376
@end example
With the @option{--quiet} option, the values are more machine-friendly (easy to parse).
It has two lines, where the first line contains the center/width values and the second line shows the coordinate ranges in each dimension.
@example
$ astfits castor.fits --skycoverage --quiet
113.9149075 31.93759664 2.41762045 2.67945253
112.7235592 115.1411797 30.59262123 33.27207376
@end example
Note that this is a simple rectangle (cube in 3D) definition, so if the image is rotated relative to the celestial coordinates, a general polygon would be necessary to exactly describe the coverage.
Hence, when there is rotation, the reported area will be larger than the actual area containing data; you can visually inspect the covered area with the @option{--pixelareaonwcs} option of @ref{Fits}.
Currently this option only supports images that are less than 180 degrees in width (which is usually the case!).
This requirement has been necessary to account for images that cross the RA=0 hour circle on the sky.
Please get in touch with us at @url{mailto:bug-gnuastro@@gnu.org} if you have an image that is wider than 180 degrees, so we can try to find a solution based on your need.
@item --datasum
@cindex @code{DATASUM}: FITS keyword
Calculate and print the given HDU's "datasum" to stdout.
The given HDU is specified with the @option{--hdu} (or @option{-h}) option.
This number is calculated by parsing all the bytes of the given HDU's data records (excluding keywords).
This option ignores any possibly existing @code{DATASUM} keyword in the HDU.
For more on @code{DATASUM} in the FITS standard, see @ref{Keyword inspection and manipulation} (under the @code{checksum} component of @option{--write}).
You can use this option to confirm that the data in two different HDUs (possibly with different keywords) is identical.
Its advantage over @option{--write=datasum} (which writes the @code{DATASUM} keyword into the given HDU) is that it does not require write permissions.
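For example, with the hypothetical file names below, if the two printed numbers are identical, the data in the two HDUs are the same (even if their keywords differ):
@example
$ astfits a.fits -h1 --datasum
$ astfits b.fits -h1 --datasum
@end example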
@item --datasum-encoded
Similar to @option{--datasum}, except that the output will be an encoded string of numbers and small-caps alphabetic characters.
This is the same encoding algorithm that is used for the @code{CHECKSUM} keyword, but applied to the value of the @code{DATASUM} result.
In some scenarios, this string can be more useful than the raw integer.
@end table
The following options manipulate (move/delete) the HDUs within one FITS file, or into another FITS file.
These options may be called multiple times in one run.
If so, the extensions will be copied from the input FITS file to the output FITS file in the given order (on the command-line and also in configuration files, see @ref{Configuration file precedence}).
If these options are called together in one run of Fits, then first @option{--copy} is run (on all specified HDUs), followed by @option{--cut} (again, on all specified HDUs), and then @option{--remove} (on all specified HDUs).
The @option{--copy} and @option{--cut} options need an output FITS file (specified with the @option{--output} option).
If the output file exists, then the specified HDU will be copied following the last extension of the output file (the existing HDUs in it will be untouched).
Thus, after Fits finishes, the copied HDU will be the last HDU of the output file.
If no output file name is given, then automatic output will be used to store the HDUs given to this option (see @ref{Automatic output}).
@table @option
@item -C STR
@itemx --copy=STR
Copy the specified extension into the output file, see explanations above.
@item -k STR
@itemx --cut=STR
Cut (copy to output, remove from input) the specified extension into the
output file, see explanations above.
@item -R STR
@itemx --remove=STR
Remove the specified HDU from the input file.
The first (zero-th) HDU cannot be removed with this option.
Consider using @option{--copy} or @option{--cut} in combination with @option{--primaryimghdu} to not have an empty zero-th HDU.
From CFITSIO: ``In the case of deleting the primary array (the first HDU in the file) then [it] will be replaced by a null primary array containing the minimum set of required keywords and no data.''.
So in practice, any existing data (array) and meta-data in the first extension will be removed, but the number of extensions in the file will not change.
This is because of the unique position the first FITS extension has in the FITS standard (for example, it cannot be used to store tables).
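For example, the hypothetical command below removes two HDUs (named @code{SCI2} and @code{ERR2}) from @file{in.fits} in one run; using extension names (instead of numbers) avoids any ambiguity about positions shifting after the first removal:
@example
$ astfits in.fits --remove=SCI2 --remove=ERR2
@end example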
@item --primaryimghdu
Copy or cut an image HDU to the zero-th HDU/extension of a file that does not yet exist.
This option is thus irrelevant if the output file already exists or the copied/cut extension is a FITS table.
For example, with the commands below, first we make sure that @file{out.fits} does not exist, then we copy the first extension of @file{in.fits} to the zero-th extension of @file{out.fits}.
@example
$ rm -f out.fits
$ astfits in.fits --copy=1 --primaryimghdu --output=out.fits
@end example
If we had not used @option{--primaryimghdu}, then the zero-th extension of @file{out.fits} would have no data, and its second extension would host the copied image (just like any other output of Gnuastro).
@end table
@node Keyword inspection and manipulation, Pixel information images, HDU information and manipulation, Invoking astfits
@subsubsection Keyword inspection and manipulation
The meta-data in each header data unit, or HDU (also known as extension, see @ref{Fits}) is stored as ``keyword''s.
Each keyword consists of a name, value, unit, and comments.
The Fits program (see @ref{Fits}) options related to viewing and manipulating keywords in a FITS HDU are described below.
First, let's review the @option{--keyvalue} option which should be called separately from the rest of the options described in this section.
Also, unlike the rest of the options in this section, with @option{--keyvalue}, you can give more than one input file.
@table @option
@item -l STR[,STR[,...]]
@itemx --keyvalue=STR[,STR[,...]]
Only print the value of the requested keyword(s): the @code{STR}s.
@option{--keyvalue} can be called multiple times, and each call can contain multiple comma-separated keywords.
If more than one file is given, this option uses the same HDU/extension for all of them (value to @option{--hdu}).
For example, you can get the number of dimensions of the three FITS files in the running directory, as well as the length along each dimension, with this command:
@example
$ astfits *.fits --keyvalue=NAXIS,NAXIS1 --keyvalue=NAXIS2
image-a.fits 2 774 672
image-b.fits 2 774 672
image-c.fits 2 387 336
@end example
If only one input is given, and the @option{--quiet} option is activated, the file name is not printed on the first column, only the values of the requested keywords.
@example
$ astfits image-a.fits --keyvalue=NAXIS,NAXIS1 \
--keyvalue=NAXIS2 --quiet
2 774 672
@end example
@cartouche
@noindent
@cindex Argument list too long
@strong{Argument list too long:} if the list of input files is too long, the shell will complain with an @code{Argument list too long} error!
To avoid this problem, you can put the list of files in a plain-text file and give that plain-text file to the Fits program through the @option{--arguments} option discussed below.
@end cartouche
The output is internally stored (and finally printed) as a table (with one column per keyword).
Therefore just like the Table program, you can use @option{--colinfoinstdout} to print the metadata like the example below (also see @ref{Invoking asttable}).
The keyword metadata (comments and units) are extracted from the comments and units of the keyword in the input files (first file that has a comment or unit).
Hence if the keyword does not have units or comments in any of the input files, they will be empty.
For more on Gnuastro's plain-text metadata format, see @ref{Gnuastro text table format}.
@example
$ astfits *.fits --keyvalue=NAXIS,NAXIS1,NAXIS2 \
--colinfoinstdout
# Column 1: FILENAME [name,str10,] Name of input file.
# Column 2: NAXIS [ ,u8 ,] number of data axes
# Column 3: NAXIS1 [ ,u16 ,] length of data axis 1
# Column 4: NAXIS2 [ ,u16 ,] length of data axis 2
image-a.fits 2 774 672
image-b.fits 2 774 672
image-c.fits 2 387 336
@end example
Another advantage of a table output is that you can directly write the table to a file.
For example, if you add @option{--output=fileinfo.fits}, the information above will be printed into a FITS table.
You can also pipe it into @ref{Table} to select files based on certain properties, to sort them based on another property, or any other operation that can be done with Table (including @ref{Column arithmetic}).
For example, with the command below, you can select all the files that have a size larger than 500 pixels in both dimensions.
@example
$ astfits *.fits --keyvalue=NAXIS,NAXIS1,NAXIS2 \
--colinfoinstdout \
| asttable --range=NAXIS1,500,inf \
--range=NAXIS2,500,inf -cFILENAME
image-a.fits
image-b.fits
@end example
Note that @option{--colinfoinstdout} is necessary to use column names when piping to other programs (like @command{asttable} above).
Also, with the @option{-cFILENAME} option, we are asking Table to only print the final file names (we do not need the sizes any more).
The commands with multiple files above used @file{*.fits}, which is only useful when all your FITS files are in the same directory.
However, in many cases, your FITS files will be scattered in multiple sub-directories of a certain top-level directory, or you may only want those with more particular file name patterns.
A more powerful way to list the input files to @option{--keyvalue} is to use the @command{find} program in Unix-like operating systems.
For example, with the command below you can search all the FITS files in all the sub-directories of @file{/TOP/DIR}.
@example
astfits $(find /TOP/DIR/ -name "*.fits") --keyvalue=NAXIS2
@end example
@item --arguments=STR
A plain-text file containing the list of input files that will be used in @option{--keyvalue}.
Each word (group of characters separated by SPACE or new-line) is assumed to be the name of a separate input file.
This option is only relevant when no input files are given as arguments on the command-line: if any arguments are given, this option is ignored.
This is necessary when the list of input files is very long, causing the shell to abort with an @code{Argument list too long} error.
In such cases, you can put the list into a plain-text file and use this option like below:
@example
$ ls /path/to/dir/*.fits > list.txt
$ astfits --arguments=list.txt --keyvalue=NAXIS1
@end example
@item -O
@itemx --colinfoinstdout
Print column information (or metadata) above the column values when writing keyword values to standard output with @option{--keyvalue}.
You can read this option as column-information-in-standard-output.
@end table
Below we will discuss the options that can be used to manipulate keywords.
To see the full list of keywords in a FITS HDU, you can use the @option{--printallkeys} option.
If any of the keyword modification options below are requested (for example, @option{--update}), the headers of the input file/HDU will be changed first, then printed.
Keyword modification is done within the input file.
Therefore, if you want to keep the original FITS file or HDU intact, it is easiest to create a copy of the file/HDU first and then run Fits on that (for copying a HDU to another file, see @ref{HDU information and manipulation}).
In the FITS standard, keywords are always uppercase.
So case does not matter in the input or output keyword names you specify.
@cartouche
@noindent
@strong{@code{CHECKSUM} automatically updated, when present:} the keyword modification options will change the contents of the HDU.
Therefore, if a @code{CHECKSUM} is present in the HDU, after all the keyword modification options have been completed, Fits will also update @code{CHECKSUM} before closing the file.
@end cartouche
Most of the options can accept multiple instances in one command.
For example, you can specify multiple keywords to delete by calling @option{--delete} multiple times; since repeated keywords are allowed, you can even delete the same keyword multiple times.
The action of such options will start from the top-most keyword.
The precedence of the operations is described below.
Note that while the order within each class of actions is preserved, the order of the individual actions is not.
So irrespective of the order in which you called @option{--delete} and @option{--update}, first all the delete operations will take effect, then the update operations.
@enumerate
@item
@option{--delete}
@item
@option{--rename}
@item
@option{--update}
@item
@option{--write}
@item
@option{--asis}
@item
@option{--history}
@item
@option{--comment}
@item
@option{--date}
@item
@option{--printallkeys}
@item
@option{--verify}
@item
@option{--copykeys}
@end enumerate
@noindent
All possible syntax errors will be reported before the keywords are actually written.
FITS errors during any of these actions will be reported, but Fits will not stop until all the operations are complete.
If @option{--quitonerror} is called, then Fits will immediately stop upon the first error.
@cindex GNU Grep
If you want to inspect only a certain set of header keywords, it is easiest to pipe the output of the Fits program to GNU Grep.
Grep is a very powerful and advanced tool for searching strings, which is precisely made for such situations.
For example, if you only want to check the size of an image FITS HDU, you can run:
@example
$ astfits input.fits | grep NAXIS
@end example
@cartouche
@noindent
@strong{FITS STANDARD KEYWORDS:}
Some header keywords are necessary for later operations on a FITS file, for example, @code{BITPIX} or @code{NAXIS}; see the FITS standard for their full list.
If you modify (for example, remove or rename) such keywords, the FITS file extension might not be usable any more.
Also be careful with the world coordinate system keywords: if you modify or change their values, any future world coordinate system measurements (like RA and Dec) on the image will also change.
@end cartouche
@noindent
The keyword related options to the Fits program are fully described below.
@table @option
@item -d STR
@itemx --delete=STR
Delete one instance of the @option{STR} keyword from the FITS header.
Multiple instances of @option{--delete} can be given (possibly even for the same keyword, when it is repeated in the meta-data).
All keywords given will be removed from the headers in the same given order.
If the keyword does not exist, Fits will give a warning and return with a non-zero value, but will not stop.
To stop as soon as an error occurs, run with @option{--quitonerror}.
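For example, the hypothetical command below deletes one instance of the @code{COMMENT} keyword and the @code{AIRMASS} keyword from the first extension of @file{test.fits}:
@example
$ astfits test.fits -h1 --delete=COMMENT --delete=AIRMASS
@end example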
@item -r STR,STR
@itemx --rename=STR,STR
Rename a keyword to a new name (for example, @option{--rename=OLDNAME,NEWNAME}).
@option{STR} contains both the existing and new names, which should be separated by either a comma (@key{,}) or a space character.
Note that if you use a space character, you have to put the value to this option within double quotation marks (@key{"}) so the space character is not interpreted as an option separator.
Multiple instances of @option{--rename} can be given in one command.
The keywords will be renamed in the specified order.
If the keyword does not exist, Fits will give a warning and return with a non-zero value, but will not stop.
To stop as soon as an error occurs, run with @option{--quitonerror}.
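For example, both hypothetical commands below rename @code{OLDKEY} to @code{NEWKEY}; the second uses a space as the separator and therefore needs the double quotation marks:
@example
$ astfits test.fits -h1 --rename=OLDKEY,NEWKEY
$ astfits test.fits -h1 --rename="OLDKEY NEWKEY"
@end example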
@item -u STR
@itemx --update=STR
Update a keyword, its value, its comments and its units in the format described below.
If there are multiple instances of the keyword in the header, they will be changed from top to bottom (with multiple @option{--update} options).
@noindent
The format of the values to this option can best be specified with an
example:
@example
--update=KEYWORD,value,"comments for this keyword",unit
@end example
If there is a writing error, Fits will give a warning and return with a non-zero value, but will not stop.
To stop as soon as an error occurs, run with @option{--quitonerror}.
@noindent
The value can be any numerical or string value@footnote{Some tricky situations arise with values like `@command{87095e5}', if this was intended to be a number it will be kept in the header as @code{8709500000} and there is no problem.
But this can also be a shortened Git commit hash.
In the latter case, it should be treated as a string and stored as it is written.
Commit hashes are very important in keeping the history of a file during your research and such values might arise without you noticing them in your reproduction pipeline.
One solution is to use @command{git describe} instead of the short hash alone.
A less recommended solution is to add a space after the commit hash and Fits will write the value as `@command{87095e5 }' in the header.
If you later compare the strings on the shell, the space character will be ignored by the shell in the latter solution and there will be no problem.}.
Other than the @code{KEYWORD}, all the other values are optional.
To leave a given token empty, follow the preceding comma (@key{,}) immediately with the next.
If any space character is present around the commas, it will be considered part of the respective token.
So if more than one token has space characters within it, the safest method to specify a value to this option is to put double quotation marks around each individual token that needs it.
Note that without double quotation marks, space characters will be seen as option separators and can lead to undefined behavior.
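For example, in the hypothetical command below, the value and unit of an @code{EXPTIME} keyword are updated, while the comment token is left empty (note the two consecutive commas):
@example
$ astfits test.fits -h1 --update=EXPTIME,1500,,s
@end example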
@item -w STR
@itemx --write=STR
Write a keyword to the header.
For the possible value input formats, comments and units for the keyword, see the @option{--update} option above.
The special names (first string) below will cause a special behavior:
@table @option
@item /
Write a commentary ``title'' to the list of keywords.
A title consists of one blank line (containing only @key{SPACE} characters), and another line which is blank for the first 22 bytes and then starts with a slash (@key{/}).
The second string given to this option is the ``title'' or string printed after the slash.
For example, with the command below you can add a ``title'' of `My keywords' after the existing keywords and add the subsequent @code{K1} and @code{K2} keywords under it (note that keyword names are not case sensitive).
@example
$ astfits test.fits -h1 --write=/,"My keywords" \
--write=k1,1.23,"My first keyword" \
--write=k2,4.56,"My second keyword"
$ astfits test.fits -h1
[[[ ... truncated ... ]]]
/ My keywords
K1 = 1.23 / My first keyword
K2 = 4.56 / My second keyword
END
@end example
Adding a ``title'' before each contextually separate group of header keywords greatly helps in readability and visual inspection of the keywords.
So generally, when you want to add new FITS keywords, it is good practice to also add a title before them or classify them under different titles.
The title(s) is(are) written into the FITS file in the same order that @option{--write} is called.
Therefore in one run of the Fits program, you can specify many different titles (with their own keywords under them).
For example, the command below builds on the previous example and adds another group of keywords named @code{A1} and @code{A2}.
@example
$ astfits test.fits -h1 --write=/,"My keywords" \
--write=k1,1.23,"My first keyword" \
--write=k2,4.56,"My second keyword" \
--write=/,"My second group of keywords" \
--write=a1,7.89,"First keyword" \
--write=a2,0.12,"Second keyword"
@end example
The reason you need to use @key{/} instead of the keyword name in the call to @option{--write} is that @key{/} is the first non-white character in the printed title.
Technically (in the FITS standard), the two extra lines added in the header as a title are considered to be ``commentary keywords'' (where the name-value indicator '@code{= }' does not exist on the 9th and 10th byte).
Such lines will be treated as comment lines by any FITS reader and as mentioned in the FITS standard, they are primarily for improving the human experience (aesthetics).
For details, see Sections 4.1.2.2 and 4.4.2.4 of the @url{https://fits.gsfc.nasa.gov/fits_standard.html, FITS standard} version 4.0 (latest as of this writing).
@item checksum
@cindex CFITSIO
@cindex @code{DATASUM}: FITS keyword
@cindex @code{CHECKSUM}: FITS keyword
When nothing is given afterwards, the header integrity keywords @code{DATASUM} and @code{CHECKSUM} will be calculated and written/updated.
The calculation and writing is done fully by CFITSIO, therefore they comply with the FITS standard 4.0@footnote{@url{https://fits.gsfc.nasa.gov/standard40/fits_standard40aa-le.pdf}} that defines these keywords (its Appendix J).
If a value is given (e.g., @option{--write=checksum,MyOwnCheckSum}), then CFITSIO will not be called to calculate these two keywords and the value (as well as possible comment and unit) will be written just like any other keyword.
This is generally not recommended since @code{CHECKSUM} is a reserved FITS standard keyword.
If you want to calculate the checksum with another hashing standard manually and write it into the header, it is recommended to use another keyword name.
In the FITS standard, @code{CHECKSUM} depends on the HDU's data @emph{and} header keywords; it will therefore not be valid if you make any further changes to the header after writing the @code{CHECKSUM} keyword.
This includes any further keyword modification options in the same call to the Fits program.
However, @code{DATASUM} only depends on the data section of the HDU/extension, so it is not changed when you add, remove or update the header keywords.
Therefore, it is recommended to write these keywords as the last keywords that are written/modified in the extension.
You can use the @option{--verify} option (described below) to verify the values of these two keywords.
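For example, following the recommendation above, the hypothetical sketch below does the ordinary keyword edits in one call, then writes the integrity keywords in a separate, final call:
@example
$ astfits img.fits -h1 --update=EXPTIME,1500
$ astfits img.fits -h1 --write=checksum
@end example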
@item datasum
Similar to @option{checksum}, but only write the @code{DATASUM} keyword (that does not depend on the header keywords, only the data).
@end table
@item -a STR
@itemx --asis=STR
Write the given @code{STR} @emph{exactly} as it is, into the given FITS file header with no modifications.
If the contents of @code{STR} do not conform to the FITS standard for keywords, then it may (most probably: it will!) corrupt your file and you may not be able to open it any more.
So please be @strong{very careful} with this option (it is your responsibility to make sure that the string conforms with the FITS standard for keywords).
If you want to define the keyword from scratch, it is best to use the @option{--write} option (described above) and let CFITSIO worry about complying with the FITS standard.
Also, if you want to copy keywords from one FITS file to another, you can use @option{--copykeys} (described below).
Through these high-level instances, you do not have to worry about low-level issues.
One common usage of @option{--asis} occurs when you are given the contents of a FITS header (many keywords) as a plain-text file (so the format of each keyword line conforms with the FITS standard, just the file is plain-text, and you have one keyword per line when you open it in a plain-text editor).
In that case, Gnuastro's Fits program cannot read the file directly (as a whole, the plain-text file does not conform to the FITS standard, which has no new-line characters!).
With the command below, you can insert those headers in @file{headers.txt} into @file{img.fits} (its HDU number 1, the default; you can change the HDU to modify with @option{--hdu}).
@example
$ cat headers.txt \
| while read line; do \
astfits img.fits --asis="$line"; \
done
@end example
@cartouche
@noindent
@strong{Don't forget a title:} since the newly added headers in the example above were not originally in the file, they are probably some form of high-level metadata.
The raw example above will just append the new keywords after the last existing one, making it hard for humans to read (it is not clear what this new group of keywords signifies, where it starts, and where it ends).
To help the human readability of the header, add a title for this group of keywords before writing them.
To do that, run the following command before the @command{cat ...} command above (replace @code{Imported keys} with any title that best describes this group of new keywords based on their context):
@example
$ astfits img.fits --write=/,"Imported keys"
@end example
@end cartouche
@item -H STR
@itemx --history STR
Add a @code{HISTORY} keyword to the header with the given value.
A new @code{HISTORY} keyword will be created for every instance of this option.
If the string given to this option is longer than 70 characters, it will be separated into multiple keyword cards.
If there is an error, Fits will give a warning and return with a non-zero value, but will not stop.
To stop as soon as an error occurs, run with @option{--quitonerror}.
@item -c STR
@itemx --comment STR
Add a @code{COMMENT} keyword to the header with the given value.
Similar to the explanation for @option{--history} above.
@item -t
@itemx --date
Put the current date and time in the header.
If the @code{DATE} keyword already exists in the header, it will be updated.
If there is a writing error, Fits will give a warning and return with a non-zero value, but will not stop.
To stop as soon as an error occurs, run with @option{--quitonerror}.
@item -p
@itemx --printallkeys
Print the full metadata (keywords, values, units and comments) in the specified FITS extension (HDU).
If this option is called along with any of the other keyword editing commands described above, all the other editing commands take precedence over it.
Therefore, it will print the final keywords after all the editing has been done.
@item --printkeynames
Print only the keyword names of the specified FITS extension (HDU), one line per name.
This option must be called alone.
@item -v
@itemx --verify
Verify the @code{DATASUM} and @code{CHECKSUM} data integrity keywords of the FITS standard.
See the description under the @code{checksum} (under @option{--write}, above) for more on these keywords.
This option will print @code{Verified} for both keywords if they can be verified.
Otherwise, if they do not exist in the given HDU/extension, it will print @code{NOT-PRESENT}, and if they cannot be verified it will print @code{INCORRECT}.
In the latter case (when the keyword values exist but cannot be verified), the Fits program will also return with a failure.
By default, this option will also print a short description of the @code{DATASUM} and @code{CHECKSUM} keywords.
You can suppress this extra information with the @option{--quiet} option.
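For example, assuming a hypothetical @file{img.fits} whose first extension contains these keywords, you can verify them quietly (for use in scripts) with:
@example
$ astfits img.fits -h1 --verify --quiet
@end example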
@item --copykeys=INT:INT/STR,STR[,STR]
Copy the desired set of the input's keyword records to the output (specified with the @option{--output} and @option{--outhdu} options for the file name and HDU/extension respectively).
The keywords to copy can be given either as a range (in the format of @code{INT:INT}, inclusive) or as a list of keyword names given as comma-separated strings (@code{STR,STR}); the list can have any number of keyword names.
More details and examples of the two forms are given below:
@table @asis
@item Range
The given string to this option must be two integers separated by a colon (@key{:}).
The first integer must be positive (counting of the keyword records starts from 1).
The second integer may be negative (zero is not acceptable) or an integer larger than the first.
A negative second integer means counting from the end.
So @code{-1} is the last copy-able keyword (not including the @code{END} keyword).
To see the header keywords of the input with a number before them, you can pipe the output of the Fits program (when it prints all the keywords in an extension) into the @command{cat} program like below:
@example
$ astfits input.fits -h1 | cat -n
@end example
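For example, once you have found the desired record numbers with the command above, a hypothetical call like the one below will copy records 12 up to the last copy-able keyword into the first extension of @file{output.fits}:
@example
$ astfits input.fits -h1 --copykeys=12:-1 \
          --output=output.fits --outhdu=1
@end example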
@item List of names
The given string to this option must be a comma separated list of keyword names.
For example, see the command below:
@example
$ astfits input.fits -h1 --copykeys=KEY1,KEY2 \
--output=output.fits --outhdu=1
@end example
Please consider the notes below when copying keywords with names:
@itemize
@item
If the number of characters in the name is more than 8, CFITSIO will place a @code{HIERARCH} before it.
In this case simply give the name and do not give the @code{HIERARCH} (which is a constant and not considered part of the keyword name).
@item
If your keyword name is composed only of digits, do not give it as the first name given to @option{--copykeys}.
Otherwise, it will be confused with the range format above.
You can safely give an only-digit keyword name as the second or later requested keyword.
@item
If the keyword is repeated more than once in the header, currently only the first instance will be copied.
In other words, even if you call @option{--copykeys} multiple times with the same keyword name, its first instance will be copied.
If you need to copy multiple instances of the same keyword, please get in touch with us at @code{bug-gnuastro@@gnu.org}.
@end itemize
@end table
@item --outhdu
The HDU/extension to write the output keywords of @option{--copykeys}.
@item -Q
@itemx --quitonerror
Quit if any of the operations above are not successful.
By default if an error occurs, Fits will warn the user of the faulty keyword and continue with the rest of actions.
@item -s STR
@itemx --datetosec STR
@cindex Unix epoch time
@cindex Time, Unix epoch
@cindex Epoch, Unix time
Interpret the value of the given keyword in the FITS date format (most generally: @code{YYYY-MM-DDThh:mm:ss.ddd...}) and return the corresponding Unix epoch time (number of seconds that have passed since 00:00:00 Thursday, January 1st, 1970).
The @code{Thh:mm:ss.ddd...} section (specifying the time of day), and also the @code{.ddd...} (specifying the fraction of a second) are optional.
The value to this option must be the FITS keyword name that contains the requested date, for example, @option{--datetosec=DATE-OBS}.
@cindex GNU C Library
This option can also interpret the older FITS date format (@code{DD/MM/YYThh:mm:ss.ddd...}) where only two characters are given to the year.
In this case (following the GNU C Library), this option will make the following assumption: values 69 to 99 correspond to the years 1969 to 1999, and values 0 to 68 correspond to the years 2000 to 2068.
This is a very useful option for operations on the FITS date values, for example, sorting FITS files by their dates, or finding the time difference between two FITS files.
The advantage of working with the Unix epoch time is that you do not have to worry about calendar details (for example, the number of days in different months, or leap years).
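For example, assuming your files have a @code{DATE-OBS} keyword in their first extension, a hypothetical sketch for sorting them by observation date could look like this (with @option{--quiet} removing the extra human-friendly text):
@example
$ for f in *.fits; do \
    echo "$(astfits $f -h1 --datetosec=DATE-OBS --quiet) $f"; \
  done | sort -n
@end example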
@item --wcscoordsys=STR
@cindex FK5
@cindex Galactic coordinate system
@cindex Ecliptic coordinate system
@cindex Equatorial coordinate system
@cindex Supergalactic coordinate system
@cindex Coordinate system: Galactic
@cindex Coordinate system: Ecliptic
@cindex Coordinate system: Equatorial
@cindex Coordinate system: Supergalactic
Convert the coordinate system of the image's world coordinate system (WCS) to the given coordinate system (@code{STR}) and write it into the file given to @option{--output} (or an automatically named file if no @option{--output} has been given).
For example, with the command below, @file{img-eq.fits} will have an identical dataset (pixel values) as @file{image.fits}.
However, the WCS coordinate system of @file{img-eq.fits} will be the equatorial coordinate system in the Julian calendar epoch 2000 (which is the most common epoch used today).
Fits will automatically extract the current coordinate system of @file{image.fits} and as long as it is one of the recognized coordinate systems listed below, it will do the conversion.
@example
$ astfits image.fits --wcscoordsys=eq-j2000 \
--output=img-eq.fits
@end example
The currently recognized coordinate systems are listed below (the most common one today is @code{eq-j2000}):
@table @code
@item eq-j2000
2000.0 (Julian-year) equatorial coordinates.
This is also known as FK5 (short for ``Fundamental Katalog No 5'' which was the source of the star coordinates used to define it).
This coordinate system is based on the motion of the Sun and has different epochs at which the mean equator was used (for example, @code{eq-b1950} below).
Furthermore, the definition of the year differs: either the Besselian year (as in 1950.0), or the Julian year (as in 2000.0).
For more on their difference and links for further reading about epochs in astronomy, please see the description in @url{https://en.wikipedia.org/wiki/Epoch_(astronomy), Wikipedia}.
Because of these difficulties, the equatorial J2000.0 coordinate system has been deprecated by the IAU in favor of the International Celestial Reference System (ICRS), but it is still used extensively.
ICRS is defined based on extra-galactic quasars, so it does not depend on the dynamics of the solar system any more.
But to enable historical continuity, ICRS has been defined to be equivalent to the equatorial J2000.0 system, within the accepted error bars of the latter (tens of milli-arcseconds).
This is why the move to ICRS has been relatively slow.
@item eq-b1950
1950.0 (Besselian-year) equatorial coordinates.
@item ec-j2000
2000.0 (Julian-year) ecliptic coordinates.
@item ec-b1950
1950.0 (Besselian-year) ecliptic coordinates.
@item galactic
Galactic coordinates.
@item supergalactic
Supergalactic coordinates.
@end table
@item --wcsdistortion=STR
@cindex WCS distortion
@cindex Distortion, WCS
@cindex SIP WCS distortion
@cindex TPV WCS distortion
If the input has a WCS distortion, the output (file given with the @option{--output} option) will have the distortion given to this option (for example, @code{SIP} or @code{TPV}).
The output will be a new file (with a copy of the image and the new WCS), so if it already exists, the file will be deleted (unless you use the @option{--dontdelete} option, see @ref{Input output options}).
With this option, the Fits program will read the minimal set of keywords from the input HDU and the HDU data.
It will then write them into the file given to the @option{--output} option but with a newly created set of WCS-related keywords corresponding to the desired distortion standard.
If no @option{--output} file is specified, an automatically generated output name will be used, composed of the input's name with the @file{-DDD.fits} suffix (where @file{DDD} is the value given to this option, i.e., the desired output distortion); see @ref{Automatic output}.
Note that not all possible conversions between all standards are supported yet.
If the requested conversion is not supported, an informative error message will be printed.
If this happens, please let us know and we will try our best to add the respective conversions.
For example, with the command below, you can be sure that if @file{in.fits} has a distortion in its WCS, the distortion of @file{out.fits} will be in the SIP standard.
@example
$ astfits in.fits --wcsdistortion=SIP --output=out.fits
@end example
@end table
@node Pixel information images, , Keyword inspection and manipulation, Invoking astfits
@subsubsection Pixel information images
In @ref{HDU information and manipulation}, options like @option{--pixelscale} were introduced that extract information on the pixels from the keywords.
But they only provide a single value for all the pixels!
This is not sufficient in some scenarios; for example, due to distortion, different regions of the image will have different pixel areas when projected onto the sky.
@cindex Meta image
The options in this section provide such ``meta'' images: images where the pixel values are information about the pixel itself.
Such images can be useful in understanding the underlying pixel grid with the same tools that you use to study the astronomical objects within the image (like @ref{SAO DS9}).
After all, nothing beats visual inspection with tools you are familiar with.
@table @code
@item --pixelareaonwcs
Create a meta-image where each pixel's value shows its area in the WCS units (usually degrees squared).
The output is therefore the same size as the input.
@cindex Pixel mixing
@cindex Area resampling
@cindex Resampling by area
This option uses the same ``pixel mixing'' or ``area resampling'' concept that is described in @ref{Resampling} (as part of the Warp program).
Similar to Warp, its sampling can be tuned with the @option{--edgesampling} option that is described below.
@cindex Distortion
@cindex Area of pixel on sky
One scenario where this option becomes handy is when you are debugging aligned images using the Warp program (see @ref{Warp}).
You may observe gradients after warping and can check whether they are caused by the distortion of the instrument or not.
Such gradients can happen due to distortions, because the detector's pixels are measuring photons from different areas on the sky (or because of the type of projection being used).
This effect is more pronounced in images covering larger portions of the sky, for instance, the TESS images@footnote{@url{https://www.nasa.gov/tess-transiting-exoplanet-survey-satellite}}.
Here is an example usage of the @option{--pixelareaonwcs} option:
@example
# Check the area each 'input.fits' pixel takes in sky
$ astfits input.fits -h1 --pixelareaonwcs -o pixarea.fits
# Convert each pixel's area to arcsec^2
$ astarithmetic pixarea.fits 3600 3600 x x \
--output=pixarea_arcsec2.fits
# Compare area relative to the actual reported pixel scale
$ pixarea=$(astfits input.fits --pixelscale -q \
| awk '@{print $3@}')
$ astarithmetic pixarea.fits $pixarea / -o pixarea_rel.fits
@end example
@item --edgesampling=INT
Extra sampling along the pixel edges for @option{--pixelareaonwcs}.
The default value is 0, meaning that only the pixel vertices are used.
Values greater than zero improve the accuracy at the expense of greater time and memory consumption.
With that said, the default value of zero usually has a good precision unless the given image has extreme distortions that produce irregular pixel shapes.
For more, see @ref{Align pixels with WCS considering distortions}.
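For example, if the default precision is not enough for a hypothetical strongly distorted image, you can increase the sampling along each pixel edge like below:
@example
$ astfits input.fits -h1 --pixelareaonwcs \
          --edgesampling=2 --output=pixarea.fits
@end example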
@cartouche
@noindent
@strong{Caution:} This option does not ``oversample'' the output image!
Rather, it makes Warp use more points to calculate the @emph{input} pixel area.
To oversample the output image, set a reasonable @option{--cdelt} value.
@end cartouche
@end table
@node ConvertType, Table, Fits, Data containers
@section ConvertType
@cindex Data format conversion
@cindex Converting data formats
@cindex Image format conversion
@cindex Converting image formats
@pindex @r{ConvertType (}astconvertt@r{)}
The FITS format used in astronomy was defined mainly for archiving, transmission, and processing.
In other situations, the data might be useful in other formats.
For example, when you are writing a paper or report, or if you are making slides for a talk, you cannot use a FITS image.
Other image formats should be used.
In other cases you might want your pixel values in a table format as plain text for input to other programs that do not recognize FITS.
ConvertType is created for such situations.
The number of recognized types will increase with future updates, based on need.
The conversion is not only one way (from FITS to other formats), but two ways (except the EPS and PDF formats@footnote{Because EPS and PDF are vector, not raster/pixelated formats}).
So you can also convert a JPEG image or text file into a FITS image.
Basically, other than EPS/PDF, you can use any of the recognized formats as different color channel inputs to get any of the recognized outputs.
Before explaining the options and arguments (in @ref{Invoking astconvertt}), we will start with a short discussion on the difference between raster and vector graphics in @ref{Raster and Vector graphics}.
In ConvertType, vector graphics are used to add markers over your originally rasterized data, producing high quality images, ready to be used in your exciting papers.
We will continue with a description of the recognized file types in @ref{Recognized file formats}, followed by a short introduction to digital color in @ref{Color}.
A tutorial on how to add markers over an image is then given in @ref{Marking objects for publication} and we conclude with a @LaTeX{} based solution to add coordinates over an image.
@menu
* Raster and Vector graphics:: Images coming from nature, and the abstract.
* Recognized file formats:: Recognized file formats
* Color:: Some explanations on color.
* Annotations for figure in paper:: Adding coordinates or physical scale.
* Invoking astconvertt:: Options and arguments to ConvertType.
@end menu
@node Raster and Vector graphics, Recognized file formats, ConvertType, ConvertType
@subsection Raster and Vector graphics
@cindex Raster graphics
@cindex Graphics (raster)
Images that are produced by hardware (for example, the camera in your phone, or the camera connected to a telescope) provide pixelated data.
Such data are therefore stored in a @url{https://en.wikipedia.org/wiki/Raster_graphics, Raster graphics} format which has discrete, independent, equally spaced data elements.
For example, this is the format used in FITS (see @ref{Fits}), JPEG, TIFF, PNG and other image formats.
@cindex Vector graphics
@cindex Graphics (vector)
On the other hand, when something is generated by the computer (for example, a diagram, plot or even adding a cross over a camera image to highlight something there), there is no ``observation'' or connection with nature!
Everything is abstract!
For such things, it is much easier to draw a mathematical line (with infinite resolution).
Therefore, no matter how much you zoom-in, it will never get pixelated.
This is the realm of @url{https://en.wikipedia.org/wiki/Vector_graphics, Vector graphics}.
If you open the Gnuastro manual in the @url{https://www.gnu.org/software/gnuastro/manual/gnuastro.pdf, PDF format}, you can see such graphics, for example, in @ref{Circles and the complex plane} or @ref{Distance on a 2D curved space}.
The most common vector graphics format is PDF for document sharing or SVG for web-based applications.
The pixels of a raster image can be shown as vector-based squares with different shades, so vector graphics can generally also support raster graphics.
This is very useful when you want to add some graphics over an image to help your discussion (for example a @mymath{+} over your object of interest).
However, vector graphics is not optimized for rasterized data (which are usually also noisy!), and can either not display nicely, or result in much larger file volume (in bytes).
Therefore, if it is not necessary to add any marks over a FITS image, for example, it may be better to store it in a rasterized format.
The distinction between the vector and raster graphics is also the primary theme behind Gnuastro's logo, see @ref{Logo of Gnuastro}.
@node Recognized file formats, Color, Raster and Vector graphics, ConvertType
@subsection Recognized file formats
The various standards and the file name extensions recognized by ConvertType are listed below.
For a review on the difference between Raster and Vector graphics, see @ref{Raster and Vector graphics}.
For a review on the concept of color and channels, see @ref{Color}.
Currently, except for the FITS format, Gnuastro uses the file name's suffix to identify the format, so if the file's name does not end with one of the suffixes mentioned below, it will not be recognized.
@table @asis
@item FITS or IMH
@cindex IRAF
@cindex Astronomical data format
Astronomical data are commonly stored in the FITS format (or the older IRAF @file{.imh} format); a list of file name suffixes which indicate that the file is in this format is given in @ref{Arguments}.
FITS is a raster graphics format.
Each image extension of a FITS file only has one value per pixel/element.
Therefore, when used as input, each input FITS image contributes as one color channel.
If you want multiple extensions in one FITS file for different color channels, you have to repeat the file name multiple times and use the @option{--hdu}, @option{--hdu2}, @option{--hdu3} or @option{--hdu4} options to specify the different extensions.
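For example, in the hypothetical command below, a single file (@file{multi.fits}) contains three image extensions that are fed into the red, green and blue channels respectively:
@example
$ astconvertt multi.fits multi.fits multi.fits \
              --hdu=1 --hdu2=2 --hdu3=3 --output=color.jpg
@end example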
@item JPEG
@cindex JPEG format
@cindex Raster graphics
@cindex Pixelated graphics
The JPEG standard was created by the Joint Photographic Experts Group.
It is currently one of the most commonly used image formats.
Its major advantage is the compression algorithm that is defined by the standard.
Like the FITS standard, this is a raster graphics format, which means that it is pixelated.
A JPEG file can have 1 (for gray-scale), 3 (for RGB) and 4 (for CMYK) color channels.
If you only want to convert one JPEG image into other formats, there is no problem, however, if you want to use it in combination with other input files, make sure that the final number of color channels does not exceed four.
If it does, then ConvertType will abort and notify you.
@cindex Suffixes, JPEG images
The file name endings that are recognized as a JPEG file for input are:
@file{.jpg}, @file{.JPG}, @file{.jpeg}, @file{.JPEG}, @file{.jpe}, @file{.jif}, @file{.jfif} and @file{.jfi}.
@item TIFF
@cindex TIFF format
TIFF (or Tagged Image File Format) was originally designed as a common format for scanners in the early 90s and since then it has grown to become very general.
In many aspects, the TIFF standard is similar to the FITS image standard: it can allow data of many types (see @ref{Numeric data types}), and also allows multiple images to be stored in a single file (like a FITS extension: each image in the file is called a `directory' in the TIFF standard).
However, unlike FITS, it can only store images, it has no constructs for tables.
Also unlike FITS, each `directory' of a TIFF file can have a multi-channel (e.g., RGB) image.
Another (inconvenient) difference with the FITS standard is that keyword names are stored as numbers, not human-readable text.
However, outside of astronomy, because of its support of different numeric data types, many fields use TIFF images for accurate (for example, 16-bit integer or floating point) imaging data.
@item EPS
@cindex EPS
@cindex PostScript
@cindex Vector graphics
@cindex Encapsulated PostScript
The Encapsulated PostScript (EPS) format is essentially a one-page PostScript file which has a specified size.
PostScript is used to store a full document like this whole Gnuastro book.
PostScript therefore also includes non-image data, for example, lines and texts.
It is a fully functional programming language to describe a document.
A PostScript file is a plain text file that can be edited like any program source with any plain-text editor.
Therefore in ConvertType, EPS is only an output format and cannot be used as input.
Contrary to the FITS or JPEG formats, PostScript is not a raster format, but is categorized as vector graphics.
@cindex @TeX{}
@cindex @LaTeX{}
With these features in mind, you can see that when you are compiling a document with @TeX{} or @LaTeX{}, using an EPS file is much more low level than a JPEG and thus you have much greater control and therefore quality.
Since it also includes vector graphic lines we also use such lines to make a thin border around the image to make its appearance in the document much better.
Furthermore, through EPS, you can add marks over the image in many shapes and colors.
No matter the resolution of the display or printer, these lines will always be clear and not pixelated.
However, this can be done better with tools within @TeX{} or @LaTeX{} such as PGF/Tikz@footnote{@url{http://sourceforge.net/projects/pgf/}}.
@cindex Binary image
@cindex Saving binary image
@cindex Black and white image
If the final input image (possibly after all operations on the flux explained below) is a binary image or only has two colors of black and white (in segmentation maps for example), then PostScript has another great advantage compared to other formats.
It allows for 1-bit pixels (pixels with a value of 0 or 1); this can decrease the output file size by a factor of 8.
So if a gray-scale image is binary, ConvertType will exploit this property in the EPS and PDF (see below) outputs.
@cindex Suffixes, EPS format
The standard formats for an EPS file are @file{.eps}, @file{.EPS}, @file{.epsf} and @file{.epsi}.
The EPS outputs of ConvertType have the @file{.eps} suffix.
@item PDF
@cindex PDF
@cindex Adobe systems
@cindex PostScript vs. PDF
@cindex Compiled PostScript
@cindex Portable Document format
@cindex Static document description format
The Portable Document Format (PDF) is currently the most common format for documents.
It is a vector graphics format, allowing abstract constructs like marks or borders.
The PDF format is based on Postscript, so it shares all the features mentioned above for EPS.
To display its programmed content or print it, a PostScript file needs to pass through a processor or compiler.
A PDF file can be thought of as the processed output of the PostScript compiler.
PostScript, EPS and PDF were created and are registered by Adobe Systems.
@cindex Suffixes, PDF format
@cindex GPL Ghostscript
As explained under EPS above, a PDF document is a static document description format; viewing its result is therefore much faster and more efficient than PostScript.
To create a PDF output, ConvertType will make an EPS file and convert that to PDF using GPL Ghostscript.
The suffixes recognized for a PDF file are: @file{.pdf}, @file{.PDF}.
If GPL Ghostscript cannot be run on the PostScript file, the EPS file will remain and a warning will be printed (see @ref{Optional dependencies}).
@item @option{blank}
@cindex @file{blank} color channel
This is not actually a file type, but it can be used to fill one color channel with a blank value.
If this argument is given for any color channel, that channel will not be used in the output.
@item Plain text
@cindex Plain text
@cindex Suffixes, plain text
The value of each pixel in a 2D image can be written as a 2D matrix in a plain-text file.
Therefore, for the purpose of ConvertType, plain-text files are a single-channel raster graphics file format.
Plain text files have the advantage that they can be viewed with any text editor or on the command-line.
Most programs also support input as plain text files.
As input, each plain text file is considered to contain one color channel.
In ConvertType, the recognized extensions for plain text files are @file{.txt} and @file{.dat}.
As described in @ref{Invoking astconvertt}, if you just give these extensions (and not a full filename) as output, then automatic output will be performed to determine the final output name (see @ref{Automatic output}).
Besides these, when the format of a file cannot be recognized from its name, ConvertType will fall back to plain text mode.
So you can use any name (even without an extension) for a plain text input or output.
Just note that when the suffix is not recognized, automatic output will not be performed.
The basic input/output on plain text images is very similar to how tables are read/written as described in @ref{Gnuastro text table format}.
Simply put, the restrictions are very loose, and there is a convention to define a name, units, data type (see @ref{Numeric data types}), and comments for the data in a commented line.
The only difference is that as a table, a text file can contain many datasets (columns), but as a 2D image, it can only contain one dataset.
As a result, only one information comment line is necessary for a 2D image, and instead of the starting `@code{# Column N}' (@code{N} is the column number), the information line for a 2D image must start with `@code{# Image 1}'.
When ConvertType is asked to output to plain text file, this information comment line is written before the image pixel values.
When converting an image to plain text, consider the fact that if the image is large, the number of columns in each line will become very large, possibly making it very hard to open in some text editors.
@item Standard output (command-line)
This is very similar to the plain text output, but instead of creating a file to keep the printed values, they are printed on the command-line.
This can be very useful when you want to redirect the results directly to another program in one command with no intermediate file.
The only difference is that only the pixel values are printed (with no information comment line).
To print to the standard output, set the output name to `@file{stdout}'.
@end table
@node Color, Annotations for figure in paper, Recognized file formats, ConvertType
@subsection Color
@cindex RGB
@cindex Filter
@cindex Color channel
@cindex Channel (color)
Color is generally defined after mixing various data ``channels''.
The values for each channel usually come from a filter that is placed in the optical path.
Filters only allow a certain window of the spectrum to pass (for example, the SDSS @emph{r} filter only allows light from about 5500 to 7000 Angstroms).
In digital monitors or common digital cameras, a different set of filters are used: Red, Green and Blue (commonly known as RGB) that are more optimized to the eye's perception.
On the other hand, when printing on paper, standard printers use the cyan, magenta, yellow and key (CMYK, key=black) color space.
@menu
* Pixel colors:: Multiple filters in each pixel.
* Colormaps for single-channel pixels:: Better display of single-filter images.
* Vector graphics colors::
@end menu
@node Pixel colors, Colormaps for single-channel pixels, Color, Color
@subsubsection Pixel colors
@cindex RGB
@cindex CMYK
@cindex Image
@cindex Color
@cindex Pixels
@cindex Colormap
@cindex Primary colors
@cindex Color channel
@cindex Channel, color
As discussed in @ref{Color}, for each displayed/printed pixel of a color image, the dataset/image has three or four values.
To store/show the three values for each pixel, cameras and monitors allocate a certain fraction of each pixel's area to red, green and blue filters.
These three filters are thus built into the hardware at the pixel level.
However, because measurement accuracy is very important in scientific instruments, and we want to do measurements (take images) with various/custom filters (without having to order a new expensive detector!), scientific detectors use the full area of the pixel to store one value for it in a single/mono channel dataset.
To make measurements in different filters, we just place a filter in the light path before the detector.
Therefore, the FITS format that is used to store astronomical datasets is inherently a mono-channel format (see @ref{Recognized file formats} or @ref{Fits}).
@cindex False color
@cindex Pseudo color
When a subject has been imaged in multiple filters, you can feed each different filter into the red, green and blue channels of your monitor and obtain a false-colored visualization.
The reason we say ``false-color'' (or pseudo color) is that generally, the three data channels you provide are not from the same Red, Green and Blue filters of your monitor!
So the observed color on your monitor does not correspond to the physical ``color'' that you would have seen if you had looked at the object by eye.
Nevertheless, it is good (and sometimes necessary) for visualization (of special features).
In ConvertType, you can do this by giving each separate single-channel dataset (for example, in the FITS image format) as an argument (in the proper order), then asking for the output in a format that supports multi-channel datasets (for example, see the command below, or @ref{ConvertType input and output}).
@example
$ astconvertt r.fits g.fits b.fits --output=color.jpg
@end example
@node Colormaps for single-channel pixels, Vector graphics colors, Pixel colors, Color
@subsubsection Colormaps for single-channel pixels
@cindex Visualization
@cindex Colormap, HSV
@cindex HSV: Hue Saturation Value
As discussed in @ref{Pixel colors}, color is not defined when a dataset/image contains a single value for each pixel.
However, we interact with scientific datasets through monitors or printers.
They allow multiple channels (independent values) per pixel and produce color with them (on monitors, this is usually with three channels: Red, Green and Blue).
As a result, there is a lot of freedom in visualizing a single-channel dataset.
The mapping of single-channel values to multi-channel colors is called a ``color map''.
Since more information can be put in multiple channels, this usually results in a better visualization of the dynamic range of your single-channel data.
In ConvertType, you can use the @option{--colormap} option to choose between different mappings of mono-channel inputs, see @ref{Invoking astconvertt}.
Below, we will review two of the basic color maps, please see the description of @option{--colormap} in @ref{Invoking astconvertt} for the full list.
@itemize
@item
@cindex Grayscale
@cindex Colormap, gray-scale
The most basic colormap is shades of black (because of its strong contrast with white).
This scheme is called @url{https://en.wikipedia.org/wiki/Grayscale, Grayscale}.
But ultimately, black is just one color, so with Grayscale you are not effectively using the full dynamic range of the three-channel monitor.
To help in visualization, more complex mappings can be defined.
@item
A slightly more complex color map can be defined when you scale the values to a range of 0 to 360 and use them as the ``Hue'' term of the @url{https://en.wikipedia.org/wiki/HSL_and_HSV, Hue-Saturation-Value} (HSV) color space (while fixing the ``Saturation'' and ``Value'' terms).
This makes better use of the monitor's 3-channel color space, but the resulting images can look ``unnatural'' to the eye (see the example commands after this list).
@end itemize
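For example, assuming @file{image.fits} is a single-channel image (a hypothetical file name), the two commands below would respectively produce a grayscale and an HSV-mapped PDF of the same dataset (for the details of @option{--colormap}, see @ref{Invoking astconvertt}):
@example
$ astconvertt image.fits --colormap=gray --output=gray.pdf
$ astconvertt image.fits --colormap=hsv --output=hsv.pdf
@end example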
Since grayscale is a commonly used mapping of single-valued datasets, we will continue with a closer look at how it is stored.
One way to represent a gray-scale image in different color spaces is to use the same proportions of the primary colors in each pixel.
This is the common way most FITS image viewers work: for each pixel, they fill all the channels with the single value.
While this is necessary for displaying a dataset, there are downsides when storing/saving this type of grayscale visualization (for example, in a paper).
@itemize
@item
Three (for RGB) or four (for CMYK) values have to be stored for every pixel; this makes the output file very large (in terms of bytes).
@item
If printing, the printing errors of each color channel can make the printed image slightly more blurred than it actually is.
@end itemize
@cindex PNG standard
@cindex Single channel CMYK
To solve both these problems when storing grayscale visualization, the best way is to save a single-channel dataset into the black channel of the CMYK color space.
The JPEG standard is the only common standard that accepts CMYK color space.
The JPEG and EPS standards set two sizes for the number of bits in each channel: 8-bit and 12-bit.
The former is by far the most common and is what is used in ConvertType.
Therefore, each channel should have values between 0 and @math{2^8-1=255}.
From this we see that each pixel in a gray-scale image is one byte (8 bits) long, in an RGB image it is 3 bytes long, and in CMYK it is 4 bytes long.
But thanks to the JPEG compression algorithms, when all the pixels of one channel have the same value, that channel is compressed very efficiently.
Therefore a Grayscale image and a CMYK image that only has its K-channel filled have approximately the same file size.
@node Vector graphics colors, , Colormaps for single-channel pixels, Color
@subsubsection Vector graphics colors
@cindex Web colors
@cindex Colors (web)
When creating vector graphics, ConvertType recognizes the @url{https://en.wikipedia.org/wiki/Web_colors#Extended_colors, extended web colors} that are the result of merging the colors in the HTML 4.01, CSS 2.0, SVG 1.0 and CSS3 standards.
They are all shown with their standard name in @ref{colornames}.
The names are not case sensitive so you can use them in any form (for example, @code{turquoise} is the same as @code{Turquoise} or @code{TURQUOISE}).
@cindex 24-bit terminal
@cindex True color terminal
@cindex Terminal (true color, 24 bit)
On the command-line, you can also get the list of colors with the @option{--listcolors} option to ConvertType, like below.
In particular, if your terminal is 24-bit or ``true color'', you will see each color in the last column.
This greatly helps in easily selecting the best color for your purpose on the command-line (without taking your hands off the keyboard and getting distracted).
@example
$ astconvertt --listcolors
@end example
@float Figure,colornames
@center@image{gnuastro-figures/color-names, 15cm, , }
@caption{Recognized color names in Gnuastro, shown with their numerical identifiers.}
@end float
@node Annotations for figure in paper, Invoking astconvertt, Color, ConvertType
@subsection Annotations for figure in paper
@cindex Image annotation
@cindex Annotation of images for paper
To make a nice figure from your FITS images, it is important to show more than merely the raw image (converted to a printer friendly format like PDF or JPEG; see @ref{FITS images in a publication} and @ref{Marking objects for publication}).
Annotations (or visual metadata) over the raw image greatly help the readers clearly see your argument and put the image/result in a larger context.
Examples include:
@itemize
@item
Coordinates (Right Ascension and Declination) on the edges of the image, so viewers of your paper or presentation slides can get a physical feeling of the field's sky coverage.
@item
A thick line that has a fixed tangential size (for example, in kilo-parsecs) at the redshift/distance of interest.
@item
Contours showing, for example, radio/X-ray emission over an optical image.
@item
Text, arrows, etc., over certain parts of the image.
@end itemize
@cindex PGFPlots
Because of the modular philosophy of Gnuastro, ConvertType is only focused on converting your FITS images to printer friendly formats like JPEG or PDF.
But to present your results in a slide or paper, you will often need to annotate the raw JPEG or PDF with some of the features above.
The good news is that there are many powerful plotting programs that you can use to add such annotations.
As a result, there is no point in making a new one, specific to Gnuastro.
Instead, Gnuastro hopes to provide the necessary interface to communicate easily with those tools.
In this section, we will demonstrate this using the very powerful PGFPlots@footnote{@url{http://mirrors.ctan.org/graphics/pgf/contrib/pgfplots/doc/pgfplots.pdf}} package of @LaTeX{}.
@cartouche
@noindent
@strong{Single script for easy running:} In this section we are reviewing the reason and details of every step which is good for educational purposes.
But when you know the steps already, these separate code blocks can be annoying.
Therefore the full script (except for the data download step) is available in @ref{Full script of annotations on figure}.
@end cartouche
@cindex TiKZ
@cindex Matplotlib
PGFPlots uses the same @LaTeX{} graphic engine that typesets your paper/slide.
Therefore when you build your plots and figures using PGFPlots (and its underlying package PGF/TiKZ@footnote{@url{http://mirrors.ctan.org/graphics/pgf/base/doc/pgfmanual.pdf}}) your plots will blend beautifully within your text: same fonts, same colors, same line properties, etc.
Since most papers (and presentation slides@footnote{To build slides, @LaTeX{} has packages like Beamer, see @url{http://mirrors.ctan.org/macros/latex/contrib/beamer/doc/beameruserguide.pdf}}) are made with @LaTeX{}, PGFPlots is the best tool for those who use @LaTeX{} to create documents.
PGFPlots also does not need any extra dependencies beyond a basic/minimal @TeX{}-live installation, so it is much more reliable than tools like Matplotlib in Python that have hundreds of fast-evolving dependencies@footnote{See Figure 1 of Alliez et al. @url{https://arxiv.org/abs/1905.11123,2019}.}.
To demonstrate this, we will create a surface brightness image of a galaxy in the F160W filter of the ABYSS survey@footnote{@url{http://research.iac.es/proyecto/abyss}}.
In the code-block below, let's make a ``build'' directory to keep intermediate files and avoid populating the top-level source directory (it is always good to keep your data separate from your source).
Afterwards, we will download the full image and crop out a 20 arcsec wide region around the galaxy with the commands below.
You can run these commands in an empty directory.
@example
$ mkdir build
$ wget http://cdsarc.u-strasbg.fr/ftp/J/A+A/621/A133/fits/ah_f160w.fits
$ astcrop ah_f160w.fits --center=53.1616278,-27.7802446 --mode=wcs \
--width=20/3600 --output=build/crop.fits
@end example
To better show the low surface brightness (LSB) outskirts, we will warp the image to larger pixels, then convert the pixel units to surface brightness using Gnuastro's Arithmetic program, with the commands below.
An important point to remember here is that the magnitudes (and thus the surface brightness) come from a logarithm; therefore, if a pixel is negative, its logarithm will be NaN (Not-a-Number).
So after @code{counts-to-sb}, we replace all the NaN pixels with the faintest (highest) surface brightness limit possible in this image: 30 mag/arcsec@mymath{^2} (which we define as a variable because it is also necessary later).
In case you do not know your image's surface brightness limit, or for a more complete tutorial, see @ref{FITS images in a publication}; for more on surface brightness itself, see @ref{Brightness flux magnitude}.
@example
$ sbhigh=30
$ zeropoint=25.94
$ astwarp build/crop.fits --centeroncorner --scale=1/3 \
--output=build/scaled.fits
$ pixarea=$(astfits build/scaled.fits --pixelareaarcsec2)
$ astarithmetic build/scaled.fits $zeropoint $pixarea counts-to-sb \
set-sb sb sb isblank sb $sbhigh gt or $sbhigh where \
--output=build/sb.fits
@end example
We are now ready to convert the surface brightness image into a PDF.
To better show the LSB features, we will limit the brighter range of values (numerically lower magnitudes) with the @option{--fluxlow} option: all pixels with a surface brightness brighter than 22 mag/arcsec@mymath{^2} will be shown with the same color.
This threshold is also defined as a variable, because we will need it again later (to pass into PGFPlots).
Finally, in the call to convert the FITS to PDF, we also set @option{--borderwidth=0}, because the coordinate axes that we will later add over the edges of the image effectively become a border for the image (separating it from the background).
The @option{--cmappgfplots} option of ConvertType is recommended when you want to add a color bar in PGFPlots: it creates the necessary PGFPlots commands so that the color bar shows the exact color map that ConvertType used.
@example
$ sblow=22
$ colormap=sls
$ astconvertt build/sb.fits --colormap=$colormap --borderwidth=0 \
--fluxhigh=$sbhigh --fluxlow=$sblow \
--output=build/sb.pdf --cmappgfplots
@end example
Please open @file{build/sb.pdf} and have a look.
Try changing the surface brightness variables above or the colormap to make it more pleasing to your eye (you will need such experimentation when you later do this on your own images).
We now have the printable PDF representation of the image, but as discussed above, it is not enough for a paper.
We will add: 1) a thick line showing the size of 20 kpc (kilo-parsecs) at the redshift of the central galaxy, to help the readers of our report create a mental image of its physical size; 2) coordinates on the edges, so the reader can easily identify the location on the sky and the observed size; and 3) a color bar showing the colormap of the surface brightness levels, so readers can interpret the pixel values.
To get the first job done, we first need to know the redshift of the central galaxy.
To do this, we can use Gnuastro's Query program to look into all the objects in NED within this image (only asking for the RA, Dec and redshift columns).
We will then use the Match program to find the NED entry that corresponds to our galaxy.
@example
$ astquery ned --dataset=objdir --overlapwith=build/sb.fits \
--column=ra,dec,z --output=build/ned.fits
$ astmatch build/ned.fits -h1 --coord=53.1616278,-27.7802446 \
--ccol1=RA,Dec --aperture=1/3600 \
--output=build/ned-matched.fits
$ redshift=$(asttable build/ned-matched.fits -cz)
$ echo $redshift
@end example
Now that we know the redshift of the central object, we can define the coordinates of the thick line that will show the length of 20 kpc at that redshift.
It will be a horizontal line (fixed Declination) across a range of RA.
The start of this thick line will be located near the top edge of the image (at 95 percent of the width and height of the image).
With the commands below, we will find the three necessary parameters (one Declination and two Right Ascensions).
Just note that in astronomical images, RA increases towards the left/east; this is why we use the minimum RA and @code{+} to find the starting point of the line.
@example
$ scalelineinkpc=20
$ coverage=$(astfits build/sb.fits --skycoverage --quiet | awk 'NR==2')
$ scalelinedec=$(echo $coverage | awk '@{print $4-($4-$3)*0.05@}')
$ scalelinerastart=$(echo $coverage | awk '@{print $1+($2-$1)*0.05@}')
$ scalelineraend=$(astcosmiccal --redshift=$redshift --arcsectandist \
| awk '@{start='$scalelinerastart'; \
width='$scalelineinkpc'/$1/3600; \
print start+width@}')
@end example
To draw coordinates over the image, we need to feed these values into PGFPlots.
But manually entering numbers into the PGFPlots source will be very frustrating, prone to many errors and will be hard to change in the future (for example after the referee report comes)!
Fortunately there is an easy way to do this: @LaTeX{} macros.
New macros are defined by this @LaTeX{} command:
@example
\newcommand@{\macroname@}@{value@}
@end example
@noindent
Anywhere that @LaTeX{} encounters @code{\macroname}, it will replace it with @code{value} when building the output.
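For example, if the macro file contains the hypothetical definition below, then anywhere you write @code{\maCenterZ} in your report, @LaTeX{} will typeset @code{0.62} in its place:
@example
\newcommand@{\maCenterZ@}@{0.62@}
@end example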
We will have one file called @file{macros.tex} in the build directory and define macros based on those values.
We will use the shell's @code{printf} command to write these macro definition lines into the macro file.
Double backslashes are used in the @code{printf} command because the backslash is a special character for @code{printf}.
Also, we put a @code{\n} at the end of each line; otherwise, all the commands would go into a single line of the macro file.
We will also place the arbitrary `@code{ma}' string at the start of all our @LaTeX{} macros to help identify the macros of this plot throughout your report's source.
For calculated numbers (like the redshift), we will use AWK to print only two decimal digits, to keep the @LaTeX{} macros simple.
@example
$ macros=build/macros.tex
$ printf '\\newcommand@{\\maScaleDec@}'"@{$scalelinedec@}\n" > $macros
$ printf '\\newcommand@{\\maScaleRAa@}'"@{$scalelinerastart@}\n" >> $macros
$ printf '\\newcommand@{\\maScaleRAb@}'"@{$scalelineraend@}\n" >> $macros
$ printf '\\newcommand@{\\maScaleKpc@}'"@{$scalelineinkpc@}\n" >> $macros
$ v=$(echo $redshift | awk '@{printf "%.2f", $1@}')
$ printf '\\newcommand@{\\maCenterZ@}'"@{$v@}\n" >> $macros
@end example
Please open the macros file after these commands and have a look to see if they conform to the expected format above.
Another set of macros we will need to feed into PGFPlots is the coordinates of the image corners.
Fortunately the @code{coverage} variable found above is also useful here.
We just need to extract each item before feeding it into the macros.
To do this, we will use AWK and keep each value in the temporary shell variable `@code{v}'.
@example
$ v=$(echo $coverage | awk '@{print $1@}')
$ printf '\\newcommand@{\\maCropRAMin@}'"@{$v@}\n" >> $macros
$ v=$(echo $coverage | awk '@{print $2@}')
$ printf '\\newcommand@{\\maCropRAMax@}'"@{$v@}\n" >> $macros
$ v=$(echo $coverage | awk '@{print $3@}')
$ printf '\\newcommand@{\\maCropDecMin@}'"@{$v@}\n" >> $macros
$ v=$(echo $coverage | awk '@{print $4@}')
$ printf '\\newcommand@{\\maCropDecMax@}'"@{$v@}\n" >> $macros
@end example
Finally, we also need to pass some other numbers to PGFPlots: 1) the major tick distance (in the coordinate axes that will be printed on the edges of the image); we will assume 7 ticks for this image.
2) The minimum and maximum surface brightness values that we gave to ConvertType when making the PDF; PGFPlots will define its color bar based on these two values.
3) The name of the Gnuastro colormap (you can manually change this in the colormap file produced by ConvertType: it is plain text). Also note that if all your figures are generated with the same colormap, you only need to ask ConvertType to generate it once.
@example
$ v=$(echo $coverage | awk '@{print ($2-$1)/7@}')
$ printf '\\newcommand@{\\maTickDist@}'"@{$v@}\n" >> $macros
$ printf '\\newcommand@{\\maSBlow@}'"@{$sblow@}\n" >> $macros
$ printf '\\newcommand@{\\maSBhigh@}'"@{$sbhigh@}\n" >> $macros
$ printf '\\newcommand@{\\maColormap@}'"@{gnuastro$colormap@}\n" >> $macros
@end example
All the necessary numbers we need to pass to @LaTeX{} are now ready.
Please copy the contents below into a file called @file{my-figure.tex}.
This is the PGFPlots source for this particular plot.
Besides the coordinates and scale-line, we will also add some text over the image and an orange arrow pointing to the central object with its redshift printed over it.
The parameters are generally human-readable, so you should be able to get a good feeling of every line by simply reading it.
There are also comments which will show up as a different color when you copy this into a plain-text editor that recognizes @LaTeX{}.
@verbatim
\begin{tikzpicture}
%% Define the coordinates and colorbar
\begin{axis}[
at={(0,0)},
axis on top,
x dir=reverse,
scale only axis,
width=\linewidth,
height=\linewidth,
minor tick num=10,
xmin=\maCropRAMin,
xmax=\maCropRAMax,
ymin=\maCropDecMin,
ymax=\maCropDecMax,
enlargelimits=false,
every tick/.style={black},
xtick distance=\maTickDist,
ytick distance=\maTickDist,
yticklabel style={rotate=90},
ylabel={Declination (degrees)},
xlabel={Right Ascension (degrees)},
ticklabel style={font=\small,
/pgf/number format/.cd, precision=4,/tikz/.cd},
x label style={at={(axis description cs:0.5,0.02)},
anchor=north,font=\small},
y label style={at={(axis description cs:0.07,0.5)},
anchor=south,font=\small},
colorbar,
colormap name=\maColormap,
point meta min=\maSBlow,
point meta max=\maSBhigh,
colorbar style={
at={(1.01,1)},
ylabel={Surface brightness (mag/arcsec$^2$)},
yticklabel style={
/pgf/number format/.cd, precision=1, /tikz/.cd},
y label style={at={(axis description cs:5.3,0.5)},
anchor=south,font=\small},
},
]
%% Put the image in the proper positions of the plot.
\addplot graphics[ xmin=\maCropRAMin, xmax=\maCropRAMax,
ymin=\maCropDecMin, ymax=\maCropDecMax]
{sb.pdf};
%% Draw the scale factor.
\addplot[black, line width=5, name=scaleline] coordinates
{(\maScaleRAa,\maScaleDec) (\maScaleRAb,\maScaleDec)}
node [anchor=north west] {\large $\maScaleKpc$ kpc};
\end{axis}
%% Add some text anywhere over the plot. The text is added two
%% times: the first time with a white background (that with a
%% certain opacity), the second time just the text with opacity.
\node[anchor=south west, fill=white, opacity=0.5]
at (0.01\linewidth,0.01\linewidth)
{(a) Text can be added here};
\node[anchor=south west]
at (0.01\linewidth,0.01\linewidth)
{(a) Text can be added here};
%% Add an arrow to highlight certain structures.
\draw [->, black, line width=5]
(0.35\linewidth,0.35\linewidth)
-- node [anchor=south, rotate=45]{$z=\maCenterZ$}
(0.45\linewidth,0.45\linewidth);
\end{tikzpicture}
@end verbatim
Finally, we need another simple @LaTeX{} source for the main text of your report that will host the figure (keeping the code of your figures in separate files that are included in the proper place greatly helps the readability of your source).
Simply copy the minimal working example below in a file called @file{report.tex}.
@verbatim
\documentclass{article}
%% Import the TiKZ package and activate its "external" feature.
\usepackage{tikz}
\usetikzlibrary{external}
\tikzexternalize
%% PGFPlots (which uses TiKZ) and all the colormaps you need.
\usepackage{pgfplots}
\pgfplotsset{axis line style={thick}}
\input{sb-colormap.tex}
%% Import the macros.
\input{macros.tex}
%% Start document.
\begin{document}
You can write anything here.
%% Add the figure and its caption.
\begin{figure}
\input{my-figure.tex}
\caption{A demo image.}
\end{figure}
%% Finish the document.
\end{document}
@end verbatim
You are now ready to create the PDF.
But @LaTeX{} creates many temporary files, so to avoid populating our top-level (source code and input data) directory, we will copy the two @file{.tex} files into the build directory, go there, and run @LaTeX{}.
Before running it, we will first delete all the files that have the name pattern @file{*-figure0*}; these are ``external'' files created by TiKZ+PGFPlots, including the actual PDF of the figure.
PGFPlots has nice ways to set a name for each ``external'' figure it generates (and not depend on a counter), but that is beyond the scope here, see its manual for details.
@example
$ cp report.tex my-figure.tex build
$ cd build
$ rm -f *-figure0*
$ pdflatex -shell-escape -halt-on-error report.tex
@end example
You now have the full ``report'' in @file{report.pdf}.
The good news is that in case you need the high-quality figure for other purposes (like showing in slides), you don't have to take a screenshot!
You also have the raw PDF of the figure in @file{report-figure0.pdf}.
Try adding some extra text in @file{report.tex}, or in the caption of the figure, and re-running the last four commands so it becomes more like an actual report.
You can also try changing the 20kpc scale line length to 50kpc, or try changing the redshift, to see how the length and text of the thick scale-line will automatically change.
But these kinds of changes will be hard to apply through the manual steps above, and with so many steps, human error can easily cause a crash.
Therefore it is best to put such numerous steps in a script, as we have done in @ref{Full script of annotations on figure} below.
So it may be easier to take the script from there and apply the changes suggested here.
In a larger paper, you can add multiple such figures (with different @file{.tex} files that are placed in different @code{figure} environments with different captions throughout your text).
Each figure will get a number in the build directory.
TiKZ also allows setting a file name for each ``external'' figure (to avoid such numbers that can be annoying if the image orders are changed).
PGFPlots is also highly customizable.
Both TiKZ@footnote{@url{http://mirrors.ctan.org/graphics/pgf/base/doc/pgfmanual.pdf}} and PGFPlots@footnote{@url{http://mirrors.ctan.org/graphics/pgf/contrib/pgfplots/doc/pgfplots.pdf}} have wonderful manuals, so have a look through them and you will enjoy the power at your fingertips.
@menu
* Full script of annotations on figure:: All the steps in one script
@end menu
@node Full script of annotations on figure, , Annotations for figure in paper, Annotations for figure in paper
@subsubsection Full script of annotations on figure
In @ref{Annotations for figure in paper}, each of the steps to add annotations over an image was described in detail.
So if you have understood the steps but want to start experimenting with different settings, repeating those steps individually will be tedious and error-prone (humans are not good at repetition).
Therefore in this section, we will summarize all the steps in a single script that you can simply copy-paste into a text editor, configure, and run.
Compared to @ref{Annotations for figure in paper}, we have made the redshift a parameter here.
But if the center of your image always points to your main object, you can also include the Query command to automatically find the object's redshift from NED.
Alternatively, your image may already be cropped, in this case, you can remove the cropping step.
If you are not familiar with reading, writing or running scripts, see @ref{Writing scripts to automate the steps}.
@cartouche
@noindent
@strong{Necessary files:} To run this script, you will need an image to crop your object from (here assumed to be called @file{ah_f160w.fits}, with a certain zero point), as well as the @file{my-figure.tex} and @file{report.tex} files that were fully provided in @ref{Annotations for figure in paper}.
They need to be in the same directory as this script.
@end cartouche
@verbatim
# Parameters.
sblow=22 # Minimum surface brightness.
sbhigh=30 # Maximum surface brightness.
bdir=build # Build directory location on filesystem.
numticks=7 # Number of major ticks in each axis.
colormap=sls # Name of ConvertType's colormap.
redshift=0.619 # Redshift of object of interest.
zeropoint=25.94 # Zero point of input image.
scalelineinkpc=20 # Length of scale-line (in kilo parsecs).
input=ah_f160w.fits # Name of large input image.
width=20/3600 # Width of crop in degrees.
center=53.1616278,-27.7802446 # RA and Dec of crop's center.
# Stop the script in case of a crash.
set -e
# Build directory
if ! [ -d $bdir ]; then mkdir $bdir; fi
# Crop out the desired region.
crop=$bdir/crop.fits
astcrop $input --center=$center --mode=wcs --width=$width \
--output=$crop
# Warp the image to larger pixels to show surface brightness better.
scaled=$bdir/scaled.fits
astwarp $crop --centeroncorner --scale=1/3 --output=$scaled
# Calculate the pixel area and convert image to Surface brightness.
sb=$bdir/sb.fits
pixarea=$(astfits $scaled --pixelareaarcsec2)
astarithmetic $scaled $zeropoint $pixarea counts-to-sb \
set-sb sb sb isblank sb $sbhigh gt or $sbhigh where \
--output=$sb
# Convert the surface brightness image into PDF.
sbpdf=$bdir/sb.pdf
astconvertt $sb --colormap=$colormap --borderwidth=0 --cmappgfplots \
--fluxhigh=$sbhigh --fluxlow=$sblow --output=$sbpdf
# Specify the coordinates of the scale line (specifying a certain
# width in kpc). We will put it on the top-right side of the image (5%
# of the full width of the image away from the edge).
coverage=$(astfits $sb --skycoverage --quiet | awk 'NR==2')
scalelinedec=$(echo $coverage | awk '{print $4-($4-$3)*0.05}')
scalelinerastart=$(echo $coverage | awk '{print $1+($2-$1)*0.05}')
scalelineraend=$(astcosmiccal --redshift=$redshift --arcsectandist \
| awk '{start='$scalelinerastart'; \
width='$scalelineinkpc'/$1/3600; \
print start+width}')
# Write the LaTeX macros to use in plot. Start with the thick line
# showing tangential distance.
macros=$bdir/macros.tex
printf '\\newcommand{\\maScaleDec}'"{$scalelinedec}\n" > $macros
printf '\\newcommand{\\maScaleRAa}'"{$scalelinerastart}\n" >> $macros
printf '\\newcommand{\\maScaleRAb}'"{$scalelineraend}\n" >> $macros
printf '\\newcommand{\\maScaleKpc}'"{$scalelineinkpc}\n" >> $macros
printf '\\newcommand{\\maCenterZ}'"{$redshift}\n" >> $macros
# Add image extrema for the coordinates.
v=$(echo $coverage | awk '{print $1}')
printf '\\newcommand{\\maCropRAMin}'"{$v}\n" >> $macros
v=$(echo $coverage | awk '{print $2}')
printf '\\newcommand{\\maCropRAMax}'"{$v}\n" >> $macros
v=$(echo $coverage | awk '{print $3}')
printf '\\newcommand{\\maCropDecMin}'"{$v}\n" >> $macros
v=$(echo $coverage | awk '{print $4}')
printf '\\newcommand{\\maCropDecMax}'"{$v}\n" >> $macros
printf '\\newcommand{\\maColormap}'"{gnuastro$colormap}\n" >> $macros
# Distance between each tick value.
v=$(echo $coverage | awk '{print ($2-$1)/'$numticks'}')
printf '\\newcommand{\\maTickDist}'"{$v}\n" >> $macros
printf '\\newcommand{\\maSBlow}'"{$sblow}\n" >> $macros
printf '\\newcommand{\\maSBhigh}'"{$sbhigh}\n" >> $macros
# Copy the LaTeX source into the build directory and go there to run
# it and have all the temporary LaTeX files there.
cp report.tex my-figure.tex $bdir
cd $bdir
rm -f *-figure0*
pdflatex -shell-escape -halt-on-error report.tex
@end verbatim
@node Invoking astconvertt, , Annotations for figure in paper, ConvertType
@subsection Invoking ConvertType
ConvertType will convert any recognized input file type to any specified output type.
The executable name is @file{astconvertt} with the following general template:
@example
$ astconvertt [OPTION...] InputFile [InputFile2] ... [InputFile4]
@end example
@noindent
One line examples:
@example
## Convert an image in FITS to PDF:
$ astconvertt image.fits --output=pdf
## Similar to before, but use the Viridis color map:
$ astconvertt image.fits --colormap=viridis --output=pdf
## Add markers to highlight parts of the image
## ('marks.fits' is a table containing coordinates)
$ astconvertt image.fits --marks=marks.fits --output=pdf
## Convert an image in JPEG to FITS (with multiple extensions
## if it has color):
$ astconvertt image.jpg -oimage.fits
## Use three 2D arrays to create an RGB JPEG output (two are
## plain-text, the third is FITS, but all have the same size).
$ astconvertt f1.txt f2.txt f3.fits -o.jpg
## Use two images and one blank for an RGB EPS output:
$ astconvertt M31_r.fits M31_g.fits blank -oeps
## Directly pass input from output of another program through Standard
## input (not a file).
$ cat 2darray.txt | astconvertt -oimg.fits
@end example
In the sub-sections below, the various options that are specific to ConvertType are grouped into different categories.
Please see those sections for a detailed discussion on each group and its options.
Besides those, ConvertType also shares the @ref{Common options} with other Gnuastro programs.
The common options are not repeated here.
@menu
* ConvertType input and output:: Input/output file names and formats.
* Pixel visualization:: Visualizing the pixels in the output.
* Drawing with vector graphics:: Adding marks in many shapes and colors over the pixels.
@end menu
@node ConvertType input and output, Pixel visualization, Invoking astconvertt, Invoking astconvertt
@subsubsection ConvertType input and output
@cindex Standard input
At most four input files (one for each color channel for formats that allow it) are allowed in ConvertType.
When there is only one input channel (grayscale), the input can either be given as a file name (as an argument on the command-line) or through @ref{Standard input} (a pipe for example: only when no input file is specified).
Therefore, if an input file is given, the standard input will not be checked.
The order of multiple input files is important.
After reading the input file(s) the number of color channels in all the inputs will be used to define which color space to use for the outputs and how each color channel is interpreted: 1 (for grayscale), 3 (for RGB) and 4 (for CMYK) input channels.
For more on pixel color channels, see @ref{Pixel colors}.
Depending on the format of the input(s), the number of input files can differ.
For example, if you plan to build an RGB PDF and your three channels are in the first HDU of @file{r.fits}, @file{g.fits} and @file{b.fits}, then you can simply call ConvertType like this:
@example
$ astconvertt r.fits g.fits b.fits -g1 --output=rgb.pdf
@end example
@noindent
However, if the three color channels are in three extensions of a single file (assuming the file is called @file{channels.fits}, with HDUs respectively named @code{R}, @code{G} and @code{B}), you should run it like this:
@example
$ astconvertt channels.fits -hR -hG -hB --output=rgb.pdf
@end example
@noindent
On the other hand, if the channels are already in a multi-channel format (like JPEG), you can simply provide that file:
@example
$ astconvertt image.jpg --output=rgb.pdf
@end example
@noindent
If multiple channels are given as input, and the output format does not support multiple color channels (for example, FITS), ConvertType will put the channels in different HDUs, like the example below.
After running the @command{astfits} command, if your JPEG file was not grayscale (single channel), you will see multiple HDUs in @file{channels.fits}.
@example
$ astconvertt image.jpg --output=channels.fits
$ astfits channels.fits
@end example
As shown above, the output's file format will be interpreted from the name given to the @option{--output} option (as a common option to all Gnuastro programs, for the description of @option{--output}, see @ref{Input output options}).
It can either be given on the command-line or in any of the configuration files (see @ref{Configuration files}).
When the output suffix is not recognized, it will default to plain text format, see @ref{Recognized file formats}.
If there is one input dataset (color channel) the output will be gray-scale.
When three input datasets (color channels) are given, they are respectively considered to be the red, green and blue color channels.
Finally, if there are four color channels they will be cyan, magenta, yellow and black (CMYK colors).
The value to @option{--output} (or @option{-o}) can be either a full file name or just the suffix of the desired output format.
In the former case (full name), it will be directly used for the output's file name.
In the latter case, the name of the output file will be set based on the automatic output guidelines, see @ref{Automatic output}.
Note that the suffix name can optionally start with a @file{.} (dot), so for example, @option{--output=.jpg} and @option{--output=jpg} are equivalent.
See @ref{Recognized file formats}.
The relevant options for input/output formats are described below:
@table @option
@item -h STR/INT
@itemx --hdu=STR/INT
Input HDU name or counter (counting from 0) for each input FITS file.
If the same HDU should be used from all the FITS files, you can use the @option{--globalhdu} option described below.
In ConvertType, it is possible to call the HDU option multiple times for the different input FITS or TIFF files in the same order that they are called on the command-line.
Note that in the TIFF standard, one `directory' (similar to a FITS HDU) may contain multiple color channels (for example, when the image is in RGB).
Except for the fact that multiple calls are possible, this option is identical to the common @option{--hdu} in @ref{Input output options}.
The number of calls to this option cannot be less than the number of input FITS or TIFF files; if there are more calls, the extra HDUs will be ignored.
Note that the values are read in the order described in @ref{Configuration file precedence}.
Unlike CFITSIO, libtiff (which is used to read TIFF files) only recognizes numbers (counting from zero, similar to CFITSIO) for `directory' identification.
Hence the concept of names is not defined for the directories and the values to this option for TIFF files must be numbers.
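For example, the command below (with a hypothetical @file{image.tif}) would read the first directory of the TIFF file and write it into a FITS file:
@example
$ astconvertt image.tif --hdu=0 --output=image.fits
@end example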
@item -g STR/INT
@itemx --globalhdu=STR/INT
Use the value given to this option (a HDU name or a counter, starting from 0) for the HDU identifier of all the input FITS files.
This is useful when all the inputs are distributed in different files, but have the same HDU in those files.
@item -w FLT
@itemx --widthincm=FLT
The width of the output in centimeters.
This is only relevant for those formats that accept such a width as metadata (not FITS or plain-text for example), see @ref{Recognized file formats}.
For most digital purposes, the number of pixels is far more important than the value to this parameter because you can adjust the absolute width (in inches or centimeters) in your document preparation program.
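For example, the command below (on a hypothetical @file{image.fits}) would set the print width of the output PDF to 10 centimeters:
@example
$ astconvertt image.fits --widthincm=10 --output=pdf
@end example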
@item -x
@itemx --hex
@cindex ASCII85 encoding
@cindex Hexadecimal encoding
Use Hexadecimal encoding in creating EPS output.
By default, the ASCII85 encoding is used, which offers a much better compression ratio (nearly 40 percent) than Hexadecimal encoding.
When the EPS file is converted to PDF (or included in @TeX{} or @LaTeX{} sources which are finally saved as a PDF file), an efficient binary encoding is used, which is far more efficient than both of them.
The choice of EPS encoding will thus have no effect on the final PDF.
However, if you plan to transfer the EPS files themselves (for example, to submit your paper to arXiv or journals in PostScript), their storage size might become important if you have large images or lots of small ones; in that case, the default ASCII85 encoding is preferable.
@item -u INT
@itemx --quality=INT
@cindex JPEG compression quality
@cindex Compression quality in JPEG
@cindex Quality of compression in JPEG
The quality (compression) of the output JPEG file with values from 0 to 100 (inclusive).
For other formats the value to this option is ignored.
Note that only in gray-scale (when one input color channel is given) will this actually be the exact quality (each pixel will correspond to one input value); in color mode, some degradation will occur.
While the JPEG standard does support lossless graphics, it is not commonly supported.
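For example, the command below would write the output JPEG with a quality of 80 (a hypothetical value; lower values produce smaller, lower-quality files):
@example
$ astconvertt image.fits --output=jpg --quality=80
@end example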
@end table
@node Pixel visualization, Drawing with vector graphics, ConvertType input and output, Invoking astconvertt
@subsubsection Pixel visualization
The main goal of ConvertType is to visualize pixels to/from print or web friendly formats.
Astronomical data usually have a very large dynamic range (difference between maximum and minimum value) and different subjects might be better demonstrated with a limited flux range.
@table @option
@item --colormap=STR[,FLT,...]
The color map to visualize a single channel.
The first value given to this option is the name of the color map, which is shown below.
Some color maps can be configured.
In this case, the configuration parameters are optionally given as numbers following the name of the color map for example, see @option{hsv}.
When the input has blank values, the value to @option{--cmapblankcolor} will be used to define the color in which they are shown (except for the @code{gray} color map).
The table below contains the usable names of the color maps that are currently supported:
@table @option
@item gray
@itemx grey
@cindex Colorspace, gray-scale
Grayscale color map.
This color map does not have any parameters.
The full dataset range will be scaled to 0 and @mymath{2^8-1=255} to be stored in the requested format.
@item hsv
@cindex Colorspace, HSV
@cindex Hue, saturation, value
@cindex HSV: Hue Saturation Value
Hue, Saturation, Value@footnote{@url{https://en.wikipedia.org/wiki/HSL_and_HSV}} color map.
If no values are given after the name (@option{--colormap=hsv}), the dataset will be scaled to 0 and 360 for hue covering the full spectrum of colors.
However, you can limit the range of hue (to show only a special color range) by explicitly requesting them after the name (for example, @option{--colormap=hsv,20,240}).
The mapping of a single-channel dataset to HSV is done through the Hue and Value elements: Lower dataset elements have lower ``value'' @emph{and} lower ``hue''.
This creates darker colors for fainter parts, while also respecting the range of colors.
@item viridis
@cindex matplotlib
@cindex Colormap: Viridis
@cindex Viridis: Colormap
Viridis is the default colormap of the popular Matplotlib module of Python and available in many other visualization tools like PGFPlots.
@item sls
@cindex DS9
@cindex SAO DS9
@cindex SLS Color
@cindex Colormap: SLS
The SLS color range, taken from the commonly used @url{http://ds9.si.edu,SAO DS9}.
The advantage of this color range is that it starts with black, going into dark blue and finishes with the brighter colors of red and white.
So unlike the HSV color range, it includes black and white and brighter colors (like yellow, red) show the larger values.
@item sls-inverse
@cindex Colormap: SLS-inverse
The inverse of the SLS color map (see above), where the lowest value corresponds to white and the highest value is black.
While SLS is good for visualizing on the monitor, SLS-inverse is good for printing.
@end table
@item -l STR
@itemx --cmapblankcolor=STR
Name of the color to be used for blank pixels when @option{--colormap} is used (with a single-channel input).
For the list of color names that can be given to this option, run ConvertType with @option{--listcolors}.
This option is ignored for the gray color map: the colors given to this option are defined in the 3-channel RGB color space, while for grayscale images ConvertType uses the single-channel format that image formats provide (to save storage space).
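For example, with the command below (the file name and color are hypothetical), blank pixels would be shown in red over the Viridis color map:
@example
$ astconvertt image.fits --colormap=viridis \
              --cmapblankcolor=red --output=pdf
@end example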
@item --cmappgfplots
@cindex Colorbar
Create a second output file containing the PGFPlots colormap definition for single-channel data that are to be displayed through a colormap.
PGFPlots is a powerful plot creation package within LaTeX to create high-quality plots and figures in your papers, slides or reports.
With this option, you can use Gnuastro colormaps that are not available in PGFPlots.
For a fully working example of the usage of PGFPlots in combination with Gnuastro for high-quality figure generation (including this option), see @ref{Annotations for figure in paper}.
@item --rgbtohsv
When there are three input channels and the output is in the FITS format, interpret the three input channels as red, green and blue channels (RGB) and convert them to the hue, saturation, value (HSV) color space.
The currently supported output formats of ConvertType do not have native support for HSV.
Therefore this option is only supported when the output is in FITS format and each of the hue, saturation and value arrays can be saved as one FITS extension in the output for further analysis (for example, to select a certain color).
@item -c STR
@itemx --change=STR
@cindex Change converted pixel values
Change pixel values with the following format: @option{"from1:to1, from2:to2,..."}.
This option is very useful in displaying labeled pixels (not actual data images which have noise) like segmentation maps.
In labeled images, usually a group of pixels have a fixed integer value.
With this option, you can manipulate the labels before the image is displayed to get a better output for print or to emphasize on a particular set of labels and ignore the rest.
The labels in the images will be changed in the same order given.
By default, the pixel values are first changed (with this option), then truncated (see @option{--fluxlow} and @option{--fluxhigh}); to truncate first, see @option{--changeaftertrunc}.
You can use any number for the values irrespective of your final output; the given values are stored and used in double precision floating point format.
So for example, if your input image has labels from 1 to 20000 and you only want to display those with labels 957 and 11342, then you can run ConvertType with these options:
@example
$ astconvertt --change=957:50000,11342:50001 --fluxlow=5e4 \
--fluxhigh=1e5 segmentationmap.fits --output=jpg
@end example
@noindent
While the output JPEG format is only 8 bit, this operation is done in an intermediate step which is stored in double precision floating point.
The pixel values are converted to 8-bit only after all operations on the input fluxes have been completed.
By placing the value in double quotes you can use as many spaces as you like for better readability.
@item -C
@itemx --changeaftertrunc
Change pixel values (with @option{--change}) after truncation of the flux values; by default, the pixel values are changed before truncation.
@item -L FLT
@itemx --fluxlow=FLT
The minimum flux (pixel value) to display in the output image, any pixel value below this value will be set to this value in the output.
If the value to this option is the same as @option{--fluxhigh}, then no flux truncation will be applied.
Note that when multiple channels are given, this value is used for all the color channels.
@item -H FLT
@itemx --fluxhigh=FLT
The maximum flux (pixel value) to display in the output image, see
@option{--fluxlow}.
@item -m INT
@itemx --maxbyte=INT
This is only used for the JPEG and EPS output formats which have an 8-bit space for each channel of each pixel.
The maximum value in each pixel can therefore be @mymath{2^8-1=255}.
With this option you can change (decrease) the maximum value.
By doing so you will decrease the dynamic range.
It can be useful if you plan to use those values for other purposes.
@item -A
@itemx --forcemin
Enforce the value of @option{--fluxlow} (when it is given), even if it is smaller than the minimum of the dataset and the output is a format that supports color.
This is particularly useful when you are converting a number of images to a common image format like JPEG or PDF with a single command and want them all to have the same range of colors, independent of the contents of the dataset.
Note that if the minimum value is smaller than @option{--fluxlow}, then this option is redundant.
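For example, a small loop like the one below (the file names and flux limits are hypothetical) would convert a set of images with an identical color range:
@example
$ for f in image-*.fits; do \
    astconvertt $f --fluxlow=0 --fluxhigh=100 \
                --forcemin --forcemax \
                --output=$(basename $f .fits).jpg; \
  done
@end example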
@cindex PDF
@cindex EPS
@cindex PostScript
By default, when the dataset only has two values, @emph{and} the output format is PDF or EPS, ConvertType will use the PostScript optimization that allows setting the pixel values per bit, not byte (@ref{Recognized file formats}).
This can greatly help reduce the file size.
However, when @option{--fluxlow} or @option{--fluxhigh} are called, this optimization is disabled: even though there are only two values (the image is binary), the difference between them does not correspond to the full contrast of black and white.
@item -B
@itemx --forcemax
Similar to @option{--forcemin}, but for the maximum.
@item -i
@itemx --invert
For 8-bit output types (JPEG, EPS, and PDF for example), invert the final stored values, so white becomes black and vice versa.
The reason for this is that astronomical images usually have a very large area of blank sky in them; without inversion, a large area of the output image would be black.
Note that this behavior is only ideal for gray-scale images; if you want a color image, the colors will get mixed up.
Special considerations for this option:
@itemize
@item
For single-channel inputs (when @option{--colormap} becomes necessary), this option only affects @option{--colormap=gray}.
@end itemize
@end table
@node Drawing with vector graphics, , Pixel visualization, Invoking astconvertt
@subsubsection Drawing with vector graphics
With the options described in this section, you can draw marks over your to-be-published images (for example, in PDF).
Each mark can be highly customized so they can have different shapes, colors, line widths, text, text size, etc.
The properties of the marks should be stored in a table that is given to the @option{--marks} option described below.
A fully working demo on adding marks is provided in @ref{Marking objects for publication}.
@cindex PostScript point
@cindex Vector graphics point
@cindex Point (Vector graphics; PostScript)
An important factor to consider when drawing vector graphics is that vector graphics standards (the PostScript standard in this case) use a ``point'' as the primary unit of line thickness or font size, such that 72 points correspond to 1 inch (or 2.54 centimeters).
In other words, there are roughly 3 PostScript points in every millimeter.
On the other hand, the pixels of the images you plan to show as the background do not have any real size!
Pixels are abstract and can be associated with any print-size.
In ConvertType, the print-size of your final image is set with the @option{--widthincm} option (see @ref{ConvertType input and output}).
The value to @option{--widthincm} is the intended print width of the image in centimeters.
It therefore defines the thickness of lines or font sizes for your vector graphics features (like the image border or marks).
Just recall that we are not talking about resolution!
Vector graphics have infinite resolution!
We are talking about the relative thickness of the lines (or font sizes) in relation to the pixels in your background image.
@table @option
@item -b INT
@itemx --borderwidth=INT
@cindex Border on an image
The width of the border to be put around the EPS and PDF outputs in units of PostScript points.
If you are planning on adding a border, its thickness in relation to your image pixel sizes is highly correlated with the value you give to the @option{--widthincm} parameter.
See the description at the start of this section for more.
Unfortunately, in the document structuring convention of the PostScript language, the ``bounding box'' has to be in units of PostScript points with no fractions allowed, so border widths can only be specified as integers.
To have a final border that is thinner than one PostScript point in your document, you can ask for a larger width in ConvertType and then scale down the output EPS or PDF file in your document preparation program; for example, by setting the @command{width} in your @command{includegraphics} command in @TeX{} or @LaTeX{} to be smaller than the value to @option{--widthincm}.
Since it is vector graphics, the changes of size have no effect on the quality of your output (pixels do not get different values).
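For example, a (hypothetical) inclusion like the one below would print the image at half the line width, proportionally shrinking the border thickness along with it:
@example
\includegraphics[width=0.5\linewidth]@{sb.pdf@}
@end example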
@item --bordercolor=STR
The name of the color to use for the border that will be put around the EPS and PDF outputs.
The list of available colors, along with their name and an example can be seen with the following command (also see @ref{Vector graphics colors}):
@example
$ astconvertt --listcolors
@end example
This option only accepts the name of the color, not the numeric identifier.
@item --marks=STR
Draw vector graphics (infinite resolution) marks over the image.
The value to this option should be the file name of a table containing the mark information.
The table given to this option can have various properties for each mark in each column.
You can specify which column contains which property of the marks using the options below that start with @option{--mark}.
Only two property columns are mandatory (@option{--markcoords}), the rest are optional.
The table can be in any of the Gnuastro's @ref{Recognized table formats}.
For more on the difference between vector and raster graphics, see @ref{Raster and Vector graphics}.
For example, if your table with mark information is called @file{marks.fits}, you can use the command below to draw the marks over the given coordinates in the output PDF.
@example
$ astconvertt image.fits --output=image.pdf \
--marks=marks.fits --mode=wcs \
--markcoords=RA,DEC
@end example
You can highly customize each mark with different columns in @file{marks.fits} using the @option{--mark*} options below (for example, using different colors, different shapes, different sizes, text, and the rest on each mark).
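For example, a (hypothetical) plain-text @file{marks.txt} like the one below defines two marks with different shapes, sizes and colors (the metadata lines follow @ref{Gnuastro text table format}):
@example
# Column 1: RA    [deg,    f64]  Right Ascension.
# Column 2: DEC   [deg,    f64]  Declination.
# Column 3: SHAPE [name,   str6] Shape of the mark.
# Column 4: SIZE  [arcsec, f32]  Size of the mark.
# Column 5: COLOR [name,   str6] Color of the mark.
53.1616  -27.7802  circle  5  red
53.1650  -27.7830  square  3  yellow
@end example

@noindent
It could then be used with a command like this (using the size-related options described below):
@example
$ astconvertt image.fits --output=marked.pdf --marks=marks.txt \
              --mode=wcs --markcoords=RA,DEC --markshape=SHAPE \
              --marksize=SIZE --sizeinarcsec --markcolor=COLOR
@end example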
@item --markshdu=STR/INT
The HDU (or extension) name or number of the table containing mark properties (file given to @option{--marks}).
This is only relevant if the table is in the FITS format and there is more than one HDU in the FITS file.
@item -r STR,STR
@itemx --markcoords=STR,STR
The column names (or numbers) containing the coordinates of each mark (in table given to @option{--marks}).
Only two values should be given to this option (one for each coordinate).
They can either be given to one call (@option{--markcoords=RA,DEC}) or in separate calls (@option{--markcoords=RA --markcoords=DEC}).
When @option{--mode=img}, the columns will be associated with the horizontal/vertical coordinates of the image, and interpreted in units of pixels.
When @option{--mode=wcs}, the columns will be associated with the WCS coordinates (typically Right Ascension and Declination, in units of degrees).
@item -O STR
@itemx --mode=STR
The coordinate mode for interpreting the values in the columns given to the @option{--markcoords} option.
The acceptable values are either @code{img} (for image or pixel coordinates), and @code{wcs} for World Coordinate System (typically RA and Dec).
For the WCS-mode, the input image should have the necessary WCS keywords, otherwise ConvertType will crash.
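For example, with a (hypothetical) table @file{marks.txt} whose @code{X} and @code{Y} columns contain pixel coordinates, the marks would be interpreted in image mode like this:
@example
$ astconvertt image.fits --marks=marks.txt --mode=img \
              --markcoords=X,Y --output=marked.pdf
@end example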
@item --markshape=STR/INT
@cindex Shapes for marks (vector graphics)
The column name(s), or number(s), containing the shapes of each mark (in table given to @option{--marks}).
The shapes can either be identified by their name, or their numerical identifier.
If identifying them by name in a plain-text table, you need to define a string column (see @ref{Gnuastro text table format}).
The full list of names is shown below, with their numerical identifier in parenthesis afterwards.
For each shape, you can also specify properties such as the size, line width, rotation, and color.
See the description of the relevant @option{--mark*} option below.
@table @code
@item circle (1)
A circular circumference.
Its @emph{radius} is defined by a single size element (the first column given to @option{--marksize}).
Any value in the second size column (if given for other shapes in the same call) is ignored by this shape.
@item plus (2)
The plus sign (@mymath{+}).
The @emph{length of its lines} is defined by a single size element (the first column given to @option{--marksize}), such that the intersection of the lines lies on the central coordinate of the mark.
Any value in the second size column (if given for other shapes in the same call) is ignored by this shape.
@item cross (3)
A multiplication sign (@mymath{\times}).
The @emph{length of its lines} is defined by a single size element (the first column given to @option{--marksize}).
Such that the intersection of its lines is on the central coordinate of the mark.
Any value in the second size column (if given for other shapes in the same call) are ignored by this shape.
@item ellipse (4)
An elliptical circumference.
Its major axis radius is defined by the first size element (first column given to @option{--marksize}), and its axis ratio is defined through the second size element (second column given to @option{--marksize}).
@item point (5)
A point (or a filled circle).
Its @emph{radius} is defined by a single size element (the first column given to @option{--marksize}).
Any value in the second size column (if given for other shapes in the same call) is ignored by this shape.
This filled circle mark is called a ``point'' because it is usually drawn with a small size (appearing as a point over the whole image).
But there is no limit on its size, so it can be arbitrarily large.
@item square (6)
A square circumference.
Its @emph{edge length} is defined by a single size element (the first column given to @option{--marksize}).
Any value in the second size column (if given for other shapes in the same call) is ignored by this shape.
@item rectangle (7)
A rectangular circumference.
Its length along the horizontal image axis is defined by the first size element (first column given to @option{--marksize}), and its length along the vertical image axis is defined through the second size element (second column given to @option{--marksize}).
@item line (8)
A line.
The line's @emph{length} is defined by a single size element (the first column given to @option{--marksize}).
The line will be centered on the given coordinate.
Like all shapes, you can rotate the line about its center using the @option{--markrotate} column.
Any value in the second size column (if given for other shapes in the same call) is ignored by this shape.
@end table
@item --markrotate=STR/INT
Column name or number that contains the mark's rotation angle.
The rotation angle should be in degrees and be relative to the horizontal axis of the image.
@item --marksize=STR[,STR]
The column name(s), or number(s), containing the size(s) of each mark (in table given to @option{--marks}).
All shapes need at least one ``size'' parameter and some need two.
For the interpretation of the size column(s) for each shape, see the @option{--markshape} option's description.
Since the size column(s) are optional, default values will be used when they are not specified (these defaults may be too small in larger images, so you may need to change them).
By default, the values in the size column are assumed to be in the same units as the coordinates (defined by the @option{--mode} option, described above).
However, when the coordinates are in WCS-mode, some special cases may occur for the size.
@itemize
@item
The native WCS units (usually degrees) can be too large, and it may be more convenient for the values in the size column(s) to be in arc-seconds.
In this case, you can use the @option{--sizeinarcsec} option (see the example after this list).
@item
Similar to above, but in units of arc-minutes.
In this case, you can use the @option{--sizeinarcmin} option.
@item
Your sizes may be in units of pixels, not the WCS units.
In this case, you can use the @option{--sizeinpix} option.
@end itemize
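For example, assuming a hypothetical @file{cat.fits} catalog with @code{RA}, @code{DEC} and @code{RADIUS} columns (where the radii are in arc-seconds), a sketch of such a call would be:
@example
$ astconvertt image.fits --output=marks.pdf --marks=cat.fits \
              --mode=wcs --markcoords=RA,DEC \
              --marksize=RADIUS --sizeinarcsec
@end example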
@item --sizeinpix
In WCS-mode, assume that the sizes are in units of pixels.
By default, when in WCS-mode, the sizes are assumed to be in the units of the WCS coordinates (usually degrees).
@item --sizeinarcsec
In WCS-mode, assume that the sizes are in units of arc-seconds.
By default, when in WCS-mode, the sizes are assumed to be in the units of the WCS coordinates (usually degrees).
@item --sizeinarcmin
In WCS-mode, assume that the sizes are in units of arc-minutes.
By default, when in WCS-mode, the sizes are assumed to be in the units of the WCS coordinates (usually degrees).
@item --marklinewidth=STR/INT
Column containing the width (thickness) of the line used to draw each mark.
The line width is measured in units of ``points'' (where 72 points is one inch), and it can be any positive floating point number.
Therefore, the thickness (in relation to the pixels of your image) depends on the @option{--widthincm} option.
For more, see the description at the start of this section.
@item --markcolor=STR/INT
Column containing the color of the mark.
This column can be either a string or an integer.
As a string, the color name can be written directly in your table (this greatly helps in human readability).
For more on string columns see @ref{Gnuastro text table format}.
As an integer, you can simply use the numerical identifier of the color.
You can see the list of colors with their names and numerical identifiers in Gnuastro by running ConvertType with @option{--listcolors}, or see @ref{Vector graphics colors}.
@item --listcolors
The list of acceptable color names, their codes and their representation can be seen with the @option{--listcolors} option.
By ``representation'' we mean that the color will be shown on the terminal as the background in that column.
But this will only be properly visible with ``true color'' or 24-bit terminals, see @url{https://en.wikipedia.org/wiki/ANSI_escape_code,ANSI escape sequence standard}.
Most modern GNU/Linux terminals support 24-bit colors natively, and no modification is necessary.
For macOS, see the box below.
The printed text in standard output is in the @ref{Gnuastro text table format}, so if you want to store this table, you can simply pipe the output to Gnuastro's Table program and store it as a FITS table:
@example
$ astconvertt --listcolors | asttable -ocolors.fits
@end example
@cindex iTerm
@cindex macOS terminal 24-bit color
@cindex Color in macOS terminals
@cartouche
@noindent
@strong{macOS terminal colors}: as of August 2022, the default macOS terminal (iTerm) does not support 24-bit colors!
The output of @option{--listcolors} therefore does not display the actual colors (you can only use the color names).
One tested solution is to install and use @url{https://iterm2.com, iTerm2}, which is free software and available in @url{https://formulae.brew.sh/cask/iterm2, Homebrew}.
iTerm2 is described as a successor for iTerm and works on macOS 10.14 (released in September 2018) or newer.
@end cartouche
@item --marktext=STR/INT
Column name or number that contains the text that should be printed under the mark.
If the column is numeric, the number will be printed under the mark (for example, if you want to write the magnitude or redshift of the object under the mark showing it).
For the precision of writing floating point columns, see @option{--marktextprecision}.
But if the column has a string format (for example, the name of the object, like NGC1234), you need to define the column as a string column (see @ref{Gnuastro text table format}).
For text with different lengths, set the length in the definition of the column to the maximum length of the strings to be printed.
If there are some rows or marks that don't require text, set the string in this column to @option{n/a} (not applicable; the blank value for strings in Gnuastro).
When the strings have different lengths, also make sure the shorter strings are padded with enough white spaces so the adjacent columns are not taken as part of the string (see @ref{Gnuastro text table format}).
@item --marktextprecision=INT
The number of decimal digits to print after the floating point.
This is only relevant when @option{--marktext} is given, and the selected column has a floating point format.
@item --markfont=STR/INT
@cindex Fonts
@cindex Ghostscript fonts
Column name or number that contains the font for the displayed text under the mark.
This is only relevant if @option{--marktext} is called.
The font should be accessible by Ghostscript.
If you are not familiar with the available fonts on your system's Ghostscript, you can use the @option{--showfonts} option to see all the fonts in a custom PDF file (one page per font).
If you are already familiar with the font you want, but just want to make sure about its presence (or spelling!), you can get a list (on standard output) of all the available fonts with the @option{--listfonts} option.
Both are described below.
@cindex Adding Ghostscript fonts
It is possible to add custom fonts to Ghostscript as described in the @url{https://ghostscript.com/doc/current/Fonts.htm, Fonts section} of the Ghostscript manual.
@item --markfontsize=STR/INT
Column name or number that contains the font size to use.
This is only relevant if a text column has been defined (with @option{--marktext}, described above).
The font size is in units of ``points''; see the description at the start of this section for more.
@item --showfonts
Create a special PDF file that shows the name and shape of all available fonts in your system's Ghostscript.
You can use this for selecting the best font to put in the @option{--markfont} column.
The available fonts can differ from one system to another (depending on how Ghostscript was configured in that system).
The PDF file's name is constructed by appending a @file{-fonts.pdf} to the file name given to the @option{--output} option.
The PDF file will have one page for each font, and the sizes of the pages are customized for showing the fonts (each page is horizontally elongated).
This helps to better check the fonts: disable ``continuous'' mode in your PDF viewer, and set the zoom such that the width of the page corresponds to the width of your PDF viewer.
Simply pressing the left/right keys will then nicely show each font separately.
@item --listfonts
Print (to standard output) the names of all available fonts in Ghostscript that you can use for the @option{--markfont} column.
The available fonts can differ from one system to another (depending on how Ghostscript was configured in that system).
If you are not already familiar with the shape of each font, please use @option{--showfonts} (described above).
@end table
@node Table, Query, ConvertType, Data containers
@section Table
Tables are the high-level products of processing lower-level data like images or spectra.
For example, in Gnuastro, MakeCatalog will process the pixels over an object and produce a catalog (or table) with the properties of each object such as magnitudes and positions (see @ref{MakeCatalog}).
Each one of these properties is a column in its output catalog (or table) and for each input object, we have a row.
When there are only a small number of objects (rows) and not too many properties (columns), then a simple plain text file is usually enough to store, transfer, or even use the produced data.
However, to be more efficient, astronomers have defined the FITS binary table standard to store data in a binary format (which cannot be read in a plain-text editor).
This can offer major advantages: the file size will be greatly reduced and the reading and writing will also be faster (because the RAM and CPU also work in binary).
The acceptable table formats are fully described in @ref{Tables}.
@cindex AWK
@cindex GNU AWK
Binary tables are not easily readable with basic plain-text editors: there is no fixed/unified standard on how the zeros and ones should be interpreted.
Unix-like operating systems have flourished because of a simple fact: communication between the various tools is based on human readable characters@footnote{In ``The art of Unix programming'', Eric Raymond makes this suggestion to programmers: ``When you feel the urge to design a complex binary file format, or a complex binary application protocol, it is generally wise to lie down until the feeling passes.''.
This is a great book and strongly recommended, give it a look if you want to truly enjoy your work/life in this environment.}.
So while the FITS table standards are very beneficial for the tools that recognize them, they are hard to use in the vast majority of available software.
This creates limitations for their generic use.
Table is Gnuastro's solution to this problem.
Table has a large set of operations that you can directly do on any recognized table (such as selecting certain rows and doing arithmetic on the columns).
For operations that Table does not do internally, FITS tables (ASCII or binary) are directly accessible to the users of Unix-like operating systems (in particular those working on the command-line or shell, see @ref{Command-line interface}).
With Table, a FITS table (in binary or ASCII format) is only one command away from AWK (or any other tool you want to use), just like a plain text file that you read with the @command{cat} command.
You can pipe the output of Table into any other tool for higher-level processing, see the examples in @ref{Invoking asttable} for some simple examples.
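For example, assuming a hypothetical @file{catalog.fits} whose first, second and third columns are respectively an ID, a magnitude and a radius, the sketch below prints the IDs and radii of all rows brighter than 20th magnitude with AWK:
@example
$ asttable catalog.fits | awk '$2 < 20 @{print $1, $3@}'
@end example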
In the sections below we describe how to effectively use the Table program.
We start with @ref{Column arithmetic}, where the basic concept and methods of applying arithmetic operations on one or more columns are discussed.
Afterwards, in @ref{Operation precedence in Table}, we review the various types of operations available and their precedence in an instance of calling Table.
This is a good place to get a general feeling of all the things you can do with Table.
Finally, in @ref{Invoking asttable}, we give some examples and describe each option in Table.
@menu
* Printing floating point numbers:: Optimal storage of floating point types.
* Vector columns:: How to keep more than one value in each column.
* Column arithmetic:: How to do operations on table columns.
* Operation precedence in Table:: Order of running options in Table.
* Invoking asttable:: Options and arguments to Table.
@end menu
@node Printing floating point numbers, Vector columns, Table, Table
@subsection Printing floating point numbers
@cindex Floating point numbers
@cindex Printing floating point numbers
Many of the columns containing astronomical data will contain floating point numbers (those that aren't an integer, like @mymath{1.23} or @mymath{4.56\times10^{-7}}).
However, printing (for human readability) of floating point numbers has some intricacies that we will explain in this section.
For a basic introduction to different types of integers or floating points, see @ref{Numeric data types}.
It may be tempting to simply use 64-bit floating points all the time and avoid this section altogether.
But keep in mind that compared to the 32-bit floating point type, a 64-bit floating point type will consume double the storage and double the RAM, and will take almost double the time for processing.
So when the statistical precision of your numbers is less than that offered by 32-bit floating point precision, it is much better to store them in this format.
Within almost all commonly used CPUs of today, numbers (including integers or floating points) are stored in binary base-2 format (where the only digits that can be used to represent the number are 0 and 1).
However, we (humans) are used to numbers in base-10 (where we have 10 digits: 0, 1, 2, 3, 4, 5, 6, 7, 8, 9).
For integers, there is a one-to-one correspondence between a base-2 and base-10 representation.
Therefore, converting a base-10 integer (that you will be giving as an option value when running a Gnuastro program, for example) to base-2 (that the computer will store in memory), or vice-versa, will not cause any loss of information for integers.
The problem is that floating point numbers don't have such a one-to-one correspondence between the two notations.
The full discussion on how floating point numbers are stored in binary format is beyond the scope of this book.
But please have a look at the corresponding @url{https://en.wikipedia.org/wiki/Floating-point_arithmetic, Wikipedia article} to get a rough feeling about the complexity.
Of course, if you are interested in the details, that Wikipedia article should be a good starting point for further reading.
@cindex IEEE 754 (floating point)
The most common convention for storing floating point numbers in digital storage is IEEE Standard for Floating-Point Arithmetic; @url{https://en.wikipedia.org/wiki/IEEE_754, IEEE 754}.
In short, the full width (in bits) assigned to that type (for example the 32 bits allocated for 32-bit floating point types) is divided into separate components: The first bit is the ``sign'' (specifying if the number is negative or positive).
In 32-bit floats, the next 8 bits are the ``exponent'' and finally (again, in 32-bit floats), the ``fraction'' is stored in the next 23 bits.
For example see @url{https://commons.wikimedia.org/wiki/File:Float_example.svg, this image on Wikipedia}.
@cindex Decimal digits
@cindex Precision of floats
In IEEE 754, around zero, the base-2 and base-10 representations approximately match.
However, as we go away from 0, we gradually lose precision.
The important concept in understanding the precision of floating point numbers is ``decimal digits'', or the number of digits in the number, independent of where the decimal point is.
For example @mymath{1.23} has three decimal digits and @mymath{4.5678\times10^9} has 5 decimal digits.
According to IEEE 754@footnote{@url{https://en.wikipedia.org/wiki/IEEE_754#Basic_and_interchange_formats}}, 32-bit and 64-bit floating point numbers can accurately (statistically) represent a floating point with 7.22 and 15.95 decimal digits respectively.
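As a rough sketch of this on the command-line, you can down-cast a high-precision constant with Gnuastro's Arithmetic program and compare the printed values (the exact printed digits depend on the printing precision of your build; the 32-bit version should start to differ from the input after roughly the seventh decimal digit):
@example
$ astarithmetic 1.2345678901 float64 --quiet
$ astarithmetic 1.2345678901 float32 --quiet
@end example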
@cartouche
@noindent
@strong{Should I store my columns as 32-bit or 64-bit floating point type?} If your floating point numbers have 7 decimal digits or less (for example, noisy image pixel values, measured star or galaxy magnitudes, and anything that is derived from them, such as galaxy mass), you can safely use 32-bit precision (the statistical error on the measurements is usually significantly larger than 7 digits!).
However, some columns require more digits and thus 64-bit precision.
For example, RA or Dec with more than one arcsecond accuracy: the degrees can have 3 digits, and 1 arcsecond is @mymath{1/3600\sim0.0003} of a degree, requiring 4 more digits.
You can use the @ref{Numerical type conversion operators} of @ref{Column arithmetic} to convert your columns to a certain type for storage.
@end cartouche
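For example, assuming a hypothetical catalog with @code{RA}, @code{DEC} and @code{MAG} columns, the sketch below would keep the coordinates in their original (64-bit) type, while down-casting the magnitude column to 32-bit before storage:
@example
$ asttable catalog.fits -cRA,DEC -c'arith MAG float32' \
           --output=catalog-f32.fits
@end example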
The discussion above was for the storage of floating point numbers.
When printing floating point numbers in a human-friendly format (for example, in a plain-text file or on standard output in the command-line), the computer has to convert its internal base-2 representation to a base-10 representation.
This second conversion may cause a small discrepancy between the stored and printed values.
@cartouche
@noindent
@strong{Use FITS tables as output of measurement programs:} When you are doing a measurement to produce a catalog (for example with @ref{MakeCatalog}) set the output to be a FITS table (for example @option{--output=mycatalog.fits}).
A FITS binary table will store the same base-2 number that was measured by the CPU.
However, if you choose to store the output table as a plain-text table, you risk losing information due to the human-friendly base-10 floating point conversion (which is necessary in a plain-text output).
@end cartouche
To customize how columns containing floating point values are printed (in a plain-text output file, or in the standard output in your terminal), Table has four options for the two different types: @option{--txtf32format}, @option{--txtf32precision}, @option{--txtf64format} and @option{--txtf64precision}.
They are fully described in @ref{Invoking asttable}.
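For example, the sketch below would print the 32-bit floating point columns of a hypothetical @file{catalog.fits} in fixed-point notation with three digits after the decimal point:
@example
$ asttable catalog.fits --txtf32format=fixed --txtf32precision=3
@end example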
@cartouche
@noindent
@strong{Summary:} it is therefore recommended to always store your tables as FITS (binary) tables.
To view the contents of the table on the command-line or to feed it to a program that doesn't recognize FITS tables, you can use the four options above for a custom base-10 conversion that will not cause any loss of data.
@end cartouche
@node Vector columns, Column arithmetic, Printing floating point numbers, Table
@subsection Vector columns
@cindex Vector columns
@cindex Columns (Vector)
@cindex Multi-value columns (vector)
In its most common format, each column of a table only has a single value in each row.
For example, we usually have one column for the magnitude, another column for the RA (Right Ascension) and yet another column for the DEC (Declination) of a set of galaxies/stars (where each galaxy is represented by one row in the table).
This common single-valued column format is sufficient in many scenarios.
However, in some situations (like those below) it would help to have multiple values for each row in each column, not just one.
@itemize
@item
@cindex MUSE
@cindex Spectrum
@cindex Radial profile
Conceptually: the various numbers are ``connected'' to each other.
In other words, their order and position in relation to each other matters.
Common examples in astronomy are the radial profiles of each galaxy in your catalog, or their spectrum.
For example, each MUSE@footnote{@url{https://www.eso.org/sci/facilities/develop/instruments/muse.html}} spectrum has 3681 points (with a sampling of 1.25 Angstroms).
Dealing with this many separate measurements as separate columns in your table is very annoying and prone to error: you do not want to forget to move some of them into an output table for further analysis, mistakenly change their order, or do some operation on only a sub-set of them.
@item
Technically: in the FITS standard, you can only store a maximum of 999 columns in a FITS table.
Therefore, if you have more than 999 data points for each galaxy (like the MUSE spectra example above), it is impossible to store each point in one table as separate columns.
@end itemize
To address these problems, the FITS standard has defined the concept of ``vector'' columns in its Binary table format (ASCII FITS tables don't support vector columns, but Gnuastro's plain-text format does, as described here).
Within each row of a single vector column, we can store any number of data points (like the MUSE spectra above or the full radial profile of each galaxy).
All the values in a vector column have to have the same numeric data type (see @ref{Numeric data types}), and the number of elements within each vector column is the same for all rows.
By grouping conceptually similar data points (like a spectrum) in one vector column, we can significantly reduce the number of columns and make the table much more manageable, without losing any information!
To demonstrate the vector column features of Gnuastro's Table program, let's start with a randomly generated small (5 rows and 3 columns) catalog.
This will allow us to show the outputs of each step here, but you can apply the same concept to vectors with any number of columns.
With the command below, we use @code{seq} to generate a single-column table that is piped to Gnuastro's Table program.
Table then uses column arithmetic to generate three columns with random values from that column (for more, see @ref{Column arithmetic}).
Each column becomes noisy, with standard deviations of 2, 5 and 10.
Finally, we will add metadata to each column, giving each a different name (using names is always the best way to work with columns):
@example
$ seq 1 5 \
      | asttable -c'arith $1 2 mknoise-sigma f32' \
                 -c'arith $1 5 mknoise-sigma f32' \
                 -c'arith $1 10 mknoise-sigma f32' \
                 --colmetadata=1,abc,none,"First column." \
                 --colmetadata=2,def,none,"Second column." \
                 --colmetadata=3,ghi,none,"Third column." \
                 --output=table.fits
@end example
With the command below, let's have a look at the table.
When you run it, you will have a different random number generator seed, so the numbers will be slightly different.
For making reproducible random numbers, see @ref{Generating random numbers}.
The @option{-Y} option is used for more easily readable numbers (without it, floating point numbers are written in scientific notation, for more see @ref{Printing floating point numbers}) and with the @option{-O} we are asking Table to also print the metadata.
For more on Table's options, see @ref{Invoking asttable} and for seeing how the short options can be merged (such that @option{-Y -O} is identical to @option{-YO}), see @ref{Options}.
@example
$ asttable table.fits -YO
# Column 1: abc [none,f32,] First column.
# Column 2: def [none,f32,] Second column.
# Column 3: ghi [none,f32,] Third column.
1.074 5.535 -4.464
0.606 -2.011 15.397
1.475 1.811 5.687
2.248 7.663 -7.789
6.355 17.374 6.767
@end example
We see that indeed, it has three columns, with our given names.
Now, let's assume that you want to make a two-element vector column from the values in the @code{def} and @code{ghi} columns.
To do that, you can use the @option{--tovector} option like below.
As the name suggests, @option{--tovector} will merge the rows of the two columns into one vector column with multiple values in each row.
@example
$ asttable table.fits -YO --tovector=def,ghi
# Column 1: abc [none,f32 ,] First column.
# Column 2: def-VECTOR [none,f32(2),] Vector by merging multiple cols.
1.074 5.535 -4.464
0.606 -2.011 15.397
1.475 1.811 5.687
2.248 7.663 -7.789
6.355 17.374 6.767
@end example
@cindex Tokens
If you ignore the metadata, this doesn't seem to have changed anything!
You see that each line of numbers still has three ``tokens'' (to distinguish them from ``columns'').
But once you look at the metadata, you only see metadata for two columns, not three.
If you look closely, the numeric data type of the newly added second column is `@code{f32(2)}' (look above; previously it was @code{f32}).
The @code{(2)} shows that the second column contains two numbers/tokens not one.
If your vector column consisted of 3681 numbers, this would be @code{f32(3681)}.
Looking again at the metadata, we see that @option{--tovector} has also created a new name and comments for the new column.
This is done all the time to avoid confusion with the old columns.
Let's confirm that the newly added column is indeed a single column but with two values.
To do this, with the command below, we'll write the output into a FITS table.
In the same command, let's also give a more suitable name for the new merged/vector column.
We can get a first confirmation by looking at the table's metadata in the second command below:
@example
$ asttable table.fits -YO --tovector=def,ghi --output=vec.fits \
           --colmetadata=2,vector,nounits,"New vector column."
$ asttable vec.fits -i
--------
vec.fits (hdu: 1)
-------    -----    ----          -------
No.Name    Units    Type          Comment
-------    -----    ----          -------
1  abc     none     float32       First column.
2  vector  nounits  float32(2)    New vector column.
--------
Number of rows: 5
--------
@end example
@noindent
A more robust confirmation would be to print the values in the newly added @code{vector} column.
As expected, asking for a single column with @option{--column} (or @option{-c}) will give us two numbers per row/line (instead of one!).
@example
$ asttable vec.fits -c vector -YO
# Column 1: vector [nounits,f32(2),] New vector column.
5.535 -4.464
-2.011 15.397
1.811 5.687
7.663 -7.789
17.374 6.767
@end example
If you want to keep the original single-valued columns that went into the vector column, you can use the @option{--keepvectfin} option (read it as ``KEEP VECtor To/From Inputs''):
@example
$ asttable table.fits -YO --tovector=def,ghi --keepvectfin \
           --colmetadata=4,vector,nounits,"New vector column."
# Column 1: abc [none ,f32 ,] First column.
# Column 2: def [none ,f32 ,] Second column.
# Column 3: ghi [none ,f32 ,] Third column.
# Column 4: vector [nounits,f32(2),] New vector column.
1.074 5.535 -4.464 5.535 -4.464
0.606 -2.011 15.397 -2.011 15.397
1.475 1.811 5.687 1.811 5.687
2.248 7.663 -7.789 7.663 -7.789
6.355 17.374 6.767 17.374 6.767
@end example
Now that you know how to create vector columns, let's assume you have the inverse scenario: you want to extract one of the values of a vector column into a separate single-valued column.
To do this, you can use the @option{--fromvector} option.
The @option{--fromvector} option takes the name (or counter) of a vector column, followed by any number of integer counters (counting from 1).
It will extract those elements into separate single-valued columns.
For example, let's assume you want to extract the second element of the @code{vector} column in the file you made before:
@example
$ asttable vec.fits --fromvector=vector,2 -YO
# Column 1: abc [none ,f32,] First column.
# Column 2: vector-2 [nounits,f32,] New vector column.
1.074 -4.464
0.606 15.397
1.475 5.687
2.248 -7.789
6.355 6.767
@end example
@noindent
Just like the case with @option{--tovector} above, if you want to keep the input vector column, use @option{--keepvectfin}.
This feature is useful in scenarios where you want to select some rows based on a single element (or multiple) of the vector column.
@cartouche
@noindent
@strong{Vector columns and FITS ASCII tables:} As mentioned above, the FITS standard only recognizes vector columns in its Binary table format (the default FITS table format in Gnuastro).
You can still use the @option{--tableformat=fits-ascii} option to write your tables in the FITS ASCII format (see @ref{Input output options}).
In this case, if a vector column is present, it will be written as separate single-element columns to avoid losing information (as if you had called @option{--fromvector} on all the elements of the vector column).
A warning is printed if this occurs.
@end cartouche
For an application of the vector column concepts introduced here on MUSE data, see the 3D data cube tutorial and in particular these two sections: @ref{3D measurements and spectra} and @ref{Extracting a single spectrum and plotting it}.
@node Column arithmetic, Operation precedence in Table, Vector columns, Table
@subsection Column arithmetic
In many scenarios, you want to apply some kind of operation on the columns and save them in another table or feed them into another program.
With Table you can do a rich set of operations on the contents of one or more columns in a table, and save the resulting values as new column(s) in the output table.
For seeing the precedence of Column arithmetic in relation to other Table operators, see @ref{Operation precedence in Table}.
To enable column arithmetic, the first 6 characters of the value given to @option{--column} (@code{-c}) should be the activation word `@option{arith }' (note the space character at the end, after `@code{arith}').
After the activation word, you can use reverse polish notation to identify the operators and their operands, see @ref{Reverse polish notation}.
Just note that white-space characters are used between the tokens of the arithmetic expression and that they are meaningful to the command-line environment.
Therefore the whole expression (including the activation word) has to be quoted on the command-line or in a shell script (see the examples below).
To identify a column you can directly use its name, or specify its number (counting from one, see @ref{Selecting table columns}).
When you are giving a column number, it is necessary to prefix the number with a @code{$}, similar to AWK.
Otherwise the number is not distinguishable from a constant number to use in the arithmetic operation.
For example, with the command below, the first two columns of @file{table.fits} will be printed along with a third column that is the result of multiplying the first column with @mymath{10^{10}} (for example, to convert wavelength from Meters to Angstroms).
Note that without the `@key{$}', it is not possible to distinguish between ``1'' as a column-counter, or ``1'' as a constant number to use in the arithmetic operation.
Also note that because of the significance of @key{$} for the command-line environment, the single-quotes are the recommended quoting method (as in an AWK expression), not double-quotes (for the significance of using single quotes see the box below).
@example
$ asttable table.fits -c1,2 -c'arith $1 1e10 x'
@end example
@cartouche
@noindent
@strong{Single quotes when string contains @key{$}}: On the command-line, or in shell-scripts, @key{$} is used to expand variables; for example, @code{echo $PATH} prints the value (a string of characters) in the variable @code{PATH}; it will not simply print @code{$PATH}.
This operation is also permitted within double quotes, so @code{echo "$PATH"} will produce the same output.
This is good when printing values, for example, in the command below, @code{$PATH} will expand to the value within it.
@example
$ echo "My path is: $PATH"
@end example
If you actually want to return the literal string @code{$PATH}, not the value in the @code{PATH} variable (like the scenario here in column arithmetic), you should put it in single quotes like below.
The printed value here will include the @code{$}, please try it to see for yourself and compare to above.
@example
$ echo 'My path is: $PATH'
@end example
Therefore, when your column arithmetic involves the @key{$} sign (to specify columns by number), quote your @code{arith } string with a single quotation mark.
Otherwise you can use both single or double quotes.
@end cartouche
@cartouche
@noindent
@strong{Manipulate all columns in one call using @key{$_all}}: Usually we manipulate one column in one call of column arithmetic.
For instance, with the command below, the elements of the @code{AWAV} column will be summed.
@example
$ asttable table.fits -c'arith AWAV sumvalue'
@end example
But sometimes, we want to manipulate more than one column with the same expression.
For example we want to sum all the elements of all the columns.
In this case we could use the following command (assuming that the table has four different @code{AWAV-*} columns):
@example
$ asttable table.fits -c'arith AWAV-1 sumvalue' \
                      -c'arith AWAV-2 sumvalue' \
                      -c'arith AWAV-3 sumvalue' \
                      -c'arith AWAV-4 sumvalue'
@end example
To avoid repetition and mistakes, instead of using column arithmetic many times, we can use the @code{$_all} identifier.
When column arithmetic confronts this special string, it will repeat the expression for all the columns in the input table.
Therefore the command above can be written as:
@example
$ asttable table.fits -c'arith $_all sumvalue'
@end example
@end cartouche
Alternatively, if the columns have meta-data and the first two are respectively called @code{AWAV} and @code{SPECTRUM}, the command above is equivalent to the command below.
Note that the character `@key{$}' is no longer necessary in this scenario (because names will not be confused with numbers):
@example
$ asttable table.fits -cAWAV,SPECTRUM -c'arith AWAV 1e10 x'
@end example
Comparison of the two commands above clearly shows why it is recommended to use column names instead of numbers.
When the columns have descriptive names, the command/script actually becomes much more readable, describing the intent of the operation.
It is also independent of the low-level table structure: for the second command, the column numbers of the @code{AWAV} and @code{SPECTRUM} columns in @file{table.fits} are irrelevant.
Column arithmetic changes the values of the data within the column, so the old column metadata cannot be used any more.
By default, the output column of an arithmetic operation will be given generic metadata (for example, its name will be @code{ARITH_1}, which is hardly useful!).
But metadata are critically important, and it is good practice to always give each column a short but descriptive name, units, and also some comments for more explanation.
To add metadata to a column, you can use the @option{--colmetadata} option that is described in @ref{Invoking asttable} and @ref{Operation precedence in Table}.
Since the arithmetic expression is a value to @option{--column}, it does not have to be in a separate option, so the commands above are also identical to the command below (note that this only has one @option{-c} option).
Just be very careful with the quoting!
With the @option{--colmetadata} option, we are also giving a name, units and a comment to the third column.
@example
$ asttable table.fits -cAWAV,SPECTRUM,'arith AWAV 1e10 x' \
           --colmetadata=3,AWAV_A,angstrom,"Wavelength (in Angstroms)"
@end example
In case you need to append columns from other tables (with @option{--catcolumnfile}), you can use those extra columns in column arithmetic also.
The easiest, and most robust, way is that your columns of interest (in all files whose columns are to be merged) have different names.
In this scenario, you can simply use the names of the columns you plan to append.
If there are similar names, note that by default Table appends a @code{-N} to similar names (where @code{N} is the file counter given to @option{--catcolumnfile}, see the description of @option{--catcolumnfile} for more).
Using column numbers can get complicated: if the number is smaller than the main input's number of columns, the main input's column will be used.
Otherwise (when the requested column number is larger than the main input's number of columns), the final output (after appending all the columns from all the possible files) column number will be used.
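For example, assuming that both @file{table-1.fits} and @file{table-2.fits} contain a @code{MAG} column (a hypothetical sketch; as described above, the appended column will be renamed to @code{MAG-1}), you can subtract the two magnitudes like this:
@example
$ asttable table-1.fits --catcolumnfile=table-2.fits \
           -c'arith MAG MAG-1 -'
@end example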
Almost all the arithmetic operators of @ref{Arithmetic operators} are also supported for column arithmetic in Table.
In particular, the few that are not present in the Gnuastro library@footnote{For a list of the Gnuastro library arithmetic operators, please see the macros starting with @code{GAL_ARITHMETIC_OP} and ending with the operator name in @ref{Arithmetic on datasets}.} are not yet supported for column arithmetic.
Besides the operators in @ref{Arithmetic operators}, several operators are only available in Table to use on table columns.
@cindex WCS: World Coordinate System
@cindex World Coordinate System (WCS)
@table @code
@item wcs-to-img
Convert the given WCS positions to image/dataset coordinates based on the number of dimensions in the WCS structure of @option{--wcshdu} extension/HDU in @option{--wcsfile}.
It will output the same number of columns.
The first popped operand is the last FITS dimension.
For example, the two commands below (which have the same output) will produce 5 columns.
The first three columns are the input table's ID, RA and Dec columns.
The fourth and fifth columns will be the pixel positions in @file{image.fits} that correspond to each RA and Dec.
@example
$ asttable table.fits -cID,RA,DEC,'arith RA DEC wcs-to-img' \
           --wcsfile=image.fits
$ asttable table.fits -cID,RA -cDEC \
           -c'arith RA DEC wcs-to-img' --wcsfile=image.fits
@end example
@item img-to-wcs
Similar to @code{wcs-to-img}, except that image/dataset coordinates are converted to WCS coordinates.
@item eq-j2000-to-flat
Convert spherical RA and Dec (in Julian year 2000.0 equatorial coordinates; which are the most common) into RA and Dec on a flat surface based on the given reference point and projection.
The full details of the operands to this operator are given below, but let's start with a practical example to show the concept.
At (or very near) the reference point the output of this operator will be the same as the input.
But as you move away from the reference point, distortions due to the particular projection will gradually cause changes in the output (when compared to the input).
For example if you simply plot RA and Dec without this operator, a circular annulus on the sky will become elongated as the declination of its center goes farther from the equator.
For a demonstration of the difference between curved and flat RA and Decs, see @ref{Pointings that account for sky curvature} in the Tutorials chapter.
Let's assume you want to plot a set of RA and Dec points (defined on a spherical surface) in a paper (a flat surface) and that @file{table.fits} contains the RA and Dec in columns that are called @code{RA} and @code{DEC}.
With the command below, the points will be converted to flat-RA and flat-Dec using the Gnomonic projection (which is known as @code{TAN} in the FITS WCS standard; see the description of the first popped operand below):
@example
$ asttable table.fits \
           -c'arith RA set-r DEC set-d \
                    r d r meanvalue d meanvalue TAN \
                    eq-j2000-to-flat'
@end example
As you see, the RA and Dec (@code{r} and @code{d}) are the last two operators that are popped.
Before them, the reference point's coordinates are calculated from the mean of the RA and Decs (`@code{r meanvalue}' and `@code{d meanvalue}'), and the first popped operand is the projection (@code{TAN}).
We are using the mean RA and Dec as the reference point since we are assuming that this is the only set of points you want to convert.
In case you have to plot multiple sets of points in the same plot, you should give the same reference point in each separate conversion; like the example below:
@example
$ ref_ra=123.45
$ ref_dec=-6.789
$ asttable table-1.fits --output=flat-1.txt \
           -c'arith RA DEC '$ref_ra' '$ref_dec' TAN \
              eq-j2000-to-flat'
$ asttable table-2.fits --output=flat-2.txt \
           -c'arith RA DEC '$ref_ra' '$ref_dec' TAN \
              eq-j2000-to-flat'
@end example
This operator takes 5 operands:
@enumerate
@item
The @emph{first} popped operand (closest to the operator) is the standard FITS WCS projection to use; and should contain a single element (not a column).
The full list of projections can be seen in the description of the @option{--ctype} option in @ref{Align pixels with WCS considering distortions}.
The most common projection for smaller fields of view is @code{TAN} (Gnomonic), but when your input catalog contains large portions of the sky, projections like @code{MOL} (Mollweide) should be used.
This is because distortions caused by the @code{TAN} projection can become very significant after a couple of degrees from the reference point.
@item
The @emph{second} popped operand is the reference point's declination; and should contain a single value (not a column).
@item
The @emph{third} popped operand is the reference point's right ascension; and should contain a single value (not a column).
@item
The @emph{fourth} popped operand is the declination column of your input table (the points that will be converted).
@item
The @emph{fifth} popped operand is the right ascension column of your input table (the points that will be converted).
@end enumerate
@item eq-j2000-from-flat
The inverse of @code{eq-j2000-to-flat}.
In other words, you have a set of points defined on the flat RA and Dec (after the projection from spherical to flat), but you want to map them to spherical RA and Dec.
For an example, see @ref{Pointings that account for sky curvature} in the Gnuastro tutorials.
@item distance-flat
Return the distance between two points assuming they are on a flat surface.
Note that each point needs two coordinates, so this operator needs four operands (currently it only works for 2D spaces).
The first and second popped operands are considered to belong to one point and the third and fourth popped operands to the second point.
Each of the input points can be a single coordinate or a full table column (containing many points).
In other words, the following commands are all valid:
@example
$ asttable table.fits \
           -c'arith X1 Y1 X2 Y2 distance-flat'
$ asttable table.fits \
           -c'arith X Y 12.345 6.789 distance-flat'
$ asttable table.fits \
           -c'arith 12.345 6.789 X Y distance-flat'
@end example
In the first case, we are assuming that @file{table.fits} has the four columns @code{X1}, @code{Y1}, @code{X2} and @code{Y2}.
The column returned by this operator will contain, for each row, the distance between the points (@code{X1}, @code{Y1}) and (@code{X2}, @code{Y2}).
In other words, for each row, the distance between two different points is calculated.
In the second and third cases (which are identical), it is assumed that @file{table.fits} has the two columns @code{X} and @code{Y}.
The column returned by this operator will contain the distance of each row's point from the fixed point at (12.345, 6.789).
@item distance-on-sphere
Return the spherical angular distance (along a great circle, in degrees) between the given two points.
Note that each point needs two coordinates (in degrees), so this operator needs four operands.
The first and second popped operands are considered to belong to one point and the third and fourth popped operands to the second point.
Each of the input points can be a single coordinate or a full table column (containing many points).
In other words, the following commands are all valid:
@example
$ asttable table.fits \
           -c'arith RA1 DEC1 RA2 DEC2 distance-on-sphere'
$ asttable table.fits \
           -c'arith RA DEC 9.876 5.432 distance-on-sphere'
$ asttable table.fits \
           -c'arith 9.876 5.432 RA DEC distance-on-sphere'
@end example
In the first case, we are assuming that @file{table.fits} has the four columns @code{RA1}, @code{DEC1}, @code{RA2} and @code{DEC2}.
The column returned by this operator will contain, for each row, the angular distance between the points (@code{RA1}, @code{DEC1}) and (@code{RA2}, @code{DEC2}).
In other words, for each row, the angular distance between two different points is calculated.
In the second and third cases (which are identical), it is assumed that @file{table.fits} has the two columns @code{RA} and @code{DEC}.
The column returned by this operator will contain the angular distance of each row's point from the fixed point at (9.876, 5.432).
The distance (along a great circle) on a sphere between two points is calculated with the equation below, where @mymath{r_1}, @mymath{r_2}, @mymath{d_1} and @mymath{d_2} are the right ascensions and declinations of points 1 and 2.
@dispmath {\cos(d)=\sin(d_1)\sin(d_2)+\cos(d_1)\cos(d_2)\cos(r_1-r_2)}
@item ra-to-degree
Convert the hour-wise Right Ascension (RA) string, in the sexagesimal format of @code{_h_m_s} or @code{_:_:_}, to degrees.
Note that the input column has to have a string format.
In FITS tables, string columns are well-defined.
For plain-text tables, please follow the standards defined in @ref{Gnuastro text table format}, otherwise the string column will not be read.
@example
$ asttable catalog.fits -c'arith RA ra-to-degree'
$ asttable catalog.fits -c'arith $5 ra-to-degree'
@end example
@item dec-to-degree
Convert the sexagesimal Declination (Dec) string, in the format of @code{_d_m_s} or @code{_:_:_}, to degrees (a single floating point number).
For more details, please see the @code{ra-to-degree} operator.
@item degree-to-ra
@cindex Sexagesimal
@cindex Right Ascension
Convert degrees (a column with a single floating point number) to the Right Ascension, RA, string (in the sexagesimal format hours, minutes and seconds, written as @code{_h_m_s}).
The output will be a string column so no further mathematical operations can be done on it.
The output file can be in any format (for example, FITS or plain-text).
If it is plain-text, the string column will be written following the standards described in @ref{Gnuastro text table format}.
@item degree-to-dec
@cindex Declination
Convert degrees (a column with a single floating point number) to the Declination, Dec, string (in the format of @code{_d_m_s}).
See the @code{degree-to-ra} operator for more on the format of the output.
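For example, assuming a hypothetical @file{catalog.fits} with @code{RA} and @code{DEC} columns in degrees, the sketch below will return both as sexagesimal strings:
@example
$ asttable catalog.fits -c'arith RA degree-to-ra' \
           -c'arith DEC degree-to-dec'
@end example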
@item date-to-sec
@cindex Unix epoch time
@cindex Time, Unix epoch
@cindex Epoch, Unix time
Return the number of seconds from the Unix epoch time (00:00:00 Thursday, January 1st, 1970).
The input (popped) operand should be a string column in the FITS date format (most generally: @code{YYYY-MM-DDThh:mm:ss.ddd...}).
The returned operand will be named @code{UNIXSEC} (short for Unix-seconds) and will be a 64-bit, signed integer, see @ref{Numeric data types}.
If the input string has sub-second precision, it will be ignored because the returned value is an integer (floating point numbers cannot accurately store numbers with many significant digits).
To preserve sub-second precision, please use @code{date-to-millisec}.
For example, below we are using this operator, in combination with the @option{--keyvalue} option of the Fits program, to sort your desired FITS files by observation date (the value in the @code{DATE-OBS} keyword):
@example
$ astfits *.fits --keyvalue=DATE-OBS --colinfoinstdout \
  | asttable -cFILENAME,'arith DATE-OBS date-to-sec' \
             --colinfoinstdout \
  | asttable --sort=UNIXSEC
@end example
If you do not need to see the Unix-seconds any more, you can add a @option{-cFILENAME} (short for @option{--column=FILENAME}) at the end.
For more on @option{--keyvalue}, see @ref{Keyword inspection and manipulation}.
@item date-to-millisec
Return the number of milli-seconds from the Unix epoch time (00:00:00 Thursday, January 1st, 1970).
The input (popped) operand should be a string column in the FITS date format (most generally: @code{YYYY-MM-DDThh:mm:ss.ddd...}, where @code{.ddd} is the optional sub-second component).
The returned operand will be named @code{UNIXMILLISEC} (short for Unix milli-seconds) and will be a 64-bit, signed integer, see @ref{Numeric data types}.
The returned value is not a floating point type because for large numbers, floating point data types lose single-digit precision (which is important here).
Other than the units of the output, this operator behaves similarly to @code{date-to-sec}.
See the description of that operator for an example.
@item sorted-to-interval
Given a single column (which must be already sorted and have a numeric data type), return two columns: the first returned column is the minimum and the second returned column is the maximum value of the interval of each row.
The minimum of each row is equal to the maximum of the previous row, thus creating a contiguous interval coverage of the input column's range in all rows.
The minimum value of the first row and maximum of the last row will be smaller/larger than the respective row of the input (based on the distance to the next/previous element).
This is done to ensure that if your input has a fixed interval length between all elements, the first and last intervals also have that fixed length.
For example, with the command below, we'll use this operator on a hypothetical radial profile.
Note how the intervals are contiguous even though the radius values are not equally distant (if the row with a radius of 2.5 didn't exist, the intervals would all be the same length).
For another example of the usage of this operator, see the example in the description of @option{--customtable} in @ref{MakeProfiles profile settings}.
@example
$ cat radial-profile.txt
# Column 1: RADIUS [pix,f32,] Distance to center in pixels.
# Column 2: MEAN [ADU,f32,] Mean value at that radius.
0 100
1 40
2 30
2.5 25
3 20
$ asttable radial-profile.txt --txtf32f=fixed --txtf32p=3 \
           -c'arith RADIUS sorted-to-interval',MEAN
-0.500 0.500 100.000
0.500 1.500 40.000
1.500 2.250 30.000
2.250 2.750 25.000
2.750 3.250 20.000
@end example
Such intervals can be useful in scenarios like generating the input to @option{--customtable} in MakeProfiles (see @ref{MakeProfiles profile settings}) from a radial profile (see @ref{Generate radial profile}).
@end table
@node Operation precedence in Table, Invoking asttable, Column arithmetic, Table
@subsection Operation precedence in Table
The Table program can do many operations on the rows and columns of the input tables and they are not always applied in the order you call the operation on the command-line.
In this section we will describe which operation is done before/after which operation.
Knowing this precedence table is important to avoid confusion when you ask for more than one operation.
For a description of each option, please see @ref{Invoking asttable}.
By default, column-based operations will be done first.
You can ask for switching to row-based operations to be done first, using the @option{--rowfirst} option.
@cartouche
@noindent
@strong{Pipes for different precedence:} It may happen that your desired series of operations cannot be done with the precedence mentioned below (in one command).
In this case, you can pipe the output of one call to @command{asttable} to another @command{asttable}.
Just don't forget to give @option{-O} (or @option{--colinfoinstdout}) to the first instance (so the column metadata are also passed to the next instance).
Without metadata, all numbers will be read as double-precision (see @ref{Gnuastro text table format}; recall that piping is done in plain text format), vector columns will be broken into single-valued columns, and column names, units and comments will be lost.
At the end of this section, there is an example of doing this.
@end cartouche
@table @asis
@item Input table information
The first set of operations that will be performed (if requested) are the printing of the input table information.
Therefore, when the following options are called, the column data are not read at all.
Table simply reads the main input's column metadata (name, units, numeric data type and comments), and the number of rows and prints them.
Table then terminates and no other operation is done.
These can therefore be called at the end of an arbitrarily long Table command, when you have forgotten some information about the input table.
You can then delete these options and continue writing the command (using the shell's history to retrieve the previous command with an up-arrow key).
At any time, only a single one of the options in this category may be called; they are checked in the same order as they are described below (see the examples after this list):
@table @asis
@item Column and row information (@option{--information} or @option{-i})
Print the list of input columns and their metadata: the name, numeric data type, units and comments of each column are printed in a separate row of the output.
Finally, print the number of rows.
@item Number of columns (@option{--info-num-cols})
Print the number of columns in the input table.
Only a single integer (number of columns) is printed before Table terminates.
@item Number of rows (@option{--info-num-rows})
Print the number of rows in the input table.
Only a single integer (number of rows) is printed before Table terminates.
@end table
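For example, each of the calls in the sketch below only prints the requested information about a hypothetical @file{table.fits} and terminates:
@example
$ asttable table.fits -i               # Column metadata and rows.
$ asttable table.fits --info-num-cols  # Only number of columns.
$ asttable table.fits --info-num-rows  # Only number of rows.
@end example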
@item Column selection (@option{--column})
When this option is given, only the columns given to this option (from the main input) will be used for all future steps.
When @option{--column} (or @option{-c}) is not given, then all the main input's columns will be used in the next steps.
@item Column-based operations
By default the following column-based operations will be done before the row-based operations in the next item.
If you need to give precedence to row-based operations, use @option{--rowfirst}.
@table @asis
@item Column(s) from other file(s): @option{--catcolumnfile}
When column concatenation (addition) is requested, columns from other tables (in other files, or other HDUs of the same FITS file) will be added after the existing columns are read from the main input.
In one command, you can call @option{--catcolumnfile} multiple times to allow addition of columns from many files.
Therefore you can merge the columns of various tables into one table in this step (at the start), then start adding/limiting the rows or building vector columns.
If any of the row-based operations below are requested in the same @command{asttable} command, they will also be applied to the rows of the added columns.
However, the conditions to keep/reject rows can only be applied to the rows of the columns in the main input table (not the columns that are added with these options).
@item Extracting single-valued columns from vectors (@option{--fromvector})
Once all the input columns are read into memory, if any of them are vectors, you can extract a single-valued column from the vector columns at this stage.
For more on vector columns, see @ref{Vector columns}.
@item Creating vector columns (@option{--tovector})
Other than column arithmetic (the next step), there is no other way to add new columns, so the @option{--tovector} operator is applied at this stage.
You can use it to merge multiple columns that are available in this stage to a single vector column.
For more, see @ref{Vector columns}.
@item Column arithmetic
Once the final columns are selected in the requested order, column arithmetic is done (if requested).
For more on column arithmetic, see @ref{Column arithmetic}.
@end table
@item Row-based operations
Row-based operations only work within the rows of existing columns when they are activated.
By default row-based operations are activated after column-based operations (which are mentioned above).
If you need to give precedence to row-based operations, use @option{--rowfirst}.
@table @asis
@item Rows from other file(s) (@option{--catrowfile})
With this feature, you can import rows from other tables (in other files, or other HDUs of the same FITS file).
The same column selection of @option{--column} is applied to the tables given to this option.
The column metadata (name, units and comments) will be taken from the main input.
Two conditions are mandatory for adding rows:
@itemize
@item
The number of columns used from the new tables must be equal to the number of columns in memory, by the time control reaches here.
@item
The data type of each column (see @ref{Numeric data types}) should be the same as the respective column in memory by the time control reaches here.
If the data types are different, you can use the type conversion operators of column arithmetic which has higher precedence (and will therefore be applied before this by default).
For more on type conversion, see @ref{Numerical type conversion operators} and @ref{Column arithmetic}).
@end itemize
@item Row selection by value in a column
The following operations select rows based on the values in them.
A more complete description of each of these options is given in @ref{Invoking asttable}.
@itemize
@item
@option{--range}: only keep rows where the value in the given column is within a certain interval.
@item
@option{--inpolygon}: only keep rows where the value is within the polygon of @option{--polygon}.
@item
@option{--outpolygon}: only keep rows outside the polygon of @option{--polygon}.
@item
@option{--equal}: only keep rows with a specified value in the given column.
@item
@option{--notequal}: only keep rows without the specified value in the given column.
@item
@option{--noblank}: only keep rows that are not blank in the given column(s).
@end itemize
These options can be called any number of times (to limit the final rows based on values in different columns for example).
Since these are row-rejection operations, their internal order is irrelevant.
In other words, it makes no difference if @option{--equal} is called before or after @option{--range} for example.
As a side-effect, because NaN/blank values are defined to fail on any condition, these operations will also remove rows with NaN/blank values in the specified column they are checking.
Also, the columns that are used for these operations do not necessarily have to be in the final output table (you may not need the column after doing the selection based on it).
By default, these options are applied after merging columns from other tables.
However, currently, the column given to these options can only come from the main input table.
If you need to apply these operations on columns from @option{--catcolumnfile}, pipe the output of one instance of Table with @option{--catcolumnfile} into another instance of Table as suggested in the box above this list.
These row-selection operations are applied early because the speed of the later operations can be greatly affected by the number of rows.
For example, if you also call the @option{--sort} option, and your row selection will result in 50 rows (from an input of 10000 rows), limiting the number of rows first will greatly speed up the sorting in your final output.
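For example, the sketch below (with a hypothetical @file{cat.fits} containing @code{MAGNITUDE} and @code{FLAG} columns) combines two of these options to keep only the rows with a magnitude between 18 and 22 and a flag value of 0:
@example
$ asttable cat.fits -cRA,DEC --range=MAGNITUDE,18:22 \
           --equal=FLAG,0
@end example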
@item Sorting (@option{--sort})
Sort the rows based on the values in a certain column.
The column to sort by can only come from the main input table columns (not columns that may have been added with @option{--catcolumnfile}).
@item Row selection (by position)
@itemize
@item
@option{--head}: keep only requested number of top rows.
@item
@option{--tail}: keep only requested number of bottom rows.
@item
@option{--rowrandom}: keep only a requested number of randomly selected rows.
@item
@option{--rowrange}: keep only rows within a certain positional interval.
@end itemize
These options limit/select rows based on their position within the table (not their value in any certain column).
@item Transpose vector columns (@option{--transpose})
Transposing vector columns will not affect the number or metadata of columns; it will just re-arrange them in their 2D structure.
As a result, after transposing, the number of rows changes, as well as the number of elements in each vector column.
See the description of this option in @ref{Invoking asttable} for more (with an example).
@end table
@item Column metadata (@option{--colmetadata})
Once the structure of the final table is set, you can set the column metadata just before finishing.
@item Output row selection (@option{--noblankend})
Only keep the output rows that do not have a blank value in the given column(s).
This is useful when, for example, you need to apply arithmetic operations on the columns (through @ref{Column arithmetic}) before rejecting the undesired rows.
After the arithmetic operation is done, you can use the @code{where} operator to set the undesired values to NaN/blank, and then use the @option{--noblankend} option to remove those rows just before writing the output.
In other scenarios, you may want to remove blank values based on columns in another table.
To help in readability, you can also use the final column names that you set with @option{--colmetadata}!
See the example below for applying any generic value-based row selection based on @option{--noblankend}.
@end table
As an example, let's review how Table interprets the command below.
We are assuming that @file{table.fits} contains at least three columns: @code{RA}, @code{DEC} and @code{PARAM}, and that you only want the RA and Dec of the rows where @mymath{p\times 2<5} (@mymath{p} is the value of each row in the @code{PARAM} column).
@example
$ asttable table.fits -cRA,DEC --noblankend=MULTIP \
           -c'arith PARAM 2 x set-i i i 5 gt nan where' \
           --colmetadata=3,MULTIP,unit,"Description of column"
@end example
@noindent
Due to the precedence described in this section, Table does these operations (which are independent of the order of the operations written on the command-line):
@enumerate
@item
At the start (with @code{-cRA,DEC}), Table reads the @code{RA} and @code{DEC} columns.
@item
Of all the operations in the command above, column arithmetic (with @option{-c'arith ...'}) has the highest precedence.
So the arithmetic operation is done and stored as a new (third) column.
In this arithmetic operation, we multiply all the values of the @code{PARAM} column by 2, then set all those with a value larger than 5 to NaN (for more on understanding this operation, see the `@code{set-}' and `@code{where}' operators in @ref{Arithmetic operators}).
@item
Updating column metadata (with @option{--colmetadata}) is then done to give a name (@code{MULTIP}) to the newly calculated (third) column.
During the process, besides a name, we also set a unit and description for the new column.
These metadata entries are @emph{very important}, so always be sure to add metadata after doing column arithmetic.
@item
The lowest precedence operation is @option{--noblankend=MULTIP}.
So only rows that are not blank/NaN in the @code{MULTIP} column are kept.
@item
Finally, the output table (with three columns) is written to the command-line.
If you also want to print the column metadata, you can use the @option{-O} (or @option{--colinfoinstdout}) option.
Alternatively, if you want the output in a file, you can use the @option{--output} option to save the table in FITS or plain-text format.
@end enumerate
It may happen that your desired operation needs a separate precedence.
In this case you can pipe the output of Table into another call of Table and use the @option{-O} (or @option{--colinfoinstdout}) option to preserve the metadata between the two calls.
For example, let's assume that you want to sort the output table from the example command above based on the new @code{MULTIP} column.
Since the column to sort by must be present in the main input table (and @code{MULTIP} is only created through column arithmetic), you cannot do it in one command; but you can circumvent this limitation by simply piping the output (including metadata) to another call to Table:
@example
$ asttable table.fits -cRA,DEC --noblankend=MULTIP --colinfoinstdout \
           -c'arith PARAM 2 x set-i i i 5 gt nan where' \
           --colmetadata=3,MULTIP,unit,"Description of column" \
           | asttable --sort=MULTIP --output=selected.fits
@end example
@node Invoking asttable, , Operation precedence in Table, Table
@subsection Invoking Table
Table will read/write, select, modify, or show the information of the rows and columns in recognized Table formats (including FITS binary, FITS ASCII, and plain text table files, see @ref{Tables}).
Output columns can also be determined by number or regular expression matching of column names, units, or comments.
The executable name is @file{asttable} with the following general template
@example
$ asttable [OPTION...] InputFile
@end example
@noindent
One line examples:
@example
## Get the table column information (name, units, or data type), and
## the number of rows:
$ asttable table.fits --information
## Print columns named RA and DEC, followed by all the columns where
## the name starts with "MAG_":
$ asttable table.fits --column=RA --column=DEC --column=/^MAG_/
## Similar to the above, but with one call to `--column' (or `-c'),
## also sort the rows by the input's photometric redshift (`Z_PHOT')
## column. To confirm the sort, you can add `Z_PHOT' to the columns
## to print.
$ asttable table.fits -cRA,DEC,/^MAG_/ --sort=Z_PHOT
## Similar to the above, but only print rows that have a photometric
## redshift between 2 and 3.
$ asttable table.fits -cRA,DEC,/^MAG_/ --range=Z_PHOT,2:3
## Only print rows with a value in the 10th column above 100000:
$ asttable table.txt --range=10,1e5,inf
## Only print the 2nd column, and the third column multiplied by 5;
## save the resulting two columns in `table.txt'.
$ asttable table.fits -c2,'arith $3 5 x' -otable.txt
## Sort the output rows by the values in the third column; save output:
$ asttable table.fits --sort=3 -ooutput.txt
## Subtract the first column from the second in `cat.txt' (can also
## be a FITS table) and keep the third and fourth columns.
$ asttable cat.txt -c'arith $2 $1 -',3,4 -ocat.fits
## Convert sexagesimal coordinates to degrees (same can be done in a
## large table given as argument).
$ echo "7h34m35.5498 31d53m14.352s" | asttable
## Convert RA and Dec in degrees to sexagesimal (same can be done in a
## large table given as argument).
$ echo "113.64812416667 31.88732" \
       | asttable -c'arith $1 degree-to-ra $2 degree-to-dec'
## Extract columns 1 and 2, as well as all those between 12 to 58:
$ asttable table.fits -c1,2,$(seq -s',' 12 58)
@end example
Table's input dataset can be given either as a file or from Standard input (piped from another program, see @ref{Standard input}).
In the absence of selected columns, all the input's columns and rows will be written to the output.
The full set of operations Table can do is described in detail below, but for a more high-level introduction to the various operations and their precedence, see @ref{Operation precedence in Table}.
If an output file is explicitly requested (with @option{--output}), the output table will be written to it.
When no output file is explicitly requested, the output table will be written to the standard output.
If the specified output is a FITS file, the type of FITS table (binary or ASCII) will be determined from the @option{--tabletype} option.
If the output is not a FITS file, it will be printed as a plain text table (with space characters between the columns).
When the output is not binary (for example, the standard output or a plain-text file), the @option{--txtf32*} or @option{--txtf64*} options can be used for the formatting of floating point columns (see @ref{Printing floating point numbers}).
When the columns are accompanied by meta-data (like column name, units, or comments), this information will also be printed in the plain text file before the table, as described in @ref{Gnuastro text table format}.
For the full list of options common to all Gnuastro programs please see @ref{Common options}.
Options can also be stored in directory, user, or system-wide configuration files to avoid repeating them on the command-line, see @ref{Configuration files}.
Table does not follow the automatic output convention that is common in most Gnuastro programs (see @ref{Automatic output}).
Thus, in the absence of an output file, the selected columns will be printed on the command-line with no column information, ready for redirecting to other tools like @command{awk}.
@cartouche
@noindent
@strong{Sexagesimal coordinates as floats in plain-text tables:}
When a column is determined to be a floating point type (32-bit or 64-bit) in a plain-text table, it can contain sexagesimal values in the format of `@code{_h_m_s}' (for RA) and `@code{_d_m_s}' (for Dec), where the `@code{_}'s are place-holders for numbers.
In this case, the string will be immediately converted to a single floating point number (in units of degrees) and stored in memory with the rest of the column or table.
Besides being useful in large tables, with this feature, conversion of sexagesimal coordinates to degrees becomes very easy, for example:
@example
echo "7h34m35.5498 31d53m14.352s" | asttable
@end example
@noindent
The inverse can also be done with the more general column arithmetic
operators:
@example
echo "113.64812416667 31.88732" \
     | asttable -c'arith $1 degree-to-ra $2 degree-to-dec'
@end example
@noindent
If you want to preserve the sexagesimal contents of a column, you should store that column as a string, see @ref{Gnuastro text table format}.
@end cartouche
@table @option
@item -i
@itemx --information
Only print the column information in the specified table on the command-line and exit.
Each column's information (number, name, units, data type, and comments) will be printed as a row on the command-line.
If the column is a multi-valued (vector) column, a @code{[N]} is printed after the type, where @code{N} is the number of elements within that vector.
Note that the FITS standard only requires the data type (see @ref{Numeric data types}), and in plain text tables, no meta-data/information is mandatory.
Gnuastro has its own convention in the comments of a plain text table to store and transfer this information as described in @ref{Gnuastro text table format}.
This option will take precedence over all other operations in Table, so when it is called along with other operations, they will be ignored, see @ref{Operation precedence in Table}.
This can be useful if you forget the identifier of a column after you have already typed some on the command-line.
You can simply add a @option{-i} to your already-written command (without changing anything) and run Table, to see the whole list of column names and information.
Then you can use the shell history (with the up arrow key on the keyboard) to retrieve the last command with all the previously typed columns present, delete @option{-i}, and add the identifier you had forgotten.
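As a sketch of this workflow (with a hypothetical @file{cat.fits}), appending @option{-i} to an already-written command will only print the column information and ignore the rest:
@example
$ asttable cat.fits -cRA,DEC --range=MAG,20:24 -i
@end example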
@item --info-num-cols
Similar to @option{--information}, but only the number of the input table's columns will be printed as a single integer (useful in scripts for example).
@item --info-num-rows
Similar to @option{--information}, but only the number of the input table's rows will be printed as a single integer (useful in scripts for example).
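For example, a minimal shell-script sketch (with a hypothetical @file{cat.fits}) that stores both counts in variables:
@example
$ ncols=$(asttable cat.fits --info-num-cols)
$ nrows=$(asttable cat.fits --info-num-rows)
$ echo "cat.fits has $ncols columns and $nrows rows"
@end example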
@cindex AWK
@cindex GNU AWK
@item -c STR/INT
@itemx --column=STR/INT
Set the output columns either by specifying the column number, or name.
For more on selecting columns, see @ref{Selecting table columns}.
If a value of this option starts with `@code{arith }', column arithmetic will be activated, allowing you to edit/manipulate column contents.
For more on column arithmetic see @ref{Column arithmetic}.
To ask for multiple columns, this option can be used in two ways: 1) multiple calls to this option; 2) using a comma between each column specifier in one call to this option.
These different solutions may be mixed in one call to Table: for example, `@option{-cRA,DEC,MAG}', or `@option{-cRA,DEC -cMAG}' are both equivalent to `@option{-cRA -cDEC -cMAG}'.
The order of the output columns will be the same order given to the option or in the configuration files (see @ref{Configuration file precedence}).
This option is not mandatory, if no specific columns are requested, all the input table columns are output.
When this option is called multiple times, it is possible to output one column more than once.
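For example, the sketch below (with a hypothetical @file{cat.fits}) prints the @code{RA} column twice, once at the start and once at the end of the output:
@example
$ asttable cat.fits -cRA,DEC,MAG -cRA
@end example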
@cartouche
@noindent
@strong{Sequence of columns:} when dealing with a large number of columns (hundreds, for example!), it will be frustrating, annoying and error-prone to insert the column identifiers manually.
If you want to read all the input columns, you can give the special @code{_all} value to the @option{--column} option.
A more generic solution (for example if you want every second one, or all the columns within a special range) is to use the @command{seq} command's features with an extra @option{-s','} (so a comma is used as the ``separator'').
For example if you want columns 1, 2 and all columns between 12 to 58 (inclusive), you can use the following command:
@example
$ asttable table.fits -c1,2,$(seq -s',' 12 58)
@end example
@end cartouche
@item -w FITS
@itemx --wcsfile=FITS
FITS file that contains the WCS to be used in the @code{wcs-to-img} and @code{img-to-wcs} operators of @ref{Column arithmetic}.
The extension name/number within the FITS file can be specified with @option{--wcshdu}.
If the value to this option is `@option{none}', no WCS will be written in the output.
@item -W STR
@itemx --wcshdu=STR
FITS extension/HDU in the FITS file given to @option{--wcsfile} (see the description of @option{--wcsfile} for more).
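For example, a minimal sketch that uses both options (assuming a hypothetical @file{cat.fits} with @code{RA} and @code{DEC} columns, and an @file{image.fits} whose first extension contains a WCS) to convert the catalog's celestial coordinates into pixel coordinates of that image:
@example
$ asttable cat.fits -c'arith RA DEC wcs-to-img' \
           --wcsfile=image.fits --wcshdu=1
@end example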
@item -L FITS/TXT
@itemx --catcolumnfile=FITS/TXT
Concatenate (or add, or append) the columns of this option's value (a filename) to the output columns.
This option may be called multiple times (to add columns from more than one file into the final output); the columns from each file will be added in the same order that this option is called.
The number of rows in the file(s) given to this option has to be the same as the input table (before any type of row-selection), see @ref{Operation precedence in Table}.
By default all the columns of the given file will be appended, if you only want certain columns to be appended, use the @option{--catcolumns} option to specify their name or number (see @ref{Selecting table columns}).
Note that the columns given to @option{--catcolumns} must be present in all the given files (if this option is called more than once with more than one file).
If the file given to this option is a FITS file, it is necessary to also define the corresponding HDU/extension with @option{--catcolumnhdu}.
Also note that no operation (such as row selection and arithmetic) is applied to the table given to this option.
If the appended columns have a name, and their name is already present in the table before adding those columns, the column names of each file will be appended with a @code{-N}, where @code{N} is a counter starting from 1 for each appended table.
This is done because when concatenating columns from multiple tables (more than two) into one, they may have the same name, and it is not good practice to have multiple columns with the same name.
Just note that in the FITS standard (and thus in Gnuastro), column names are not case-sensitive.
You can disable this feature with @option{--catcolumnrawname}.
Generally, you can use the @option{--colmetadata} option to update column metadata in the same command, after all the columns have been concatenated.
For example, let's assume you have two catalogs of the same objects (same number of rows) in different filters, such that @file{f160w-cat.fits} has a @code{MAGNITUDE} column with the magnitude of each object in the @code{F160W} filter, and @file{f105w-cat.fits} also has a @code{MAGNITUDE} column, but for the @code{F105W} filter.
You can use column concatenation like below to import the @code{MAGNITUDE} column from the @code{F105W} catalog into the @code{F160W} catalog, while giving each magnitude column a different name:
@example
$ asttable f160w-cat.fits --output=both.fits \
           --catcolumnfile=f105w-cat.fits --catcolumns=MAGNITUDE \
           --colmetadata=MAGNITUDE,MAG-F160W,log,"Magnitude in F160W" \
           --colmetadata=MAGNITUDE-1,MAG-F105W,log,"Magnitude in F105W"
@end example
@noindent
For a more complete example, see @ref{Working with catalogs estimating colors}.
@cartouche
@noindent
@strong{Loading external columns with Arithmetic:} an alternative way to load external columns into your output is to use column arithmetic (@ref{Column arithmetic}), in particular the @option{load-col-} operator described in @ref{Loading external columns}.
However, that operator will load only one column per file/HDU every time it is called.
So if you have many columns to insert, it is much faster to use @option{--catcolumnfile}: it will load all the columns in one opening of the file, and possibly even read them all into memory in parallel!
@end cartouche
@item -u STR/INT
@itemx --catcolumnhdu=STR/INT
The HDU/extension of the FITS file(s) that should be concatenated, or appended, by column with @option{--catcolumnfile}.
If @option{--catcolumnfile} is called more than once with more than one FITS file, it is necessary to call this option more than once.
The HDUs will be loaded in the same order as the FITS files given to @option{--catcolumnfile}.
@item -C STR/INT
@itemx --catcolumns=STR/INT
The column(s) in the file(s) given to @option{--catcolumnfile} to append.
When this option is not given, all the columns will be concatenated.
See @option{--catcolumnfile} for more.
@item --catcolumnrawname
Do not modify the names of the concatenated (appended) columns, see description in @option{--catcolumnfile}.
@item --transpose
Transpose (as in a matrix) the given vector column(s) individually.
When this operation is done (see @ref{Operation precedence in Table}), only vector columns of the same data type and with the same number of elements should exist in the table.
A usage of this operator is presented in the IFU spectroscopy tutorial in @ref{Extracting a single spectrum and plotting it}.
As a generic example, see the commands below.
The @file{in.txt} table below has two vector columns (each with three elements) in two rows.
After running @command{asttable} with @option{--transpose}, you can see how the vector columns have two elements per row (@code{u8(3)} has been replaced by @code{u8(2)}), and that the table now has three rows.
@example
$ cat in.txt
# Column 1: abc [nounits,u8(3),] First vector column.
# Column 2: def [nounits,u8(3),] Second vector column.
111 112 113 211 212 213
121 122 123 221 222 223
$ asttable in.txt --transpose -O
# Column 1: abc [nounits,u8(2),] First vector column.
# Column 2: def [nounits,u8(2),] Second vector column.
111 121 211 221
112 122 212 222
113 123 213 223
@end example
@item --fromvector=STR,INT[,INT[,INT]]
Extract the given tokens/elements from the given vector column into separate single-valued columns.
The input vector column can be identified by its name or counter, see @ref{Selecting table columns}.
After the columns are extracted, the input vector is deleted by default.
To preserve the input vector column, you can use @option{--keepvectfin} described below.
For a complete usage scenario see @ref{Vector columns}.
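For example, using the @file{in.txt} table shown in the description of @option{--transpose} (above), the command below extracts the first and third elements of the @code{abc} vector column into single-valued columns (since @option{--keepvectfin} is not called, @code{abc} itself is deleted):
@example
$ asttable in.txt --fromvector=abc,1,3 -O
@end example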
@item --tovector=STR/INT,STR/INT[,STR/INT]
Move the given columns into a newly created vector column.
The given columns can be identified by their name or counter, see @ref{Selecting table columns}.
After the columns are copied, they are deleted by default.
To preserve the inputs, you can use @option{--keepvectfin} described below.
For a complete usage scenario see @ref{Vector columns}.
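As a minimal sketch (assuming a hypothetical @file{cat.fits} with single-valued @code{MAG1} and @code{MAG2} columns of the same numeric type), the command below merges them into a single two-element vector column:
@example
$ asttable cat.fits --tovector=MAG1,MAG2 -O
@end example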
@item -k
@itemx --keepvectfin
Do not delete the input column(s) when using @option{--fromvector} or @option{--tovector}.
@item -R FITS/TXT
@itemx --catrowfile=FITS/TXT
Add the rows of the given file to the output table.
The number and data types of the selected columns in the table(s) given to this option must be the same as the table in memory by the time control reaches this phase (after column selection and column concatenation); for more, see @ref{Operation precedence in Table}.
For example, if @file{a.fits}, @file{b.fits} and @file{c.fits} have the columns @code{RA}, @code{DEC} and @code{MAGNITUDE} (possibly in different column-numbers in their respective table, along with many more columns), the command below will add their rows into the final output that will only have these three columns:
@example
$ asttable a.fits --catrowfile=b.fits --catrowhdu=1 \
           --catrowfile=c.fits --catrowhdu=1 \
           -cRA,DEC,MAGNITUDE --output=allrows.fits
@end example
@cartouche
@cindex Provenance
@noindent
@strong{Provenance of each row:} When merging rows from separate catalogs, it is important to keep track of the source catalog of each row (its provenance).
To do this, you can use @option{--catrowfile} in combination with the @code{constant} operator and @ref{Column arithmetic}.
For a working example of this scenario, see the example within the documentation of the @code{constant} operator in @ref{New operands}.
@end cartouche
@cartouche
@noindent
@strong{How to avoid repetition when adding rows:} this option will simply add the rows of multiple tables into one; it does not check their contents!
Therefore if you use this option on multiple catalogs that may have some shared physical objects in some of their rows, those rows/objects will be repeated in the final table.
In such scenarios, to avoid potential repetition, it is better to use @ref{Match} (with @option{--notmatched} and @option{--outcols=AAA,BBB}) instead of Table.
For more on using Match for this scenario, see the description of @option{--outcols} in @ref{Invoking astmatch}.
@end cartouche
@item -X STR
@itemx --catrowhdu=STR
The HDU/extension of the FITS file(s) that should be concatenated, or appended, by rows with @option{--catrowfile}.
If @option{--catrowfile} is called more than once with more than one FITS file, it is necessary to call this option more than once also (once for every FITS table given to @option{--catrowfile}).
The HDUs will be loaded in the same order as the FITS files given to @option{--catrowfile}.
@item -O
@itemx --colinfoinstdout
@cindex Standard output
Add column metadata when the output is printed in the standard output.
Usually the standard output is used for a fast visual check, or to pipe into other metadata-agnostic programs (like AWK) for further processing.
So by default, metadata is not included.
But when piping to other Gnuastro programs (where metadata can be interpreted and used) it is recommended to use this option and use column names in the next program.
@item -r STR,FLT:FLT
@itemx --range=STR,FLT:FLT
Only output rows that have a value within the given range in the @code{STR} column (can be a name or counter).
Note that the range is only inclusive in the lower-limit.
For example, with @code{--range=sn,5:20} the output's columns will only contain rows that have a value in the @code{sn} column (not case-sensitive) that is greater or equal to 5, and less than 20.
You can also use a comma instead of the colon to separate the two values, as in @code{--range=sn,5,20}.
For the precedence of this operation in relation to others, see @ref{Operation precedence in Table}.
This option can be called multiple times (different ranges for different columns) in one run of the Table program.
This is very useful for selecting the final rows from multiple criteria/columns.
The chosen column does not have to be in the output columns.
This is good when you just want to select using one column's values, but do not need that column anymore afterwards.
For one example of using this option, see the example under @option{--sigclip-median} in @ref{Invoking aststatistics}.
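For example, the sketch below calls this option twice (on a hypothetical @file{cat.fits}) to keep only the rows within a small box in RA and Dec:
@example
$ asttable cat.fits --range=RA,53.10:53.20 \
           --range=DEC,-27.85:-27.75
@end example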
@item --inpolygon=STR1,STR2
Only return rows where the given coordinates are inside the polygon specified by the @option{--polygon} option.
The coordinate columns are the given @code{STR1} and @code{STR2} columns, they can be a column name or counter (see @ref{Selecting table columns}).
For the precedence of this operation in relation to others, see @ref{Operation precedence in Table}.
Note that the chosen columns do not have to be among the output columns (which are specified by the @option{--column} option).
For example, if we want to select rows in the polygon specified in @ref{Dataset inspection and cropping}, this option can be used like this (you can remove the double quotations and write them all in one line if you remove the white-spaces around the colons separating the polygon vertices):
@example
$ asttable table.fits --inpolygon=RA,DEC \
           --polygon="53.187414,-27.779152 \
                      : 53.159507,-27.759633 \
                      : 53.134517,-27.787144 \
                      : 53.161906,-27.807208"
@end example
@cartouche
@noindent
@strong{Flat/Euclidean space:} The @option{--inpolygon} option assumes a flat/Euclidean space, so it is only correct for RA and Dec when the polygon size is very small, as in the example above.
If your polygon is a degree or larger, it may not return correct results.
Please get in touch if you need such a feature (see @ref{Suggest new feature}).
@end cartouche
@item --outpolygon=STR1,STR2
Only return rows where the given coordinates are outside the polygon specified by the @option{--polygon} option.
This option is very similar to the @option{--inpolygon} option, so see the description there for more.
@item --polygon=STR
@itemx --polygon=FLT,FLT:FLT,FLT:...
The polygon to use for the @option{--inpolygon} and @option{--outpolygon} options.
This option is parsed in an identical way to the same option in the Crop program, so for more information on how to use it, see @ref{Crop options}.
@item -e STR,INT/FLT/STR,...
@itemx --equal=STR,INT/FLT/STR,...
Only output rows that are equal to the given string(s)/number(s) in the given column.
The first value is the column identifier (name or number, see @ref{Selecting table columns}), after that you can specify any number of values.
For the precedence of this operation in relation to others, see @ref{Operation precedence in Table}.
For example, @option{--equal=ID,5,6,8} will only print the rows that have a value of 5, 6, or 8 in the @code{ID} column.
This option can also be called multiple times, so @option{--equal=ID,4,5 --equal=ID,6,7} has the same effect as @option{--equal=ID,4,5,6,7}.
@cartouche
@noindent
@strong{Equality and floating point numbers:} Floating point numbers are only approximate values (see @ref{Numeric data types}).
In this context, their equality depends on how the input table was originally stored (as a plain text table or as an ASCII/binary FITS table).
If you want to select floating point numbers, it is strongly recommended to use the @option{--range} option and set a very small interval around your desired number, do not use @option{--equal} or @option{--notequal}.
@end cartouche
The @option{--equal} and @option{--notequal} options also work when the given column has a string type.
In this case the given value to the option will also be parsed as a string, not as a number.
When dealing with string columns, be careful with trailing white space characters (the actual value may be adjusted to the right, left, or center of the column's width).
If you need to account for such white spaces, you can use shell quoting.
For example, @code{--equal=NAME," myname "}.
@cartouche
@noindent
@strong{Strings with a comma (,):} When your desired column values contain a comma, you need to put a `@code{\}' before the internal comma (within the value).
Otherwise, the comma will be interpreted as a delimiter between multiple values, and anything after it will be interpreted as a separate string.
For example, assume column @code{AB} of your @file{table.fits} contains this value: `@code{cd,ef}' in your desired rows.
To extract those rows, you should use the command below:
@example
$ asttable table.fits --equal=AB,cd\,ef
@end example
@end cartouche
@item -n STR,INT/FLT,...
@itemx --notequal=STR,INT/FLT,...
Only output rows that are @emph{not} equal to the given number(s) in the given column.
The first argument is the column identifier (name or number, see @ref{Selecting table columns}), after that you can specify any number of values.
For example, @option{--notequal=ID,5,6,8} will only print the rows where the @code{ID} column does not have a value of 5, 6, or 8.
This option can also be called multiple times, so @option{--notequal=ID,4,5 --notequal=ID,6,7} has the same effect as @option{--notequal=ID,4,5,6,7}.
Be very careful if you want to use the non-equality with floating point numbers, see the special note under @option{--equal} for more.
This option also works when the given column has a string type, see the description under @option{--equal} (above) for more.
@item -b STR[,STR[,STR]]
@itemx --noblank=STR[,STR[,STR]]
Only output rows that are @emph{not} blank in the given column of the @emph{input} table.
Like above, the columns can be specified by their name or number (counting from 1).
This option can be called multiple times, so @option{--noblank=MAG --noblank=PHOTOZ} is equivalent to @option{--noblank=MAG,PHOTOZ}.
For the precedence of this operation in relation to others, see @ref{Operation precedence in Table}.
For example, if @file{table.fits} has blank values (NaN in floating point types) in the @code{magnitude} and @code{sn} columns, with @code{--noblank=magnitude,sn}, the output will not contain any rows with blank values in these two columns.
If you want @emph{all} columns to be checked, simply set the value to @code{_all} (in other words: @option{--noblank=_all}).
This mode is useful when there are many columns in the table and you want a ``clean'' output table (with no blank values in any column): entering their name or number one-by-one can be buggy and frustrating.
In this mode, no other column name should be given.
For example, if you give @option{--noblank=_all,magnitude}, then Table will assume that your table actually has a column named @code{_all} and @code{magnitude}, and if it does not, it will abort with an error.
If you want to change column values using @ref{Column arithmetic} (and set some to blank, to later remove), or you want to select rows based on columns that you have imported from other tables, you should use the @option{--noblankend} option described below.
Also, see @ref{Operation precedence in Table}.
@item -s STR
@itemx --sort=STR
Sort the output rows based on the values in the @code{STR} column (can be a column name or number).
By default the sort is done in ascending/increasing order, to sort in a descending order, use @option{--descending}.
For the precedence of this operation in relation to others, see @ref{Operation precedence in Table}.
The chosen column does not have to be in the output columns.
This is good when you just want to sort using one column's values, but do not need that column anymore afterwards.
@item -d
@itemx --descending
When called with @option{--sort}, rows will be sorted in descending order.
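For example, the command below (on a hypothetical @file{cat.fits}) prints the rows ordered from the largest to the smallest value in the @code{MAGNITUDE} column:
@example
$ asttable cat.fits --sort=MAGNITUDE --descending
@end example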
@item -H INT
@itemx --head=INT
Only print the given number of rows from the @emph{top} of the final table.
Note that this option only affects the @emph{output} table.
For example, if you use @option{--sort} or @option{--range}, the printed rows are the first @emph{after} applying the sort, or the range selection, to the full input.
This option cannot be called with @option{--tail}, @option{--rowrange} or @option{--rowrandom}.
For the precedence of this operation in relation to others, see @ref{Operation precedence in Table}.
@cindex GNU Coreutils
If the given value to @option{--head} is 0, the output columns will not have any rows, and if it is larger than the number of rows in the input table, all the rows are printed (this option is effectively ignored).
This behavior is taken from the @command{head} program in GNU Coreutils.
@item -t INT
@itemx --tail=INT
Only print the given number of rows from the @emph{bottom} of the final table.
See @option{--head} for more.
This option cannot be called with @option{--head}, @option{--rowrange} or @option{--rowrandom}.
@item --rowrange=INT,INT
Only return the rows within the requested positional range (inclusive on both sides).
Therefore, @code{--rowrange=5,7} will return 3 of the input rows: rows 5, 6 and 7.
This option will abort if any of the given values is larger than the total number of rows in the table.
For the precedence of this operation in relation to others, see @ref{Operation precedence in Table}.
With the @option{--head} or @option{--tail} options you can only see the top or bottom few rows.
However, with this option, you can limit the returned rows to a contiguous set of rows in the middle of the table.
Therefore this option cannot be called with @option{--head}, @option{--tail}, or @option{--rowrandom}.
@item --rowrandom=INT
@cindex Random row selection
@cindex Row selection, by random
Select @code{INT} rows from the input table at random (assuming a uniform distribution).
This option is applied @emph{after} the value-based row selection options (such as @option{--range} and @option{--inpolygon}) and sorting (@option{--sort}).
Note that only the row counters are randomly selected; this option does not change the order.
Therefore, if @option{--rowrandom} is called together with @option{--sort}, the returned rows are still sorted.
This option cannot be called with @option{--head}, @option{--tail}, or @option{--rowrange}.
For the precedence of this operation in relation to others, see @ref{Operation precedence in Table}.
This option will only have an effect if @code{INT} is not larger than the number of rows by the time it is activated (after the value-based selection options have been applied).
When there are fewer rows, a warning is printed, saying that this option has no effect.
The warning can be disabled with the @option{--quiet} option.
@cindex Reproducibility
Due to its random nature, the output of this option differs in each run.
Therefore 5 calls to Table with @option{--rowrandom} on the same input table will generate 5 different outputs.
If you want a reproducible random selection, set the @code{GSL_RNG_SEED} environment variable and also use the @option{--envseed} option, for more see @ref{Generating random numbers}.
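For example, here is a minimal sketch of a reproducible random selection (the seed value is arbitrary; see @ref{Generating random numbers}):
@example
$ export GSL_RNG_SEED=1599251212
$ asttable cat.fits --rowrandom=100 --envseed \
           --output=random-rows.fits
@end example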
@item --envseed
Read the random number generator seed from the @code{GSL_RNG_SEED} environment variable for @option{--rowrandom} (instead of generating a different seed internally on every run).
This is useful if you want a reproducible random selection of the input rows.
For more, see @ref{Generating random numbers}.
@item -E STR[,STR[,STR]]
@itemx --noblankend=STR[,STR[,STR]]
Remove all rows in the requested @emph{output} columns that have a blank value.
Like above, the columns can be specified by their name or number (counting from 1).
This option can be called multiple times, so @option{--noblankend=MAG --noblankend=PHOTOZ} is equivalent to @option{--noblankend=MAG,PHOTOZ}.
For the precedence of this operation in relation to others, see @ref{Operation precedence in Table}.
For example, if your final output table (possibly after column arithmetic, or adding new columns) has blank values (NaN in floating point types) in the @code{magnitude} and @code{sn} columns, with @code{--noblankend=magnitude,sn}, the output will not contain any rows with blank values in these two columns.
If you want blank values to be removed from the main input table @emph{before} any further processing (like adding columns, sorting or column arithmetic), you should use the @option{--noblank} option.
With the @option{--noblank} option, the given column(s) do not necessarily have to be in the output (they are just temporarily used for reading the inputs and selecting rows).
However, the column(s) given to this option should exist in the output.
If you want @emph{all} columns to be checked, simply set the value to @code{_all} (in other words: @option{--noblankend=_all}).
This mode is useful when there are many columns in the table and you want a ``clean'' output table (with no blank values in any column): entering their name or number one-by-one can be buggy and frustrating.
In this mode, no other column name should be given.
For example, if you give @option{--noblankend=_all,magnitude}, then Table will assume that your table actually has a column named @code{_all} and @code{magnitude}, and if it does not, it will abort with an error.
This option is applied just before writing the final table (after @option{--colmetadata} has finished).
So in case you changed the column metadata, or added new columns, you can use the new names, or the newly defined column numbers.
For the precedence of this operation in relation to others, see @ref{Operation precedence in Table}.
@item -m STR/INT,STR[,STR[,STR]]
@itemx --colmetadata=STR/INT,STR[,STR[,STR]]
Update the specified column metadata in the output table.
This option is applied after all other column-related operations are complete, for example, column arithmetic, or column concatenation.
For the precedence of this operation in relation to others, see @ref{Operation precedence in Table}.
The first value (before the first comma) given to this option is the column's identifier.
It can either be a counter (positive integer, counting from 1), or a name (the column's name in the output if this option was not called).
After the to-be-updated column is identified, at least one other string should be given, with a maximum of three strings.
The first string after the original name will be the selected column's new name.
The next (optional) string will be the selected column's unit and the third (optional) will be its comments.
If the two optional strings are not given, the original column's units or comments will remain unchanged.
If any of the values contains a comma, you should place a `@code{\}' before the comma to avoid it getting confused with a delimiter.
For example, see the command below for a column description that contains a comma:
@example
$ asttable table.fits \
           --colmetadata=NAME,UNIT,"Comments\, with a comma"
@end example
Generally, since the comma is commonly used as a delimiter in many scenarios, to avoid complicating your future analysis with the table, it is best to avoid using a comma in the column name and units.
Some examples of this option are available in the tutorials, in particular @ref{Working with catalogs estimating colors}.
Here are some more specific examples:
@table @option
@item --colmetadata=MAGNITUDE,MAG_F160W
This will change the name of the original @code{MAGNITUDE} column to @code{MAG_F160W}, leaving the unit and comments unchanged.
@item --colmetadata=3,MAG_F160W,mag
This will change the name of the third column of the final output to @code{MAG_F160W} and the units to @code{mag}, while leaving the comments untouched.
@item --colmetadata=MAGNITUDE,MAG_F160W,mag,"Magnitude in F160W filter"
This will change the name of the original @code{MAGNITUDE} column to @code{MAG_F160W}, the units to @code{mag} and the comments to @code{Magnitude in F160W filter}.
Note the double quotations around the comment string; they are necessary to preserve the white-space characters within the column comment from the command-line, into the program (otherwise, upon reaching a white-space character, the shell will consider this option to be finished and cause unexpected behavior).
@end table
If your table is large and generated by a script, you can first do all your operations on your table's data and write it into a temporary file (maybe called @file{temp.fits}).
Then, look into that file's metadata (with @command{asttable temp.fits -i}) to see the exact column positions and possible names, then add the necessary calls to this option to your previous call to @command{asttable}, so it writes proper metadata in the same run (for example, in a script or Makefile).
Recall that when a name is given, this option will update the metadata of the first column that matches, so if you have multiple columns with the same name, you can call this option multiple times with the same first argument to change them all to different names.
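For example, if a hypothetical @file{two-mags.fits} contains two columns that are both called @code{MAGNITUDE}, the sketch below renames the first remaining match in each call, giving the two columns different names in the output:
@example
$ asttable two-mags.fits \
           --colmetadata=MAGNITUDE,MAG-A \
           --colmetadata=MAGNITUDE,MAG-B
@end example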
Finally, if you already have a FITS table by other means (for example, by downloading) and you merely want to update the column metadata and leave the data intact, it is much more efficient to directly modify the respective FITS header keywords with @code{astfits}, using the keyword manipulation features described in @ref{Keyword inspection and manipulation}.
@option{--colmetadata} is mainly intended for scenarios where you want to edit the data: Table will always load the full/partial dataset into memory, and then write out the resulting dataset with the updated/corrected metadata.
@item -f STR
@itemx --txtf32format=STR
The plain-text format of 32-bit floating point columns when output is not binary (this option is ignored for binary outputs like FITS tables, see @ref{Printing floating point numbers}).
The acceptable values are listed below.
This is just the format of the plain-text outputs; see @option{--txtf32precision} for customizing their precision.
@table @code
@item fixed
Fixed-point notation (for example @code{123.4567}).
@item exp
Exponential notation (for example @code{1.234567e+02}).
@end table
The default mode is @code{exp} since it is the most generic and will not cause any loss of data.
Be very cautious if you set it to @code{fixed}.
As a rule of thumb, the fixed-point notation is only good if the numbers are larger than 1.0, but not too large!
Given that the total number of accurate decimal digits is fixed, the more digits you have on the left of the decimal point (the integer part), the fewer accurate digits can be printed on the right of the decimal point.
@item -p INT
@itemx --txtf32precision=INT
Number of digits after (to the right side of) the decimal point (precision) for columns with a 32-bit floating point datatype (this option is ignored for binary outputs like FITS tables, see @ref{Printing floating point numbers}).
This can take any non-negative integer (including 0).
When given a value of zero, the floating point number will be rounded to the nearest integer.
@cindex IEEE 754
The default value to this option is 6.
This is because according to IEEE 754, 32-bit floating point numbers can be accurately represented with 7.22 decimal digits (see @ref{Printing floating point numbers}).
Since we only have an integer number of digits in a number, we'll round it to 7 decimal digits.
Furthermore, the precision is only defined to the right side of the decimal point.
In exponential notation (default of @option{--txtf32format}), one decimal digit will be printed on the left of the decimal point.
So the default value to this option is @mymath{7-1=6}.
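For example, the sketch below (with a hypothetical @file{flux.fits} containing a 32-bit floating point column) prints that column in fixed-point notation with two digits after the decimal point:
@example
$ asttable flux.fits --txtf32format=fixed \
           --txtf32precision=2
@end example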
@item -A STR
@itemx --txtf64format=STR
The plain-text format of 64-bit floating point columns when output is not binary (this option is ignored for binary outputs like FITS tables, see @ref{Printing floating point numbers}).
The acceptable values are listed below.
This is just the format of the plain-text outputs; see @option{--txtf64precision} for customizing their precision.
@table @code
@item fixed
Fixed-point notation (for example @code{12345.6789012345}).
@item exp
Exponential notation (for example @code{1.23456789012345e4}).
@end table
The default mode is @code{exp} since it is the most generic and will not cause any loss of data.
Be very cautious if you set it to @code{fixed}.
As a rule of thumb, the fixed-point notation is only good if the numbers are larger than 1.0, but not too large!
Given that the total number of accurate decimal digits is fixed, the more digits you have on the left of the decimal point (the integer part), the fewer accurate digits can be printed on the right of the decimal point.
@item -B INT
@itemx --txtf64precision=INT
Number of digits after the decimal point (precision) for columns with a 64-bit floating point datatype (this option is ignored for binary outputs like FITS tables, see @ref{Printing floating point numbers}).
This can take any non-negative integer (including 0).
When given a value of zero, the floating point number will be rounded to the nearest integer.
@cindex IEEE 754
The default value to this option is 15.
This is because according to IEEE 754, 64-bit floating point numbers can be accurately represented with 15.95 decimal digits (see @ref{Printing floating point numbers}).
Since we only have an integer number of digits in a number, we'll round it to 16 decimal digits.
Furthermore, the precision is only defined to the right side of the decimal point.
In exponential notation (default of @option{--txtf64format}), one decimal digit will be printed on the left of the decimal point.
So the default value to this option is @mymath{16-1=15}.
@item -Y
@itemx --txteasy
When output is a plain-text file or just gets printed on standard output (the terminal), all floating point columns are printed in fixed point notation (as in @code{123.456}) instead of the default exponential notation (as in @code{1.23456e+02}).
For 32-bit floating points, this option will use a precision of 3 digits (see @option{--txtf32precision}) and for 64-bit floating points use a precision of 6 digits (see @option{--txtf64precision}).
This can be useful for human readability, but be careful with some scenarios (for example @code{1.23e-120}, which will show only as @code{0.0}!).
When this option is called, any value given to the following options is ignored: @option{--txtf32format}, @option{--txtf32precision}, @option{--txtf64format} and @option{--txtf64precision}.
For example, below you can see the output of Table with and without this option:
@example
$ asttable table.fits --head=5 -O
# Column 1: OBJNAME [name ,str23, ] Name in HyperLeda.
# Column 2: RAJ2000 [deg ,f64 , ] Right Ascension.
# Column 3: DEJ2000 [deg ,f64 , ] Declination.
# Column 4: RADIUS [arcmin,f32 , ] Major axis radius.
NGC0884 2.3736267000000e+00 5.7138753300000e+01 8.994357e+00
NGC1629 4.4935191000000e+00 -7.1838322400000e+01 5.000000e-01
NGC1673 4.7109672000000e+00 -6.9820892700000e+01 3.499210e-01
NGC1842 5.1216920000000e+00 -6.7273195300000e+01 3.999171e-01
$ asttable table.fits --head=5 -O -Y
# Column 1: OBJNAME [name ,str23, ] Name in HyperLeda.
# Column 2: RAJ2000 [deg ,f64 , ] Right Ascension.
# Column 3: DEJ2000 [deg ,f64 , ] Declination.
# Column 4: RADIUS [arcmin,f32 , ] Major axis radius.
NGC0884 2.373627 57.138753 8.994
NGC1629 4.493519 -71.838322 0.500
NGC1673 4.710967 -69.820893 0.350
NGC1842 5.121692 -67.273195 0.400
@end example
This is also useful when you want to make outputs of other programs more ``easy'' to read, for example:
@example
$ echo 123.45678 | asttable
1.234567800000000e+02
$ echo 123.45678 | asttable -Y
123.456780
@end example
@cartouche
@noindent
@strong{Can result in loss of information}: be very careful with this option!
It can lose precision, or even the entire value, when the value is not within a ``good'' range, as in the example below.
Such cases are the reason that this is not the default format of plain-text outputs.
@example
$ echo 123.4e-9 | asttable -Y
0.000000
@end example
@end cartouche
@end table
@node Query, , Table, Data containers
@section Query
@cindex IVOA
@cindex Query
@cindex TAP (Table Access Protocol)
@cindex ADQL (Astronomical Data Query Language)
@cindex Astronomical Data Query Language (ADQL)
There are many astronomical databases available for downloading astronomical data.
Most follow the International Virtual Observatory Alliance (IVOA, @url{https://ivoa.net}) standards (and in particular the Table Access Protocol, or TAP@footnote{@url{https://ivoa.net/documents/TAP}}).
With TAP, it is possible to submit your queries via a command-line downloader (for example, @command{curl}) to only get specific tables, targets (rows in a table) or measurements (columns in a table): you do not have to download the full table (which can be very large in some cases)!
These customizations are done through the Astronomical Data Query Language (ADQL@footnote{@url{https://ivoa.net/documents/ADQL}}).
Therefore, if you are sufficiently familiar with TAP and ADQL, you can easily custom-download any part of an online dataset.
However, you also need to keep a record of the URLs of each database, and in many cases, the commands will become long and error-prone to type on the command-line.
On the other hand, most astronomers do not know TAP or ADQL at all, and are forced to go to the database's web page which is slow (it needs to download so many images, and has too much annoying information), requires manual interaction (further making it slow and buggy), and cannot be automated.
Gnuastro's Query program is designed to be the middle-man in this process: it provides a simple high-level interface to let you specify your constraints on what you want to download.
It then internally constructs the command to download the data based on your inputs and runs it to download your desired data.
Query also prints the full command before it executes it (if not called with @option{--quiet}).
Also, if you ask for a FITS output table, the full command is written into its 0-th extension along with the other input parameters to Query (all Gnuastro programs generally keep their input configuration parameters as FITS keywords in the zero-th extension of their output).
You can see it with Gnuastro's Fits program, like below:
@example
$ astfits query-output.fits -h0
@end example
With the full command used to download the dataset, you only need a minimal knowledge of ADQL to do lower-level customizations on your downloaded dataset.
You can simply copy that command and change the parts of the query string you want: ADQL is very powerful!
For example, you can ask the server to do mathematical operations on the columns and apply selections after those operations, or combine/match multiple datasets.
We will try to add high-level interfaces for such capabilities, but generally, do not limit yourself to the high-level operations (that cannot cover everything!).
@menu
* Available databases:: List of available databases to Query.
* Invoking astquery:: Inputs, outputs and configuration of Query.
@end menu
@node Available databases, Invoking astquery, Query, Query
@subsection Available databases
The databases currently supported by Query are listed at the end of this section.
To get the list of available datasets within each database, you can use the @option{--information} option.
For example, with the command below you can get a list of the roughly 100 datasets that are available within the ESA Gaia server, with their descriptions:
@example
$ astquery gaia --information
@end example
@noindent
However, other databases like VizieR host many more datasets (tens of thousands!).
Therefore it is very inconvenient to get the @emph{full} information every time you want to find your dataset of interest (the full metadata file of VizieR is more than 20MB).
In such cases, you can limit the downloaded and displayed information with the @option{--limitinfo} option.
For example, with the first command below you can get all the datasets relating to MUSE (an instrument on the Very Large Telescope), and with the second, those that include Roland Bacon (Principal Investigator of MUSE) as an author (@code{Bacon R.}).
Recall that @option{-i} is the short format of @option{--information}.
@example
$ astquery vizier -i --limitinfo=MUSE
$ astquery vizier -i --limitinfo="Bacon R."
@end example
Once you find the recognized name of your desired dataset, you can see the column information of that dataset by adding the dataset name.
For example, with the command below you can see the column metadata of the @code{J/A+A/608/A2/udf10} dataset (one of the datasets in the search above):
@example
$ astquery vizier --dataset=J/A+A/608/A2/udf10 -i
@end example
@cindex SDSS DR12
For very popular datasets of a database, Query provides an easier-to-remember short name that you can feed to @option{--dataset}.
This short name will map to the officially recognized name of the dataset on the server.
In this mode, Query will also set positional columns accordingly.
For example, most VizieR datasets have an @code{RAJ2000} column (the RA at the J2000 epoch), so it is the default RA column name for coordinate search (using @option{--center} or @option{--overlapwith}).
However, some datasets do not have this column (for example, SDSS DR12).
So when you use the short name and Query knows about this dataset, it will internally set the coordinate columns that SDSS DR12 has: @code{RA_ICRS} and @code{DEC_ICRS}.
Recall that you can always change the coordinate columns with @option{--ccol}.
For example, in the VizieR and Gaia databases, the recognized name for data release 3 data is respectively @code{I/355/gaiadr3} and @code{gaiadr3.gaia_source}.
These technical names are hard to remember.
Therefore Query provides @code{gaiadr3} (for VizieR) and @code{dr3} (for ESA's Gaia database) shortcuts which you can give to @option{--dataset} instead.
They will be internally mapped to the fully recognized name by Query.
In the list below describing the available databases, the short names that are recognized for each are also listed.
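For example, here is a minimal sketch of using one of these short names (the coordinates and radius are arbitrary; @code{ra}, @code{dec} and @code{phot_g_mean_mag} are columns of @code{gaiadr3.gaia_source}):
@example
$ astquery gaia --dataset=dr3 --center=113.8729,31.8873 \
           --radius=0.05 --column=ra,dec,phot_g_mean_mag
@end example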
@cartouche
@noindent
@strong{Not all datasets support TAP:} Large databases like VizieR have TAP access for all their datasets.
However, smaller databases have not implemented TAP for all their tables.
Therefore some datasets that are searchable in their web interface may not be available for a TAP search.
To see the full list of TAP-ed datasets in a database, use the @option{--information} (or @option{-i}) option with the database name, like the command below.
@example
$ astquery astron -i
@end example
@noindent
If your desired dataset is not in this list, but has web-access, contact the database maintainers and ask them to add TAP access for it.
After they do it, you should see the name added to the output list of the command above.
@end cartouche
The list of databases recognized by Query (and their names in Query) is described below.
Since Query is a new member of the Gnuastro family (first available in Gnuastro 0.14), this list will hopefully grow significantly in the next releases.
If you have any particular datasets in mind, please let us know by sending an email to @code{bug-gnuastro@@gnu.org}.
If the dataset supports IVOA's TAP (Table Access Protocol), it should be very easy to add.
@table @code
@item astron
@cindex ASTRON
@cindex Radio astronomy
The ASTRON Virtual Observatory service (@url{https://vo.astron.nl}) is a database focused on radio astronomy data and images, primarily those collected by ASTRON itself.
A query to @code{astron} is submitted to @code{https://vo.astron.nl/__system__/tap/run/tap/sync}.
Here is the list of short names for dataset(s) in ASTRON's VO service:
@itemize
@item
@code{tgssadr --> tgssadr.main}
@end itemize
@item gaia
@cindex Gaia catalog
@cindex Catalog, Gaia
@cindex Database, Gaia
The Gaia project (@url{https://www.cosmos.esa.int/web/gaia}) database, which is a large collection of star positions on the celestial sphere, as well as proper motions, parallaxes and magnitudes in some bands, among many others.
Besides scientific studies (like studying resolved stellar populations in the Galaxy and its halo), Gaia is also invaluable for raw data calibrations, like astrometry.
A query to @code{gaia} is submitted to @code{https://gea.esac.esa.int/tap-server/tap/sync}.
Here is the list of short names for popular datasets within Gaia:
@itemize
@item
@code{dr3 --> gaiadr3.gaia_source}
@item
@code{edr3 --> gaiaedr3.gaia_source}
@item
@code{dr2 --> gaiadr2.gaia_source}
@item
@code{dr1 --> gaiadr1.gaia_source}
@item
@code{tycho2 --> public.tycho2}
@item
@code{hipparcos --> public.hipparcos}
@end itemize
@item ned
@cindex NASA/IPAC Extragalactic Database (NED)
@cindex NED (NASA/IPAC Extragalactic Database)
The NASA/IPAC Extragalactic Database (NED, @url{http://ned.ipac.caltech.edu}) is a fusion database, integrating the information about extra-galactic sources from many large sky surveys into a single catalog.
It covers the full spectrum, from Gamma rays to radio frequencies and is updated when new data arrives.
A TAP query to @code{ned} is submitted to @code{https://ned.ipac.caltech.edu/tap/sync}.
@itemize
@item
@code{objdir --> NEDTAP.objdir}: default TAP-based dataset in NED.
@item
@cindex VOTable
@code{extinction}: A command-line interface to the @url{https://ned.ipac.caltech.edu/extinction_calculator, NED Extinction Calculator}.
It only takes a central coordinate and returns a VOTable of the calculated extinction in many commonly used filters at that point.
As a result, options like @option{--width} or @option{--radius} are not supported.
However, Gnuastro does not yet support the VOTable format.
Therefore, if you specify an @option{--output} file, it should have an @file{.xml} suffix and the downloaded file will not be checked.
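For example, a command like the one below should download the extinction table for the given point (the central coordinate here is an arbitrary placeholder):
@example
$ astquery ned --dataset=extinction --center=49.9507,41.5117 \
           --output=ned-extinction.xml
@end example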
Until VOTable support is added to Gnuastro, you can use GREP, AWK and SED to convert the VOTable data into a FITS table with a command like below (assuming the queried VOTable is called @file{ned-extinction.xml}):
@verbatim
grep '^<TR><TD>' ned-extinction.xml \
 | sed -e's|<TR><TD>||' \
       -e's|</TD></TR>||' \
       -e's|</TD><TD>|@|g' \
 | awk 'BEGIN{FS="@"; \
        print "# Column 1: FILTER [name,str15] Filter name"; \
        print "# Column 2: CENTRAL [um,f32] Central Wavelength"; \
        print "# Column 3: EXTINCTION [mag,f32] Galactic Ext."; \
        print "# Column 4: ADS_REF [ref,str50] ADS reference"} \
       {printf "%-15s %g %g %s\n", $1, $2, $3, $4}' \
 | asttable -oned-extinction.fits
@end verbatim
Once the table is in FITS, you can easily get the extinction for a certain filter (for example, the @code{SDSS r} filter) like the command below:
@example
$ asttable ned-extinction.fits --equal=FILTER,"SDSS r" \
           -cEXTINCTION
@end example
@end itemize
@item vizier
@cindex VizieR
@cindex CDS, VizieR
@cindex Catalog, Vizier
@cindex Database, VizieR
VizieR (@url{https://vizier.u-strasbg.fr}) is arguably the largest catalog database in astronomy, containing more than 20,500 catalogs as of mid-January 2021.
Almost all published catalogs in major projects, and even the tables in many papers are archived and accessible here.
For example, VizieR also has a full copy of the Gaia database mentioned above, with some additional standardized columns (like RA and Dec in J2000).
The current implementation of @option{--limitinfo} only looks into the description of the datasets, but since VizieR is so large, there is still a lot of room for improvement.
Until then, if @option{--limitinfo} is not sufficient, you can use VizieR's own web-based search for your desired dataset: @url{http://cdsarc.u-strasbg.fr/viz-bin/cat}.
Because VizieR curates such a diverse set of data from tens of thousands of projects and aims for interoperability between them, the column names in VizieR may not be identical to the column names in the surveys' own databases (Gaia in the example above).
A query to @code{vizier} is submitted to @code{http://tapvizier.u-strasbg.fr/TAPVizieR/tap/sync}.
@cindex 2MASS All-Sky Catalog
@cindex AKARI/FIS All-Sky Survey
@cindex AllWISE Data Release
@cindex AAVSO Photometric All Sky Survey, DR9
@cindex CatWISE 2020 catalog
@cindex Dark Energy Survey data release 1
@cindex GAIA Data Release (2 or 3)
@cindex All-sky Survey of GALEX DR5
@cindex Naval Observatory Merged Astrometric Dataset
@cindex Pan-STARRS Data Release 1
@cindex SDSS Photometric Catalogue, Release 12
@cindex Whole-Sky USNO-B1.0 Catalog
@cindex U.S. Naval Observatory CCD Astrograph Catalog
@cindex Band-merged unWISE Catalog
@cindex WISE All-Sky data Release
Here is the list of short names for popular datasets within VizieR (sorted alphabetically by their short name).
Please feel free to suggest other major catalogs (covering a wide area or commonly used in your field).
For details on each dataset, with the necessary citations and links to web pages, look them up with their VizieR names in @url{https://vizier.u-strasbg.fr/viz-bin/VizieR}.
@itemize
@item
@code{2mass --> II/246/out} (2MASS All-Sky Catalog)
@item
@code{akarifis --> II/298/fis} (AKARI/FIS All-Sky Survey)
@item
@code{allwise --> II/328/allwise} (AllWISE Data Release)
@item
@code{apass9 --> II/336/apass9} (AAVSO Photometric All Sky Survey, DR9)
@item
@code{catwise --> II/365/catwise} (CatWISE 2020 catalog)
@item
@code{des1 --> II/357/des_dr1} (Dark Energy Survey data release 1)
@item
@code{gaiadr3 --> I/355/gaiadr3} (GAIA Data Release 3)
@item
@code{gaiaedr3 --> I/350/gaiaedr3} (GAIA Early Data Release 3)
@item
@code{gaiadr2 --> I/345/gaia2} (GAIA Data Release 2)
@item
@code{galex5 --> II/312/ais} (All-sky Survey of GALEX DR5)
@item
@code{nomad --> I/297/out} (Naval Observatory Merged Astrometric Dataset)
@item
@code{panstarrs1 --> II/349/ps1} (Pan-STARRS Data Release 1)
@item
@code{ppmxl --> I/317/sample} (Positions and proper motions on the ICRS)
@item
@code{sdss12 --> V/147/sdss12} (SDSS Photometric Catalogue, Release 12)
@item
@code{usnob1 --> I/284/out} (Whole-Sky USNO-B1.0 Catalog)
@item
@code{ucac5 --> I/340/ucac5} (5th U.S. Naval Obs. CCD Astrograph Catalog)
@item
@code{unwise --> II/363/unwise} (Band-merged unWISE Catalog)
@item
@code{wise --> II/311/wise} (WISE All-Sky data Release)
@end itemize
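For example, to see the column metadata of the 2MASS catalog through its short name, you can use a command like this:
@example
$ astquery vizier --dataset=2mass -i
@end example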
@end table
@node Invoking astquery, , Available databases, Query
@subsection Invoking Query
Query provides a high-level interface to downloading subsets of data from databases.
The executable name is @file{astquery} with the following general template
@example
$ astquery DATABASE-NAME [OPTION...] ...
@end example
@noindent
One line examples:
@example
## Information about all datasets in ESA's GAIA database:
$ astquery gaia --information

## Only show catalogs in VizieR that have 'MUSE' in their
## description. The '-i' is short for '--information'.
$ astquery vizier -i --limitinfo=MUSE

## List of columns in 'J/A+A/608/A2/udf10' (one of the above).
$ astquery vizier --dataset=J/A+A/608/A2/udf10 -i

## ID, RA and Dec of all Gaia sources within an image.
$ astquery gaia --dataset=dr3 --overlapwith=image.fits \
           -csource_id,ra,dec

## RA, Dec and spectroscopic redshift of SDSS DR12 objects (that
## have a measured spectroscopic redshift) overlapping 'image.fits'.
$ astquery vizier --dataset=sdss12 --overlapwith=image.fits \
           -cRA_ICRS,DE_ICRS,zsp --range=zsp,1e-10,inf

## All columns of all entries in the Gaia DR3 catalog (hosted at
## VizieR) within 1 arc-minute of the given coordinate.
$ astquery vizier --dataset=gaiadr3 --output=my-gaia.fits \
           --center=113.8729761,31.9027152 --radius=1/60

## Similar to above, but only ID, RA and Dec columns for objects with
## magnitude range 10 to 15. In VizieR, this column is called 'Gmag'.
## Also, using sexagesimal coordinates instead of degrees for center.
$ astquery vizier --dataset=gaiadr3 --output=my-gaia.fits \
           --center=07h35m29.51,31d54m9.77 --radius=1/60 \
           --range=Gmag,10:15 -cDR3Name,RAJ2000,DEJ2000
@end example
Query takes a single argument which is the name of the database.
For the full list of available databases and accessing them, see @ref{Available databases}.
There are two methods to query the databases; each is more fully discussed in its option's description below.
@itemize
@item
@strong{Low-level:}
With @option{--query} you can directly give a raw query statement that is recognized by the database.
This is very low level and will require a good knowledge of the database's query language, but of course, it is much more powerful.
If this option is given, the raw string is directly passed to the server and all other constraints/options (for Query's high-level interface) are ignored.
@item
@strong{High-level:}
With the high-level options (like @option{--column}, @option{--center}, @option{--radius}, @option{--range} and other constraining options below), the low-level query will be constructed automatically for the particular database.
This method is limited to the generic capabilities that Query provides for all servers.
So @option{--query} is more powerful; however, in this mode you do not need any knowledge of the database's query language.
You can see the internally generated query on the terminal (if @option{--quiet} is not used) or in the 0-th extension of the output (if it is a FITS file).
The full command written there contains the internally generated query.
@end itemize
The name of the downloaded output file can be set with @option{--output}.
The requested output format can have any of the @ref{Recognized table formats} (currently @file{.txt} or @file{.fits}).
Like all Gnuastro programs, if the output is a FITS file, the zero-th/first HDU of the output will contain all the command-line options given to Query as well as the full command used to access the server.
When @option{--output} is not set, the output name will be in the format of @file{NAME-STRING.fits}, where @file{NAME} is the name of the database and @file{STRING} is a randomly selected 6-character set of numbers and alphabetic characters.
With this feature, a second run of @command{astquery} that is not called with @option{--output} will not overwrite an already downloaded file.
Generally, when calling Query more than once, it is recommended to set an output name for each call based on your project's context.
The outputs of Query will have a common output format, irrespective of the used database.
To achieve this, Query will ask the databases to provide a FITS table output (for larger tables, FITS can consume much less download volume).
After downloading is complete, the raw downloaded file will be read into memory once by Query, and written into the file given to @option{--output}.
The raw downloaded file will be deleted by default, but can be preserved with the @option{--keeprawdownload} option.
This strategy avoids unnecessary surprises that depend on the database.
For example, some databases may send a compressed FITS table, even though we ask for FITS.
But with the strategy above, the final output will be an uncompressed FITS file.
The metadata that is added by Query (including the full download command) is also very useful for future usage of the downloaded data.
Unfortunately many databases do not write the input queries into their generated tables.
@table @option
@item --dry-run
Only print the final download command to contact the server, do not actually run it.
This option is good when you want to check the finally constructed query or download options given to the download program.
You may also want to use the constructed command as a base to do further customizations on it and run it yourself.
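For example, the command below (with the same spatial constraints as in the examples above) will only print the constructed download command:
@example
$ astquery gaia --dataset=dr3 --center=113.8729761,31.9027152 \
           --radius=1/60 -csource_id,ra,dec --dry-run
@end example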
@item -k
@itemx --keeprawdownload
Do not delete the raw downloaded file from the database.
The name of the raw download will have the format @file{OUTPUT-raw-download.fits}, where @file{OUTPUT} is the base-name of the final output file (without a suffix).
@item -i
@itemx --information
Print the information of all datasets (tables) within a database, or of all columns within a given dataset.
When @option{--dataset} is specified, the latter mode (all column information) is downloaded and printed; when it is not, the information of all datasets within the database is printed.
Some databases (like VizieR) contain tens of thousands of datasets, so you can limit the downloaded and printed information for available databases with the @option{--limitinfo} option (described below).
Dataset descriptions are often large and contain a lot of text (unlike column descriptions).
Therefore when printing the information of all datasets within a database, the information (e.g., dataset name) will be printed on separate lines before the description.
However, when printing column information, the output has the same format as a similar option in Table (see @ref{Invoking asttable}).
Important note to consider: the printed order of the datasets or columns is just for displaying in the printed output.
You cannot ask for datasets or columns based on the printed order, you need to use dataset or column names.
@item -L STR
@itemx --limitinfo=STR
Limit the information that is downloaded and displayed (with @option{--information}) to those that have the string given to this option in their description.
Note that @emph{this is case-sensitive}.
This option is only relevant when @option{--information} is also called.
Databases may have thousands (or tens of thousands) of datasets.
Therefore just the metadata (information) to show with @option{--information} can be tens of megabytes (for example, the full VizieR metadata file was about 23 MB as of January 2021).
Once downloaded, it can also be hard to parse manually.
With @option{--limitinfo}, only the metadata of datasets that contain this string @emph{in their description} will be downloaded and displayed, greatly improving the speed of finding your desired dataset.
@item -Q "STR"
@itemx --query="STR"
Directly specify the query to be passed onto the database.
The queries will generally contain space and other meta-characters, so we recommend placing the query within quotations.
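For example, the minimal ADQL sketch below (assuming the Gaia DR3 table name given in @ref{Available databases}) asks for the identifier and position of only five rows:
@example
$ astquery gaia --query="SELECT TOP 5 source_id, ra, dec \
                         FROM gaiadr3.gaia_source"
@end example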
@item -s STR
@itemx --dataset=STR
The dataset to query within the database (not compatible with @option{--query}).
This option is mandatory when @option{--query} or @option{--information} are not provided.
You can see the list of available datasets within a database using @option{--information} (possibly supplemented by @option{--limitinfo}).
The output of @option{--information} will contain the recognized name of the datasets within that database.
You can pass the recognized name directly to this option.
For more on finding and using your desired database, see @ref{Available databases}.
@item -c STR
@itemx --column=STR[,STR[,...]]
The column name(s) to retrieve from the dataset in the given order (not compatible with @option{--query}).
If not given, all the dataset's columns for the selected rows will be queried (which can be large!).
This option can take multiple values in one instance (for example, @option{--column=ra,dec,mag}), or in multiple instances (for example, @option{-cra -cdec -cmag}), or mixed (for example, @option{-cra,dec -cmag}).
In case you do not know the full list of the dataset's column names a priori, and you do not want to download all the columns (which can greatly increase the download volume and time), you can use the @option{--information} option combined with the @option{--dataset} option, see @ref{Available databases}.
@item -H INT
@itemx --head=INT
Only ask for the first @code{INT} rows of the finally selected columns, not all the rows.
This can be good when your search may result in a large dataset: before downloading the full volume, you can inspect the top rows and get a feeling of what the whole dataset looks like.
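For example, a sketch of inspecting only the first 5 rows of a query like the ones above:
@example
$ astquery gaia --dataset=dr3 --center=113.8729761,31.9027152 \
           --radius=1/60 -csource_id,ra,dec --head=5
@end example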
@item -v FITS
@itemx --overlapwith=FITS
File name of a FITS file containing an image (in the HDU given by @option{--hdu}) to use for identifying the region to query in the given database and dataset.
Based on the image's WCS and pixel size, the sky coverage of the image is estimated and the values for @option{--center} and @option{--width} are calculated internally.
Hence this option cannot be used with @code{--center}, @code{--width} or @code{--radius}.
Also, since it internally generates the query, it cannot be used with @code{--query}.
If the image is rotated in relation to RA/Dec, has a large coverage on the sky, or has WCS distortions (that the server cannot know about), the retrieved catalog may not fully overlap with the image (it may correspond to a larger area of the sky).
To be sure that the final catalog you use actually has sources within the image, use the commands below.
Two points to keep in mind in the example below: 1) the @code{cols} variable holds the names of the columns you want (this avoids repeating the column list in the two places it is needed); 2) the last two commands are only for visual validation, they are not necessary in a script.
@example
$ cols=source_id,ra,dec,phot_g_mean_mag
$ astquery gaia --dataset=dr3 --overlapwith=image.fits \
           --column=$cols --output=gaia.fits
$ astcrop image.fits --catalog=gaia.fits --coordcol=ra \
          --coordcol=dec --mode=wcs --width=1 \
          --widthinpix --log --oneelemstdout --quiet \
          --output=crop-log.fits > /dev/null
$ asttable gaia.fits --catcolumnfile=crop-log.fits \
           --catcolumns=NUM_INPUTS -O \
           | asttable --equal=NUM_INPUTS,1 \
                      --column=$cols \
                      --output=gaia-in-image.fits
$ astscript-ds9-region gaia-in-image.fits --column=ra,dec \
                       --radius=5 --width=5 \
                       --output=stars-in.reg
$ astscript-fits-view image.fits --region=stars-in.reg
@end example
Note that if the image has WCS distortions and the reference point for the WCS is not within the image, the WCS will not be well-defined.
Therefore the resulting catalog may not overlap, or may correspond to a larger/smaller area of the sky.
@item -C FLT,FLT
@itemx --center=FLT,FLT
The spatial center position (mostly RA and Dec) to use for the automatically generated query (not compatible with @option{--query}).
The comma-separated values can either be in degrees (a single number), or sexagesimal (@code{_h_m_} for RA, @code{_d_m_} for Dec, or @code{_:_:_} for both).
The given values will be compared to two columns in the dataset, and only the rows within a certain region around this center position will be requested and downloaded.
Pre-defined RA and Dec column names are defined in Query for every database, however you can use @option{--ccol} to select other columns to use instead.
The region can either be a circle around the point (configured with @option{--radius}) or a box/rectangle around the point (configured with @option{--width}).
@item --ccol=STR,STR
The name of the coordinate-columns in the dataset to compare with the values given to @option{--center}.
Query will use its internal defaults for each dataset (for example, @code{RAJ2000} and @code{DEJ2000} for VizieR data).
But each dataset is treated separately and it is not guaranteed that these columns exist in all datasets.
Also, more than one coordinate system/epoch may be present in a dataset; you can use this option to construct your spatial constraint based on another coordinate system/epoch.
@item -r FLT
@itemx --radius=FLT
The radius about the requested center to use for the automatically generated query (not compatible with @option{--query}).
The radius is in units of degrees, but you can use simple division with this option directly on the command-line.
For example, if you want a radius of 20 arc-minutes or 20 arc-seconds, you can use @option{--radius=20/60} or @option{--radius=20/3600} respectively (which is much more human-friendly than @code{0.3333} or @code{0.005556}).
@item -w FLT[,FLT]
@itemx --width=FLT[,FLT]
The square (or rectangle) side length (width) about the requested center to use for the automatically generated query (not compatible with @option{--query}).
If only one value is given to @code{--width} the region will be a square, but if two values are given, the widths of the query box along each dimension will be different.
The value(s) is (are) in the same units as the coordinate column (see @option{--ccol}, usually RA and Dec which are degrees).
You can use simple division for each value directly on the command-line if you want relatively small (and more human-friendly) sizes.
For example, if you want your box to be 1 arc-minutes along the RA and 2 arc-minutes along Dec, you can use @option{--width=1/60,2/60}.
@item -g STR,FLT,FLT
@itemx --range=STR,FLT,FLT
The column name and numerical range (inclusive) of acceptable values in that column (not compatible with @option{--query}).
This option can be called multiple times for applying range limits on many columns in one call (thus greatly reducing the download size).
For example, when used on ESA's Gaia database, you can use @code{--range=phot_g_mean_mag,10:15} to only get rows that have a value between 10 and 15 (inclusive on both sides) in the @code{phot_g_mean_mag} column.
If you want all rows larger, or smaller, than a certain number, you can use @code{inf}, or @code{-inf} as the first or second values respectively.
For example, if you want objects with SDSS spectroscopic redshifts larger than 2 (from the VizieR @code{sdss12} database), you can use @option{--range=zsp,2,inf}.
If you want the interval to not be inclusive on both sides, you can run @code{astquery} once and get the command that it executes.
Then you can edit it to be non-inclusive on your desired side.
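For example, a sketch of two simultaneous constraints on ESA's Gaia database (assuming a @code{parallax} column exists in the dataset):
@example
$ astquery gaia --dataset=dr3 --center=113.8729761,31.9027152 \
           --radius=1/60 --range=phot_g_mean_mag,10:15 \
           --range=parallax,1,inf -csource_id,ra,dec
@end example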
@item -b STR[,STR]
@itemx --noblank=STR[,STR]
Only ask for rows that do not have a blank value in the @code{STR} column.
This option can be called many times, and each call can take multiple column names (separated by a comma, @key{,}).
For example, if you want the retrieved rows to not have a blank value in columns @code{A}, @code{B}, @code{C} and @code{D}, you can use @command{--noblank=A -bB,C,D}.
@item --sort=STR[,STR]
Ask for the server to sort the downloaded data based on the given columns.
For example, let's assume your desired catalog has column @code{Z} for redshift and column @code{MAG_R} for magnitude in the R band.
When you call @option{--sort=Z,MAG_R}, the rows will primarily be sorted by redshift; if two objects have the same redshift, they will be sorted by magnitude.
You can add as many columns as you like for higher-level sorting.
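For example, a sketch of sorting SDSS DR12 rows by their spectroscopic redshift (using the column names shown earlier in this section; the center is an arbitrary placeholder):
@example
$ astquery vizier --dataset=sdss12 --center=113.8729761,31.9027152 \
           --radius=1/60 --sort=zsp -cRA_ICRS,DE_ICRS,zsp
@end example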
@end table
@node Data manipulation, Data analysis, Data containers, Top
@chapter Data manipulation
Images are one of the major formats of data used in astronomy.
This chapter describes the programs in the GNU Astronomy Utilities that are provided for manipulating them: for example, cropping out a part of a larger image, convolving an image with a given kernel, or applying a spatial transformation to it.
@menu
* Crop:: Crop region(s) from a dataset.
* Arithmetic:: Arithmetic on input data.
* Convolve:: Convolve an image with a kernel.
* Warp:: Warp/Transform an image to a different grid.
@end menu
@node Crop, Arithmetic, Data manipulation, Data manipulation
@section Crop
@cindex Section of an image
@cindex Crop part of image
@cindex Postage stamp images
@cindex Large astronomical images
@pindex @r{Crop (}astcrop@r{)}
Astronomical images are often very large, filled with thousands of galaxies.
It often happens that you only want a section of the image, or you have a catalog of sources and you want to visually analyze them in small postage stamps.
Crop is made to do all these things.
When more than one crop is required, Crop will divide the crops between multiple threads to significantly reduce the run time.
@cindex Mosaicing
@cindex Image tiles
@cindex Image mosaic
@cindex COSMOS survey
@cindex Imaging surveys
@cindex Hubble Space Telescope (HST)
Astronomical surveys are usually extremely large.
So large in fact, that the whole survey will not fit into a reasonably sized file.
Because of this, surveys usually cut the final image into separate tiles and store each tile in a file.
For example, the COSMOS survey's Hubble Space Telescope ACS F814W image consists of 81 separate FITS images, each with a volume of 1.7 gigabytes.
@cindex Stitch multiple images
Even though the tile sizes are chosen to be large enough that too many galaxies/targets do not fall on the edges of the tiles, inevitably some do.
So when you simply crop the image of such targets from one tile, you will miss a large area of the surrounding sky (which is essential in estimating the noise).
Therefore in its WCS mode, Crop will stitch parts of the tiles that are relevant for a target (with the given width) from all the input images that cover that region into the output.
Of course, the tiles have to be present in the list of input files.
Besides cropping postage stamps around certain coordinates, Crop can also crop arbitrary polygons from an image (or a set of tiles by stitching the relevant parts of different tiles within the polygon), see @option{--polygon} in @ref{Invoking astcrop}.
Alternatively, it can crop out rectangular regions through the @option{--section} option from one image, see @ref{Crop section syntax}.
@menu
* Crop modes:: Basic modes to define crop region.
* Crop section syntax:: How to define a section to crop.
* Blank pixels:: Pixels with no value.
* Invoking astcrop:: Calling Crop on the command-line
@end menu
@node Crop modes, Crop section syntax, Crop, Crop
@subsection Crop modes
In order to be comprehensive, intuitive, and easy to use, there are two ways to define the crop:
@enumerate
@item
From its center and side length.
For example, if you already know the coordinates of an object and want to inspect it in an image or to generate postage stamps of a catalog containing many such coordinates.
@item
The vertices of the crop region.
This can be useful for larger crops over many targets, for example, to crop out a uniformly deep, or contiguous, region of a large survey.
@end enumerate
Irrespective of how the crop region is defined, the coordinates to define the crop can be in Image (pixel) or World Coordinate System (WCS) standards.
All coordinates are read as floating point numbers (not integers, except for the @option{--section} option, see below).
By setting the @emph{mode} in Crop, you define the standard in which the given coordinates must be interpreted.
Here, the different ways to specify the crop region are discussed within each standard.
For the full list of options, please see @ref{Invoking astcrop}.
When the crop is defined by its center, the respective (integer) central pixel position will be found internally according to the FITS standard.
To have this pixel positioned in the center of the cropped region, the final cropped region will have an odd number of pixels (even if you give an even number to @option{--width} in image mode).
Furthermore, when the crop is defined by its center, Crop allows you to only keep crops that do not have any blank pixels in the vicinity of their center (your primary target).
This can be very convenient when your input catalog/coordinates originated from another survey/filter which is not fully covered by your input image, to learn more about this feature, please see the description of the @option{--checkcenter} option in @ref{Invoking astcrop}.
@table @asis
@item Image coordinates
In image mode (@option{--mode=img}), Crop interprets the pixel coordinates and widths in units of the input data-elements (for example, pixels in an image, not world coordinates).
In image mode, only one image may be input.
The output crop(s) can be defined in multiple ways as listed below.
@table @asis
@item Center of multiple crops (in a catalog)
The center of (possibly multiple) crops are read from a text file.
In this mode, the columns identified with the @option{--coordcol} option are interpreted as the center of a crop with a width of @option{--width} pixels along each dimension.
The columns can contain any floating point value.
The value given to the @option{--output} option is seen as a directory which will host the (possibly multiple) separate crop files; see @ref{Crop output} for more.
For a tutorial using this feature, please see @ref{Reddest clumps cutouts and parallelization}.
@item Center of a single crop (on the command-line)
The center of the crop is given on the command-line with the @option{--center} option.
The crop width is specified by the @option{--width} option along each dimension.
The given coordinates and width can be any floating point number.
@item Vertices of a single crop
In Image mode there are two options to define the vertices of a region to crop: @option{--section} and @option{--polygon}.
The former is lower-level (it does not accept floating point vertices, and only a rectangular region can be defined); it is also only available in Image mode.
Please see @ref{Crop section syntax} for a full description of this method.
The latter option (@option{--polygon}) is a higher-level method to define any polygon (with any number of vertices) with floating point values.
Please see the description of this option in @ref{Invoking astcrop} for its syntax.
@end table
@item WCS coordinates
In WCS mode (@option{--mode=wcs}), the coordinates and width are interpreted using the World Coordinate System (WCS, that must accompany the dataset), not pixel coordinates.
You can optionally use @option{--widthinpix} for the width to be interpreted in pixels (even though the coordinates are in WCS).
In WCS mode, Crop accepts multiple datasets as input.
When the cropped region (defined by its center or vertices) overlaps with multiple of the input images/tiles, the overlapping regions will be taken from the respective input (they will be stitched when necessary for each output crop).
In this mode, the input images do not necessarily have to be the same size, they just need to have the same orientation and pixel resolution.
Currently only orientation along the celestial coordinates is accepted, if your input has a different orientation or resolution you can use Warp's @option{--gridfile} option to align the image before cropping it (see @ref{Warp}).
Each individual input image/tile can even be smaller than the final crop.
In any case, any part of any of the input images which overlaps with the desired region will be used in the crop.
Note that if there is an overlap in the input images/tiles, the pixels from the last input image read are going to be used for the overlap.
Crop will not change pixel values, so it assumes your overlapping tiles were cut out from the same original image.
There are multiple ways to define your cropped region as listed below.
@table @asis
@item Center of multiple crops (in a catalog)
Similar to catalog inputs in Image mode (above), except that the values along each dimension are assumed to have the same units as the dataset's WCS information.
For example, the central RA and Dec value for each crop will be read from the first and second calls to the @option{--coordcol} option.
The width of the cropped box (in units of the WCS, or degrees in RA and Dec mode) must be specified with the @option{--width} option.
You can optionally use @option{--widthinpix} for the value of @option{--width} to be interpreted in pixels.
@item Center of a single crop (on the command-line)
You can specify the center of only one crop box with the @option{--center} option.
If it exists in the input images, it will be cropped as in the catalog mode above; see there also for @option{--width}.
@item Vertices of a single crop
The @option{--polygon} option is a high-level method to define any convex polygon (with any number of vertices).
Please see the description of this option in @ref{Invoking astcrop} for its syntax.
@end table
@cartouche
@noindent
@strong{CAUTION:} In WCS mode, the image has to be aligned with the celestial coordinates, such that the first FITS axis is parallel (opposite direction) to the Right Ascension (RA) and the second FITS axis is parallel to the declination.
If these conditions are not met for an image, Crop will warn you and abort.
You can use Warp to align the input image to standard celestial coordinates, see @ref{Warp}.
@end cartouche
@end table
As a summary, if you do not specify a catalog, you have to define the cropped region manually on the command-line.
In any case, the mode is mandatory for Crop to be able to interpret the values given as coordinates or widths.
@node Crop section syntax, Blank pixels, Crop modes, Crop
@subsection Crop section syntax
@cindex Crop a given section of image
When in image mode, one of the methods to crop only one rectangular section from the input image is to use the @option{--section} option.
Crop has a powerful syntax to read the box parameters from a string of characters.
If you leave certain parts of the string to be empty, Crop can fill them for you based on the input image sizes.
@cindex Define section to crop
To define a box, you need the coordinates of two points: the first (@code{X1}, @code{Y1}) and last (@code{X2}, @code{Y2}) pixel positions in the image, or four integer numbers in total.
The four coordinates can be specified with one string in this format: `@command{X1:X2,Y1:Y2}'.
This string is given to the @option{--section} option.
Therefore, the pixels along the first axis that are @mymath{\geq}@command{X1} and @mymath{\leq}@command{X2} will be included in the cropped image.
The same goes for the second axis.
Note that each different term will be read as an integer, not a float.
The reason it only accepts integers is that @option{--section} is a low-level option (which is also very fast!).
For a higher-level way to specify region (any polygon, not just a box), please see the @option{--polygon} option in @ref{Crop options}.
Also note that in the FITS standard, pixel indexes along each axis start from unity (1), not zero (0).
@cindex Crop section format
You can omit any of the values and they will be filled automatically.
The left hand side of the colon (@command{:}) will be filled with @command{1}, and the right side with the image size.
So, @command{2:,:} will include the full range of pixels along the second axis, and only those with an index of 2 or larger along the first axis.
If the colon is omitted for a dimension, then the full range is automatically used.
So the same string is also equal to @command{2:,} or @command{2:} or even @command{2}.
If you want such a case for the second axis, you should set it to: @command{,2}.
If you specify a negative value, it will be interpreted as being before the image's indexes begin (outside the image along the bottom or left sides when viewed in SAO DS9).
In case you want to count from the top or right sides of the image, you can use an asterisk (@option{*}).
When confronted with a @option{*}, Crop will replace it with the maximum length of the image in that dimension.
So @command{*-10:*+10,*-20:*+20} means that the crop box will be @mymath{20\times40} pixels in size and only include the top-right corner of the input image, with 3/4 of the crop being covered by blank pixels, see @ref{Blank pixels}.
If you feel more comfortable with space characters between the values, you can use as many space characters as you wish, just be careful to put your value in double quotes, for example, @command{--section="5:200, 123:854"}.
If you forget the quotes, anything after the first space will not be seen by @option{--section} and you will most probably get an error because the rest of your string will be read as a filename (which most probably does not exist).
See @ref{Command-line} for a description of how the command-line works.
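As a summary sketch of the points above (the file name is arbitrary):
@example
## Keep the full second axis; first axis from pixel 2 onwards:
$ astcrop image.fits --section=2:
## Crop an 11x11 pixel box at the top-right corner of the image:
$ astcrop image.fits --section=*-10:,*-10:
@end example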
@node Blank pixels, Invoking astcrop, Crop section syntax, Crop
@subsection Blank pixels
@cindex Blank pixel
The cropped box can potentially include pixels that are beyond the image range.
For example, when a target in the input catalog was very near the edge of the input image.
The parts of the cropped image that were not in the input image will be filled with one of the two values below, depending on the data type of the image.
In both cases, SAO DS9 will not color code those pixels.
@itemize
@item
If the data type of the image is a floating point type (float or double), IEEE NaN (Not a number) will be used.
@item
For integer types, pixels out of the image will be filled with the value of the @command{BLANK} keyword in the cropped image header.
The value assigned to it is the lowest value possible for that type, so you will probably never need it anyway.
Only for the unsigned character type (@command{BITPIX=8} in the FITS header) is the maximum value used instead, because the type is unsigned and its smallest value (zero) is often meaningful.
@end itemize
You can ask for such blank regions to not be included in the output crop image using the @option{--noblank} option.
In such cases, there is no guarantee that the image size of your outputs is what you asked for.
Unfortunately, some survey images do not use the @command{BLANK} FITS keyword.
Instead they just give all pixels outside of the survey area a value of zero.
So by default, when dealing with float or double image types, any values that are 0.0 are also regarded as blank regions.
This can be turned off with the @option{--zeroisnotblank} option.
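For example, a sketch of cropping from such a survey image while keeping its zero-valued pixels:
@example
$ astcrop survey.fits --mode=img --center=568.342,2091.719 \
          --width=201 --zeroisnotblank
@end example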
@node Invoking astcrop, , Blank pixels, Crop
@subsection Invoking Crop
Crop will crop a region from an image.
If in WCS mode, it will also stitch parts from separate images in the input files.
The executable name is @file{astcrop} with the following general template
@example
$ astcrop [OPTION...] [ASCIIcatalog] ASTRdata ...
@end example
@noindent
One line examples:
@example
## Crop all objects in cat.txt from image.fits:
$ astcrop --catalog=cat.txt image.fits
## Crop all objects in catalog (with RA,DEC) from all the files
## ending in `_drz.fits' in `/mnt/data/COSMOS/':
$ astcrop --mode=wcs --catalog=cat.txt /mnt/data/COSMOS/*_drz.fits
## Crop the outer 10 border pixels of the input image and give
## the output HDU a name ('EXTNAME' keyword in FITS) of 'mysection'.
$ astcrop --section=10:*-10,10:*-10 --hdu=2 image.fits \
          --metaname=mysection
## Crop region around RA and Dec of (189.16704, 62.218203):
$ astcrop --mode=wcs --center=189.16704,62.218203 goodsnorth.fits
## Same crop above, but coordinates given in sexagesimal (you can
## also use ':' between the sexagesimal components).
$ astcrop --mode=wcs --center=12h36m40.08,62d13m5.53 goodsnorth.fits
## Crop region around pixel coordinate (568.342, 2091.719):
$ astcrop --mode=img --center=568.342,2091.719 --width=201 image.fits
## Crop all HDUs within a FITS file at a certain coordinate, while
## preserving the names of the HDUs in the output.
$ for hdu in $(astfits input.fits --listimagehdus); do \
    astcrop input.fits --hdu=$hdu --append --output=crop.fits \
            --metaname=$hdu --mode=wcs --center=189.16704,62.218203 \
            --width=10/3600
  done
@end example
@noindent
Crop has one mandatory argument which is the input image name(s), shown above with @file{ASTRdata ...}.
You can use shell expansions, for example, @command{*} for this if you have lots of images in WCS mode.
If the crop box centers are in a catalog, you can use the @option{--catalog} option.
In other cases, the parameters of the single cropped output must be given with command-line options.
See @ref{Crop output} for how the output file name(s) can be specified.
For the full list of general options to all Gnuastro programs (including Crop), please see @ref{Common options}.
Floating point numbers can be used to specify the crop region (except the @option{--section} option, see @ref{Crop section syntax}).
In such cases, the floating point values will be used to find the desired integer pixel indices based on the FITS standard.
Hence, Crop ultimately does not do any sub-pixel cropping (in other words, it does not change pixel values).
If you need such crops, you can use @ref{Warp} to first warp the image to a new pixel grid, then crop from that.
For example, let's assume you want a crop from pixels 12.982 to 80.982 along the first dimension.
You should first translate the image by @mymath{-0.482} (note that the edge of a pixel is at integer multiples of @mymath{0.5}).
So you should run Warp with @option{--translate=-0.482,0} and then crop the warped image with @option{--section=13:81}.
There are two ways to define the cropped region: with its center or its vertices.
See @ref{Crop modes} for a full description.
In the former case, Crop can check if the central region of the cropped image is indeed filled with data or is blank (see @ref{Blank pixels}), and not produce any output when the center is blank, see the description under @option{--checkcenter} for more.
@cindex Asynchronous thread allocation
When in catalog mode, Crop will run in parallel unless you set @option{--numthreads=1}, see @ref{Multi-threaded operations}.
Note that when multiple outputs are created with threads, the outputs will not be created in the same order.
This is because the threads are asynchronous and thus not started in order.
This has no effect on each output, see @ref{Reddest clumps cutouts and parallelization} for a tutorial on effectively using this feature.
@menu
* Crop options:: A list of all the options with explanation.
* Crop output:: The outputs of Crop.
* Crop known issues:: Known issues in running Crop.
@end menu
@node Crop options, Crop output, Invoking astcrop, Invoking astcrop
@subsubsection Crop options
The options can be classified into the following contexts: Input, Output and operating mode options.
Options that are common to all Gnuastro programs are listed in @ref{Common options} and will not be repeated here.
When you are specifying the crop vertices yourself (through @option{--section}, or @option{--polygon}) on relatively small regions (depending on the resolution of your images) the outputs from image and WCS mode can be approximately equivalent.
However, as the crop sizes get large, the curved nature of the WCS coordinates has to be considered.
For example, when using @option{--section}, the right ascension of the bottom left and top left corners will not be equal.
If you only want regions within a given right ascension, use @option{--polygon} in WCS mode.
@noindent
Input image parameters:
@table @option
@item --hstartwcs=INT
Specify the first keyword card (line number) to start finding the input image world coordinate system information.
This is useful when certain header keywords of the input may cause bad conflicts with your crop (see an example described below).
To get line numbers of the header keywords, you can pipe the fully printed header into @command{cat -n} like below:
@example
$ astfits image.fits -h1 | cat -n
@end example
@cindex CANDELS survey
For example, support for distortions has only been present in WCSLIB since version 5.15 (released in mid-2016).
Therefore some pipelines still apply their own specific set of WCS keywords for distortions and put them into the image header along with those that WCSLIB does recognize.
Now that WCSLIB recognizes most of the standard distortion parameters, these old non-standard keywords can get confused with the standard ones and give wrong results.
This happens, for example, in the CANDELS-GOODS South images that were created before WCSLIB 5.15@footnote{@url{https://archive.stsci.edu/pub/hlsp/candels/goods-s/gs-tot/v1.0/}}.
The @option{--hstartwcs} and @option{--hendwcs} options are thus provided so that, when using older datasets, you can specify which region of the FITS header should be used to read the WCS keywords.
Note that this is only relevant for reading the WCS information, basic data information like the image size are read separately.
These two options will only be considered when the value to @option{--hendwcs} is larger than that of @option{--hstartwcs}.
So if they are equal or @option{--hstartwcs} is larger than @option{--hendwcs}, then all the input keywords will be parsed to get the WCS information of the image.
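For example, a sketch of using both options (the card numbers 20 and 62 are arbitrary placeholders that you would find from the @command{cat -n} output above):
@example
$ astcrop image.fits --hstartwcs=20 --hendwcs=62 --mode=wcs \
          --center=189.16704,62.218203 --width=10/3600
@end example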
@item --hendwcs=INT
Specify the last keyword card to read for specifying the image world coordinate system on the input images.
See @option{--hstartwcs}.
@end table
@noindent
Crop box parameters:
@table @option
@item -c FLT[,FLT[,...]]
@itemx --center=FLT[,FLT[,...]]
The central position of the crop in the input image.
The positions along each dimension must be separated by a comma (@key{,}) and fractions are also acceptable.
The comma-separated values can either be in degrees (a single number), or sexagesimal (@code{_h_m_} for RA, @code{_d_m_} for Dec, or @code{_:_:_} for both).
The number of values given to this option must be the same as the dimensions of the input dataset.
The width of the crop should be set with @code{--width}.
The units of the coordinates are read based on the value to the @option{--mode} option, see below.
@item -O STR
@itemx --mode=STR
Mode to interpret the crop's coordinates (for example with @option{--center}, @option{--catalog} or @option{--polygon}).
The value must either be @option{img} (to assume image/pixel coordinates) or @option{wcs} (to assume WCS, usually RA/Dec, coordinates), see @ref{Crop modes} for a full description.
@item -w FLT[,FLT[,...]]
@itemx --width=FLT[,FLT[,...]]
Width of the cropped region about coordinate given to @option{--center}.
If in WCS mode, value(s) given to this option will be read in the same units as the dataset's WCS information along this dimension (unless @option{--widthinpix} is given).
This option may take either a single value (to be used for all dimensions: @option{--width=10} in image-mode will crop a @mymath{10\times10} pixel image) or multiple values (a specific value for each dimension: @option{--width=10,20} in image-mode will crop a @mymath{10\times20} pixel image).
The @code{--width} option also accepts fractions.
For example, if you want the width of your crop to be 3 by 5 arcseconds along RA and Dec respectively and you are in wcs-mode, you can use: @option{--width=3/3600,5/3600}.
The final output will have an odd number of pixels to allow easy identification of the pixel which keeps your requested coordinate (from @option{--center} or @option{--catalog}).
If you want an even sided crop, you can run Crop afterwards with @option{--section=":*-1,:*-1"} or @option{--section=2:,2:} (depending on which side you do not need), see @ref{Crop section syntax}.
The basic reason for making an odd-sided crop is that your given central coordinate will ultimately fall within a discrete pixel in the image (defined by the FITS standard).
When the crop has an odd number of pixels in each dimension, that pixel can be very well defined as the ``central'' pixel of the crop, making it unambiguously easy to identify.
However, for an even-sided crop, it will be very hard to identify the central pixel (it can be on any of the four pixels adjacent to the central point of the image!).
@item -X
@itemx --widthinpix
In WCS mode, interpret the value to @option{--width} as number of pixels, not the WCS units like degrees.
This is useful when you want a fixed crop size in pixels, even though your center coordinates are in WCS (for example, RA and Dec).
@item -l STR
@itemx -l FLT:FLT,...
@itemx --polygon=STR
@itemx --polygon=FLT,FLT:FLT,FLT:...
@cindex Sexagesimal
Polygon vertex coordinates (when the value is in @option{FLT,FLT:FLT,FLT:...} format) or the file name of a SAO DS9 region file (when the value has no @file{,} or @file{:} characters).
Each vertex can either be in degrees (a single floating point number) or sexagesimal (in formats of `@code{_h_m_}' for RA and `@code{_d_m_}' for Dec, or simply `@code{_:_:_}' for either of them).
The vertices are used to define the polygon: in the same order given to this option.
When the vertices are not necessarily ordered properly (for example, one vertex of a square comes after its diagonal opposite), you can add the @option{--polygonsort} option which will attempt to sort the vertices before cropping.
Note that for concave polygons, sorting is not recommended because there is no unique solution, for more, see the description under @option{--polygonsort}.
This option can be used both in the image and WCS modes, see @ref{Crop modes}.
If a SAO DS9 region file is used, the coordinate mode of Crop will be determined by the contents of the file and any value given to @code{--mode} is ignored.
The cropped image will be the size of the rectangular region that completely encompasses the polygon.
By default all the pixels that are outside of the polygon will be set as blank values (see @ref{Blank pixels}).
However, if @option{--polygonout} is called all pixels internal to the vertices will be set to blank.
In WCS-mode, you may provide many FITS images/tiles: Crop will stitch them to produce this cropped region, then apply the polygon.
The syntax for the polygon vertices is similar to, and simpler than, that for @option{--section}.
In short, the dimensions of each coordinate are separated by a comma (@key{,}) and each vertex is separated by a colon (@key{:}).
You can define as many vertices as you like.
If you would like to use space characters between the dimensions and vertices to make them more human-readable, then you have to put the value to this option in double quotation marks.
For example, let's assume you want to work on the deepest part of the WFC3/IR images of Hubble Space Telescope eXtreme Deep Field (HST-XDF).
According to its web page@footnote{@url{https://archive.stsci.edu/prepds/xdf/}}, the deepest part is contained within the coordinates:
@example
[ (53.187414,-27.779152), (53.159507,-27.759633),
(53.134517,-27.787144), (53.161906,-27.807208) ]
@end example
They have provided mask images with only these pixels in the WFC3/IR images, but what if you also need to work on the same region in the full resolution ACS images? Also what if you want to use the CANDELS data for the shallow region? Running Crop with @option{--polygon} will easily pull out this region of the image for you, irrespective of the resolution.
If you have set the operating mode to WCS mode in your nearest configuration file (see @ref{Configuration files}), there is no need to call @option{--mode=wcs} on the command-line.
@example
$ astcrop --mode=wcs desired-filter-image(s).fits \
--polygon="53.187414,-27.779152 : 53.159507,-27.759633 : \
53.134517,-27.787144 : 53.161906,-27.807208"
@end example
@cindex SAO DS9 region file
@cindex Region file (SAO DS9)
More generally, suppose you have an image and want to define the polygon yourself (it is not already published like the example above).
As the number of vertices increases, checking the vertex coordinates on a FITS viewer (for example, SAO DS9) and typing them in one by one can be very tedious and prone to typos.
In such cases, you can make a polygon ``region'' in DS9 and, using your mouse, easily define (and visually see) it.
Given that SAO DS9 has a graphic user interface (GUI), if you do not have the polygon vertices beforehand, it is much easier to build your polygon there and pass it onto Crop through a region file.
You can take the following steps to make an SAO DS9 region file containing your polygon.
Open your desired FITS image with SAO DS9 and activate its ``region'' mode with @clicksequence{Edit@click{}Region}.
Then define the region as a polygon with @clicksequence{Region@click{}Shape@click{}Polygon}.
Click on the approximate center of the region you want and a small square will appear.
By clicking on the vertices of the square you can shrink or expand it, clicking and dragging anywhere on the edges will enable you to define a new vertex.
After the region has been nicely defined, save it as a file with @clicksequence{Region@click{}``Save Regions''}.
You can then select the name and address of the output file, keep the format as @command{REG (*.reg)} and press the ``OK'' button.
In the next window, keep format as ``ds9'' and ``Coordinate System'' as ``icrs'' or ``fk5'' for RA and Dec (or ``Image'' for pixel coordinates).
A plain text file is now created (let's call it @file{ds9.reg}) which you can pass onto Crop with @command{--polygon=ds9.reg}.
For the expected format of the region file, see the description of @code{gal_ds9_reg_read_polygon} in @ref{SAO DS9 library}.
However, since SAO DS9 makes this file for you, you do not usually need to worry about its internal format unless something unexpected happens and you find a bug.
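For example, assuming the region file above, the crop can then be done with a command like this:
@example
$ astcrop image.fits --polygon=ds9.reg
@end example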
@item --polygonout
Keep all the regions outside the polygon and mask the inner ones with blank pixels (see @ref{Blank pixels}).
This is practically the inverse of the default mode of treating polygons.
Note that this option only works when you have only provided one input image.
If multiple images are given (in WCS mode), then the full area covered by all the images has to be shown and the polygon excluded.
This can lead to a very large area if large surveys like COSMOS are used.
So Crop will abort and notify you.
In such cases, it is best to crop out the larger region you want, then mask the smaller region with this option.
@item --polygonsort
Sort the set of vertices given to the @option{--polygon} option.
For a convex polygon the vertices will be sorted correctly; however, for a concave polygon there is no unique sorting, so be careful because the crop may not be what you expected.
@cindex Convex polygons
@cindex Concave polygons
@cindex Polygons, Convex
@cindex Polygons, Concave
Polygons come in two classes: convex and concave (or generally, non-convex!), see below for a demonstration.
Convex polygons are those where all inner angles are less than 180 degrees.
By contrast, a concave polygon is one where an inner angle may be more than 180 degrees.
@example
     Concave Polygon          Convex Polygon

      D --------C            D ------------- C
       \        |          E /               |
        \E      |            \               |
        /       |             \              |
       A--------B              A ---------- B
@end example
@item -s STR
@itemx --section=STR
Section of the input image which you want to be cropped.
See @ref{Crop section syntax} for a complete explanation on the syntax required for this input.
@item -C FITS/TXT
@itemx --catalog=FITS/TXT
File name of catalog for making multiple crops from the input images/cubes.
The catalog can be in any of Gnuastro's @ref{Recognized table formats}.
The columns containing the coordinates for the crop centers can be specified with the @option{--coordcol} option (using column names or numbers, see @ref{Selecting table columns}).
The catalog can also contain the name of each crop; you can specify the column containing the name with the @option{--namecol} option.
@item --cathdu=STR/INT
The HDU (extension) containing the catalog (if the file given to @option{--catalog} is a FITS file).
This can either be the HDU name (if it has one) or number (counting from 0).
By default (if this option is not given), the second HDU will be used (equivalent to @option{--cathdu=1}).
For more on how to specify the HDU, see the explanation of the @option{--hdu} option in @ref{Input output options}.
@item -x STR/INT
@itemx --coordcol=STR/INT
The column in a catalog to read as a coordinate.
The value can be either the column number (starting from 1), or a match/search in the table meta-data, see @ref{Selecting table columns}.
This option must be called multiple times, depending on the number of dimensions in the input dataset.
If it is called more than necessary, the extra columns (later calls to this option on the command-line or configuration files) will be ignored, see @ref{Configuration file precedence}.
@item -n STR/INT
@itemx --namecol=STR/INT
The column containing the file name for each crop.
The value can be either the column number (starting from 1), or a match/search in the table meta-data, see @ref{Selecting table columns}.
This option can be used in both Image and WCS modes, and is not mandatory.
When a column is given to this option, the final crop base file name will be taken from the contents of this column.
The directory will be determined by the @option{--output} option (current directory if not given) and the value to @option{--suffix} will be appended.
When this column is not given, the row number will be used instead.
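For example, a sketch of producing named crops from a catalog (assuming columns @code{RA}, @code{DEC} and @code{NAME} exist in @file{cat.fits}, and that the @file{crops} directory already exists):
@example
$ astcrop image.fits --catalog=cat.fits --mode=wcs \
          --coordcol=RA --coordcol=DEC --namecol=NAME \
          --width=20/3600 --output=crops
@end example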
@end table
@node Crop output, Crop known issues, Crop options, Invoking astcrop
@subsubsection Crop output
The string given to @option{--output} option will be interpreted depending on how many crops were requested, see @ref{Crop modes}:
@itemize
@item
When a catalog is given, the value of the @option{--output} (see @ref{Common options}) will be read as the directory to store the output cropped images.
Hence if it does not already exist, Crop will abort with a ``No such file or directory'' error.
The crop file names will consist of two parts: a variable part (the row number of each target starting from 1) along with a fixed string which you can set with the @option{--suffix} option.
Optionally, you may also use the @option{--namecol} option to define a column in the input catalog to use as the file name instead of numbers.
@item
When only one crop is desired, the value to @option{--output} will be read as a file name.
If no output is specified or if it is a directory, the output file name will follow the automatic output names of Gnuastro, see @ref{Automatic output}: The string given to @option{--suffix} will be replaced with the @file{.fits} suffix of the input.
@end itemize
When the desired crop is not within the input image(s), Crop will not produce any image for that crop.
If the @option{--quiet} option is not given, Crop will report which of the crops were created and which were not (due to a lack of overlap, or the center being blank, see @option{--checkcenter}).
This information can also be written formally in an optional log file, see below.
At the end, Crop will return successfully to the shell if at least one output file was created.
If no output was created at all, then Crop will return to the shell with a failure.
By default, as suggested by the FITS standard and implemented in all Gnuastro programs, the first/primary extension of the output files will only contain metadata.
The cropped images/cubes will be written into the 2nd HDU of their respective FITS file (which is actually counted as @code{1} because HDU counting starts from @code{0}).
However, if you want the cropped data to be written into the primary (0-th) HDU, run Crop with the @option{--primaryimghdu} option.
If the output file already exists, by default Crop will re-write it (so that all existing HDUs in it will be deleted).
If you want the cropped HDU to be appended to existing HDUs, use @option{--append} described below.
The 0-th HDU of each output cropped image will contain the names of the input image(s) it was cut from.
If a name is longer than the 70 character space that the FITS standard allows for header keyword values, the name will be cut into several keywords from the nearest slash (@key{/}).
The keywords have the following format: @command{ICFn_m} (for Crop File), where @command{n} is the number of the input image used in this crop and @command{m} is the part of the name (a single name can be broken into multiple keywords).
Following the name is another keyword named @command{ICFnPIX} which shows the pixel range from that input image in the same syntax as @ref{Crop section syntax}.
So this string can be directly given to the @option{--section} option later.
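For example, assuming the crop was made from a single input (so the keyword is @code{ICF1PIX}), a sketch like the one below could re-extract the same region, reading the keyword value into a shell variable with @command{astfits}:
@example
$ section=$(astfits crop.fits -h0 --keyvalue=ICF1PIX --quiet)
$ astcrop input.fits --mode=img --section="$section"
@end example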
A log file can also be created in the current directory with the @option{--log} option.
This file will have three columns and the same number of rows as the number of cropped images.
There are also comments on the top of the log file explaining basic information about the run and descriptions for the columns.
A short description of the columns is also given below:
@enumerate
@item
The cropped image file name for that row.
@item
The number of input images that were used to create that image.
@item
A @code{0} if the central few pixels (value to the @option{--checkcenter} option) are blank and @code{1} if they are not.
When the crop was not defined by its center (see @ref{Crop modes}), or @option{--checkcenter} was given a value of 0 (see @ref{Invoking astcrop}), the center will not be checked and this column will be given a value of @code{-1}.
@end enumerate
If the output crop(s) have a single element (pixel in an image) and @option{--oneelemstdout} has been called, no output file will be produced!
Instead, the single element's value is printed on the standard output.
See the description of @option{--oneelemstdout} below for more:
@table @option
@item -p STR
@itemx --suffix=STR
The suffix (or post-fix) of the output files for when you want all the cropped images to have a special ending.
One case where this might be helpful is when besides the science images, you want the weight images (or exposure maps, which are also distributed with survey images) of the cropped regions too.
So in one run, you can set the input images to the science images and @option{--suffix=_s.fits}.
In the next run you can set the weight images as input and @option{--suffix=_w.fits}.
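For example, with hypothetical science and weight images (and assuming the catalog and coordinate columns are set in a configuration file), the two runs might look like this:
@example
$ astcrop science.fits --catalog=cat.fits --suffix=_s.fits
$ astcrop weight.fits --catalog=cat.fits --suffix=_w.fits
@end example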
@item -a STR
@itemx --metaname=STR
Name of cropped HDU (value to the @code{EXTNAME} keyword of FITS).
If not given, a default @code{CROP} will be placed there (so the @code{EXTNAME} keyword will always be present in the output).
When Crop produces many outputs from a catalog, they will all be given the same string as @code{EXTNAME} (the file names containing the cropped HDU will be different).
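For example, a sketch with hypothetical coordinates, writing the cropped HDU with an @code{EXTNAME} of @code{SCI}:
@example
$ astcrop image.fits --mode=wcs --center=53.16,-27.78 \
          --width=1/60 --metaname=SCI
@end example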
@item -A
@itemx --append
If the output file already exists, append the cropped image HDU to the end of any existing HDUs.
By default (when this option isn't given), if an output file already exists, any existing HDU in it will be deleted.
If the output file doesn't exist, this option is redundant.
@item --primaryimghdu
Write the output into the primary (0-th) HDU/extension of the output.
By default, like all Gnuastro's default outputs, no data is written in the primary extension because the FITS standard suggests keeping that extension free of data and only for metadata.
@item -t
@itemx --oneelemstdout
When a crop only has a single element (a single pixel), print it to the standard output instead of making a file.
By default (without this option), a single-pixel crop will be saved to a file, just like a crop of any other size.
When a single crop is requested (either through @option{--center}, or when the given catalog has only one row), the single value alone is printed with nothing else.
This makes it easy to immediately write the value into a shell variable for example:
@example
value=$(astcrop img.fits --mode=wcs --center=1.234,5.678 \
                --width=1 --widthinpix --oneelemstdout \
                --quiet)
@end example
If a catalog of coordinates is given (that would produce multiple crops; or multiple values in this scenario), the solution for a single value will not work!
Recall that Crop will do the crops in parallel, therefore each time you run it, the order of the rows will be different and not correspond to the order of the inputs.
To allow identification of each value (which row of the input catalog it corresponds to), Crop will first print the name of the would-be created file, then print the value after it (separated by an empty SPACE character).
In other words, the file in the first column will not actually be created, but the value of the pixel it would have contained (if this option was not called) is printed after it.
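For example, a sketch of such a call over a hypothetical catalog (each printed line will be a would-be file name, followed by the pixel value):
@example
$ astcrop img.fits --mode=wcs --catalog=cat.fits --width=1 \
          --widthinpix --oneelemstdout --quiet
@end example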
@item -c FLT/INT
@itemx --checkcenter=FLT/INT
@cindex Check center of crop
Square box width of region in the center of the image to check for blank values.
If any of the pixels in this central region of a crop (defined by its center) are blank, then it will not be stored in an output file.
If the value to this option is zero, no checking is done.
This check is only applied when the cropped region(s) are defined by their center (not by the vertices, see @ref{Crop modes}) and when FITS files are made (@option{--oneelemstdout} is not called).
The units of the value are interpreted based on the @option{--mode} value (in WCS or pixel units).
The final checked region size (in pixels) will be an odd integer around the center (whether converted from WCS, or given as an even number of pixels to this option).
In WCS mode, the value can be given as fractions, for example, if the WCS units are in degrees, @code{0.1/3600} will correspond to a check size of 0.1 arcseconds.
Because survey regions do not often have a clean square or rectangle shape, some of the pixels on the sides of the survey FITS image do not commonly have any data and are blank (see @ref{Blank pixels}).
So when the catalog was not generated from the input image, it often happens that the image does not have data over some of the points.
When the given center of a crop falls in such regions or outside the dataset, and this option has a non-zero value, no crop will be created.
Therefore with this option, you can specify a width of a small box (3 pixels is often good enough) around the central pixel of the cropped image.
You can check which crops were created and which were not from the command-line (if @option{--quiet} was not called, see @ref{Operating mode options}), or in Crop's log file (see @ref{Crop output}).
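For example, assuming a hypothetical image with a pixel scale of 1 arcsecond per pixel, a central box of about 3 pixels can be checked in WCS mode like this:
@example
$ astcrop image.fits --mode=wcs --catalog=cat.fits \
          --width=20/3600 --checkcenter=3/3600
@end example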
@item -b
@itemx --noblank
Pixels outside of the input image that are in the crop box will not be used.
By default they are filled with blank values (depending on type), see @ref{Blank pixels}.
This option only applies in Image mode, see @ref{Crop modes}.
@item -z
@itemx --zeroisnotblank
In float or double images, it is common to give the value of zero to blank pixels.
If the input image type is one of these two types, such pixels will also be considered as blank.
You can disable this behavior with this option, see @ref{Blank pixels}.
@item --log
Generate a log file that contains the status of each crop in three columns: the name of the crop, whether its center overlapped with the input(s), and whether the central pixels were blank.
The name of the log file will be based on the output file name:
@itemize
@item
If no output file name is given, it will be @file{astcrop-log.fits}.
@item
If there is only a single crop and the output is a FITS image, the log file will have the same base name as the output crop, but with a @file{-log.fits} suffix.
In case @option{--oneelemstdout} is called with a single-pixel width, no output file will be created, so the log file's name will be the string given to @option{--output}.
@item
If the output is a directory name (when a catalog is given), the log file name will be the directory name along with a @file{-log.fits} suffix.
In case @option{--oneelemstdout} is called with a single-pixel width, no output file will be created, so the log file's name will be the string given to @option{--output}.
@end itemize
@end table
@node Crop known issues, , Crop output, Invoking astcrop
@subsubsection Crop known issues
When running Crop, you may encounter strange errors and bugs.
In these cases, please report a bug and we will try to fix it as soon as possible, see @ref{Report a bug}.
However, some things are beyond our control, or may take too long to fix directly.
In this section we list such known issues and suggest a hack (or work-around) for each:
@table @asis
@item Crash with @samp{Killed} when cropping catalog from @file{.fits.gz}
This happens because CFITSIO (the library that reads and writes FITS files) will internally decompress the file in a temporary place (possibly in RAM), then start reading from it.
On the other hand, by default when given a catalog (with many crops) and not specifying @option{--numthreads}, Crop will use the maximum number of threads available on your system to do each crop faster.
On a normal (not compressed) file, parallel access will not cause a problem; however, when attempting parallel access with the maximum number of threads on a compressed file, CFITSIO crashes with @code{Killed}.
Therefore the following solutions can be used to fix this crash:
@itemize
@item
Decrease the number of threads (at the minimum, set @option{--numthreads=1}).
Since this solution does not change any other component of your previous Crop command, or your local file structure, it is the preferred way.
@item
Decompress the file (with the command below) and feed the @file{.fits} file into Crop without changing the number of threads.
@example
$ gunzip -k image.fits.gz
@end example
@end itemize
@end table
@node Arithmetic, Convolve, Crop, Data manipulation
@section Arithmetic
It is commonly necessary to do operations on some or all of the elements of a dataset independently (pixels in an image).
For example, in the reduction of raw data it is necessary to subtract the Sky value (@ref{Sky value}) from each image.
Later (once the images are warped into a single grid using Warp for example, see @ref{Warp}), the images are co-added (the output pixel grid is the average of the pixels of the individual input images).
Arithmetic is Gnuastro's program for such operations on your datasets directly from the command-line.
It currently uses the reverse polish or post-fix notation, see @ref{Reverse polish notation} and will work on the native data types of the input images/data to reduce CPU and RAM resources, see @ref{Numeric data types}.
For more information on how to run Arithmetic, please see @ref{Invoking astarithmetic}.
@menu
* Reverse polish notation:: The current notation style for Arithmetic.
* Integer benefits and pitfalls:: Integers have benefits, but require care.
* Noise basics:: Introduction to various noise models.
* Arithmetic operators:: List of operators known to Arithmetic.
* Invoking astarithmetic:: How to run Arithmetic: options and output.
@end menu
@node Reverse polish notation, Integer benefits and pitfalls, Arithmetic, Arithmetic
@subsection Reverse polish notation
@cindex Post-fix notation
@cindex Reverse Polish Notation
The most common notation for arithmetic operations is the @url{https://en.wikipedia.org/wiki/Infix_notation, infix notation} where the operator goes between the two operands, for example, @mymath{4+5}.
The infix notation is the preferred way in most programming languages which come with scripting features for large programs.
However, the infix notation requires a way to define precedence when more than one operator is involved.
For example, consider the statement @code{5 + 6 / 2}.
Should 6 first be divided by 2, and the result added to 5?
Or should 5 first be added to 6, and the result divided by 2?
Therefore we need parentheses to show precedence: @code{5+(6/2)} or @code{(5+6)/2}.
Furthermore, if you need to leave a value for later processing, you will need to define a variable for it; for example, @code{a=(5+6)/2}.
Gnuastro provides libraries where you can also use infix notation in C or C++ programs.
However, Gnuastro's programs are primarily designed to be run on the command-line, where the level of complexity that infix notation requires can be annoying/confusing to write (expressions can get mixed up with the shell's parentheses or variable definitions).
Therefore Gnuastro's Arithmetic and Table (when doing column arithmetic) programs use the post-fix notation, also known as @url{https://en.wikipedia.org/wiki/Reverse_Polish_notation, reverse polish notation}.
For example, instead of writing @command{5+6}, we write @command{5 6 +}.
The Wikipedia article on the reverse polish notation provides an excellent explanation of this notation, but we will give a short summary here for self-sufficiency.
In short, in the reverse polish notation, the operator is placed after the operands.
As we will see below this removes the need to define parenthesis and lets you use previous values without needing to define a variable.
In the future@footnote{@url{https://savannah.gnu.org/task/index.php?13867}} we do plan to also optionally allow infix notation when arithmetic operations on datasets are desired, but due to time constraints on the developers we cannot do it immediately.
To easily understand how the reverse polish notation works, you can think of each operand (@code{5} and @code{6} in the example above) as a node in a ``last-in-first-out'' stack.
One such stack in daily life is a stack of dishes in the kitchen: you put a clean dish on top of the stack when it is ready for later use.
Later, when you need a dish, you pick the top one (hence the ``last'' dish placed ``in'' the stack is the ``first'' dish that comes ``out'' when necessary).
Each operator will need a certain number of operands (in the example above, the @code{+} operator needs two operands: @code{5} and @code{6}).
In the kitchen metaphor, an operator can be an oven.
Every time an operator is confronted, the operator takes (or ``pops'') the number of operands it needs from the top of the stack (so they do not exist in the stack any more), does its operation, and places (or ``pushes'') the result back on top of the stack.
So if you want the average of 5 and 6, you would write: @command{5 6 + 2 /}.
The operations that are done are:
@enumerate
@item
@command{5} is an operand, so Arithmetic pushes it to the top of the stack (which is initially empty).
In the kitchen metaphor, you can visualize this as taking a new dish from the cabinet, putting the number 5 inside of the dish, and putting the dish on top of the (empty) cooking table in front of you.
You now have a stack of one dish on the table in front of you.
@item
@command{6} is also an operand, so it is pushed to the top of the stack.
Like before, you can visualize this as taking a new dish from the cabinet, putting the number 6 in it and placing it on top of the previous dish.
You now have a stack of two dishes on the table in front of you.
@item
@command{+} is a @emph{binary} operator, so it will pop the top two elements of the stack out of it, and perform addition on them (the order is @mymath{5+6} in the example above).
The result is @command{11} which is pushed to the top of the stack.
To visualize this, you can think of the @code{+} operator as an oven with a place for two dishes.
You pick up the top-most dish (that has the number 6 in it) and put it in the oven.
The top dish is now the one that has the number 5.
You also pick it up and put it in the oven, and close the oven door.
When the oven has finished its cooking, it produces a single output (in one dish, with the number 11 inside of it).
You take that output dish and put it back on the table.
You now have a stack of one dish on the table in front of you.
@item
@command{2} is an operand, so it is pushed onto the top of the stack.
In the kitchen metaphor, you again go to the cabinet, pick up a dish and put the number 2 inside of it and put the dish over the previous dish (that has the number 11).
You now have a stack of two dishes on the table in front of you.
@item
@command{/} (division) is a binary operator, so it will pop the top two elements of the stack (top-most is @command{2}, then @command{11}) and divide the second one by the first.
In the kitchen metaphor, the @command{/} operator can be visualized as a microwave that takes two dishes.
But unlike the oven (@code{+} operator) before, the order of inputs matters (they are on top of each other: with the top dish holder being the numerator and the bottom one being the denominator).
Again, you look at your stack of dishes on the table.
You pick up the top one (with value 2 inside of it) and put it in the microwave's bottom (denominator) dish holder.
Then you go back to your stack of dishes on the table and pick up the top dish (with value 11 inside of it) and put that in the top (numerator) dish holder.
The microwave will do its work and when it is finished, returns a new dish with the single value 5.5 inside of it.
You pick up the dish from the microwave and place it back on the table.
@item
There are no more operands or operators, so simply return the remaining operand in the output.
In the kitchen metaphor, you see that your recipe has no more steps, so you just pick up the remaining dish and take it to the dining room to enjoy a good dinner.
@end enumerate
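You can try this logic directly on the command-line (the decimal point on the first operand avoids the integer-division pitfalls discussed in @ref{Integer benefits and pitfalls}; like the examples later in this chapter, you do not need to type the parts after @key{#}):
@example
$ astarithmetic --quiet 5. 6 + 2 /      # --> 5.5
@end example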
In the Arithmetic program, the operands can be FITS images of any dimensionality, or numbers (see @ref{Invoking astarithmetic}).
In Table's column arithmetic, they can be any column in the table (a series of numbers in an array) or a single number (see @ref{Column arithmetic}).
With this notation, very complicated procedures can be created without the need for parentheses or worrying about precedence.
Even functions which take an arbitrary number of arguments can be defined in this notation.
This is a very powerful notation and is used in languages like PostScript@footnote{See the EPS and PDF part of @ref{Recognized file formats} for a little more on the PostScript language.} (on which the PDF format is based).
@node Integer benefits and pitfalls, Noise basics, Reverse polish notation, Arithmetic
@subsection Integer benefits and pitfalls
Integers are the simplest numerical data types (@ref{Numeric data types}).
Because of this, their storage space is much less, and their processing is much faster than floating point types.
You can confirm this on your computer with the series of commands below.
You will make four 5000 by 5000 pixel images filled with random values.
Two of them will be saved as signed 8-bit integers, and two with 64-bit floating point types.
The last command prints the size of the created images.
@example
$ astarithmetic 5000 5000 2 makenew 5 mknoise-sigma int8 -oint-1.fits
$ astarithmetic 5000 5000 2 makenew 5 mknoise-sigma int8 -oint-2.fits
$ astarithmetic 5000 5000 2 makenew 5 mknoise-sigma float64 -oflt-1.fits
$ astarithmetic 5000 5000 2 makenew 5 mknoise-sigma float64 -oflt-2.fits
$ ls -lh int-*.fits flt-*.fits
@end example
The 8-bit integer images are only 24 MB, while the 64-bit floating point images are 191 MB!
Besides helping in storage (on your disk, or in RAM, while the program is running), the small size of these files also helps in faster reading of the inputs.
Furthermore, CPUs can process integer operations much faster than floating points.
Among the integers, those with a smaller width (number of bits) can be processed much faster.
You can see this with the two commands below, where you will add the integer images with each other and the floats with each other:
@example
$ astarithmetic flt-1.fits flt-2.fits + -oflt-sum.fits -g1
$ astarithmetic int-1.fits int-2.fits + -oint-sum.fits -g1
@end example
Have a look at the running time of the two commands above (that is printed on their last line).
On the system that this paragraph was written on, the floating point and integer image sums were respectively done in 0.481 and 0.089 seconds (the integer operation was almost 5 times faster!).
@cartouche
@noindent
@strong{If your data does not have decimal points, use integer types:} integer types are much faster and can take much less space in your storage or RAM (while the program is running).
@end cartouche
@cartouche
@noindent
@strong{Select the smallest width that can host the range/precision of values:} for example, if the largest possible value in your dataset is 1000 and all numbers are integers, store it as a 16-bit integer.
Also, if you know the values can never become negative, store it as an unsigned 16-bit integer.
For floating point types, if you know you will not need a precision of more than 6 significant digits, use the 32-bit floating point type.
For more on the range (for integers) and precision (for floats), see @ref{Numeric data types}.
@end cartouche
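For example, assuming @file{image.fits} contains only non-negative integers below 65535 (but is stored as a wider type), a sketch of such a conversion with the operators of @ref{Numerical type conversion operators} would be:
@example
$ astarithmetic image.fits uint16 --output=image-u16.fits
@end example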
There is a price to be paid for this improved efficiency in integers: your wisdom!
If you have not selected your types wisely, strange situations may happen.
For example, try the command below:
@example
$ astarithmetic 125 10 +
@end example
@cindex Integer overflow
@cindex Overflow, integer
@noindent
You expect the output to be @mymath{135}, but it will be @mymath{-121}!
The reason is that when Arithmetic (or column-arithmetic in Table) confronts a number on the command-line, it uses the principles above to select the most efficient type for each number.
Both @mymath{125} and @mymath{10} can safely fit within a signed, 8-bit integer type, so arithmetic will store both as an 8-bit integer.
However, the sum (@mymath{135}) is larger than the maximum possible value of an 8-bit signed integer (@mymath{127}).
Therefore an integer overflow will occur, and the bits will be over-written.
As a result, the value will be @mymath{135-128=7} more than the minimum value of this type (@mymath{-128}), which is @mymath{-128+7=-121}.
When you know situations like this may occur, you can simply use the @ref{Numerical type conversion operators} to set just one of the inputs to a wider data type (the smallest wider type, to avoid wasting resources).
In the example above, this would be @code{uint16}:
@example
$ astarithmetic 125 uint16 10 +
@end example
The reason this worked is that @mymath{125} is now converted into an unsigned 16-bit integer before the @code{+} operator.
Since this is larger than an 8-bit integer, the C programming language's automatic type conversion will treat both as the wider type and store the result of the binary operation (@code{+}) in that type.
For such a basic operation like the command above, a faster hack would be any of the two commands below (which are equivalent).
This is because @code{125.0} or @code{125.} are interpreted as floating-point types and they do not suffer from such issues (converting only on one input is enough):
@example
$ astarithmetic 125. 10 +
$ astarithmetic 125.0 10 +
@end example
For this particular command, the fix above will be as fast as the @code{uint16} solution.
This is because there are only two numbers, and the overhead of Arithmetic (reading configuration files, etc.) dominates the running time.
However, for large datasets, the @code{uint16} solution will be faster (as you saw above), Arithmetic will consume less RAM while running, and the output will consume less storage in your system (all major benefits)!
It is possible to do internal checks in Gnuastro and catch integer overflows and correct them internally.
However, we have not opted for this solution because all those checks will consume significant resources and slow down the program (especially with large datasets where RAM, storage and running time become important).
To be optimal, we therefore trust that you (the wise Gnuastro user!) make the appropriate type conversion in your commands where necessary (recall that the operators are available in @ref{Numerical type conversion operators}).
@node Noise basics, Arithmetic operators, Integer benefits and pitfalls, Arithmetic
@subsection Noise basics
@cindex Noise
@cindex Image noise
Deep astronomical images, like those used in extragalactic studies, seriously suffer from noise in the data.
Generally speaking, the sources of noise in an astronomical image are photon counting noise and instrumental noise, which are discussed in @ref{Photon counting noise} and @ref{Instrumental noise}.
This review finishes with @ref{Generating random numbers}, which is a short introduction to how random numbers are generated.
We will see that while software random number generators are not perfect, they allow us to obtain a reproducible series of random numbers through setting the random number generator function and seed value.
Therefore in this section, we will also discuss how you can set these two parameters in Gnuastro's programs (including the arithmetic operators in @ref{Random number generators}).
@menu
* Photon counting noise:: Poisson noise
* Instrumental noise:: Readout, dark current and other sources.
* Final noised pixel value:: How the final noised value is calculated.
* Generating random numbers:: How random numbers are generated.
@end menu
@node Photon counting noise, Instrumental noise, Noise basics, Noise basics
@subsubsection Photon counting noise
@cindex Counting error
@cindex de Moivre, Abraham
@cindex Poisson distribution
@cindex Photon counting noise
@cindex Poisson, Sim@'eon Denis
With the very accurate electronics used in today's detectors, photon counting noise@footnote{In practice, we are actually counting the electrons that are produced by each photon, not the actual photons.} is the most significant source of uncertainty in most datasets.
To understand this noise (error in counting) and its effect on the images of astronomical targets, let's start by reviewing how a distribution produced by counting can be modeled as a parametric function.
Counting is an inherently discrete operation, which can only produce positive integer outputs (including zero).
For example, we cannot count @mymath{3.2} or @mymath{-2} of anything.
We only count @mymath{0}, @mymath{1}, @mymath{2}, @mymath{3} and so on.
The distribution of values, as a result of counting efforts is formally known as the @url{https://en.wikipedia.org/wiki/Poisson_distribution, Poisson distribution}.
It is associated with Sim@'eon Denis Poisson, because he discussed it while working on the number of wrongful convictions in court cases in his 1837 book@footnote{[From Wikipedia] Poisson's result was also derived in a previous study by Abraham de Moivre in 1711.
Therefore some people suggest it should rightly be called the de Moivre distribution.}.
@cindex Probability density function
Let's take @mymath{\lambda} to represent the expected mean count of something.
Furthermore, let's take @mymath{k} to represent the output of a counting attempt (hence @mymath{k} is a positive integer).
The probability density function of getting @mymath{k} counts (in each attempt, given the expected/mean count of @mymath{\lambda}) can be written as:
@cindex Poisson distribution
@dispmath{f(k)={\lambda^k \over k!} e^{-\lambda},\quad k\in @{0, 1, 2, 3, \dots @}}
@cindex Skewed Poisson distribution
Because the Poisson distribution is only applicable to positive integer values (note the factorial operator, which only applies to non-negative integers), naturally it is very skewed when @mymath{\lambda} is near zero.
One qualitative way to understand this behavior is that for small means near zero, there simply are not as many integers smaller than the mean as there are integers larger than it.
Therefore to accommodate all possibilities/counts, it has to be strongly skewed to the positive when the mean is small.
For more on Skewness, see @ref{Skewness caused by signal and its measurement}.
@cindex Compare Poisson and Gaussian
As @mymath{\lambda} becomes larger, the distribution becomes more and more symmetric, and the variance of that distribution is equal to its mean.
In other words, the standard deviation is the square root of the mean.
It can also be proved that when the mean is large, say @mymath{\lambda>1000}, the Poisson distribution approaches the @url{https://en.wikipedia.org/wiki/Normal_distribution, Normal (Gaussian) distribution} with mean @mymath{\mu=\lambda} and standard deviation @mymath{\sigma=\sqrt{\lambda}}.
In other words, a Poisson distribution (with a sufficiently large @mymath{\lambda}) is simply a Gaussian that has one free parameter (@mymath{\mu=\lambda} and @mymath{\sigma=\sqrt{\lambda}}), instead of the two parameters that the Gaussian distribution originally has (independent @mymath{\mu} and @mymath{\sigma}).
@cindex Sky value
@cindex Background flux
@cindex Undetected objects
In real situations, the photons/flux from our targets are combined with photons from a certain background (observationally, the @emph{Sky} value).
The Sky value is defined to be the average flux of a region in the dataset with no targets.
Its physical origin can be the brightness of the atmosphere (for ground-based instruments), possible stray light within the imaging instrument, the average flux of undetected targets, etc.
The Sky value is thus an ideal definition, because in real datasets, what lies deep in the noise (far lower than the detection limit) is never known@footnote{In a real image, a relatively large number of very faint objects can be fully buried in the noise and never detected.
These undetected objects will bias the background measurement to slightly larger values.
Our best approximation is thus to simply assume they are uniform, and consider their average effect.
See Figure 1 (a.1 and a.2) and Section 2.2 in Akhlaghi and Ichikawa @url{https://arxiv.org/abs/1505.01664,2015}.}.
To account for all of these, the sky value is defined to be the average count/value of the undetected regions in the image.
In a mock image/dataset, we have the luxury of setting the background (Sky) value.
@cindex Simulating noise
@cindex Noise simulation
In summary, the value in each element of the dataset (pixel in an image) is the sum of contributions from various galaxies and stars (after convolution by the PSF, see @ref{PSF}).
Let's name the convolved sum of possibly overlapping objects in each pixel as @mymath{I_{nn}}.
@mymath{nn} represents `no noise'.
For now, let's assume the background (@mymath{B}) is constant and sufficiently high for the Poisson distribution to be approximated by a Gaussian.
Then the flux of that pixel, after adding noise, is @emph{a random value} taken from a Gaussian distribution with the following mean (@mymath{\mu}) and standard deviation (@mymath{\sigma}):
@dispmath{\mu=B+I_{nn}, \quad \sigma=\sqrt{B+I_{nn}}}
@cindex Bias level in detectors
@cindex Dark level in detectors
In astronomical instruments, @mymath{B} is enhanced by adding a ``bias'' level to each pixel before the shutter is even opened (for the exposure to start).
As the exposure is ongoing and photo-electrons are accumulating from the astronomical objects, a ``dark'' current (due to thermal radiation of the instrument) also builds up in the pixels.
The ``dark'' current will accumulate even when the shutter is closed, but the CCD electronics are working (hence the name ``dark'').
This added dark level further enhances the mean value in a real observation compared to the raw background value (from the atmosphere for example).
Since this type of noise is inherent in the objects we study, it is usually measured on the same scale as the astronomical objects, namely the magnitude system, see @ref{Brightness flux magnitude}.
It is then internally converted to the flux scale for further processing.
The equations above clearly show the importance of the background value and its effect on the final signal to noise ratio in each pixel of a science image.
It is therefore one of the most important factors in understanding the noise (and properly simulating observations where necessary).
An inappropriately bright background value can hide the signal of the mock profile behind the noise.
In other words, a brighter background has a larger standard deviation and vice versa.
As a result, the only necessary parameter to define photon-counting noise over a mock image of simulated profiles is the background.
For a complete example, see @ref{Sufi simulates a detection}.
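For a minimal command-line sketch, the @code{mknoise-sigma} operator of @ref{Random number generators} can be used: here we assume a hypothetical noise-less mock image @file{mock.fits}, a constant background of @mymath{B=1000} counts, and that the background dominates the signal (so @mymath{\sigma\approx\sqrt{B}} over the whole image, under the Gaussian approximation above):
@example
$ astarithmetic mock.fits 1000 + 1000 sqrt mknoise-sigma \
                --output=noised.fits
@end example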
To better understand the correlation between the mean (or background) value and the noise standard deviation, let's use an analogy.
Consider the profile of your galaxy to be analogous to the profile of a ship that is sailing in the sea.
The height of the ship would therefore be analogous to the maximum flux difference between your galaxy's minimum and maximum values.
Furthermore, let's take the depth of the sea to represent the background value: a deeper sea, corresponds to a brighter background.
In this analogy, the ``noise'' would be the height of the waves that surround the ship: in deeper waters, the waves would also be taller (the square root of the mean depth at the ship's position).
If the ship is in deep waters, the height of the waves is greater than when the ship is near the beach (at lower depths).
Therefore, when the ship is in the middle of the sea, there are high waves that are capable of hiding a significant part of the ship from our perspective.
This corresponds to a brighter background value in astronomical images: the resulting noise from that brighter background can completely wash out the signal from a fainter galaxy, star or solar system object.
@node Instrumental noise, Final noised pixel value, Photon counting noise, Noise basics
@subsubsection Instrumental noise
@cindex Readout noise
@cindex Instrumental noise
@cindex Noise, instrumental
While taking images with a camera, a bias current is fed to the pixels; the variation of this bias current over the pixels also adds to the final image noise.
Another source of noise is the readout noise, produced by the detector electronics that digitize the voltage of the accumulated photo-electrons in the analog-to-digital converter.
With the current generation of instruments, this source of noise is not as significant as the noise due to the background Sky discussed in @ref{Photon counting noise}.
Let @mymath{C} represent the combined standard deviation of all these instrumental sources of noise.
When only this source of noise is present, the noised pixel value would be a random value chosen from a Gaussian distribution with
@dispmath{\mu=I_{nn}, \quad \sigma=\sqrt{C^2+I_{nn}}}
@cindex ADU
@cindex Gain
@cindex Counts
This type of noise is independent of the signal in the dataset; it is only determined by the instrument.
So the flux scale (and not magnitude scale) is most commonly used for this type of noise.
In practice, this value is usually reported in analog-to-digital units or ADUs, not flux or electron counts.
The gain value of the device can be used to convert between these two, see @ref{Brightness flux magnitude}.
@node Final noised pixel value, Generating random numbers, Instrumental noise, Noise basics
@subsubsection Final noised pixel value
Based on the discussions in @ref{Photon counting noise} and @ref{Instrumental noise}, depending on the values you specify for @mymath{B} and @mymath{C} from the above, the final noised value for each pixel is a random value chosen from a Gaussian distribution with
@dispmath{\mu=B+I_{nn}, \quad \sigma=\sqrt{C^2+B+I_{nn}}}
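For example, with a background of @mymath{B=100} counts, instrumental noise of @mymath{C=10} and an object contributing @mymath{I_{nn}=25} counts in a pixel, that pixel's final value will be a random number drawn from a Gaussian with @mymath{\mu=100+25=125} and @mymath{\sigma=\sqrt{10^2+100+25}=\sqrt{225}=15}.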
@node Generating random numbers, , Final noised pixel value, Noise basics
@subsubsection Generating random numbers
@cindex Random numbers
@cindex Numbers, random
As discussed above, to generate noise we need to make random samples of a particular distribution.
So it is important to understand some general concepts regarding the generation of random numbers.
For a very complete and nice introduction we strongly advise reading Donald Knuth's ``The Art of Computer Programming'', volume 2, chapter 3@footnote{Knuth, Donald. 1998.
The Art of Computer Programming. Addison--Wesley. ISBN 0-201-89684-2}.
Quoting from the GNU Scientific Library manual, ``If you do not own it, you should stop reading right now, run to the nearest bookstore, and buy it''@footnote{For students, running to the library might be more affordable!}!
@cindex Pseudo-random numbers
@cindex Numbers, pseudo-random
Using only software, we can only produce what is called a pseudo-random sequence of numbers.
A true random number generator requires hardware (let's assume we have made sure it has no systematic biases), for example, throwing dice or flipping coins (methods that have remained from ancient times).
More modern hardware methods use atmospheric noise, thermal noise or other types of external electromagnetic or quantum phenomena.
All pseudo-random number generators (software) require a seed to be the basis of the generation.
The advantage of having a seed is that if you specify the same seed for multiple runs, you will get an identical sequence of random numbers which allows you to reproduce the same final noised image.
@cindex Environment variables
@cindex GNU Scientific Library
The programs in GNU Astronomy Utilities (for example, MakeNoise or MakeProfiles) use the GNU Scientific Library (GSL) to generate random numbers.
GSL allows the user to set the random number generator through environment variables, see @ref{Installation directory} for an introduction to environment variables.
In the chapter titled ``Random Number Generation'', the GSL manual fully explains the various random number generators that are available (there are a lot of them!).
Through the two environment variables @code{GSL_RNG_TYPE} and @code{GSL_RNG_SEED} you can specify the generator and its seed respectively.
@cindex Seed, Random number generator
@cindex Random number generator, Seed
If you do not specify a value for @code{GSL_RNG_TYPE}, GSL will use its default random number generator type.
The default type is sufficient for most general applications.
If no value is given for the @code{GSL_RNG_SEED} environment variable and you have asked Gnuastro to read the seed from the environment (through the @option{--envseed} option), then GSL will use the default value of each generator to give identical outputs.
If you do not explicitly tell Gnuastro programs to read the seed value from the environment variable, then they will use the system time (accurate to within a microsecond) to generate (apparently random) seeds.
In this manner, every time you run the program, you will get a different random number distribution.
There are two ways you can specify values for these environment variables.
You can call them on the same command-line for example:
@example
$ GSL_RNG_TYPE="taus" GSL_RNG_SEED=345 astarithmetic input.fits \
                      2 mknoise-sigma --envseed
@end example
@noindent
In this manner the values will only be used for this particular execution of Arithmetic.
However, it makes your code hard to read!
Alternatively, you can define them for the full period of your terminal session or script, using the shell's @command{export} command with the two separate commands below (for a script remove the @code{$} signs):
@example
$ export GSL_RNG_TYPE="taus"
$ export GSL_RNG_SEED=345
@end example
@cindex Startup scripts
@cindex @file{.bashrc}
@noindent
The programs that use GSL's random number generators will henceforth use these values in this session of the terminal, or while executing this script.
In case you want to set fixed values for these parameters every time you use the GSL random number generator, you can add these two lines to your @file{.bashrc} startup script@footnote{Do not forget that if you are going to give your scripts (that use the GSL random number generator) to others, you have to make sure you also tell them to set these environment variables separately.
So for scripts, it is best to keep all such variable definitions within the script, even if they are within your @file{.bashrc}.}, see @ref{Installation directory}.
@strong{IMPORTANT NOTE:} If the two environment variables @code{GSL_RNG_TYPE} and @code{GSL_RNG_SEED} are defined, GSL will report them by default, even if you do not use the @option{--envseed} option.
For example, see this call to MakeProfiles:
@example
$ export GSL_RNG_TYPE=taus
$ export GSL_RNG_SEED=345
$ astmkprof -s1 --kernel=gaussian,2,5
GSL_RNG_TYPE=taus
GSL_RNG_SEED=345
MakeProfiles V.VV started on DDD MMM DDD HH:MM:SS YYYY
- Building one gaussian kernel
- Random number generator (RNG) type: taus
- Basic RNG seed: 1618960836
---- ./kernel.fits created.
-- Output: ./kernel.fits
MakeProfiles finished in 0.068945 seconds
@end example
@noindent
@cindex Seed, Random number generator
@cindex Random number generator, Seed
The first two output lines (showing the names and values of the GSL environment variables) are printed by GSL before MakeProfiles actually starts generating random numbers.
Gnuastro's programs will independently report the actual values they use (after the name of the program); you should check them for the final values used, not GSL's printed values.
In the example above, did you notice how the random number generator seed above is different between GSL and MakeProfiles?
However, if @option{--envseed} was given, both printed seeds would be the same.
@node Arithmetic operators, Invoking astarithmetic, Noise basics, Arithmetic
@subsection Arithmetic operators
In this section, the operators recognized by Arithmetic (and the Table program's @ref{Column arithmetic}) are listed and discussed in detail with examples.
As mentioned before, to easily allow complex operations on the command-line, the reverse Polish notation is used (where you write `@mymath{4\quad5\quad+}' instead of `@mymath{4 + 5}'); if you are not already familiar with it, please see @ref{Reverse polish notation} before continuing.
The operands to all operators can be a data array (for example, a FITS image or data cube) or a number; the output will be an array or number according to the inputs.
For example, a number multiplied by an array will produce an array.
The numerical data type of the output of each operator is described within it.
Here are some generic tips and tricks (relevant to all operators):
@table @asis
@item Multiple operators in one command
When you need several consecutive arithmetic operations, you can perform all calculations in a single command instead of multiple commands.
For example, assume you want to apply a threshold of 10 on your image, and label the connected groups of pixels above this threshold.
You need two operators for this: @code{gt} (for ``greater than'', see @ref{Conditional operators}) and @code{connected-components} (see @ref{Mathematical morphology operators}).
The bad (non-optimized and slow) way of doing this is to call Arithmetic two times:
@example
$ astarithmetic image.fits 10 gt --output=thresh.fits
$ astarithmetic thresh.fits 2 connected-components \
--output=labeled.fits
$ rm thresh.fits
@end example
The good (optimal) way is to call them after each other (remember @ref{Reverse polish notation}):
@example
$ astarithmetic image.fits 10 gt 2 connected-components \
--output=labeled.fits
@end example
You can similarly add any number of operations that must be done sequentially in a single command and benefit from the speed and lack of intermediate files.
When your commands become long, you can use the @code{set-AAA} operator to make it more readable, see @ref{Operand storage in memory or a file}.
@item Blank pixels in Arithmetic
Blank pixels in the image (see @ref{Blank pixels}) will be stored based on the data type.
When the input is floating point type, blank values are NaN.
One aspect of NaN values is that by definition they will fail on @emph{any} comparison.
Also, any operator that includes a NaN as an operand will produce a NaN (irrespective of its other operands).
Hence both equal and not-equal operators will fail when both their operands are NaN!
Therefore, the only way to guarantee selection of blank pixels is through the @command{isblank} operator explained above.
One way you can exploit this property of the NaN value to your advantage is when you want a fully zero-valued image (even over the blank pixels) based on an already existing image (with same size and world coordinate system settings).
The following command will produce this for you:
@example
$ astarithmetic input.fits nan eq --output=all-zeros.fits
@end example
@noindent
Note that on the command-line you can write NaN in any case (for example, @command{NaN}, or @command{NAN} are also acceptable).
Reading NaN as a floating point number in Gnuastro is not case-sensitive.
@end table
@menu
* Basic mathematical operators:: For example, +, -, /, log, and pow.
* Trigonometric and hyperbolic operators:: sin, cos, atan, asinh, etc.
* Constants:: Physical and Mathematical constants.
* Coordinate conversion operators:: For example equatorial J2000 to Galactic.
* Unit conversion operators:: Various unit conversions necessary.
* Statistical operators:: Statistics of a single dataset (for example, mean).
* Coadding operators:: Coadding or combining multiple datasets into one.
* Filtering operators:: Smoothing a dataset through mixing pixel with neighbors.
* Pooling operators:: Reducing size through statistics of pixels in window.
* Interpolation operators:: Giving blank pixels a value.
* Dimensionality changing operators:: Collapse or expand a dataset.
* Conditional operators:: Select certain pixels within the dataset.
* Mathematical morphology operators:: Work on binary images, for example, erode.
* Bitwise operators:: Work on bits within one pixel.
* Numerical type conversion operators:: Convert the numeric datatype of a dataset.
* Random number generators:: Random numbers can be used to add noise for example.
* Coordinate and border operators:: Return edges of 2D boxes.
* Loading external columns:: Read a column from a table into the stack.
* Size and position operators:: Extracting image size and pixel positions.
* New operands:: How to construct an empty dataset from scratch.
* Operand storage in memory or a file:: Tools for complex operations in one command.
@end menu
@node Basic mathematical operators, Trigonometric and hyperbolic operators, Arithmetic operators, Arithmetic operators
@subsubsection Basic mathematical operators
These are some of the most common operations you will be doing on your data, so no further explanation is necessary.
If you are new to Gnuastro, just read the description of each carefully.
@table @command
@item +
Addition, so ``@command{4 5 +}'' is equivalent to @mymath{4+5}.
For example, in the command below, the value 20000 is added to each pixel's value in @file{image.fits}:
@example
$ astarithmetic 20000 image.fits +
@end example
You can also use this operator to add the values of corresponding pixels in two images (which have to be the same size).
For example, in the commands below (which are identical, see paragraph after the commands), each pixel of @file{sum.fits} is the sum of the same pixel's values in @file{a.fits} and @file{b.fits}.
@example
$ astarithmetic a.fits b.fits + -h1 -h1 --output=sum.fits
$ astarithmetic a.fits b.fits + -g1 --output=sum.fits
@end example
The HDU/extension has to be specified for each image with @option{-h}.
However, if the HDUs are the same in all inputs, you can use @option{-g} to specify the HDU only once.
If you need to add more than one dataset, one way is to use this operator multiple times, for example, see the two commands below that are identical in the Reverse Polish Notation (@ref{Reverse polish notation}):
@example
$ astarithmetic a.fits b.fits + c.fits + -osum.fits
$ astarithmetic a.fits b.fits c.fits + + -osum.fits
@end example
However, this can get annoying/buggy if you have more than three or four images; in that case, a better way to sum data is to use the @code{sum} operator (which also ignores blank pixels), discussed in @ref{Coadding operators}.
@cartouche
@noindent
@strong{NaN values:} if a single argument of @code{+} has a NaN value, the output will also be NaN.
To ignore NaN values, use the @code{sum} operator of @ref{Coadding operators}.
You can see the difference with the two commands below:
@example
$ astarithmetic --quiet 1.0 2.0 3.0 nan + + +
nan
$ astarithmetic --quiet 1.0 2.0 3.0 nan 4 sum
6.000000e+00
@end example
The same goes for all the @ref{Coadding operators} so if your data may include NaN pixels, be sure to use the coadding operators.
@end cartouche
@item -
Subtraction, so ``@command{4 5 -}'' is equivalent to @mymath{4-5}.
Usage of this operator is similar to the @command{+} operator, for example:
@example
$ astarithmetic 20000 image.fits -
$ astarithmetic a.fits b.fits - -g1 --output=sub.fits
@end example
@item x
Multiplication, so ``@command{4 5 x}'' is equivalent to @mymath{4\times5}.
For example, in the command below, the value of each output pixel is 5 times its value in @file{image.fits}:
@example
$ astarithmetic image.fits 5 x
@end example
And you can multiply the value of each pixel in two images, like this:
@example
$ astarithmetic a.fits a.fits x -g1 --output=multip.fits
@end example
@item /
Division, so ``@command{4 5 /}'' is equivalent to @mymath{4/5}.
Usage is similar to the multiplication operator, for example:
@example
$ astarithmetic image.fits 5 -h1 /
$ astarithmetic a.fits b.fits / -g1 --output=div.fits
@end example
@item %
Modulo (remainder), so ``@command{3 2 %}'' will return @mymath{1}.
Note that the modulo operator only works on integer types (see @ref{Numeric data types}).
This operator is therefore not defined for most processed astronomical images, which have floating-point values.
However, it is useful for labeled images (for example, @ref{Segment output}).
In such cases, each pixel is the integer label of the object it is associated with.
Hence, with the example command below, we can change the non-zero labels to only be between 1 and 4, decreasing the number of labeled objects in the image to 4/5th (all objects with a label that is a multiple of 5 will be set to 0).
@example
$ astarithmetic label.fits 5 %
@end example
@item abs
Absolute value of first operand, so ``@command{4 abs}'' is equivalent to @mymath{|4|}.
For example, the output of the command below will not have any negative pixels (all negative pixels will be multiplied by @mymath{-1} to become positive):
@example
$ astarithmetic image.fits abs
@end example
@item pow
First operand to the power of the second, so ``@command{4.3 5 pow}'' is equivalent to @mymath{4.3^{5}}.
For example, with the command below, all pixels will be squared:
@example
$ astarithmetic image.fits 2 pow
@end example
@item sqrt
The square root of the first operand, so ``@command{5 sqrt}'' is equivalent to @mymath{\sqrt{5}}.
Since the square root is only defined for positive values, any negative-valued pixel will become NaN (blank).
The output will have a floating point type, but its precision is determined from the input: if the input is a 64-bit floating point, the output will also be 64-bit.
Otherwise, the output will be 32-bit floating point (see @ref{Numeric data types} for the respective precision).
Therefore if you require 64-bit precision in estimating the square root, convert the input to 64-bit floating point first, for example, with @code{5 float64 sqrt}.
For example, each pixel of the output of the command below will be the square root of that pixel in the input.
@example
$ astarithmetic image.fits sqrt
@end example
If you just want to scale an image with negative values using this operator (for better visual inspection, and the actual values do not matter for you), you can subtract the image from its minimum value, then take its square root:
@example
$ astarithmetic image.fits image.fits minvalue - sqrt -g1
@end example
Alternatively, to avoid reading the image into memory two times, you can use the @code{set-} operator to read it into the named operand @code{i} and use @code{i} two times to speed up the operation (described below):
@example
$ astarithmetic image.fits set-i i i minvalue - sqrt
@end example
@item log
Natural logarithm of first operand, so ``@command{4 log}'' is equivalent to @mymath{ln(4)}.
Negative pixels will become NaN, and the output type is determined from the input, see the explanation under @command{sqrt} for more on these features.
For example, the command below will take the natural logarithm of every pixel in the input.
@example
$ astarithmetic image.fits log --output=log.fits
@end example
@item log10
Base-10 logarithm of first popped operand, so ``@command{4 log10}'' is equivalent to @mymath{log_{10}(4)}.
Negative pixels will become NaN, and the output type is determined from the input, see the explanation under @command{sqrt} for more on these features.
For example, the command below will take the base-10 logarithm of every pixel in the input.
@example
$ astarithmetic image.fits log10
@end example
@end table
@node Trigonometric and hyperbolic operators, Constants, Basic mathematical operators, Arithmetic operators
@subsubsection Trigonometric and hyperbolic operators
All the trigonometric and hyperbolic functions are described here.
One good thing about these operators is that they take and return values in degrees (which we usually need as input or output), not radians (like most other programs/libraries).
@table @command
@item sin
@itemx cos
@itemx tan
@cindex Trigonometry
Basic trigonometric functions.
They take one operand, in units of degrees.
@item asin
@itemx acos
@itemx atan
Inverse trigonometric functions.
They take one operand and the returned values are in units of degrees.
@item atan2
Inverse tangent (output in units of degrees) that uses the signs of the input coordinates to distinguish between the quadrants.
This operator therefore needs two operands: the first popped operand is assumed to be the X axis position of the point, and the second popped operand is its Y axis coordinate.
For example, see the commands below.
To be more clear, we are using Table's @ref{Column arithmetic} which uses exactly the same internal library function as the Arithmetic program for images.
We are showing the results for four points in the four quadrants of the 2D space (if you want to try running them, you do not need to type/copy the parts after @key{#}).
The first point (2,2) is in the first quadrant, therefore the returned angle is 45 degrees.
But the second, third and fourth points are in the quadrants of the same order, and the returned angles reflect the quadrant.
@example
$ echo " 2 2" | asttable -c'arith $2 $1 atan2' # --> 45
$ echo " 2 -2" | asttable -c'arith $2 $1 atan2' # --> -45
$ echo "-2 -2" | asttable -c'arith $2 $1 atan2' # --> -135
$ echo "-2 2" | asttable -c'arith $2 $1 atan2' # --> 135
@end example
However, if you simply use the classic arc-tangent operator (@code{atan}) for the same points, the result will only be in two quadrants as you see below:
@example
$ echo " 2 2" | asttable -c'arith $2 $1 / atan' # --> 45
$ echo " 2 -2" | asttable -c'arith $2 $1 / atan' # --> -45
$ echo "-2 -2" | asttable -c'arith $2 $1 / atan' # --> 45
$ echo "-2 2" | asttable -c'arith $2 $1 / atan' # --> -45
@end example
@item sinh
@itemx cosh
@itemx tanh
@cindex Hyperbolic functions
Hyperbolic sine, cosine, and tangent.
These operators take a single operand.
@item asinh
@itemx acosh
@itemx atanh
Inverse Hyperbolic sine, cosine, and tangent.
These operators take a single operand.
@end table
@node Constants, Coordinate conversion operators, Trigonometric and hyperbolic operators, Arithmetic operators
@subsubsection Constants
@cindex Pi
During your analysis it is often necessary to have certain constants like the number @mymath{\pi}.
The ``operators'' in this section do not actually take any operand; they just push the desired constant onto the stack.
So in effect, these are actually operands.
But since their values are not inserted by the user, we have placed them in the list of operators.
@table @code
@item e
@cindex e (base of natural logarithm)
@cindex Euler's number (@mymath{e})
@cindex Base of natural logarithm (@mymath{e})
Euler's number, or the base of the natural logarithm (no units).
See @url{https://en.wikipedia.org/wiki/E_(mathematical_constant), Wikipedia}.
@item pi
@cindex Pi
Ratio of a circle's circumference to its diameter (no units).
See @url{https://en.wikipedia.org/wiki/Pi, Wikipedia}.
@item c
@cindex Speed of light
The speed of light in vacuum, in units of @mymath{m/s}.
See @url{https://en.wikipedia.org/wiki/Speed_of_light, Wikipedia}.
@item G
@cindex @mymath{g} (gravitational constant)
@cindex Gravitational constant (@mymath{g})
The gravitational constant, in units of @mymath{m^3/kg/s^2}.
See @url{https://en.wikipedia.org/wiki/Gravitational_constant, Wikipedia}.
@item h
@cindex @mymath{h} (Planck's constant)
@cindex Planck's constant (@mymath{h})
Planck's constant, in units of @mymath{J/Hz} or @mymath{kg\times m^2/s}.
See @url{https://en.wikipedia.org/wiki/Planck_constant, Wikipedia}.
@item au
@cindex Astronomical Unit (AU)
@cindex AU (Astronomical Unit)
Astronomical Unit, in units of meters.
See @url{https://en.wikipedia.org/wiki/Astronomical_unit, Wikipedia}.
@item ly
@cindex Light year
Distance covered by light in vacuum in one year, in units of meters.
See @url{https://en.wikipedia.org/wiki/Light-year, Wikipedia}.
@item avogadro
@cindex Avogadro's number
Avogadro's constant, in units of @mymath{1/mol}.
See @url{https://en.wikipedia.org/wiki/Avogadro_constant, Wikipedia}.
@item fine-structure
@cindex Fine structure constant
The fine-structure constant (no units).
See @url{https://en.wikipedia.org/wiki/Fine-structure_constant, Wikipedia}.
@end table
@node Coordinate conversion operators, Unit conversion operators, Constants, Arithmetic operators
@subsubsection Coordinate conversion operators
@cindex Galactic coordinate system
@cindex Ecliptic coordinate system
@cindex Equatorial coordinate system
Different celestial coordinate systems are useful for different scenarios.
For example, assume you have the RA and Dec of a large sample of galaxies whose halos you plan to study.
For such studies, you prefer to stay as far away as possible from the Galactic plane, because the density of stars and interstellar filaments (cirrus) significantly increases as you get close to the Milky Way's disk.
But the @url{https://en.wikipedia.org/wiki/Equatorial_coordinate_system, Equatorial coordinate system}, which defines RA and Dec, is based on Earth's equator and does not show the position of your objects in relation to the Galactic disk.
The best way forward in the example above is to convert your RA and Dec table into the @url{https://en.wikipedia.org/wiki/Galactic_coordinate_system, Galactic coordinate system}; and select those with a large (positive or negative) Galactic latitude.
Alternatively, if you observe a bright point on a galaxy and want to confirm if it was actually a super-nova and not a moving asteroid, a first step is to convert your RA and Dec to the @url{https://en.wikipedia.org/wiki/Ecliptic_coordinate_system, Ecliptic coordinate system} and confirm if you are sufficiently distant from the ecliptic (plane of the Solar System; where fast moving objects are most common).
The operators described in this section are precisely for the purpose above: to convert various celestial coordinate systems that are supported within Gnuastro into each other.
For example, if you want to convert the RA and Dec equatorial (at the Julian year 2000 equinox) coordinates (within the @code{RA} and @code{DEC} columns) of @file{points.fits} into Galactic longitude and latitude, you can use the command below (the column metadata are not mandatory, but to avoid later confusion, it is always good to have them in your output).
@example
$ asttable points.fits -c'arith RA DEC eq-j2000-to-galactic' \
--colmetadata=1,GLON,deg,"Galactic longitude" \
--colmetadata=2,GLAT,deg,"Galactic latitude" \
--output=points-gal.fits
@end example
One important thing to consider is that the equatorial and ecliptic coordinates are not static: they include the dynamics of Earth in the solar system: in particular, the reference point on the equator moves over decades.
Therefore these two (equatorial and ecliptic) coordinate systems are defined within epochs: the 1950 epoch is defined by @url{https://en.wikipedia.org/wiki/Epoch_(astronomy)#Besselian_years, Besselian years}, while the 2000 epoch is defined in @url{https://en.wikipedia.org/wiki/Epoch_(astronomy)#Julian_years_and_J2000, Julian years}.
So when dealing with these coordinates, one of the `@code{-b1950}' or `@code{-j2000}' suffixes is necessary (for example @code{eq-j2000} or @code{ec-b1950}).
@cindex ICRS
The Galactic or Supergalactic coordinates are not defined based on the Earth's dynamics; therefore they do not have any epoch associated with them.
Extra-galactic studies do not depend on the dynamics of the Earth, but the equatorial coordinate system is the most dominant in that field.
Therefore in its 23rd General Assembly, the International Astronomical Union approved the @url{https://en.wikipedia.org/wiki/International_Celestial_Reference_System_and_its_realizations, International Celestial Reference System} or ICRS, based on quasars (which are static within our observational limitations) viewed through long-baseline radio interferometry (the most accurate method of observation that we currently have).
ICRS is designed to be within the errors of the Equatorial J2000 coordinate system, so they are currently very similar; but ICRS has much better accuracy.
We will be adding ICRS in the operators below soon.
@strong{Floating point errors:} The conversion between coordinate systems involves many sines and cosines (and their inverses).
Therefore, floating point errors (due to the limited precision of the bit-level representation of floating point numbers) can cause small offsets.
For example, see the code below, where we convert equatorial to Galactic and back, then compare the input and output (the difference is in the 5th and 6th decimals of a degree; about 0.3 and 0.01 arcseconds respectively).
@example
$ sys1=eq-j2000
$ sys2=galactic
$ echo "10.2345689 45.6789012" \
| asttable -Afixed -B8 \
-c'arith $1 $2 '$sys1'-to-'$sys2' \
'$sys2'-to-'$sys1' set-lat set-lng \
lng $1 - lat $2 -'
0.00000363 -0.00007725
@end example
If you set @code{sys2=ec-j2000} or @code{sys2=supergalactic}, the difference will be zero over the full set of 8 decimals that are printed here (the displayed precision can be changed with the value of the @code{-B} option above).
It is therefore useful to keep your original coordinates (in the same table for example) and to avoid chaining many conversions on top of each other (which propagates this error).
@table @code
@item eq-b1950-to-eq-j2000
@itemx eq-b1950-to-ec-b1950
@itemx eq-b1950-to-ec-j2000
@itemx eq-b1950-to-galactic
@itemx eq-b1950-to-supergalactic
Convert Equatorial (B1950 equinox) coordinates into the respective coordinate system within each operator's name.
@item eq-j2000-to-eq-b1950
@itemx eq-j2000-to-ec-b1950
@itemx eq-j2000-to-ec-j2000
@itemx eq-j2000-to-galactic
@itemx eq-j2000-to-supergalactic
Convert Equatorial (J2000 equinox) coordinates into the respective coordinate system within each operator's name.
@item ec-b1950-to-eq-b1950
@itemx ec-b1950-to-eq-j2000
@itemx ec-b1950-to-ec-j2000
@itemx ec-b1950-to-galactic
@itemx ec-b1950-to-supergalactic
Convert Ecliptic (B1950 equinox) coordinates into the respective coordinate system within each operator's name.
@item ec-j2000-to-eq-b1950
@itemx ec-j2000-to-eq-j2000
@itemx ec-j2000-to-ec-b1950
@itemx ec-j2000-to-galactic
@itemx ec-j2000-to-supergalactic
Convert Ecliptic (J2000 equinox) coordinates into the respective coordinate system within each operator's name.
@item galactic-to-eq-b1950
@itemx galactic-to-eq-j2000
@itemx galactic-to-ec-b1950
@itemx galactic-to-ec-j2000
@itemx galactic-to-supergalactic
Convert Galactic coordinates into the respective coordinate system within each operator's name.
@item supergalactic-to-eq-b1950
@itemx supergalactic-to-eq-j2000
@itemx supergalactic-to-ec-b1950
@itemx supergalactic-to-ec-j2000
@itemx supergalactic-to-galactic
Convert Supergalactic coordinates into the respective coordinate system within each operator's name.
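For example, as a quick sanity check, converting the Galactic center (@mymath{l=0}, @mymath{b=0}) to equatorial (J2000) coordinates with the command below should return approximately (266.405, -28.936) degrees:
@example
$ echo "0 0" | asttable -c'arith $1 $2 galactic-to-eq-j2000'
@end example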
@end table
@node Unit conversion operators, Statistical operators, Coordinate conversion operators, Arithmetic operators
@subsubsection Unit conversion operators
It often happens that you have data in one unit (for example, counts on your CCD), but would like to convert it into another (for example, magnitudes, to measure the brightness of a galaxy).
While the equations for the unit conversions can be easily found on the internet, the operators in this section are designed to simplify the process and let you do it easily and fast without having to remember constants and relations.
@table @command
@item counts-to-mag
Convert counts (usually CCD outputs) to magnitudes using the given zero point.
The zero point is the first popped operand and the count image or value is the second popped operand.
For example, assume you have measured the standard deviation of the noise in an image to be @code{0.1} counts, that the image's zero point is @code{22.5}, and that you want to measure the @emph{per-pixel} surface brightness limit of the dataset@footnote{The @emph{per-pixel} surface brightness limit is the magnitude of the noise standard deviation. For more on surface brightness see @ref{Brightness flux magnitude}.
In the example command, because the output is a single number, we are using @option{--quiet} to avoid printing extra information.}.
To apply this operator on an image, simply replace @code{0.1} with the image name, as described below.
@example
$ astarithmetic 0.1 22.5 counts-to-mag --quiet
@end example
Of course, you can also convert every pixel in an image (or table column in Table's @ref{Column arithmetic}) with this operator if you replace the second popped operand with an image/column name.
For an example of applying this operator on an image, see the description of surface brightness in @ref{Brightness flux magnitude}, where we will convert an image's pixel values to surface brightness.
@item mag-to-counts
Convert magnitudes to counts (usually CCD outputs) using the given zero point.
The zero point is the first popped operand and the magnitude value is the second.
For example, if an object has a magnitude of 20, you can estimate its counts (when the image has a zero point of 24.8) with the command below.
Note that because the output is a single number, we use @option{--quiet} to avoid printing extra information.
@example
$ astarithmetic 20 24.8 mag-to-counts --quiet
@end example
@item mag-to-luminosity
@cindex Luminosity to Apparent Magnitude
@cindex Apparent Magnitude to Luminosity
Convert the given apparent magnitude to luminosity (in units of the luminosity of a reference object).
It takes the following three operands (in the order written on the command-line; see example below):
@enumerate
@item
The measured apparent magnitude that is corrected for the ISM absorption/extinction.
You can find the ISM absorption/extinction of a certain position on the sky using Gnuastro's Query program, which gives you direct access to the NED Extinction Calculator as described in @ref{Available databases}.
@item
@cindex Distance modulus
@cindex Sun's absolute magnitude
@cindex Absolute magnitude of Sun
Absolute magnitude of the reference object in the same filter and magnitude system (for example Vega or AB) that the measured apparent magnitude was derived from.
The reference object is conventionally the Sun.
The Sun's absolute magnitude in various commonly used filters and magnitude systems can be found in table 3 of Willmer @url{https://arxiv.org/abs/1804.07788, 2018}.
For example the Sun's absolute magnitude in the SDSS u, g, r, i and z filters (with the AB magnitude system) is respectively 6.39, 5.11, 4.65, 4.53 and 4.50.
@item
@cindex K-Correction
Difference of apparent (@mymath{m}) and absolute (@mymath{M}) magnitudes: @mymath{m-M}.
At small distances or when the input is bolometric (across all wavelengths), the distance modulus can be used for this.
See @option{--distancemodulus} in @ref{CosmicCalculator basic cosmology calculations} for more details and why @option{--absmagconv} is preferred to the distance modulus in the absence of SED-based methods.
@end enumerate
For example, let's assume the apparent AB magnitude of a galaxy (after correcting Galactic extinction) is 20 in the SDSS g filter, and that its redshift is 0.01 and we need its luminosity (in units of solar luminosity) in the same filter.
To do this, we need the apparent-absolute magnitude difference (using CosmicCalculator), as well as the Sun's absolute magnitude, from Willmer @url{https://arxiv.org/abs/1804.07788,2018} in this filter (which is 5.11):
@example
$ conv=$(astcosmiccal --absmagconv --redshift=0.01)
$ astarithmetic 20 5.11 $conv mag-to-luminosity
@end example
@item luminosity-to-mag
Convert luminosity (in units of solar luminosity) to apparent magnitude.
This is the inverse of @code{mag-to-luminosity}; see its description for the details and a usage example.
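For example, a minimal sketch of the inverse of the @code{mag-to-luminosity} example above, assuming the same operand order (with the luminosity, here @mymath{10^8} solar luminosities, in place of the apparent magnitude):
@example
$ conv=$(astcosmiccal --absmagconv --redshift=0.01)
$ astarithmetic 1e8 5.11 $conv luminosity-to-mag
@end example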
@item counts-to-sb
Convert counts to surface brightness using the zero point and area (in units of arcsec@mymath{^2}).
The first popped operand is the area (in arcsec@mymath{^2}), the second popped operand is the zero point and the third contains the count values.
Estimating the surface brightness involves taking the logarithm.
Therefore this operator will produce NaN for counts with a negative value.
For example, with the commands below, we read the zero point from the image headers (assuming it is in the @code{ZPOINT} keyword), we calculate the pixel area from the image itself, and we call this operator to convert the image pixels (in counts) to surface brightness (mag/arcsec@mymath{^2}).
@example
$ zeropoint=$(astfits image.fits --keyvalue=ZPOINT -q)
$ pixarea=$(astfits image.fits --pixelareaarcsec2)
$ astarithmetic image.fits $zeropoint $pixarea counts-to-sb \
--output=image-sb.fits
@end example
For more on the definition of surface brightness see @ref{Brightness flux magnitude}, and for a full tutorial on its optimal usage, see @ref{FITS images in a publication}.
@item sb-to-counts
Convert surface brightness to counts, using the zero point and area (in units of arcsec@mymath{^2}).
The first popped operand is the area (in arcsec@mymath{^2}), the second popped operand is the zero point and the third contains the surface brightness values.
See the description of @command{counts-to-sb} for more.
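For example, the commands below invert the @command{counts-to-sb} example above (assuming a hypothetical surface brightness image @file{image-sb.fits} with its zero point in the @code{ZPOINT} keyword):
@example
$ zeropoint=$(astfits image-sb.fits --keyvalue=ZPOINT -q)
$ pixarea=$(astfits image-sb.fits --pixelareaarcsec2)
$ astarithmetic image-sb.fits $zeropoint $pixarea sb-to-counts \
                --output=image-counts.fits
@end example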
@item mag-to-sb
Convert magnitudes to surface brightness over a certain area (in units of arcsec@mymath{^2}).
The first popped operand is the area and the second is the magnitude.
For example, let's assume you have a table with the two columns of magnitude (called @code{MAG}) and area (called @code{AREAARCSEC2}).
In the command below, we will use @ref{Column arithmetic} to return the surface brightness.
@example
$ asttable table.fits -c'arith MAG AREAARCSEC2 mag-to-sb'
@end example
@item sb-to-mag
Convert surface brightness to magnitudes over a certain area (in units of arcsec@mymath{^2}).
The first popped operand is the area and the second is the surface brightness.
See the description of @code{mag-to-sb} for more.
@item counts-to-jy
@cindex AB magnitude
@cindex Magnitude, AB
Convert counts (usually CCD outputs) to Janskys through an AB-magnitude based zero point.
The top-popped operand is assumed to be the AB-magnitude zero point and the second-popped operand is assumed to be a dataset in units of counts (an image in Arithmetic, and a column in Table's @ref{Column arithmetic}).
For the full equation and basic definitions, see @ref{Brightness flux magnitude}.
@cindex SDSS
@cindex Nanomaggy
For example, SDSS images are calibrated in units of nanomaggies, with a fixed zero point magnitude of 22.5.
Therefore you can convert the units of SDSS image pixels to Janskys with the command below:
@example
$ astarithmetic sdss-image.fits 22.5 counts-to-jy
@end example
@item jy-to-counts
Convert Janskys to counts (usually CCD outputs) through an AB-magnitude based zero point.
This is the inverse operation of the @code{counts-to-jy}, see there for usage example.
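For example, a sketch assuming the same operand order as @code{counts-to-jy} (the zero point as the top operand), converting a hypothetical Jansky-calibrated image into SDSS-like counts (nanomaggies; zero point of 22.5):
@example
$ astarithmetic jy-image.fits 22.5 jy-to-counts
@end example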
@item zeropoint-change
@cindex Zero point change
Change the zero point of the input dataset to a new zero point (output will always be 64-bit floating point).
For example, with the command below, we are changing the zero point of an image from 26.892 to 27.0:
@example
$ astarithmetic image.fits 26.892 27.0 zeropoint-change
@end example
Note that the input or output zero points can also be an image (with the same size as the input image).
This is very useful to correct situations where the zero point can change over the image (happens in single exposures) and you want to unify it across the whole image (to build a deep coadded image).
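For example, assuming a hypothetical image @file{zp.fits} that contains the spatially varying zero point of @file{image.fits}, unifying the zero point to 27.0 could look like this:
@example
$ astarithmetic image.fits zp.fits 27.0 zeropoint-change \
                --output=unified.fits
@end example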
@item counts-to-nanomaggy
@cindex Nanomaggy
Convert counts to Nanomaggy (with fixed zero point of 22.5, used as the pixel units of many surveys like SDSS).
For example if your image has a zero point of 24.93, you can convert it to Nanomaggies with the command below:
@example
$ astarithmetic image.fits 24.93 counts-to-nanomaggy
@end example
This is just a wrapper over the @code{zeropoint-change} operator where the output zero point is 22.5.
@item nanomaggy-to-counts
@cindex Nanomaggy
Convert Nanomaggy to counts.
Nanomaggy is defined to have a fixed zero point of 22.5 and is the pixel units of many surveys like SDSS.
For example if you would like to convert an image in units of Nanomaggy (for example from SDSS) to the counts of a camera with a zero point of 25.92, you can use the command below:
@example
$ astarithmetic image.fits 25.92 nanomaggy-to-counts
@end example
This is just a wrapper over the @code{zeropoint-change} operator where the input zero point is 22.5.
@item mag-to-jy
Convert AB magnitudes to Janskys, see @ref{Brightness flux magnitude}.
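For example, by the definition of the AB magnitude system, a source with an AB magnitude of zero has a flux density of 3631 Janskys; the command below should therefore print approximately 3631:
@example
$ astarithmetic 0 mag-to-jy --quiet
@end example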
@item jy-to-mag
Convert Janskys to AB magnitude, see @ref{Brightness flux magnitude}.
@item jy-to-wavelength-flux-density
@cindex Jansky (Jy)
@cindex Wavelength flux density
@cindex Flux density (wavelength)
Convert Janskys to wavelength flux density (in units of @mymath{erg/cm^2/s/\AA}) at a certain wavelength (given in units of Angstroms).
Recall that Jansky is also a unit of spectral flux density, but it is in units of frequency (@mymath{erg/cm^2/s/Hz}).
For example, at a wavelength of 5556 Angstroms, Vega's frequency flux density is 3500 Janskys.
To convert this to wavelength flux density, you can use this command:
@example
$ astarithmetic 3.5e3 5556 jy-to-wavelength-flux-density
@end example
If your input values are in units of counts or magnitudes, you can use the @code{mag-to-jy} or @code{counts-to-jy} operators described above.
For example, if you want the wavelength flux density of a source with magnitude 21 in the J-PAS J0660 filter (centered at approximately @mymath{6600\AA}), you can use this command:
@example
$ astarithmetic 21 mag-to-jy 6600 \
jy-to-wavelength-flux-density
@end example
The conversion is done based on this derivation: the speed of light (@mymath{c}) can be written as:
@dispmath{c=2.99792458\times10^8 m/s = 2.99792458\times10^{18} \AA Hz}
The speed of light connects the wavelength (@mymath{\lambda}) and frequency (@mymath{\nu}) of photons: @mymath{c=\nu\lambda}.
Therefore, @mymath{\nu=c/\lambda} and taking the derivative (dropping the negative sign, which only shows that wavelength increases as frequency decreases): @mymath{d\nu=(c/\lambda^2)d\lambda} or @mymath{d\nu/d\lambda=c/\lambda^2}. Inserting physical values and units:
@dispmath{d\nu/d\lambda = \frac{2.99792458\times10^{18} \AA Hz}{\lambda^2\AA^2} = \frac{2.99792458\times10^{18}}{\lambda^2} Hz/\AA}
Recall that to convert a function of @mymath{\nu} into a function of @mymath{\lambda}, where @mymath{\nu} and @mymath{\lambda} are also related to each other, we have the following equation: @mymath{f(\lambda) = \frac{d\nu}{d\lambda} f(\nu)}.
Here, @mymath{f(\nu)} is the value in Janskys (@mymath{J\times10^{-23}erg/s/cm^2/Hz}; see @ref{Brightness flux magnitude}).
Replacing @mymath{d\nu/d\lambda} from above we get:
@dispmath{F_\lambda=\frac{2.99792458\times10^{18}}{\lambda^2}Hz/\AA \times J\times10^{-23}erg/s/cm^2/Hz}
@dispmath{F_\lambda=J\times\frac{2.99792458\times10^{-5}}{\lambda^2} erg/s/cm^2/\AA}
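As a check, inserting Vega's 3500 Janskys at @mymath{\lambda=5556\AA} into this final relation gives @mymath{3500\times2.99792458\times10^{-5}/5556^2\approx3.4\times10^{-9}} in units of @mymath{erg/s/cm^2/\AA}, matching the commonly quoted wavelength flux density of Vega at this wavelength.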
@item wavelength-flux-density-to-jy
Convert wavelength flux density (in units of @mymath{erg/cm^2/s/\AA}) to Janskys at a certain wavelength (given in units of Angstroms).
For details and usage examples, see the description of @code{jy-to-wavelength-flux-density} (the inverse of this function).
@item sblim-diff
Calculate the difference in surface brightness limit after using a different exposed radius (of a different telescope) or a different exposure time.
The filter used, the multiple of sigma and the extrapolated area of the derived surface brightness limit will be the same as your reference surface brightness limit.
For more on the surface brightness limit, the exposed radius, the extrapolated area and the equation behind this operator, see @ref{Surface brightness limit of image}.
For example, we know that in the rSDSS filter, the SDSS survey reaches a @mymath{3\sigma} surface brightness limit of 26.5 mag/arcsec@mymath{^2} over 100 arcsec@mymath{^2}.
Let's use this to find the expected surface brightness limit of the LSST survey in the same filter (after its observations have been completed).
To do this, we need to know the exposed radius and exposure time of SDSS and LSST to feed into the command below:
@itemize
@item
The SDSS telescope@footnote{@url{https://voyages.sdss.org/preflight/capturing-recording-light/sdss-telescope}} has an exposed radius of @mymath{\sqrt{1.25^2-0.54^2}=1.13} meters and it had a 1 minute (60 seconds) exposure time.
@item
The Vera C. Rubin telescope@footnote{@url{https://www.lsst.org/sites/default/files/docs/ivezic_086.02.pdf}} (used for LSST) has an exposed radius of @mymath{\sqrt{4.2^2-2.5^2}=3.37} meters and, at the end of the survey, it will have 8 hours (28800 seconds) of total exposure on all observed regions (in all filters).
Because it has 6 filters, in the rSDSS filter it will have a total exposure time of about @mymath{28800/6=4800} seconds in its observed regions.
@end itemize
@example
$ astarithmetic 3.37 1.13 / 4800 60 / sblim-diff 26.5 +
3.00652410811219e+01
Arithmetic finished in 0.012210 seconds
@end example
To have an easy to read number, you can pipe it to the Table program and use the @option{-Y} option.
But the time-reporting feature of Arithmetic will be problematic, so you should also pass @option{--quiet} (or @option{-q}) to Arithmetic.
@example
$ astarithmetic 3.37 1.13 / 4800 60 / sblim-diff 26.5 + -q \
| asttable -Y
30.065241
@end example
Therefore, judging purely based on exposed area and exposure time (too idealistic!), LSST will reach a @mymath{3\sigma} surface brightness limit of 30.07 mag/arcsec@mymath{^2} over 100 arcsec@mymath{^2}.
A simple note on the second command: this trick is only useful for operators like this one that are sure to produce numbers larger than @mymath{10^{-2}}.
You will lose precision for values smaller than that, and in such cases the default scientific notation is best.
@item au-to-pc
@cindex Parsecs
@cindex Astronomical Units (AU)
Convert Astronomical Units (AUs) to Parsecs (PCs).
This operator takes a single argument which is interpreted to be the input AUs.
The conversion is based on the definition of Parsecs: @mymath{1 \rm{PC} = 1/tan(1^{\prime\prime}) \rm{AU}}, where @mymath{1^{\prime\prime}} is one arcsecond.
In other words, @mymath{1 (\rm{PC}) = 648000/\pi (\rm{AU})}.
For example, if we take Pluto's average distance to the Sun to be 40 AUs, we can obtain its distance in Parsecs using this command:
@example
$ echo 40 | asttable -c'arith $1 au-to-pc'
@end example
@item pc-to-au
Convert Parsecs (PCs) to Astronomical Units (AUs).
This operator takes a single argument which is interpreted to be the input PCs.
For more on the conversion equation, see description of @code{au-to-pc}.
For example, Proxima Centauri (the nearest star to the Solar system) is 1.3020 Parsecs from the Sun, we can calculate this distance in units of AUs with the command below:
@example
$ echo 1.3020 | asttable -c'arith $1 pc-to-au'
@end example
@item ly-to-pc
@cindex Light-year
Convert Light-years (LY) to Parsecs (PCs).
This operator takes a single argument which is interpreted to be the input LYs.
The conversion is done from IAU's definition of the light-year (9460730472580800 m @mymath{\approx} 63241.077 AU = 0.306601 PC, for the conversion of AU to PC, see the description of @code{au-to-pc}).
For example, the distance of Andromeda galaxy to our galaxy is 2.5 million light-years, so its distance in kilo-Parsecs can be calculated with the command below (note that we want the output in kilo-parsecs, so we are dividing the output of this operator by 1000):
@example
$ echo 2.5e6 | asttable -c'arith $1 ly-to-pc 1000 /'
@end example
@item pc-to-ly
Convert Parsecs (PCs) to Light-years (LY).
This operator takes a single argument which is interpreted to be the input PCs.
For the conversion and an example of the inverse of this operator, see the description of @code{ly-to-pc}.
@item ly-to-au
Convert Light-years (LY) to Astronomical Units (AUs).
This operator takes a single argument which is interpreted to be the input LYs.
For the conversion and a similar example, see the description of @code{ly-to-pc}.
@item au-to-ly
Convert Astronomical Units (AUs) to Light-years (LY).
This operator takes a single argument which is interpreted to be the input AUs.
For the conversion and a similar example, see the description of @code{ly-to-pc}.
@end table
@node Statistical operators, Coadding operators, Unit conversion operators, Arithmetic operators
@subsubsection Statistical operators
The operators in this section take a single dataset as input, and will return the desired statistic as a single value.
@table @command
@item minvalue
Minimum value in the first popped operand, so ``@command{a.fits minvalue}'' will push the minimum pixel value in this image onto the stack.
When this operator acts on a single image, the output (operand that is put back on the stack) will no longer be an image, but a number.
The output of this operator has the same type as the input.
This operator is mainly intended for multi-element datasets (for example, images or data cubes); if the popped operand is a number, it will just be returned without any change.
Note that when the final remaining/output operand is a single number, it is printed onto the standard output.
For example, with the command below the minimum pixel value in @file{image.fits} will be printed in the terminal:
@example
$ astarithmetic image.fits minvalue
@end example
However, the output above also includes a lot of extra information that is not relevant in this context.
If you just want the final number, run Arithmetic in quiet mode:
@example
$ astarithmetic image.fits minvalue -q
@end example
Also see the description of @option{sqrt} for other example usages of this operator.
@item maxvalue
Maximum value of the first popped operand, in the same type; similar to @command{minvalue}, see the description there for more.
For example:
@example
$ astarithmetic image.fits maxvalue -q
@end example
@item numbervalue
Number of non-blank elements in the first popped operand, in the @code{uint64} type (since it is always a positive integer, see @ref{Numeric data types}).
Its usage is similar to @command{minvalue}, for example:
@example
$ astarithmetic image.fits numbervalue -q
@end example
@item sumvalue
Sum of the non-blank elements in the first popped operand, in the @code{float32} type.
Its usage is similar to @command{minvalue}, for example:
@example
$ astarithmetic image.fits sumvalue -q
@end example
@item meanvalue
Mean value of the non-blank elements in the first popped operand, in the @code{float32} type.
Its usage is similar to @command{minvalue}, for example:
@example
$ astarithmetic image.fits meanvalue -q
@end example
@item stdvalue
Standard deviation of the non-blank elements in the first popped operand, in the @code{float32} type.
Its usage is similar to @command{minvalue}, for example:
@example
$ astarithmetic image.fits stdvalue -q
@end example
@item medianvalue
Median of the non-blank elements in the first popped operand, in the same type as the input.
Its usage is similar to @command{minvalue}, for example:
@example
$ astarithmetic image.fits medianvalue -q
@end example
@item label-area
For each unique label in the input, calculate its area (number of pixels) and replace that label's pixels with the area.
The output will be an unsigned 32-bit integer (see @ref{Numeric data types}).
The input labeled image should have an integer type smaller than a signed 32-bit integer (so unsigned 32-bit or 64-bit integers are not acceptable).
For example the output of @code{connected-components} (see @ref{Mathematical morphology operators}), or Gnuastro's @ref{Segment} program.
This operator can be useful when you want to separate the different labels by their area.
For example let's assume you want to remove all labels that are smaller than 3 pixels.
With the command below you can do that and generate new labels (starting from one).
@example
$ astarithmetic label.fits label-area \
3 gt 2 connected-components
@end example
@item label-minimum
@itemx label-maximum
Calculate the minimum/maximum value of first popped operand (values image) that have a unique label in the second popped operand (labels image).
The output will have the same type as the values image and the input labeled image should have the same properties as the @code{label-area} operator described above.
For example, let's assume you want to find ``thinner'' objects in the image (to keep or delete them).
You can use the output of @code{number-neighbors} (see @ref{Mathematical morphology operators}) as the values image of this operator, then apply a threshold (for example 6) and generate new labels for the remaining objects:
@example
$ astarithmetic lab.fits set-l \
l l 0 gt 2 number-neighbors label-maximum \
6 uint8 gt 2 connected-components
@end example
@item madclip-maskfilled
@itemx sigclip-maskfilled
Mask (set to blank/NaN) all the outlying elements (defined by @mymath{\sigma} or MAD clipping) in the inputs and put all the inputs back on the stack.
The first popped operand is the termination criteria of the clipping, the second popped operand is the multiple of @mymath{\sigma} or MAD and the third is the number of input datasets that will be popped for the actual operation.
If you are not yet familiar with @mymath{\sigma} or MAD clipping, it is recommended to read this tutorial: @ref{Clipping outliers}.
When more than 95@mymath{\%} of the area of an operand is masked, the full operand will be masked.
This is necessary in scenarios like the following: one of your inputs has many outliers (for example, it is much noisier than the rest, or its sky level has not been subtracted properly).
Because this operator fills holes between outlying pixels, most of the area of the input will be masked, but the thin edges (where there are no ``holes'') will remain, causing different statistics in those thin edges of that input in your final coadd.
Through this mask coverage fraction (which is currently hard-coded@footnote{Please get in touch with us at @code{bug-gnuastro@@gnu.org} if you notice this problem and feel the fraction needs to be lowered (or generally to be set in each run).}), we ensure that such thin edges do not cause artifacts in the final coadd.
For example, with the second command below, we are masking the MAD clipped pixels of the 9 inputs (that are generated in @ref{Clipping outliers}) and writing them as separate HDUs of the output.
The clipping is done with 5 times the MAD and the clipping starts when the relative difference between subsequent MADs is 0.01.
Finally, with the third command, we see 10 HDUs in the output (because the first, or 0-th, is just metadata).
@example
$ ls in-*.fits
in-1.fits in-3.fits in-5.fits in-7.fits in-9.fits
in-2.fits in-4.fits in-6.fits in-8.fits
$ astarithmetic in-*.fits 9 5 0.01 madclip-maskfilled \
-g1 --writeall --output=clipped.fits
$ astfits clipped.fits --numhdus
10
@end example
In the Arithmetic command above, @option{--writeall} is necessary because this operator puts all its inputs back on the stack of operands.
This is because these are usually just intermediate operators.
For example, after masking the outliers from each input, you may want to coadd them into one deeper image (with the @ref{Coadding operators}).
After the coadding is done, only one operand will be on the stack, and @option{--writeall} will no longer be necessary.
For example if you want to see how many images were used in the final coadd's pixels, you can use the @option{number} operator like below:
@example
$ astarithmetic in-*.fits 9 5 0.01 madclip-maskfilled \
9 number -g1 --output=num-good.fits
@end example
@item unique
Remove all duplicate (and blank) elements from the first popped operand.
The unique elements of the dataset will be stored in a single-dimensional dataset.
Recall that by default, single-dimensional datasets are stored as a table column in the output.
But you can use @option{--onedasimage} or @option{--onedonstdout} to respectively store them as a single-dimensional FITS array/image, or to print them on the standard output.
Although you can use this operator on floating point datasets, due to floating-point errors it may return unreasonable values: even the tenth decimal digit is considered, although it may be statistically meaningless (see @ref{Numeric data types}).
It is therefore better (and recommended) to use this operator on integer datasets, like the labeled images of @ref{Segment output}, where each pixel has the integer label of the object/clump it is associated with.
For example, let's assume you have cropped a region of a larger labeled image and want to find the labels/objects that are within the crop.
With this operator, this job is trivial:
@example
$ astarithmetic seg-crop.fits unique
@end example
@item noblank
Remove all blank elements from the first popped operand.
Since the blank pixels are being removed, the output dataset will always be single-dimensional, independent of the dimensionality of the input.
Recall that by default, single-dimensional datasets are stored as a table column in the output.
But you can use @option{--onedasimage} or @option{--onedonstdout} to respectively store them as a single-dimensional FITS array/image, or to print them on the standard output.
For example, with the command below, the non-blank pixel values of @file{cropped.fits} are printed on the command-line (the @option{--quiet} option is used to remove the extra information that Arithmetic prints as it reads the inputs, its version and its running time).
@example
$ astarithmetic cropped.fits noblank --onedonstdout --quiet
@end example
@end table
@node Coadding operators, Filtering operators, Statistical operators, Arithmetic operators
@subsubsection Coadding operators
@cindex Coadding
@cindex Coaddition
The operators in this section are used when you have multiple datasets that you would like to merge into one.
For example, you have taken ten exposures of your scientific target, and you would like to combine them all into one deeper image.
This is commonly known as ``stacking'' or ``coaddition''.
We use the latter in Gnuastro because ``stack'' refers to the intermediate 3D data set (if we are coadding 2D images).
However, the final product of this operation has the same number of dimensions as each of the inputs.
The astronomical community has traditionally used the term ``coadd'' to specify that the nature of the output 2D image is different from each of the input 2D images (it is created from them).
Furthermore, within Arithmetic, ``Stack'' is already reserved for the stack of operands that the operators read from (see @ref{Reverse polish notation}).
@cartouche
@noindent
@strong{Masking outliers (before coadding):} Outliers in one of the inputs (for example star ghosts, satellite trails, or cosmic rays) can leave their imprints in the final coadd.
One good way to remove them is the @code{madclip-maskfilled} operator that can be called before the operators here.
It is described in @ref{Statistical operators}; and a full tutorial on understanding outliers and how best to remove them is available in @ref{Clipping outliers}.
@end cartouche
@cartouche
@noindent
@strong{Hundreds or thousands of images to coadd:} It can happen that you need to coadd hundreds or thousands of images.
Added with the possibly long file/directory names, this can lead to an extremely long shell command that may cause an ``Argument list too long'' error in your shell.
To avoid this, you should use Arithmetic's @option{--arguments} option, see @ref{Invoking astarithmetic}.
@end cartouche
When calling the coadding operators you should specify how many operands they should take in: unlike the rest of the operators (which have a fixed number of input operands), these operators take a variable number of inputs.
As described below, you do this through an early popped operand (a single integer that is larger than one).
Below are some important points for all the coadding operators described in this section:
@itemize
@item
@cindex NaN
NaN/blank pixels will be ignored, see @ref{Blank pixels}.
@item
The operation will be multi-threaded, greatly speeding up the process if you have large and numerous data to coadd.
You can disable multi-threaded operations with the @option{--numthreads=1} option (see @ref{Multi-threaded operations}).
@end itemize
@table @command
@item min
@itemx max
@itemx sum
@itemx std
@itemx mad
@itemx mean
@itemx median
@itemx number
For each pixel, calculate the respective statistic from all given datasets.
For the @code{min} and @code{max} operators, the output will have the same type as the input; for the @code{number} operator, the output will have an unsigned 32-bit integer type; for the rest, it will be 32-bit floating point.
The first popped operand to this operator must be a positive integer number which specifies how many further operands should be popped from the stack.
All the subsequently popped operands must have the same type and size.
This operator (and all the variable-operand operators similar to it that are discussed below) will work in multi-threaded mode unless Arithmetic is called with the @option{--numthreads=1} option, see @ref{Multi-threaded operations}.
For example, the following command will produce an image with the same size and type as the three inputs, but each output pixel value will be the minimum of the same pixel's values in all three input images.
@example
$ astarithmetic a.fits b.fits c.fits 3 min --output=min.fits
@end example
Regarding the @code{number} operator: some datasets may have blank values (which are also ignored in all similar operators like @command{min}, @command{sum}, @command{mean} or @command{median}).
Hence, the final pixel values of this operator will not, in general, be equal to the number of inputs.
This operator is therefore mostly called in parallel with those operators to know the ``weight'' of each pixel (in case you want to only keep pixels that had the full exposure for example).
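For example, with the (hypothetical) inputs of the @command{min} example above, the two commands below will give you the mean coadd and the number of inputs that went into each of its pixels:
@example
$ astarithmetic a.fits b.fits c.fits 3 mean   --output=mean.fits
$ astarithmetic a.fits b.fits c.fits 3 number --output=num.fits
@end example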
@item quantile
For each pixel, find the quantile from all given datasets.
The output will have the same numeric data type and size as the input datasets.
Besides the input datasets, the quantile operator also needs a single parameter (the requested quantile).
The parameter should be the first popped operand, with a value between (and including) 0 and 1.
The second popped operand must be the number of datasets to use.
In the example below, the first-popped operand (@command{0.7}) is the quantile, the second-popped operand (@command{3}) is the number of datasets to pop.
@example
$ astarithmetic a.fits b.fits c.fits 3 0.7 quantile
@end example
@item sigclip-mad
@itemx sigclip-std
@itemx sigclip-mean
@itemx sigclip-median
@cindex Sigma-clipping
@cindex Coadding through sigma-clipping
Return the respective statistic after @mymath{\sigma}-clipping the values of the same pixel of all the input operands.
The respective statistic will be stored in a 32-bit floating point number.
The number of inputs used to make the desired measurement for each pixel is also returned as a second output operand; see below for more on how to deal with the second output operand.
For a complete tutorial on clipping outliers when coadding images see @ref{Clipping outliers} (if you haven't read it yet, we encourage you to read through it before continuing).
In particular, the most robust solution is to first use @code{madclip-maskfilled} (described in @ref{Statistical operators}), then use any of these.
This operator is very similar to @command{min}, with the exception that it expects two extra operands (the clipping parameters) before the total number of inputs.
The first popped operand is the termination criteria and the second is the multiple of @mymath{\sigma} (STD).
For example, in the command below, the first popped operand of @code{sigclip-mean} (@command{0.1}) is the @mymath{\sigma}-clipping termination criteria.
If the termination criteria is larger than, or equal to 1, it is interpreted as the total number of clips.
But if it is between 0 and 1, then it is the tolerance level on the change in the median absolute deviation (see @ref{Sigma clipping}).
The second popped operand (@command{4}) is the multiple of sigma (STD) to use.
The third popped operand (@command{3}) is the number of datasets that should be coadded (similar to the first popped operand to @command{min}).
Two other side-notes should be mentioned here:
@itemize
@item
As mentioned above, before this operator, we are masking the filled MAD-clipped elements with @code{madclip-maskfilled}.
As described in @ref{Clipping outliers}, this is very important for removing the types of outliers that we have in astronomical imaging.
@item
We are using @option{--writeall} because this operator places two operands back on the stack: your desired statistic, and the number of inputs that were used for it (after clipping).
@end itemize
@example
$ astarithmetic a.fits b.fits c.fits -g1 --writeall \
3 5 0.01 madclip-maskfilled \
3 4 0.1 sigclip-mean
@end example
The numbers image has the smallest unsigned integer type that fits the total number of your input datasets (see @ref{Numeric data types}).
For example, if you have fewer than 256 input operands (not pixels!), it will have an unsigned 8-bit integer type; if you have 1000 input operands (or any number up to 65535 inputs), it will be an unsigned 16-bit integer.
Recall that when you have many input files to coadd, it may be necessary to write the arguments into a text file and use @option{--arguments} (see @ref{Invoking astarithmetic}).
The numbers image is included by default because it is usually important in clipping based coadds (where the number of inputs used in the calculation of each pixel can be different from another pixel, and this affects the final output noise).
In case you are not interested in the numbers image, you should first @code{swap} the two output operands, then @code{free} the top operand like below.
@example
$ astarithmetic a.fits b.fits c.fits -g1 \
3 5 0.01 madclip-maskfilled \
3 4 0.1 sigclip-mean swap free \
--output=single-hdu.fits
@end example
In case you just want the numbers image, you can use @option{sigclip-median} (which is always calculated as part of the clipping process: no extra overhead), and @code{free} the top operand (without the @code{swap}: the median-coadd image), leaving only the numbers image:
@example
$ astarithmetic a.fits b.fits c.fits -g1 \
3 5 0.01 madclip-maskfilled \
3 4 0.1 sigclip-median free \
--output=single-hdu-only-numbers.fits
@end example
@item madclip-mad
@itemx madclip-std
@itemx madclip-mean
@itemx madclip-median
Similar to the @option{sigclip-*} operators, but using Median Absolute Deviation (MAD) as the measure of spread.
See @ref{MAD clipping} and more generally @ref{Clipping outliers}.
See the description of @option{sigclip-*} for usage details; just remember that one MAD is equivalent to @mymath{0.67449\sigma} (for a pure Gaussian distribution).
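For example, a sketch of a MAD-clipped mean coadd of three (hypothetical) inputs, following the same operand order and mask-filling step as the @option{sigclip-mean} example above:
@example
$ astarithmetic a.fits b.fits c.fits -g1 --writeall \
                3 5 0.01 madclip-maskfilled \
                3 5 0.01 madclip-mean
@end example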
@item sigclip-all
@itemx madclip-all
Similar to the @option{sigclip-*} or @option{madclip-*} operators, but instead of returning only a single measured statistic (mean, STD, median or MAD), all of them are returned, in the following order: mean, STD, median, MAD, number.
This will greatly speed up your analysis pipelines when more than one statistic is required (for example you need both the mean and the standard deviation).
Just note that as described under the sigma-clip operators, @option{--writeall} will be necessary if you want them all to be put in the output.
For example, let's assume you want the mean and standard deviation after a mad-clipping like the examples before (not just one of them).
In that case, the cleanest way is to use a combination of the @code{set-} and @code{free} operators like the example below.
The first (@code{set-}) pops the top operand from the stack and stores it in a named variable (that you can use any number of times later), while the second (@code{free}) simply pops and deletes the top operand on the stack.
For more on these operators, see @ref{Operand storage in memory or a file}.
Finally, to help in later usage of the HDUs, we use the @option{--metaname} option to add a name to each output HDU in the same Arithmetic call.
@example
$ astarithmetic a.fits b.fits c.fits -g1 --writeall \
3 7.5 0.2 madclip-all set-m set-s \
free free free s m --metaname=MEAN,STD \
--output=out.fits
@end example
Different combinations of the various statistics are commonly necessary for different purposes.
The reason we have not defined an operator for each combination is that doing so would dramatically increase the operator count and confuse both developers and users.
On the other hand, the process of mask-filling or even just clipping is much more computationally expensive on large datasets than these measurements combined.
We therefore recommend the process above to extract the statistics you want.
@end table
@node Filtering operators, Pooling operators, Coadding operators, Arithmetic operators
@subsubsection Filtering (smoothing) operators
Image filtering is commonly used for smoothing: every pixel value in the output image is created by applying a certain statistic to the pixels in its vicinity.
@table @command
@item filter-mean
Apply mean filtering (or @url{https://en.wikipedia.org/wiki/Moving_average, moving average}) on the input dataset.
During mean filtering, each pixel (data element) is replaced by the mean value of all its surrounding pixels (excluding blank values).
The number of surrounding pixels in each dimension (to calculate the mean) is determined through the earlier operands that have been pushed onto the stack prior to the input dataset.
The number of necessary operands is determined by the dimensions of the input dataset (first popped operand).
The order of the dimensions on the command-line is the order in FITS format.
Here is one example:
@example
$ astarithmetic 5 4 image.fits filter-mean
@end example
@noindent
In this example, each pixel is replaced by the mean of a 5 by 4 box around it.
The box is 5 pixels along the first FITS dimension (horizontal when viewed in ds9) and 4 pixels along the second FITS dimension (vertical).
Each pixel will be placed in the center of the box that the mean is calculated on.
If the given width along a dimension is even, then the center is assumed to be between the pixels (not in the center of a pixel).
When the pixel is close to the edge, the pixels of the box that fall outside the image are ignored.
Therefore, on the edge, less points will be used in calculating the mean.
The final effect of mean filtering is to smooth the input image; it is essentially a convolution with a flat kernel (one that has identical values for all its pixels), see @ref{Convolution process}.
Note that blank pixels will also be affected by this operator: if there are any non-blank elements in the box surrounding a blank pixel, in the filtered image, it will have the mean of the non-blank elements, therefore it will not be blank any more.
If blank elements are important for your analysis, you can use the @code{isblank} operator with the @code{where} operator to set them back to blank after filtering.
For example in the command below, we are first filtering the image, then setting its original blank elements back to blank in the output of filtering (all within one Arithmetic command).
Note how we are using the @code{set-} operator to give names to the temporary outputs of steps and simplify the code (see @ref{Operand storage in memory or a file}).
@example
$ astarithmetic image.fits -h1 set-in \
5 4 in filter-mean set-filtered \
filtered in isblank nan where \
--output=out.fits
@end example
@item filter-median
Apply @url{https://en.wikipedia.org/wiki/Median_filter, median filtering} on the input dataset.
This is very similar to @command{filter-mean}, except that instead of the mean value of the box pixels, the median value is used to replace a pixel's value.
For more on how to use this operator, please see @command{filter-mean}.
The median is less susceptible to outliers than the mean.
As a result, after median filtering, the pixel values will be more discontinuous than with mean filtering.
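For example, to replace every pixel with the median of the 3 by 3 box around it:
@example
$ astarithmetic 3 3 image.fits filter-median
@end example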
@item filter-minimum
@itemx filter-maximum
Apply a minimum/maximum filter to the input.
This is very similar to @command{filter-mean}, except that instead of the mean value of the box pixels, the minimum/maximum value is used to replace a pixel's value.
For more on how to use this operator, please see @command{filter-mean}.
@item filter-sigclip-mean
Apply a @mymath{\sigma}-clipped mean filtering onto the input dataset.
This is very similar to @code{filter-mean}, except that all outliers (identified by the @mymath{\sigma}-clipping algorithm) have been removed, see @ref{Sigma clipping} for more on the basics of this algorithm.
As described there, two extra input parameters are necessary for @mymath{\sigma}-clipping: the multiple of @mymath{\sigma} and the termination criteria.
@code{filter-sigclip-mean} therefore needs to pop two other operands from the stack after the dimensions of the box.
For example, the line below uses the same box size as the example of @code{filter-mean}.
However, all elements in the box that are iteratively beyond @mymath{3\sigma} of the distribution's median are removed from the final calculation of the mean until the change in @mymath{\sigma} is less than @mymath{0.2}.
@example
$ astarithmetic 3 0.2 5 4 image.fits filter-sigclip-mean
@end example
The median (which needs a sorted dataset) is necessary for @mymath{\sigma}-clipping, therefore @code{filter-sigclip-mean} can be significantly slower than @code{filter-mean}.
However, if there are strong outliers in the dataset that you want to ignore (for example, emission lines on a spectrum when finding the continuum), this is a much better solution.
Note that @mymath{\sigma}-clipping is easily biased by outliers, so @code{filter-madclip-mean} is more robust.
@item filter-sigclip-std
@itemx filter-sigclip-mad
@itemx filter-sigclip-median
Apply a @mymath{\sigma}-clipped STD/MAD/median filtering onto the input dataset.
This operator and its necessary operands are almost identical to @code{filter-sigclip-mean}, except that after @mymath{\sigma}-clipping, the respective statistic (STD, MAD or median, which are less affected by outliers than the mean) is added back to the stack.
Note that @mymath{\sigma}-clipping is easily biased by outliers, so @code{filter-madclip-median} is more robust.
@item filter-madclip-mean
Apply a MAD-clipped mean filtering onto the input dataset.
MAD stands for Median Absolute Deviation; this is much more robust to outliers than @mymath{\sigma}-clipping.
This operator is called in a similar way to @code{filter-sigclip-mean}; for example, in the command below, we use @mymath{4\times}MAD clipping with a termination criteria of 0.01 on an image:
@example
$ astarithmetic 4 0.01 5 5 image.fits filter-madclip-mean
@end example
@item filter-madclip-std
@itemx filter-madclip-mad
@itemx filter-madclip-median
Apply a MAD-clipped STD/MAD/median filtering onto the input dataset; see the description of @code{filter-madclip-mean} for more.
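For example, mirroring the @code{filter-madclip-mean} command above:
@example
$ astarithmetic 4 0.01 5 5 image.fits filter-madclip-median
@end example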
@end table
@node Pooling operators, Interpolation operators, Filtering operators, Arithmetic operators
@subsubsection Pooling operators
@cindex Pooling
@cindex Convolutional Neural Networks
Pooling is one way of reducing the complexity of the input image by grouping multiple input pixels into one output pixel (using any statistical measure).
As a result, the output image has fewer pixels (less complexity).
In Computer Vision, Pooling is commonly used in @url{https://en.wikipedia.org/wiki/Convolutional_neural_network, Convolutional Neural Networks} (CNNs).
@cindex Stride
In pooling, the inputs are an image (e.g., a FITS file) and a square pixel window that is known as the pooling window.
The window has to be smaller than the input's number of pixels in both dimensions, and its width is called the ``pool size''.
The pooling window starts at the top-left corner pixel of the input and calculates statistical operations on the pixels that overlap with it.
It slides forward by the ``stride'' pixels, moving over all pixels in the input from the top-left corner to the bottom-right corner, and repeats the same calculation for the overlapping pixels in each position.
Usually, the stride (or spacing between the windows as they slide over the input) is equal to the window-size.
In other words, in pooling, the separate ``windows'' do not overlap with each other on the input.
However, you can choose any size for the stride.
It is crucial to ensure that the stride is not larger than the pool size; otherwise, some pixels will be missed during the pooling process.
Pooling therefore has two major differences with @ref{Spatial domain convolution} or @ref{Filtering operators}, as well as some similarities with @ref{Warp} (all listed below):
@itemize
@item
In convolution or filtering the input and output sizes are the same.
However, when the stride is larger than 1, the output of pooling will have fewer pixels.
@item
In convolution or filters, the kernels slide over the input in a pixel-by-pixel manner.
As a result, the same pixel's value will be used in many of the output pixels.
However, in pooling each input pixel may be only used in a single output pixel (if the stride and the pool size are the same).
@item
Special cases of warping an image are similar to pooling.
For example, calling @code{pool-sum} with a pool size of 2 will give the same pixel values (except at the outer edges) as giving the same image to @command{astwarp} with @option{--scale=1/2 --centeroncorner}; see the demonstration after this list.
However, warping can only provide the sum of the input pixels; there is no easy way to generically define something like @code{pool-max} in Warp (which is far more general than pooling).
Also, due to its generic features (for example support for non-linear warps), Warp is slower than the @code{pool-max} that is introduced here.
@end itemize
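For example, here is a minimal demonstration of the last point above; except at the outer edges, the two outputs below should contain the same pixel values:
@example
$ astarithmetic image.fits 2 2 pool-sum --output=pooled.fits
$ astwarp image.fits --scale=1/2 --centeroncorner \
          --output=warped.fits
@end example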
@cartouche
@noindent
@strong{No WCS in output:} As of Gnuastro @value{VERSION}, the output of pooling will not contain WCS information (primarily due to a lack of time by developers).
Please inform us of your interest in having it, by contacting us at @command{bug-gnuastro@@gnu.org}.
If you need @code{pool-sum} with a proper WCS in the output, you can use @ref{Warp} (which also accounts for the WCS; see the note above).
@end cartouche
If the width or height of the input is not divisible by the stride size, the pooling window will go beyond the input pixel grid.
In this case, the window pixels that do not overlap with the input are given a blank value (and thus ignored in the calculation of the desired statistical operation).
The simple ASCII figure below shows the pooling operation where the input is a @mymath{3\times3} pixel image with a pool size of 2 pixels.
In the center of the second row, you see the intermediate input matrix that highlights how the input and output pixels relate with each other.
Since the input is @mymath{3\times3} and we have a stride of 2, blank pseudo-pixels (shown with @code{B}, for blank) are added as mentioned above.
@example
Pool window: Input:
+-----------+ +-------------+
| | | | 10 12 9 |
| _ _ | _ _ |___________________________| 31 4 1 |
| | | || || | 16 5 8 |
| | | || || +-------------+
+-----------+ || ||
The pooling window 2*2 || ||
stride 2 \/ \/
+---------------------+
|/ 10 12\|/ 9 B \|
| | |
+-------+ pool-min |\ 31 4 /|\ 1 B /| pool-max +-------+
| 4 1 | /------ |---------------------| ------\ |31 9 |
| 5 8 | \------ |/ 16 5 \|/ 8 B \| ------/ |16 8 |
+-------+ | | | +-------+
|\ B B /.\ B B /|
+---------------------+
@end example
The choice of the statistic to use depends on the specific use case, the characteristics of the input data, and the desired output.
Each statistic has its advantages and disadvantages and the choice of which to use should be informed by the specific needs of the problem at hand.
Below, the various pool operators of arithmetic are listed:
@table @command
@item pool-max
Apply max-pooling on the input dataset.
This operator takes three operands: the first popped operand is the stride, the second is the width of the square pooling window (both should be single integers), and the third should be the input image.
Within the pooling window, this operator will place the largest value in the output pixel (any blank pixels will be ignored).
See the ASCII diagram above for a demonstration of how max-pooling works.
Here is an example of using this operator:
@example
$ astarithmetic image.fits 2 2 pool-max
@end example
Max-pooling retains the largest value of the input window in the output, so the returned image is sharper where you have strong signal-to-noise ratio and more noisy in regions with no significant signal (only noise).
It is therefore useful when the background of the image is dark and we are interested in only the highest signal-to-noise ratio regions of the image.
@item pool-min
Apply min-pooling on the input dataset.
This operator takes three operands: the first popped operand is the stride, the second is the width of the square pooling window (both should be single integers), and the third should be the input image.
Except for the statistic used, this operator is similar to @code{pool-max}; see the description there for more.
Min-pooling is mostly used when the image has a high signal-to-noise ratio and a light background: min-pooling will select darker (lower-valued) pixels.
For low signal-to-noise regions, this operator will increase the noise level (similar to the maximum, the scatter in the minimum is very strong).
@item pool-sum
Apply sum-pooling to the input dataset.
This operator takes three operands: the first popped operand is the stride, the second is the width of the square pooling window (both should be single integers), and the third should be the input image.
Except for the statistic used, this operator is similar to @code{pool-max}; see the description there for more.
Sum-pooling will increase the signal-to-noise ratio at the cost of having a smoother output (less resolution).
@item pool-mean
Apply mean pooling on the input dataset.
This operator takes three operands: the first popped operand is the stride, the second is the width of the square pooling window (both should be single integers), and the third should be the input image.
Except for the statistic used, this operator is similar to @code{pool-max}; see the description there for more.
Mean pooling smooths the image, so sharp features may not be identifiable in its output.
It therefore preserves more information than max-pooling, but may also reduce the effect of the most prominent pixels.
Mean is often used where a more accurate representation of the input is required.
@item pool-median
Apply median pooling on the input dataset.
This operator takes three operands: the first popped operand is the stride, the second is the width of the square pooling window (both should be single integers), and the third should be the input image.
Except for the statistic used, this operator is similar to @code{pool-max}; see the description there for more.
In general, the mean is mathematically easier to interpret but more susceptible to outliers; the median is less affected by outliers, so median pooling produces a smoother output.
Median pooling is therefore better for low signal-to-noise ratio (noisy) features and extended features (where you do not want a single high or low valued pixel to affect the output).
@end table
@node Interpolation operators, Dimensionality changing operators, Pooling operators, Arithmetic operators
@subsubsection Interpolation operators
Interpolation is the process of removing blank pixels from a dataset (by giving them a value based on the non-blank neighbors).
@table @command
@item interpolate-medianngb
Interpolate the blank elements of the second popped operand with the median of nearest non-blank neighbors to each.
The number of the nearest non-blank neighbors used to calculate the median is given by the first popped operand.
The distance of the nearest non-blank neighbors is irrelevant in this interpolation.
The neighbors of each blank pixel will be parsed in expanding circular rings (for 2D images) or spherical surfaces (for a 3D cube), and each non-blank element over them is stored in memory.
When the requested number of non-blank neighbors have been found, their median is used to replace that blank element.
For example, the line below replaces each blank element with the median of the nearest 5 pixels.
@example
$ astarithmetic image.fits 5 interpolate-medianngb
@end example
When you want to interpolate blank regions such that each blank region has a single fixed value (for example, the centers of saturated stars), this operator is not suitable: the pixels used to interpolate the various parts of one region differ.
For such scenarios, you may use @code{interpolate-minofregion} or @code{interpolate-maxofregion} (described below).
@item interpolate-meanngb
Similar to @code{interpolate-medianngb}, but will fill the blank values of the dataset with the mean value of the requested number of nearest neighbors.
@item interpolate-minngb
Similar to @code{interpolate-medianngb}, but will fill the blank values of the dataset with the minimum value of the requested number of nearest neighbors.
@item interpolate-maxngb
Similar to @code{interpolate-medianngb}, but will fill the blank values of the dataset with the maximum value of the requested number of nearest neighbors.
One useful implementation of this operator is to fill the saturated pixels of stars in images.
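For example, the sketch below first sets all pixels above a hypothetical saturation level of 50000 counts to blank (using the conditional operators and @code{where}; see @ref{Conditional operators}), then fills them with the maximum of their nearest 10 non-blank neighbors:
@example
$ astarithmetic image.fits image.fits 50000 gt nan where \
                10 interpolate-maxngb --output=filled.fits
@end example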
@item interpolate-minofregion
Interpolate all blank regions (consisting of many blank pixels that are touching) of the third popped operand with the minimum value of the pixels that are touching that region (a single value).
The second popped operand is the inner-connectivity (defining the actual region: which touching pixels are considered as part of the region to give a single value).
The first popped operand is the outer-connectivity which defines the pixels outside the blank region which are used.
For more on the definition of connectivity, see @command{connected-components}.
For example, with the command below, all the 2-connected blank regions of @file{image.fits} will be filled with the minimum of the 1-connected, non-blank neighbors of each region.
Since @file{image.fits} is an image (2D dataset), an inner-connectivity of 2 means that the independent blank regions are defined by 8-connected neighbors: two pixels touching on a corner (a point) are considered one region.
If the inner-connectivity was 1, the regions would be defined by 4-connectivity: blank regions would be defined only by pixels that touch along a full edge.
Conversely, for the non-blank pixels ``touching'' the outer parts of the blank regions (to find the minimum), the command below assumes 1-connectivity.
@example
$ astarithmetic image.fits 2 1 interpolate-minofregion
@end example
@item interpolate-maxofregion
@cindex Saturated pixels
Similar to @code{interpolate-minofregion}, but the maximum is used to fill the blank regions.
This operator can be useful in filling saturated pixels in stars for example.
Recall that the @code{interpolate-maxngb} operator looks for the maximum value among a given number of neighboring pixels, and is more useful for small noisy regions.
Therefore, as the blank regions become larger, @code{interpolate-maxngb} can cause fragmentation within a connected blank region, because the nearest neighbors to one part of the blank region may not fall within the pixels searched for the other parts.
With this option, the size of the blank region is irrelevant: all the pixels bordering the blank region are parsed and their maximum value is used for the whole region.
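For example, with the same connectivity values as the @code{interpolate-minofregion} example above:
@example
$ astarithmetic image.fits 2 1 interpolate-maxofregion
@end example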
@end table
@node Dimensionality changing operators, Conditional operators, Interpolation operators, Arithmetic operators
@subsubsection Dimensionality changing operators
Through these operators you can change the dimensionality of the input, using certain statistics along the dimension(s) that should be removed.
For example, let's assume you have a 3D data cube that has 300 by 300 pixels in the RA and Dec dimensions (first two dimensions), and 3600 slices along the wavelength (third dimension), so the whole cube is @mymath{300\times300\times3600} voxels (volume elements).
To create a narrow-band image that only contains 100 slices around a certain wavelength, you can crop that section (using @ref{Crop}), giving you a @mymath{300\times300\times100} cube.
You can now use the @code{collapse-sum} operator below to ``collapse'' all the 100 slices into one 2D image that has @mymath{300\times300} pixels.
Every pixel in this 2D image will have the flux of the sum of the 100 slices.
@table @command
@item to-1d
Convert the input operand into a 1D array, irrespective of the number of dimensions it has.
This operator only takes a single operand (the input array) and just updates the metadata.
Therefore it does not change the layout of the array contents in memory and is very fast.
If no further operation is requested on the 1D array, recall that Arithmetic will write a 1D array as a table column by default.
In case you want the output to be saved as a 1D image, or to see it on the standard output, please use the @code{--onedasimage} or @code{--onedonstdout} options respectively (see @ref{Invoking astarithmetic}).
This operator is useful in scenarios where after some operations on a 2D image or 3D cube, the dimensionality is no longer relevant for you and you just care about the values.
In the example below, we will first make a simple 2D image from a plain-text file, then convert it to a 1D array:
@example
## Contents of 'a.txt' to start with.
$ cat a.txt
# Image 1: DEMO [counts, uint8] An example image
1 2 3
4 5 6
7 8 9
## Convert the text image into a FITS image.
$ astconvertt a.txt -o a.fits
## Convert it into a table column (1D):
$ astarithmetic a.fits to-1d -o table.fits
## Convert it into a 1D image:
$ astarithmetic a.fits to-1d -o image-1d.fits --onedasimage
@end example
@cindex Flattening (CNNs)
A more real-world example would be the following: assume you want to ``flatten'' two images into a single 1D array (as commonly done in convolutional neural networks, or CNNs@footnote{@url{https://en.wikipedia.org/wiki/Convolutional_neural_network}}).
First, we show the contents of a new @mymath{2\times2} image in a plain-text file, then convert it to a 2D FITS image (@file{b.fits}).
We will then use arithmetic to make both @file{a.fits} (from the example above) and @file{b.fits} into a 1D array and stitch them together into a single 1D image with one call to Arithmetic.
For a description of the @code{stitch} operator, see below (same section).
@example
## Contents of 'b.txt':
$ cat b.txt
# Image 1: DEMO [counts, uint8] An example image
10 11
12 13
## Convert the text image into a FITS image.
$ astconvertt b.txt -o b.fits
# Flatten the two images into a single 1D image:
$ astarithmetic a.fits to-1d b.fits to-1d 2 1 stitch -g1 \
--onedonstdout --quiet
1
2
3
4
5
6
7
8
9
10
11
12
13
@end example
@item stitch
Stitch (connect) any number of given images together along the given dimension.
The output has the same number of dimensions as the input, but the number of pixels along the requested dimension will be different from the inputs.
The @code{stitch} operator takes at least three operands:
@itemize
@item
The first popped operand (placed just before @code{stitch}) is the direction (dimension) that the images should be stitched along.
The first FITS dimension is along the horizontal; therefore, a value of @code{1} will stitch them horizontally.
Similarly, giving a value of @code{2} will result in a vertical stitch.
@item
The second popped operand is the number of images that should be stitched.
@item
Depending on the value given to the second popped operand, @code{stitch} will pop the given number of datasets from the stack and stitch them along the given dimension.
The popped images have to have the same number of pixels along the other dimension.
The order of the stitching is defined by how they are placed in the command-line, not how they are popped (after being popped, they are placed in a list in the same order).
@end itemize
For example, in the commands below, we will first crop out fixed-size regions of @mymath{100\times300} pixels from a larger image (@file{large.fits}).
In the first call of Arithmetic below, we will stitch the bottom set of crops together along the first (horizontal) axis.
In the second Arithmetic call, we will stitch all 6 along both dimensions.
@example
## Crop the fixed-size regions of a larger image ('-O' is the
## short form of the '--mode' option).
$ astcrop large.fits -Oimg --section=1:100,1:300 -oa.fits
$ astcrop large.fits -Oimg --section=101:200,1:300 -ob.fits
$ astcrop large.fits -Oimg --section=201:300,1:300 -oc.fits
$ astcrop large.fits -Oimg --section=1:100,301:600 -od.fits
$ astcrop large.fits -Oimg --section=101:200,301:600 -oe.fits
$ astcrop large.fits -Oimg --section=201:300,301:600 -of.fits
## Stitch the bottom three crops into one image.
$ astarithmetic a.fits b.fits c.fits 3 1 stitch -obottom.fits
# Stitch all the 6 crops along both dimensions
$ astarithmetic a.fits b.fits c.fits 3 1 stitch \
d.fits e.fits f.fits 3 1 stitch \
2 2 stitch -g1 -oall.fits
@end example
The start of the last command is like the one before it (stitching the bottom three crops along the first FITS dimension, producing a @mymath{300\times300} image).
Later in the same command, we then stitch the top three crops horizontally (again, into a @mymath{300\times300} image).
This leaves the two @mymath{300\times300} images on the stack (see @ref{Reverse polish notation}).
We finally stitch those two along the second (vertical) dimension.
This operator is therefore useful in scenarios like placing the CCD amplifiers into one image.
@item trim
Trim all blank elements from the outer edges of the input operand (it only takes a single operand).
For example see the commands below using Table's @ref{Column arithmetic}:
@example
$ cat table.txt
nan
nan
nan
3
4
nan
5
6
nan
$ asttable table.txt -Y -c'arith $1 trim'
3.000000
4.000000
nan
5.000000
6.000000
@end example
Similarly, on 2D images or 3D cubes, all outer rows/columns or slices that are fully blank get ``trim''ed with this operator.
This is therefore a very useful operator for extracting a certain feature within your dataset.
For example, let's assume that you have run @ref{NoiseChisel} and @ref{Segment} on an image to extract all clumps and objects.
With the command below on Segment's output, you will have a smaller image that only contains the sky-subtracted input pixels corresponding to object 263.
@example
$ astarithmetic seg.fits -hINPUT-NO-SKY seg.fits -hOBJECTS \
263 ne nan where trim --output=obj-263.fits
@end example
@item add-dimension-slow
Build a higher-dimensional dataset from all the input datasets stacked after one another (along the slowest dimension).
The first popped operand has to be a single number.
It is used by the operator to know how many operands it should pop from the stack (and the size of the output in the new dimension).
The rest of the operands must have the same size and numerical data type.
This operator currently only works for 2D input operands; please contact us if you want inputs with different dimensions.
The output's WCS (which should have a different dimensionality compared to the inputs) can be read from another file with the @option{--wcsfile} option.
If no file is specified for the WCS, the first dataset's WCS will be used; you can later add/change the necessary WCS keywords with the FITS keyword modification features of the Fits program (see @ref{Fits}).
If your datasets do not have the same type, you can use the type transformation operators of Arithmetic that are discussed below.
Just beware of overflow if you are transforming to a smaller type, see @ref{Numeric data types}.
For example, let's assume you have 3 two-dimensional images @file{a.fits}, @file{b.fits} and @file{c.fits} (each with @mymath{200\times100} pixels).
You can construct a 3D data cube with @mymath{200\times100\times3} voxels (volume-pixels) using the command below:
@example
$ astarithmetic a.fits b.fits c.fits 3 add-dimension-slow
@end example
@item add-dimension-fast
Similar to @code{add-dimension-slow} but along the fastest dimension.
This operator currently only works for 1D input operands; please contact us if you want inputs with different dimensions.
For example, let's assume you have 3 one-dimensional datasets, each with 100 elements.
With this operator, you can construct a @mymath{3\times100} pixel FITS image that has 3 pixels along the horizontal and 100 pixels along the vertical.
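Following the pattern of @code{add-dimension-slow} above, and assuming the three 1D datasets are called @file{a.fits}, @file{b.fits} and @file{c.fits}, a sketch of the command would be:
@example
$ astarithmetic a.fits b.fits c.fits 3 add-dimension-fast
@end example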
@item collapse-sum
Collapse the given dataset (second popped operand), by summing all elements along the first popped operand (a dimension in FITS standard: counting from one, from fastest dimension).
The returned dataset has one dimension less compared to the input.
The output will have a double-precision floating point type irrespective of the input dataset's type.
Doing the operation in double-precision (64-bit) floating point will help the collapse (summation) be affected less by floating point errors.
But afterwards, single-precision floating points are usually enough in real (noisy) datasets.
So depending on the type of the input and its nature, it is recommended to use one of the type conversion operators on the returned dataset.
@cindex World Coordinate System (WCS)
If any WCS is present, the returned dataset will also lack the respective dimension in its WCS matrix.
Therefore, when the WCS is important for later processing, be sure that the input is aligned with the respective axes: all non-diagonal elements in the WCS matrix are zero.
@cindex Data cubes
@cindex 3D data-cubes
@cindex Cubes (3D data)
@cindex Narrow-band image
@cindex IFU: Integral Field Unit
@cindex Integral field unit (IFU)
One common application of this operator is the creation of synthetic broad-band or narrow-band 2D images from 3D data cubes.
For example, integral field unit (IFU) data products that have two spatial dimensions (first two FITS dimensions) and one spectral dimension (third FITS dimension).
The command below will collapse the whole third dimension into a 2D array the size of the first two dimensions, and then convert the output to single-precision floating point (as discussed above).
@example
$ astarithmetic cube.fits 3 collapse-sum float32
@end example
@item collapse-min
@itemx collapse-max
@itemx collapse-mean
@itemx collapse-median
Similar to @option{collapse-sum}, but the returned dataset will be the desired statistic along the collapsed dimension, not the sum.
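For example, to collapse the cube of the @code{collapse-sum} example above with the median instead of the sum:
@example
$ astarithmetic cube.fits 3 collapse-median float32
@end example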
@item collapse-madclip-fill-mad
@itemx collapse-madclip-fill-std
@itemx collapse-madclip-fill-mean
@itemx collapse-madclip-fill-median
@itemx collapse-madclip-fill-number
Collapse the input dataset (fourth popped operand) along the FITS dimension given as the first popped operand by calculating the desired statistic after median absolute deviation (MAD) filled re-clipping.
The MAD-clipping parameters (namely, the multiple of sigma and termination criteria) are read as the third and second popped operands respectively.
This is the most robust method to reject outliers; for more on filled re-clipping and its advantages, see @ref{Contiguous outliers}.
For a more general tutorial on rejecting outliers, see @ref{Clipping outliers}.
If you have not done this tutorial yet, we recommend you to take an hour or so and go through that tutorial for optimal understanding and results.
When more than 95@mymath{\%} of the area of an operand is masked, the full operand will be masked.
This is necessary in scenarios like this: one of your inputs has many outliers (for example, it is much more noisy than the rest, or its sky level has not been subtracted properly).
Because this operator fills holes between outlying pixels, most of the area of the input will be masked, but the thin edges (where there are no ``holes'') will remain, causing different statistics in those thin edges of that input in your final coadd.
Through this mask coverage fraction (which is currently hard-coded@footnote{Please get in touch with us at @code{bug-gnuastro@@gnu.org} if you notice this problem and feel the fraction needs to be lowered (or generally to be set in each run).}), we ensure that such thin edges do not cause artifacts in the final coadd.
For example, with the command below, the pixels of the input 2 dimensional @file{image.fits} will be collapsed to a single dimension output.
The first popped operand is @code{2}, so it will collapse all the pixels that are vertically on top of each other, such that the output will have the same number of pixels as the input's horizontal axis.
During the collapsing, all pixels that are more than @mymath{3\sigma} (third popped operand) are rejected, and the clipping will continue until the standard deviation changes less than @mymath{0.2} between clips.
Finally the @code{counter} operator is used to have a two-column table with the first one being a simple counter starting from one (see @ref{Size and position operators}).
@example
$ astarithmetic image.fits 3 0.2 2 collapse-sigclip-mean \
counter --output=collapsed-vertical.fits
@end example
@cartouche
@noindent
@strong{Printing output of collapse in plain-text:} the default datatype of @code{collapse-sigclip-mean} is 32-bit floating point.
This is sufficient for any observed astronomical data.
However, if you request a plain-text output, or decide to print/view the output as plain-text on the standard output, the full set of decimals may not be printed in some situations.
This can lead to apparently discrete values in the output of this operator when viewed in plain-text!
The FITS format is always superior (since it stores the value in binary, therefore not having the problem above).
But if you are forced to save the output in plain-text, use the @code{float64} operator after this to change the type to 64-bit floating point (which will print more decimals).
@end cartouche
@item collapse-madclip-mad
@itemx collapse-madclip-std
@itemx collapse-madclip-mean
@itemx collapse-madclip-median
@itemx collapse-madclip-number
Collapse the input dataset (fourth popped operand) along the FITS dimension given as the first popped operand by calculating the desired statistic after median absolute deviation (MAD) clipping.
This operator is called similarly to the @code{collapse-madclip-fill-*} operators, see the description there for more.
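For example, a sketch of collapsing a cube's third dimension with the MAD-clipped median, using a multiple of 3 and termination criteria of 0.2 (as in the example above), would be:
@example
$ astarithmetic cube.fits 3 0.2 3 collapse-madclip-median \
                --output=collapsed.fits
@end example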
@item collapse-sigclip-fill-mad
@itemx collapse-sigclip-fill-std
@itemx collapse-sigclip-fill-mean
@itemx collapse-sigclip-fill-median
@itemx collapse-sigclip-fill-number
Collapse the input dataset (fourth popped operand) along the FITS dimension given as the first popped operand by calculating the desired statistic after filled @mymath{\sigma} re-clipping.
This operator is called similarly to the @code{collapse-madclip-fill-*} operators, see the description there for more.
@item collapse-sigclip-mad
@itemx collapse-sigclip-std
@itemx collapse-sigclip-mean
@itemx collapse-sigclip-median
@itemx collapse-sigclip-number
Collapse the input dataset (fourth popped operand) along the FITS dimension given as the first popped operand by calculating the desired statistic after @mymath{\sigma}-clipping.
This operator is called similarly to the @code{collapse-madclip-fill-*} operators, see the description there for more.
@end table
@node Conditional operators, Mathematical morphology operators, Dimensionality changing operators, Arithmetic operators
@subsubsection Conditional operators
Conditional operators take two inputs and return a binary output that can only have two values: 0 (for pixels where the condition was false) or 1 (for pixels where the condition was true).
Because of the binary (two-valued) nature of their outputs, the output is stored in an @code{unsigned char} data type (see @ref{Numeric data types}) to speed up processing and take less space in storage.
There are two exceptions to the general features above: @code{isblank} only takes one input, and @code{where} takes three, while not returning a binary output, see their description for more.
@table @command
@item lt
Less than: creates a binary output (values either 0 or 1) where each pixel will be 1 if the second popped operand is smaller than the first popped operand and 0 otherwise.
If both operands are images, then all the pixels will be compared with their counterparts in the other image.
For example, the pixels in the output of the command below will have a value of 1 (true) if their value in @file{image1.fits} is less than their value in @file{image2.fits}.
Otherwise, their value will be 0 (false).
@example
$ astarithmetic image1.fits image2.fits lt
@end example
If only one operand is an image, then all the pixels will be compared with the single value (number) of the other operand.
For example:
@example
$ astarithmetic image1.fits 1000 lt
@end example
Finally if both are numbers, then the output is also just one number (0 or 1).
@example
$ astarithmetic 4 5 lt
@end example
@item le
Less or equal: similar to @code{lt} (`less than' operator), but returning 1 when the second popped operand is smaller or equal to the first.
For example:
@example
$ astarithmetic image1.fits 1000 le
@end example
@item gt
Greater than: similar to @code{lt} (`less than' operator), but returning 1 when the second popped operand is greater than the first.
For example:
@example
$ astarithmetic image1.fits 1000 gt
@end example
@item ge
Greater or equal: similar to @code{lt} (`less than' operator), but returning 1 when the second popped operand is larger or equal to the first.
For example:
@example
$ astarithmetic image1.fits 1000 ge
@end example
@item eq
Equality: similar to @code{lt} (`less than' operator), but returning 1 when the two popped operands are equal (to double precision floating point accuracy).
@example
$ astarithmetic image1.fits 1000 eq
@end example
@item ne
Non-Equality: similar to @code{lt} (`less than' operator), but returning 1 when the two popped operands are @emph{not} equal (to double precision floating point accuracy).
@example
$ astarithmetic image1.fits 1000 ne
@end example
@item and
Logical AND: returns 1 if both operands have a non-zero value, and returns 0 if either operand is zero.
Both operands have to be the same kind: either both images or both numbers; the output is mostly meaningful when the inputs are binary (with pixel values of 0 or 1).
@example
$ astarithmetic image1.fits image2.fits -g1 and
@end example
For example, if you only want to see which pixels in an image have a value @emph{between} 50 (greater equal, or inclusive) and 200 (less than, or exclusive), you can use this command:
@example
$ astarithmetic image.fits set-i i 50 ge i 200 lt and
@end example
@item or
Logical OR: returns 1 if either one of the operands is non-zero and 0 only when both operands are zero.
Both operands have to be the same kind: either both images or both numbers.
The usage is similar to @code{and}.
For example, if you only want to see which pixels in an image have a value @emph{outside of} the range from -100 (inclusive) to 200 (exclusive), in other words pixels that are less than -100 or greater-equal to 200, you can use this command:
@example
$ astarithmetic image.fits set-i i -100 lt i 200 ge or
@end example
@item not
Logical NOT: returns 1 when the operand is 0 and 0 when the operand is non-zero.
The operand can be an image or a number; for an image, it is applied to each pixel separately.
For example, if you want to know which pixels are not blank (and assuming that we didn't have the @code{isnotblank} operator), you can use this @code{not} operator on the output of the @command{isblank} operator described below:
@example
$ astarithmetic image.fits isblank not
@end example
@cindex Blank pixel
@item isblank
Test each pixel for being a blank value (see @ref{Blank pixels}).
This is a conditional operator: the output has the same size and dimensions as the input, but has an unsigned 8-bit integer type with two possible values: either 1 (for a pixel that was blank) or 0 (for a pixel that was not blank).
It is called like the conditional operators above (see the description of the @code{lt} operator); the difference is that it only needs one operand.
For example:
@example
$ astarithmetic image.fits isblank
@end example
Because of the definition of a blank pixel, a blank value is not even equal to itself, so you cannot use the equal operator above to select blank pixels.
See the ``Blank pixels'' box below for more on Blank pixels in Arithmetic.
In case you want to set non-blank pixels to an output pixel value of 1, it is better to use @code{isnotblank} instead of `@code{isblank not}' (for more, see the description of @code{isnotblank}).
@item isnotblank
The inverse of the @code{isblank} operator above (see that description for more).
Therefore, if a pixel has a blank value, the output of this operator will have a 0 value for it.
This operator is therefore similar to running `@code{isblank not}', but slightly more efficient (it does not need the intermediate product of two operators).
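For example, to get a binary image that is 1 for all the non-blank pixels of the input:
@example
$ astarithmetic image.fits isnotblank
@end example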
@item where
Change the input (pixel) value @emph{where}/if a certain condition holds.
The conditional operators above can be used to define the condition.
Three operands are required for @command{where}.
The input format is demonstrated in this simplified example:
@example
$ astarithmetic modify.fits binary.fits if-true.fits where
@end example
The value of any pixel in @file{modify.fits} that corresponds to a non-zero @emph{and} non-blank pixel of @file{binary.fits} will be changed to the value of the same pixel in @file{if-true.fits} (this may also be a number).
The 3rd and 2nd popped operands (@file{modify.fits} and @file{binary.fits} respectively, see @ref{Reverse polish notation}) have to have the same dimensions/size.
@file{if-true.fits} can be either a number, or have the same dimension/size as the other two.
The 2nd popped operand (@file{binary.fits}) has to have @code{uint8} (or @code{unsigned char} in standard C) type (see @ref{Numeric data types}).
It is treated as a binary dataset (with only two values: zero and non-zero, hence the name @code{binary.fits} in this example).
However, commonly you will not be dealing with an actual FITS file of a condition/binary image.
You will probably define the condition in the same run based on some other reference image and use the conditional and logical operators above to make a true/false (or one/zero) image for you internally.
For example, the case below:
@example
$ astarithmetic in.fits reference.fits 100 gt new.fits where
@end example
In the example above, any of the @file{in.fits} pixels that have a value greater than @code{100} in @file{reference.fits} will be replaced with the corresponding pixel in @file{new.fits}.
Effectively the @code{reference.fits 100 gt} part created the condition/binary image which was added to the stack (in memory) and later used by @code{where}.
The command above is thus equivalent to these two commands:
@example
$ astarithmetic reference.fits 100 gt --output=binary.fits
$ astarithmetic in.fits binary.fits new.fits where
@end example
Finally, the input operands are read and used independently, so you can use the same file more than once as any of the operands.
When the 1st popped operand to @code{where} (@file{if-true.fits}) is a single number, it may be a NaN value (or any blank value, depending on its type) like the example below (see @ref{Blank pixels}).
When the number is blank, it will be converted to the blank value of the type of the 3rd popped operand (@code{in.fits}).
Hence, in the example below, all the pixels in @file{reference.fits} that have a value greater than 100, will become blank in the natural data type of @file{in.fits} (even though NaN values are only defined for floating point types).
@example
$ astarithmetic in.fits reference.fits 100 gt nan where
@end example
@end table
@node Mathematical morphology operators, Bitwise operators, Conditional operators, Arithmetic operators
@subsubsection Mathematical morphology operators
@cindex Mathematical morphology
From Wikipedia: ``Mathematical morphology (MM) is a theory and technique for the analysis and processing of geometrical structures, based on set theory, lattice theory, topology, and random functions. MM is most commonly applied to digital images''.
In theory it covers a very large body of research and methods in image processing, but currently in Gnuastro it is mainly applied to binary images (that only have values of 0 or 1).
For example, you have applied the greater-than operator (@code{gt}, see @ref{Conditional operators}) to select all pixels in your image that are larger than a value of 100.
But they will all have a value of 1, and you want to separate the various groups of pixels that are connected (for example, peaks of stars in your image).
With the @code{connected-components} operator, you can give each connected region of the output of @code{gt} a separate integer label.
@table @command
@item erode
@cindex Erosion
Erode the foreground pixels (with value @code{1}) of the input dataset (second popped operand).
The first popped operand is the connectivity (see description in @command{connected-components}).
Erosion is simply a flipping of all foreground pixels (with value @code{1}) to background (with value @code{0}) that are ``touching'' background pixels.
``Touching'' is defined by the connectivity.
In effect, this operator ``carves off'' the outer borders of the foreground, making them thinner.
This operator assumes a binary dataset (all pixels are @code{0} or @code{1}).
For example, imagine that you have an astronomical image with a mean/sky value of 0 units and a standard deviation (@mymath{\sigma}) of 100 units and many galaxies in it.
With the first command below, you can apply a threshold of @mymath{2\sigma} on the image (by only keeping pixels that are greater than 200 using the @command{gt} operator).
The output of thresholding the image is a binary image (each pixel is either smaller or equal to the threshold or larger than it).
You can then erode the binary image with the second command below to remove very small false positives (one or two pixel peaks).
@example
$ astarithmetic image.fits 200 gt -obinary.fits
$ astarithmetic binary.fits 2 erode -oout.fits
@end example
In fact, you can merge these operations into one command thanks to the reverse polish notation (see @ref{Reverse polish notation}):
@example
$ astarithmetic image.fits 200 gt 2 erode -oout.fits
@end example
To see the effect of connectivity, try this:
@example
$ astarithmetic image.fits 200 gt 1 erode -oout-con-1.fits
@end example
@item dilate
@cindex Dilation
Dilate the foreground pixels (with value @code{1}) of the binary input dataset (second popped operand).
The first popped operand is the connectivity (see description in @command{connected-components}).
Dilation is simply a flipping of all background pixels (with value @code{0}) to foreground (with value @code{1}) that are ``touching'' foreground pixels.
``Touching'' is defined by the connectivity.
In effect, this expands the outer borders of the foreground.
This operator assumes a binary dataset (all pixels are @code{0} or @code{1}).
The usage is similar to @code{erode}, for example:
@example
$ astarithmetic binary.fits 2 dilate -oout.fits
@end example
@item number-neighbors
Return a dataset of the same size as the second popped operand, but where each non-zero and non-blank input pixel is replaced with the number of its non-zero and non-blank neighbors.
The first popped operand is the connectivity (see above) and must be a single value of an integer type.
The input dataset is assumed to be binary (having an unsigned 8-bit integer type).
For example with the command below, you can select all pixels above a value of 100 in your image with the ``greater-than'' or @code{gt} operator (see @ref{Conditional operators}).
Recall that the output of all conditional operators is a binary output (having a value of 0 or 1).
In the same command, we will then find how many neighboring pixels of each pixel (that was originally above the threshold) are also above the threshold.
@example
$ astarithmetic image.fits 100 gt 2 number-neighbors
@end example
@item connected-components
@cindex Connected components
Find the connected components in the input dataset (second popped operand).
The first popped operand is the connectivity used in the connected components algorithm.
The second popped operand is the dataset where connected components are to be found.
It is assumed to be a binary image (with values of 0 or 1), and must have an 8-bit unsigned integer type (which is the format produced by the conditional operators).
This operator will return a labeled dataset where the non-zero pixels in the input will be labeled with a counter (starting from 1).
The connectivity is a number between 1 and the number of dimensions in the dataset (inclusive).
1 corresponds to the weakest (symmetric) connectivity between elements and the number of dimensions the strongest.
For example, on a 2D image, a connectivity of 1 corresponds to 4-connected neighbors and 2 corresponds to 8-connected neighbors.
One example usage of this operator can be the identification of regions above a certain threshold, as in the command below.
With this command, Arithmetic will first separate all pixels greater than 100 into a binary image (where pixels with a value of 1 are above that value).
Afterwards, it will label all those that are connected.
@example
$ astarithmetic in.fits 100 gt 2 connected-components
@end example
If your input dataset does not have a binary type, but you know all its values are 0 or 1, you can use the @code{uint8} operator (below) to convert it to binary.
@item fill-holes
Flip background (0) pixels surrounded by foreground (1) in a binary dataset.
This operator takes two operands (similar to @code{connected-components}): the second is the binary (0 or 1 valued) dataset to fill holes in and the first popped operand is the connectivity (to define a hole).
Imagine that in your dataset there are some holes with zero value inside the objects with one value (for example, the output of the thresholding example of @command{erode}) and you want to fill the holes:
@example
$ astarithmetic binary.fits 2 fill-holes
@end example
@item invert
Invert an unsigned integer dataset (will not work on other data types, see @ref{Numeric data types}).
This is the only operator that ignores blank values (which are set to be the maximum values in the unsigned integer types).
This is useful in cases where the target(s) have been imaged in absorption in raw formats (which have unsigned integer types).
With this operator, each pixel value will be subtracted from the maximum value of the given type, thus ``inverting'' the image, so the target(s) can be treated as emission.
This can be useful when the higher-level analysis methods/tools only work on emission (positive skew in the noise, not negative).
@example
$ astarithmetic image.fits invert
@end example
@end table
@node Bitwise operators, Numerical type conversion operators, Mathematical morphology operators, Arithmetic operators
@subsubsection Bitwise operators
@cindex Bitwise operators
Astronomical images are usually stored as arrays of multi-byte pixels with different sizes for different precision levels (see @ref{Numeric data types}).
For example, images from CCDs are usually in the unsigned 16-bit integer type (each pixel takes 16 bits, or 2 bytes, of memory) and fully reduced deep images have a 32-bit floating point type (each pixel takes 32 bits or 4 bytes).
On the other hand, during the data reduction, we need to preserve a lot of meta-data about some pixels.
For example, if a cosmic ray had hit the pixel during the exposure, or if the pixel was saturated, or is known to have a problem, or if the optical vignetting is too strong on it.
A crude solution is to make a new image when checking for each one of these things and make a binary image where we flag (set to 1) pixels that satisfy any of these conditions above, and set the rest to zero.
However, processing pipelines sometimes need more than 20 flags to store important per-pixel meta-data, and recall that the smallest numeric data type is one byte (or 8 bits, that can store up to 256 different values), while we only need two values for each flag!
This is a major waste of storage space!
@cindex Flag (mask) images
@cindex Mask (flag) images
A much more optimal solution is to use the bits within each pixel to store different flags!
In other words, if you have an 8-bit pixel, use each bit as a flag to mark if a certain condition has happened on a certain pixel or not.
For example, let's set the following standard based on the four cases mentioned above: the first bit will show that a cosmic ray has hit that pixel.
So if a pixel is only affected by cosmic rays, it will have this sequence of bits (note that the bit-counting starts from the right): @code{00000001}.
The second bit shows that the pixel was saturated (@code{00000010}), the third bit shows that it has known problems (@code{00000100}) and the fourth bit shows that it was affected by vignetting (@code{00001000}).
Since each bit is independent, we can mark multiple pieces of metadata about each pixel of the actual image within a single ``flag'' or ``mask'' pixel of a flag or mask image that has the same number of pixels.
For example, a flag-pixel with the following bits @code{00001001} shows that it has been affected by cosmic rays @emph{and} it has been affected by vignetting at the same time.
The common data type to store these flagging pixels are unsigned integer types (see @ref{Numeric data types}).
Therefore when you open an unsigned 8-bit flag image in a viewer like DS9, you will see a single integer in each pixel that actually has 8 layers of metadata in it!
For example, the integer you will see for the bit sequences given above will respectively be: @mymath{2^0=1} (for a pixel that only has cosmic ray), @mymath{2^1=2} (for a pixel that was only saturated), @mymath{2^2=4} (for a pixel that only has known problems), @mymath{2^3=8} (for a pixel that is only affected by vignetting) and @mymath{2^0 + 2^3 = 9} (for a pixel that has a cosmic ray @emph{and} was affected by vignetting).
You can later use this bit information to mark objects in your final analysis or to mask certain pixels.
For example, you may want to set all pixels affected by vignetting to NaN, but can interpolate over cosmic rays.
You therefore need ways to separate the pixels with a desired flag(s) from the rest.
It is possible to treat a flag pixel as a single integer (and try to define certain ranges in value to select certain flags).
But a much easier and more robust way is to actually look at each pixel as a sequence of bits (not as a single integer!) and use the bitwise operators below for this job.
For more on the theory behind bitwise operators, see @url{https://en.wikipedia.org/wiki/Bitwise_operation, Wikipedia}.
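For example, the sketch below assumes a hypothetical unsigned 8-bit flag image @file{flag.fits} (following the bit convention above) that corresponds to the science image @file{image.fits}.
The first command builds a binary image of all pixels whose vignetting flag (fourth bit, with a value of 8) is set; the second uses that condition to set the flagged pixels of the science image to blank (see @ref{Conditional operators}):
@example
## Binary image of pixels with the vignetting flag (value 8).
$ astarithmetic flag.fits 8 bitand 0 ne --output=vignetted.fits

## Set the flagged pixels of the science image to blank.
$ astarithmetic image.fits flag.fits 8 bitand 0 ne nan where \
                --output=masked.fits
@end example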
@table @command
@item bitand
Bitwise AND operator: only bits with values of 1 in both popped operands will get the value of 1, the rest will be set to 0.
For example, (assuming numbers can be written as bit strings on the command-line): @code{00101000 00100010 bitand} will give @code{00100000}.
Note that the bitwise operators only work on integer type datasets.
@item bitor
Bitwise inclusive OR operator: The bits where at least one of the two popped operands has a 1 value get a value of 1, the others 0.
For example (assuming numbers can be written as bit strings on the command-line): @code{00101000 00100010 bitor} will give @code{00101010}.
Note that the bitwise operators only work on integer type datasets.
@item bitxor
Bitwise exclusive OR operator: A bit will be 1 if it differs between the two popped operands.
For example (assuming numbers can be written as bit strings on the command-line): @code{00101000 00100010 bitxor} will give @code{00001010}.
Note that the bitwise operators only work on integer type datasets.
@item lshift
Bitwise left shift operator: shift all the bits of the first operand to the left by a number of times given by the second operand.
For example, (assuming numbers can be written as bit strings on the command-line): @code{00101000 2 lshift} will give @code{10100000}.
This is equivalent to multiplication by 4.
Note that the bitwise operators only work on integer type datasets.
@item rshift
Bitwise right shift operator: shift all the bits of the first operand to the right by a number of times given by the second operand.
For example, (assuming numbers can be written as bit strings on the command-line): @code{00101000 2 rshift} will give @code{00001010}.
Note that the bitwise operators only work on integer type datasets.
@item bitnot
Bitwise not (more formally known as one's complement) operator: flip all the bits of the popped operand (note that this is the only unary, or single operand, bitwise operator).
In other words, any bit with a value of @code{0} is changed to @code{1} and vice-versa.
For example, (assuming numbers can be written as bit strings on the command-line): @code{00101000 bitnot} will give @code{11010111}.
Note that the bitwise operators only work on integer type datasets/numbers.
@end table
@node Numerical type conversion operators, Random number generators, Bitwise operators, Arithmetic operators
@subsubsection Numerical type conversion operators
With the operators below you can convert the numerical data type of your input, see @ref{Numeric data types}.
Type conversion is particularly useful when dealing with integers, see @ref{Integer benefits and pitfalls}.
As an example, let's assume that your colleague gives you many single-exposure images for processing, but they have a double-precision floating point type!
You know that the statistical error of a single-exposure image will never require more than 6 or 7 significant digits, so you would prefer to archive them as single-precision floating point and save space on your computer (a double-precision floating point image is also double the file size!).
You can do this with the @code{float32} operator described below.
@table @command
@item u8
@itemx uint8
Convert the type of the popped operand to 8-bit unsigned integer type (see @ref{Numeric data types}).
The internal conversion of C will be used.
@item i8
@itemx int8
Convert the type of the popped operand to 8-bit signed integer type (see @ref{Numeric data types}).
The internal conversion of C will be used.
@item u16
@itemx uint16
Convert the type of the popped operand to 16-bit unsigned integer type (see @ref{Numeric data types}).
The internal conversion of C will be used.
@item i16
@itemx int16
Convert the type of the popped operand to 16-bit signed integer (see @ref{Numeric data types}).
The internal conversion of C will be used.
@item u32
@itemx uint32
Convert the type of the popped operand to 32-bit unsigned integer type (see @ref{Numeric data types}).
The internal conversion of C will be used.
@item i32
@itemx int32
Convert the type of the popped operand to 32-bit signed integer type (see @ref{Numeric data types}).
The internal conversion of C will be used.
@item u64
@itemx uint64
Convert the type of the popped operand to 64-bit unsigned integer (see @ref{Numeric data types}).
The internal conversion of C will be used.
@item f32
@itemx float32
Convert the type of the popped operand to 32-bit (single precision) floating point (see @ref{Numeric data types}).
The internal conversion of C will be used.
For example, if @file{f64.fits} is a 64-bit floating point image, and you want to store it as a 32-bit floating point image, you can use the command below (the second command is to show that the output file consumes half the storage)
@example
$ astarithmetic f64.fits float32 --output=f32.fits
$ ls -lh f64.fits f32.fits
@end example
@item f64
@itemx float64
Convert the type of the popped operand to 64-bit (double precision) floating point (see @ref{Numeric data types}).
The internal conversion of C will be used.
@end table
@node Random number generators, Coordinate and border operators, Numerical type conversion operators, Arithmetic operators
@subsubsection Random number generators
When you simulate data (for example, see @ref{Sufi simulates a detection}), everything is ideal and there is no noise!
The final step of the process is to add simulated noise to the data.
The operators in this section are designed for that purpose.
To learn more about the definition and implementation of ``noise'', see @ref{Noise basics}.
In case each data element's random distribution should have an independent parameter (for example @mymath{\sigma} in a Gaussian distribution), the first popped operand can be a dataset of the same size as the second.
In this case (when the parameter is not a single value, but an array), each element will have a different parameter.
When @option{--quiet} is not given, a statement will be printed on each invocation of these operators (if there are multiple calls to the @code{mknoise-*} operators, the statement will be printed multiple times).
It will show the random number generator function and seed that was used in that invocation.
These are necessary for the future reproducibility of the outputs using the @option{--envseed} option, for more, see @ref{Generating random numbers}.
For example, with the first command below, @file{image.fits} will be degraded by a noise of standard deviation 3 units.
@example
$ astarithmetic image.fits 3 mknoise-sigma
@end example
Alternatively, you can use the operators in this section within the @ref{Column arithmetic} feature of the Table program.
For example, with the command below, you can generate a random number (centered on 0, with @mymath{\sigma=3}).
With the second command, you can put it into a shell variable for later usage.
@example
$ echo 0 | asttable -c'arith $1 3 mknoise-sigma'
$ value=$(echo 0 | asttable -c'arith $1 3 mknoise-sigma' --quiet)
$ echo $value
@end example
You can also use the operators here in combination with AWK to easily generate an arbitrarily large table with random columns.
In the example below, we will create a two column table with 20 rows.
The first column will be centered on 5 and @mymath{\sigma_1=2}, the second will be centered on 10 and @mymath{\sigma_2=3}:
@example
$ echo 5 10 \
| awk '@{for(i=0;i<20;++i) print $1, $2@}' \
| asttable -c'arith $1 2 mknoise-sigma' \
-c'arith $2 3 mknoise-sigma'
@end example
By adding an extra @option{--output=random.fits}, the table will be saved into a file called @file{random.fits}, and you can change the @code{i<20} to @code{i<5000} to have 5000 rows instead.
Of course, if your input table has different values in the desired column the noisy distribution will be centered on each input element, but all will have the same scatter/sigma.
As mentioned above, you can use the @option{--envseed} option to pre-define the random number generator seed (and thus get a reproducible result).
For more on @option{--envseed}, see @ref{Generating random numbers}.
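For example, a sketch of a reproducible call would be the following (the seed value is arbitrary):
@example
$ export GSL_RNG_SEED=1599251212
$ echo 0 | asttable -c'arith $1 3 mknoise-sigma' --envseed
@end example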
When using column arithmetic in Table, it may happen that multiple columns need random numbers (with any of the @code{mknoise-*} operators) in one call of @command{asttable}.
In such cases, the value given to @code{GSL_RNG_SEED} is incremented by one on every call to the @code{mknoise-*} operators.
Without this increment, when the column values are the same (happens a lot, for no-noised datasets), the returned values for all columns will be identical.
But this feature has a side-effect: that if the order of calling the @code{mknoise-*} operators changes, the seeds used for each operator will change@footnote{We have defined @url{https://savannah.gnu.org/task/?15971, Task 15971} in Gnuastro's project management system to address this.
If you need this feature please send us an email at @code{bug-gnuastro@@gnu.org} (to motivate us in its implementation).}.
@table @command
@item mknoise-sigma
Add a Gaussian noise with pre-defined @mymath{\sigma} to each element of the input dataset (independent of the input pixel value).
@mymath{\sigma} is the standard deviation of the @url{https://en.wikipedia.org/wiki/Normal_distribution, Gaussian or Normal distribution}.
This operator takes two arguments: the top/first popped operand is the noise standard deviation; the next popped operand is the dataset that the noise should be added to.
For example, with the first command below, let's put a S@'ersic profile with S@'ersic index 1 and effective radius 10 pixels, truncated at 5 times the effective radius in the center of a mock image that is @mymath{100\times100} pixels wide.
We will also give it a position angle of 45 degrees and an axis ratio of 0.8, and set it to have a total electron count of 10000 (@code{1e4} in the command).
Note that this example is focused on this operator, for a robust simulation, see the tutorial in @ref{Sufi simulates a detection}.
With the second command, let's add noise to this image and with the third command, we'll subtract the raw image from the noised image.
Finally, we'll view them both together:
@example
$ echo "1 50 50 1 10 1 45 0.8 1e4 5" \
| astmkprof --mergedsize=100,100 --oversample=1 \
--mcolissum --output=raw.fits
$ astarithmetic raw.fits 2 mknoise-sigma --output=sigma.fits
$ astarithmetic raw.fits sigma.fits - -g1 \
--output=diff-sigma.fits
$ astscript-fits-view raw.fits sigma.fits diff-sigma.fits
@end example
You see that the final @file{diff-sigma.fits} distribution was independent of the pixel values of the input.
You will also notice that within @file{sigma.fits}, for the pixels that had a zero value in @file{raw.fits}, the noise fluctuates around zero (it is negative in roughly half of them).
These behaviors will be different in the case for @code{mknoise-sigma-from-mean} below, which is more ``realistic'' (or Poisson-like).
@item mknoise-sigma-from-mean
@cindex Poisson noise
Replace each input element (e.g., pixel in an image) of the input with a random value taken from a Gaussian distribution (for pixel @mymath{i}) with mean @mymath{\mu_i} and standard deviation @mymath{\sigma_i}.
Where, @mymath{\sigma_i=\sqrt{I_i+B_i}} and @mymath{\mu_i=I_i+B_i} and @mymath{I_i} and @mymath{B_i} are respectively the values of the input image, and background in that same pixel.
In other words, this can be seen as approximating a Poisson distribution at high mean values (where the Poisson distribution becomes identical to the Gaussian distribution).
This operator takes two arguments: 1. the first popped operand (just before the operator) is the @emph{per-pixel} background value (in units of electron counts).
2. The second popped operand is the dataset that the noise should be added to.
To demonstrate the effect of this noise pattern, please run the example commands in the description of @code{mknoise-sigma}.
With the first command below, let's add this Poisson-like noise (assuming a background level of 4 electron counts, to be similar to a @mymath{\sigma=2} of the example in @code{mknoise-sigma}).
With the second command, let's subtract the raw image from this noise pattern:
@example
$ astarithmetic raw.fits 4 mknoise-sigma-from-mean \
--output=sigma-from-mean.fits
$ astarithmetic raw.fits sigma-from-mean.fits - -g1 \
--output=diff-sigma-from-mean.fits
$ astscript-fits-view diff-sigma.fits \
diff-sigma-from-mean.fits
@end example
You clearly see how the noise in the center of the S@'ersic profile is much stronger than the outer parts.
As described above, this is the behavior we would expect in a ``real'' observation: the regions with stronger signal also have stronger noise, as defined through the @url{https://en.wikipedia.org/wiki/Poisson_distribution, Poisson distribution}!
The reason we described this operator as ``Poisson-like'' is that it has some shortcomings compared to the @code{mknoise-poisson} operator (described below):
@itemize
@item
For low mean values (less than 3 for example), this will produce a symmetric Gaussian distribution, while the Poisson distribution will not be symmetric.
@item
The random values from this distribution are floating point (unlike the Poisson distribution, which produces integers).
@item
The random values can be negative (which is not possible in a Poisson distribution).
@end itemize
@cindex Coadds
@cindex Coadding
@cindex Photon-starved images
Therefore, to simulate photon-starved images (for example, UV or X-ray data), the @code{mknoise-poisson} operator should always be used, not this one.
However, in optical (or redder) bands, the background is very bright (much brighter than 10 counts, for example).
In such cases (as the mean increases), the Poisson distribution becomes identical to the Gaussian distribution.
Furthermore, processed coadded images are no longer integers, but floating points with the Sky-level already subtracted (see @ref{Sky value}).
Therefore if you are trying to simulate a processed, photon-rich dataset, you can safely use this operator.
@cindex Dark night
@cindex Gray night
@cindex Nights (dark or gray)
Recall that the background values reported by observatories (for example, to define dark or gray nights), or in papers, are usually reported in units of magnitudes per arcsecond squared.
You need to do the conversion to counts per pixel manually.
The conversion of magnitudes to counts is described below.
For converting arcseconds squared to number of pixels, you can use the @option{--pixelscale} option of @ref{Fits}.
For example, @code{astfits image.fits --pixelscale}.
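For example, here is a minimal sketch of the full conversion, assuming hypothetical values for illustration (a zero point of 22.5, a background of 19 magnitudes per arcsecond squared, and a pixel scale of 0.27 arcseconds, so each pixel covers @mymath{0.27\times0.27} arcseconds squared):
@example
$ astarithmetic 19 22.5 mag-to-counts \
                0.27 0.27 x x --quiet
@end example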
Except for the noise-model, this operator is very similar to @code{mknoise-sigma} and the examples there apply here too.
The main difference with @code{mknoise-sigma} is that in a Poisson distribution the scatter/sigma will depend on each element's value.
For example, let's assume you have made a mock image called @file{mock.fits} with @ref{MakeProfiles} and its assumed zero point is 22.5 (for more on the zero point, see @ref{Brightness flux magnitude}).
Let's assume the background level for the Poisson noise has a value of 19 magnitudes.
You can first use the @code{mag-to-counts} operator to convert this background magnitude into counts, then feed the background value in counts to the @code{mknoise-sigma-from-mean} operator:
@example
$ astarithmetic mock.fits 19 22.5 mag-to-counts \
mknoise-sigma-from-mean
@end example
Try changing the background value from 19 to 10 to see the effect!
Recall that the tutorial @ref{Sufi simulates a detection} shows how you can use MakeProfiles to build mock images.
@item mknoise-poisson
Replace every pixel of the input with a random integer taken from a Poisson distribution with the mean value of that input pixel.
Similar to @code{mknoise-sigma-from-mean}, it takes two operands:
1. The first popped operand (just before the operator) is the per-pixel background value (in units of electron counts).
2. The second popped operand is the dataset that the noise should be added to.
To demonstrate this noise pattern, let's use @code{mknoise-poisson} in the example of the description of @code{mknoise-sigma-from-mean} with the first command below.
The second command below will show you the two images side-by-side; you will notice that the Poisson distribution's undetected regions are slightly darker (this is because of the skewness of the Poisson distribution).
Finally, with the last two commands, you can see the histograms of the two distributions:
@example
$ astarithmetic raw.fits 4 mknoise-poisson \
--output=poisson.fits
$ astscript-fits-view sigma-from-mean.fits poisson.fits
$ aststatistics sigma-from-mean.fits --lessthan=10
-------
Histogram:
| ***
| ******
| **********
| ***********
| **************
| ****************
| ******************
| **********************
| **************************
| ********************************** *
|* **********************************************************
|------------------------------------------------------------
$ aststatistics poisson.fits --lessthan=10
-------
Histogram:
| * *
| * *
| * * *
| * * * *
| * * * * *
| * * * * *
| * * * * * * *
| * * * * * * *
| * * * * * * * *
| * * * * * * * *
| * * * * * * * *
|------------------------------------------------------------
@end example
The extra skewness of the Poisson distribution, and the fact that it only returns integers, are therefore clear from the commands above.
A more detailed comparison was made above, in the description of @code{mknoise-sigma-from-mean}.
In summary, you should prefer the Poisson distribution when you are simulating the following scenarios:
@itemize
@item
A photon-starved image (as in UV or X-ray).
@item
A raw exposure of a photon-rich image (its pixel values may be large, but they are always integers).
@end itemize
@item mknoise-uniform
Add uniform noise to each element of the input dataset.
This operator takes two arguments: the top/first popped operand is the width of the interval; the second popped operand is the dataset that the noise should be added to (each element will be the center of its interval).
The returned random values may happen to be the minimum interval value, but will never be the maximum.
Except for the noise-model, this operator behaves very similar to @code{mknoise-sigma}, see the explanation there for more.
For example, with the command below, a random value will be selected between 10 and 14 (centered on 12, which is the only input data element, with a total width of 4).
@example
echo 12 | asttable -c'arith $1 4 mknoise-uniform'
@end example
Similar to the example in @code{mknoise-sigma}, you can pipe the output of @command{echo} to @command{awk} before passing it to @command{asttable} to generate a full column of uniformly selected values within the same interval.
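For example, a minimal sketch that generates five rows, each uniformly selected within the same interval as above:
@example
$ echo 12 \
      | awk '@{for(i=0;i<5;++i) print $1@}' \
      | asttable -c'arith $1 4 mknoise-uniform'
@end example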
@item random-from-hist-raw
Generate random values from a custom distribution (defined by a histogram).
The output will have a double-precision floating point type (see @ref{Numeric data types}).
This operator takes three operands:
@itemize
@item
The first popped operand (nearest to the operator) is the histogram values.
The histogram is a 1-dimensional dataset (a table column) and contains the probability of obtaining a certain interval of values.
The histogram does not have to be normalized: the GNU Scientific Library (or GSL, which is used by Gnuastro for this operator), will normalize it internally.
The value of each bin (whose probability is given in the histogram) is given in the second popped operand.
Therefore these two operands have to have the same number of rows.
@item
The second popped operand is the bin value (mostly the bin center, but it can be anything).
The probability of each bin is defined in the histogram operand (first popped operand).
The bins can have any width (do not have to be evenly spaced), and any order.
Just make sure that the same row in the bins column corresponds to the same row in the histogram: the number of rows in the bins and histogram must be equal.
@item
The third popped operand is the dataset that the random values should be written over.
Effectively, only its size will be used by this operator (all values will be overwritten as double-precision floating point numbers).
@end itemize
The first two operands have to be single-dimensional (a table column) and have the same number of rows, but the last popped operand can have any number of dimensions.
You can use the @code{load-col-} operator to load the two bins and histogram columns from an external file (see @ref{Loading external columns}).
For example, in the command below, we first construct a fake histogram to represent a @mymath{y=x^2} distribution with AWK.
We aim to distribute random values from this distribution in a @mymath{100\times100} image.
Therefore, we use the @command{makenew} operator to construct an empty image of that size, use the @command{load-col-} operator to load the histogram columns into Arithmetic and put the output in @file{random.fits}.
Finally we visually inspect @file{random.fits} with DS9 and also have a look at its pixel distribution with @command{aststatistics}.
@example
$ echo "" | awk '@{for(i=1;i<5;++i) print i, i*i@}' \
> histogram.txt
$ cat histogram.txt
1 1
2 4
3 9
4 16
$ astarithmetic 100 100 2 makenew \
load-col-1-from-histogram.txt \
load-col-2-from-histogram.txt \
random-from-hist-raw \
--output=random.fits
$ astscript-fits-view random.fits
$ aststatistics random.fits --asciihist --numasciibins=50
| *
| *
| *
| *
| * *
| * *
| * *
| * * *
| * * *
|* * * *
|* * * *
|--------------------------------------------------
@end example
As you see, the 10000 pixels in the image only have values 1, 2, 3 or 4 (which were the values in the bins column of @file{histogram.txt}), and the number of times each of these values occurs follows the @mymath{y=x^2} distribution.
Generally, any value given in the bins column will be used for the final output values.
For example, in the command below (for generating a histogram from an analytical function), we are adding 20 to the bin values (while keeping the same probability distribution of @mymath{y=x^2}).
If you re-run the Arithmetic command above after this, you will notice that the pixel values are now one of the following: 21, 22, 23 or 24 (instead of 1, 2, 3, or 4).
But the shape of the histogram of the resulting random distribution will be unchanged.
@example
$ echo "" | awk '@{for(i=1;i<5;++i) print 20+i, i*i@}' \
> histogram.txt
@end example
If you do not want the outputs to have exactly the value of the bin identifier, but be a randomly selected value from a uniform distribution within the bin, you should use @command{random-from-hist} (see below).
As mentioned above, the output will have a double-precision floating point type (see @ref{Numeric data types}).
Therefore, by default each element of the output will consume 8 bytes (64-bits) of storage.
This is usually far more than the statistical error/precision of your data (and just results in wasted storage in your file system, or wasted RAM when a program that uses the data is being run, and a slower running time of the program).
It is therefore recommended to use a type-conversion operator after this operator to put the output in the smallest type that can be used to safely store your data without wasting storage, RAM or time.
For the list of type conversion operators, see @ref{Numerical type conversion operators}.
Recall that you already know the values returned by this operator (they are one of the values in the bins column).
For example, in the example above, the whole image only has values 1, 2, 3 or 4.
Since they are always positive and are below 255, we can safely place them in an unsigned 8-bit integer (see @ref{Numeric data types}) with the command below (note the @code{uint8} after the operator name, and that we are using a different name for the output).
After building the new image, let's have a look at the sizes of the two images with @command{ls -l}:
@example
$ astarithmetic 100 100 2 makenew \
load-col-1-from-histogram.txt \
load-col-2-from-histogram.txt \
random-from-hist-raw uint8 \
--output=random-u8.fits
$ ls -lh random.fits random-u8.fits
-rw-r--r-- 1 name name 85K Jan 01 13:40 random.fits
-rw-r--r-- 1 name name 17K Jan 01 13:45 random-u8.fits
@end example
As you see, when using a suitable data type, we can shrink the size of the file significantly without losing any information (from 85 kilobytes to 17 kilobytes).
This difference can be felt much better for larger (real-world) datasets, so be sure to always set the output data type after calling this operator.
@item random-from-hist
Similar to @code{random-from-hist-raw}, but do not return the exact bin value, instead return a random value from a uniform distribution within each bin.
Therefore the following limitations have to be taken into account (compared to @code{random-from-hist-raw}):
@itemize
@item
The number associated with each bin (in the bin column) should be its center.
@item
The bins have to be in ascending order (so the second row in the bin column is larger than the first).
@item
The bin widths (distance from one bin to another) have to be fixed.
@end itemize
For a demonstration, let's replace @code{random-from-hist-raw} with @code{random-from-hist} in the example of the description of @code{random-from-hist-raw}.
Note how we are manually converting the output of this operator into single-precision floating point (32-bit, since the default 64-bit precision is statistically meaningless in this scenario and we do not want to waste storage, memory and running time):
@example
$ echo "" | awk '@{for(i=1;i<5;++i) print i, i*i@}' \
> histogram.txt
$ astarithmetic 100 100 2 makenew \
load-col-1-from-histogram.txt \
load-col-2-from-histogram.txt \
random-from-hist float32 \
--output=random.fits
$ aststatistics random.fits --asciihist --numasciibins=50
| *
| *** ********
| ************
| *************
| * * *************
| * ***********************
| *************************
| *************************
| *************************************
|********* * **************************************
|**************************************************
|--------------------------------------------------
@end example
You can see that the pixels of @file{random.fits} are no longer just 1, 2, 3 or 4.
Instead, the values within each bin are selected from a uniform distribution covering that bin.
This creates the step-like feature in the histogram of the output.
Of course, this extra uniform random number generation can make your program slower, so be sure to check if it is worth it.
In particular, one way to avoid this (and use @command{random-from-hist-raw} with a more contiguous-looking output distribution) is to simply use a higher-resolution histogram (assuming it is possible: you have a sufficient number of data points, or you have an analytical expression that you can sample at smaller bin sizes).
To better demonstrate this operator and its practical usage in everyday research, let's look at another example:
Assume you want to get 100 random star magnitudes that follow the real-world Gaia Data release 3 magnitude distribution within a radius of 2 degrees around the (RA,Dec) coordinate of (1.23,4.56).
Let's further assume that you want to distribute them uniformly over an image of size 1000 by 1000 pixels.
So your desired output table should have three columns, the first two are pixel positions of each star, and the third is the magnitude.
First, we need to query the Gaia database and ask for all the magnitudes in this region of the sky.
We know that Gaia is not complete for stars fainter than the 20th magnitude, so we will use the @option{--range} option and only ask for those stars that are brighter than magnitude 20.
@example
$ astquery gaia --dataset=dr3 --center=1.23,4.56 --radius=2 \
--column=phot_g_mean_mag --output=gaia.fits \
--range=phot_g_mean_mag,-inf,20
@end example
We now have more than 25000 magnitudes in @file{gaia.fits}!
To get a more accurate random sampling of our stars, let's construct a histogram with 500 bins, and generate our three desired randomly selected columns:
@example
$ aststatistics gaia.fits --histogram --numbins=500 \
--output=gaia-hist.fits
$ asttable gaia-hist.fits -i
$ echo 1000 \
| awk '@{for(i=0;i<100;++i) print $1/2@}' \
| asttable -c'arith $1 1000 mknoise-uniform' \
-c'arith $1 1000 mknoise-uniform' \
-c'arith $1 \
load-col-1-from-gaia-hist.fits-hdu-1 \
load-col-2-from-gaia-hist.fits-hdu-1 \
random-from-hist float32'
@end example
These columns can easily be placed in the format for @ref{MakeProfiles} to be inserted into an image automatically.
@end table
@node Coordinate and border operators, Loading external columns, Random number generators, Arithmetic operators
@subsubsection Coordinate and border operators
The operators here help you define or manipulate coordinates; for example, to define the ``box'' (a rectangular region) that surrounds an ellipse, or to rotate a point around a reference point.
@table @command
@item rotate-coord
Rotate the given point (horizontal and vertical coordinates given in 5th and 4th popped operands) around a center/reference point (coordinates given in the 3rd and 2nd popped operands) by a given angle (first popped operand).
For example, if you want to trace the outer edge of a circle centered on (1.23,45.6) with a radius of 0.78, you can use this operator like below.
The logic is that we assume a single point that is located 0.78 units away from the center along the horizontal axis (the point's vertical axis position is the same as the center's).
We then rotate this point in each row by one degree to build the circle's circumference.
@verbatim
$ cx=1.23
$ cy=45.6
$ rad=0.78
$ seq 0 360 \
| awk '{print '$rad'+'$cx', '$cy', $1}' \
| asttable -c'arith $1 $2 '$cx' '$cy' $3 rotate-coord' \
--output=circle.fits
## Within TOPCAT, after opening "Plane Plot", within "Axes" select
## "Aspect lock" so the step in both axes is the same.
$ astscript-fits-view circle.fits
@end verbatim
If you want the points to create a circle on the celestial sphere, you can use the @code{eq-j2000-from-flat} operator after this one (see @ref{Column arithmetic}):
@verbatim
$ seq 0 360 \
| awk '{print '$rad'+'$cx', '$cy', $1}' \
| asttable -c'arith $1 $2 '$cx' '$cy' $3 rotate-coord \
'$cx' '$cy' TAN eq-j2000-from-flat' \
--output=circle-on-sky.fits
@end verbatim
When you open TOPCAT, if you open the ``Plane Plot'', you will see an ellipse.
However, if you open ``Sky Plot'' (from the ``Graphics'' menu), and select the first and second columns respectively, you will see a circle.
The center coordinates and angle can be fixed for all the rows (as in the example above) or be different for every row.
Recall that if you want these to change on every row, you should give the column name (or @code{$} followed by the column number) for these operands, instead of the constant numbers used above; for example:
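Here is a minimal sketch, where each row of a hypothetical @file{points.fits} table carries its own point (columns 1 and 2), center (columns 3 and 4) and angle (column 5):
@example
$ asttable points.fits \
           -c'arith $1 $2 $3 $4 $5 rotate-coord'
@end example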
@item box-around-ellipse
Return the width (along horizontal) and height (along vertical) of a box that encompasses an ellipse with the same center point.
The top-popped operand is assumed to be the position angle (angle from the horizontal axis) in @emph{degrees}.
The second and third popped operands are the minor and major radii of the ellipse respectively.
This operator outputs two operands on the general stack.
The first one is the width and the second (which will be the top one when this operator finishes) is the height.
If the value of the second popped operand (minor axis) is larger than that of the third (major axis), a NaN value will be written for both the width and height of that element and a warning will be printed (the warning can be disabled with the @option{--quiet} option).
As an example, if your ellipse has a major axis radius of 10 units, a minor axis radius of 4 units and a position angle of 20 degrees, you can estimate the bounding box with this command:
@example
$ echo "10 4 20" \
| asttable -c'arith $1 $2 $3 box-around-ellipse'
@end example
Alternatively, if your three values are in separate FITS arrays/images, you can use the command below to have the width and height in similarly sized FITS arrays.
In this example @file{a.fits} and @file{b.fits} are respectively the major and minor axis lengths and @file{pa.fits} is the position angle (in degrees).
Also, in all three, we assume the first extension is used.
After it is done, the height of the box will be put in @file{h.fits} and the width will be in @file{w.fits}.
Just note that because this operator has two output datasets, you need to first write the height (top output operand) into a file and free it with the @code{tofilefree-} operator, then write the width in the file given to @option{--output}.
@example
$ astarithmetic a.fits b.fits pa.fits box-around-ellipse \
tofilefree-h.fits -ow.fits -g1
@end example
Finally, if you need to treat the width and height separately for further processing, you can call the @code{set-} operator two times afterwards like below.
Recall that the @code{set-} operator will pop the top operand, and put it in memory with a certain name, bringing the next operand to the top of the stack.
For example, let's assume @file{catalog.fits} has at least three columns @code{MAJOR}, @code{MINOR} and @code{PA} which specify the major axis, minor axis and position angle respectively.
But you want the final width and height in 32-bit floating point numbers (not the default 64-bit, which may be too much precision in many scenarios).
You can do this with the command below (note that you can also break lines with @key{\}, within the single-quote environment):
@example
$ asttable catalog.fits \
-c'arith MAJOR MINOR PA box-around-ellipse \
set-height set-width \
width float32 height float32'
@end example
@item box-vertices-on-sphere
@cindex Polygon
@cindex Vertices on sphere (sky)
Convert a box center and width to the coordinates of the vertices of the box on a left-handed spherical coordinate system.
In a left-handed spherical coordinate system, the longitude increases towards the left while north is up (as in the RA and Dec directions of the equatorial coordinate system used in astronomy).
This operator therefore takes four input operands (the RA and Dec of the box's center, as well as the width of the box in each direction).
After it is complete, this operator places 8 operands on the stack, containing the RA and Dec of the four vertices of the box in the following anti-clockwise order:
@enumerate
@item
Bottom-left vertex longitude (RA)
@item
Bottom-left vertex latitude (Dec)
@item
Bottom-right vertex longitude (RA)
@item
Bottom-right vertex latitude (Dec)
@item
Top-right vertex longitude (RA)
@item
Top-right vertex latitude (Dec)
@item
Top-left vertex longitude (RA)
@item
Top-left vertex latitude (Dec)
@end enumerate
For example, with the command below, we will retrieve the vertex coordinates of a rectangle around a point with RA=20 and Dec=0 (on the equator).
The rectangle will have a 1 degree edge along the RA direction and a 2 degree edge along the declination.
In this example, we are using @option{-Afixed -B2} only for demonstration purposes, because of the round numbers!
In general, it is best to write your outputs to a binary FITS table to preserve the full precision (see @ref{Printing floating point numbers}).
@example
$ echo "20 0 1 2" \
| asttable -Afixed -B2 \
-c'arith $1 $2 $3 $4 box-vertices-on-sphere'
20.50 -1.00 19.50 -1.00 19.50 1.00 20.50 1.00
@end example
We see that the bottom-left vertex is at an (RA,Dec) of @mymath{(20.50,-1.00)} and the top-right vertex is at @mymath{(19.50,1.00)}.
These could have easily been computed by manually adding and subtracting!
But you will see that the complexity arises at higher/lower declinations.
For example, with the command below, let's look at the vertex coordinates of the same box after moving its center to an (RA,Dec) of (20,85):
@example
$ echo "20 85 1 2" \
| asttable -Afixed -B2 \
-c'arith $1 $2 $3 $4 box-vertices-on-sphere'
24.78 84.00 15.22 84.00 12.83 86.00 27.17 86.00
@end example
Even though we did not change the central RA (20) or the size of the box along the RA (1 degree), the RA of the bottom-left vertex is now at 24.78; almost 5 degrees away!
This occurs because of the spherical coordinate system: we measure the longitude (e.g., RA) in the following way:
@enumerate
@item
@cindex Meridian
@cindex Great circle
@cindex Circle (great)
Draw a meridian that passes through your point.
The meridian is half of a @url{https://en.wikipedia.org/wiki/Great_circle, great circle} (one whose diameter is equal to the sphere's diameter) that passes through both poles.
@item
Find the intersection of that meridian with the equator.
@item
The angular distance between that intersection and the reference point (along the equator) defines the longitude angle.
@end enumerate
@cindex Small circle
@cindex Circle (small)
As you get more distant from the equator (as the declination becomes non-zero), any change along the RA (towards the east; 1 degree in the example above) will no longer be along a great circle, but along a ``@url{https://en.wikipedia.org/wiki/Circle_of_a_sphere, small circle}''.
On the small circle defined by the fixed declination @mymath{\delta}, the distance between two points is smaller than the distance between their projections on the equator (as described in the definition of longitude above).
It is smaller by a factor of @mymath{\cos({\delta})}.
Therefore, an angular change (let's call it @mymath{\Delta_{lon}}) along the small circle defined by the fixed declination @mymath{\delta} corresponds to @mymath{\Delta_{lon}/\cos(\delta)} on the equator.
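For example, at the bottom edge of the box above (@mymath{\delta=84}), the half-width of 0.5 degrees along the small circle corresponds to @mymath{0.5/\cos(84^\circ)\approx4.78} degrees of longitude, which is exactly why the bottom-left vertex RA reported above is @mymath{20+4.78=24.78}.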
@end table
@node Loading external columns, Size and position operators, Coordinate and border operators, Arithmetic operators
@subsubsection Loading external columns
In the Arithmetic program, you can always load new datasets by simply giving their names.
However, they can only be images, not columns.
In the Table program, you can load columns in @ref{Column arithmetic}, but they have to be columns within the same table (and thus have the same number of rows).
However, in some situations, it is necessary to use certain columns of a table in the Arithmetic program, or columns of different rows (from the main input) in Table.
@table @command
@item load-col-%-from-%
@itemx load-col-%-from-%-hdu-%
Load the requested column (first @command{%}) from the requested file (second @command{%}).
If the file is a FITS file, it is also necessary to specify a HDU using the second form (where the HDU identifier is the third @command{%}).
For example, @command{load-col-MAG-from-catalog.fits-hdu-1} will load the @code{MAG} column from HDU 1 of @code{catalog.fits}.
As another example, let's assume you have the following two tables, and you would like to add the first column of the first table to the (single) column of the second:
@example
$ asttable tab-1.fits
1 43.23
2 21.91
3 71.28
4 18.10
$ cat tab-2.txt
5
6
7
8
$ asttable tab-1.fits -c'arith $1 load-col-1-from-tab-2.txt +'
6
8
10
12
@end example
@end table
@node Size and position operators, New operands, Loading external columns, Arithmetic operators
@subsubsection Size and position operators
With the operators below you can get metadata about the top dataset on the stack.
@table @code
@item index
Add a new operand to the stack with an integer type and the same size (in all dimensions) as the top operand on the stack (before this operator was called; it is not popped!).
The first pixel in the returned operand is zero, and every later pixel's value is incremented by one.
It is important to remember that the top operand is not popped by this operator, so it remains on the stack.
After this operator is finished, it adds a new operand to the stack.
To pop the previous operand, you can use the @code{indexonly} operator.
The data type of the output is always an unsigned integer, and its width is determined from the number of pixels/rows in the top operand.
For example if there are only 108 rows in a table, the returned column will have an unsigned 8-bit integer type (that can keep 256 separate values).
But if the top operand is a @mymath{1000\times1000=10^6} pixel image, the output will be a 32-bit unsigned integer.
For the various types of integers, see @ref{Numeric data types}.
To see the index image along with the actual image, you can use the @option{--writeall} option to have a multi-HDU output (without @option{--writeall}, Arithmetic will complain if more than one operand is left at the end).
After DS9 opens with the second command, flip between the two extensions.
@example
$ astarithmetic image.fits index --writeall
$ astscript-fits-view image_arith.fits
@end example
Below is a review of some usage examples of this operator:
@table @asis
@item Image: masking margins
With the command below, we will be masking all pixels that are 20 pixels away from the edges of the image (on the margin).
Here is a description of the command below (for the basics of Arithmetic's notation, see @ref{Reverse polish notation}):
@itemize
@item
The @code{index} operator just adds a new dataset on the stack: unlike almost all other operators in Arithmetic, @code{index} doesn't remove its input dataset from the stack (use @code{indexonly} for the ``normal'' behavior).
This is because @code{index} returns the pixel metadata not data.
As a result, after @code{index}, we have two operands on the stack: the input image and the index image.
@item
With the @code{set-i} operator, the top operand (the image containing the index of each pixel) is popped from the stack and associated to the name @code{i}.
Therefore after this, the stack only has the input image.
For more on the @code{set-} operator, see @ref{Operand storage in memory or a file}.
@item
We need three values from the commands before Arithmetic (for the width and height of the image and the size of the margin).
To make the rest of the command easier to read/use, we'll define them in Arithmetic as three named operators (respectively called @code{w}, @code{h} and @code{m}).
All three are integers that will have a positive value lower than @mymath{2^{16}=65536} (for a ``normal'' image!).
Therefore, we will store them as 16-bit unsigned integers with the @code{uint16} operator (this will help optimal processing in later steps).
For more on the type conversion operators, see @ref{Numerical type conversion operators}.
@item
Using the modulo @code{%} and division (@code{/}) operators on the index image and the width, we extract the horizontal (X) and vertical (Y) positions of each pixel in separately named operands called @code{X} and @code{Y}.
The maximum value in these two will also fit within an unsigned 16-bit integer, so we'll also store these in that type.
@item
For the horizontal (X) dimension, we select pixels that are less than the margin (@code{X m lt}) and those that are more than the width subtracted by the margin (@code{X w m - gt}).
@item
The output of the @code{lt} and @code{gt} conditional operators above is a binary (0 or 1 valued) image.
We therefore merge them into one binary image using the @code{or} operator.
For more, see @ref{Conditional operators}.
@item
We repeat the two steps above for the vertical (Y) dimension.
@item
Once the images containing the to-be-masked pixels in each dimension are made, we combine them into one binary image with a final @code{or} operator.
At this point, the stack only has two operands: 1) the input image and 2) the binary image that has a value of 1 for all pixels whose value should be changed.
@item
A single-element operand (@code{nan}) is added on the stack.
@item
Using the @code{where} operator, all the pixels of the third popped operand (the image that was read from @file{image.fits}) that are non-zero in the second popped operand (those on the margins) are replaced with the value of the first popped operand (NaN).
For more on the @code{where} operator, see @ref{Conditional operators}.
@end itemize
@example
$ margin=20
$ width=$(astfits image.fits --keyvalue=NAXIS1 -q)
$ height=$(astfits image.fits --keyvalue=NAXIS2 -q)
$ astarithmetic image.fits index set-i \
$width uint16 set-w \
$height uint16 set-h \
$margin uint16 set-m \
i w % uint16 set-X \
i w / uint16 set-Y \
X m lt X w m - gt or \
Y m lt Y h m - gt or \
or nan where
@end example
@item Image: Masking regions outside a circle
As another example of usage on an image, in the command below we are using @code{index} to define an image where each pixel contains the distance to the point with X,Y coordinates of (345.2, 250.3).
We are then using that distance image to only keep the pixels that are within a 50 pixel radius of that point.
The basic concept behind this process is very similar to the previous example, with a different mathematical definition for pixels to mask.
The major difference is that since we want the distance to a point within the image, we need to allow negative values, and the center coordinates can be at sub-pixel positions.
The best numeric datatype for intermediate steps is therefore floating point.
64-bit floating point can have a precision of up to 15 digits after the decimal point.
This is far too much for what we need here: in astronomical imaging, the PSF is usually on the scale of 1 or more pixels (see @ref{Sampling theorem}).
So even reaching a precision of one millionth of a pixel (offered by 32-bit floating points) is beyond our wildest dreams (see @ref{Numeric data types}).
We will also define the horizontal (X) and vertical (Y) operands after shifting to the desired central point.
@example
$ radius=50
$ centerx=345.2
$ centery=250.3
$ width=$(astfits image.fits --keyvalue=NAXIS1 -q)
$ astarithmetic image.fits index set-i \
$width uint16 set-w \
$radius float32 set-r \
$centerx float32 set-cx \
$centery float32 set-cy \
i w % cx - set-X \
i w / cy - set-Y \
X X x Y Y x + sqrt r gt \
nan where --output=arith-masked.fits
@end example
@cartouche
@noindent
@strong{Optimal data types have significant benefits:} choosing the minimum required datatype for your operation is very important to avoid wasting your CPU and RAM.
Don't simply default to 64-bit floating points for everything!
Integer operations are much faster than floating points, and within floating point types, 32-bit is faster and will use half the RAM/storage!
For more, see @ref{Numeric data types}.
@end cartouche
The example above was just a demo for usage of the @code{index} operator and some important concepts.
But it is not the easiest way to achieve the desired result above!
An easier way for the scenario above (to keep a circle within an image and set everything else to NaN) is to use MakeProfiles in combination with Arithmetic, like below:
@example
$ radius=50
$ centerx=345.2
$ centery=250.3
$ echo "1 $centerx $centery 5 $radius 0 0 1 1 1" \
| astmkprof --background=image.fits \
--mforflatpix --clearcanvas \
-omkprof-mask.fits --type=uint8
$ astarithmetic image.fits mkprof-mask.fits not \
nan where -g1 -omkprof-masked.fits
@end example
@item Tables: adding new columns with row index
Within Table, you can use this operator to add an index column like below (see the @code{counter} operator for starting the count from one).
@example
## The index will be the second column.
$ asttable table.fits -c'arith $1 index'
## The index will be the first column
$ asttable table.fits -c'arith $1 index swap'
@end example
@end table
@item indexonly
Similar to @code{index}, except that the top operand is popped from the stack and is no longer available afterwards.
@item counter
Similar to @code{index}, except that counting starts from one (not zero as in @code{index}).
Counting from one is usually necessary when adding row counters in tables, like below:
@example
$ asttable table.fits -c'arith $1 counter swap'
@end example
@item counteronly
Similar to @code{counter}, except that the top operand is popped (it is no longer available).
@item size
Size of the dataset along a given FITS (or FORTRAN) dimension (counting from 1).
The desired dimension should be the first popped operand and the dataset must be the second popped operand.
The output will be a single unsigned integer (dimensions cannot be negative).
For example, the following command will produce the size of the first extension/HDU (the default HDU) of @file{a.fits} along the second FITS axis.
@example
$ astarithmetic a.fits 2 size
@end example
@cartouche
@noindent
@strong{Not optimal:} This operator has to read the top dataset on the stack, only to measure its size along the given dimension.
On a small dataset this will not consume much RAM, but if you want to put this in a pipeline or use it on a large image, the extra RAM and slow operation can become significant.
To avoid such issues, you can read the size along the given dimension using the @option{--keyvalue} option of @ref{Keyword inspection and manipulation}.
For example, in the code below, the X axis position of every pixel is returned:
@example
$ width=$(astfits image.fits --keyvalue=NAXIS1 -q)
$ astarithmetic image.fits indexonly $width % -opix-x.fits
@end example
@end cartouche
@end table
@node New operands, Operand storage in memory or a file, Size and position operators, Arithmetic operators
@subsubsection New operands
With the operators here, you can create a new dataset from scratch to start certain operations without any input data.
@table @command
@item makenew
Create a new dataset that only has zero values.
The number of dimensions is read as the first popped operand and the number of elements along each dimension are the next popped operands (in reverse of the popping order).
The type of the new dataset is an unsigned 8-bit integer and all pixel values have a value of zero.
For example, if you want to create a new 100 by 200 pixel image, you can run this command:
@example
$ astarithmetic 100 200 2 makenew
@end example
@noindent
To further extend the example, you can use any of the noise-making operators to add noise to this new dataset (see @ref{Random number generators}), like the command below:
@example
$ astarithmetic 100 200 2 makenew 5 mknoise-sigma
@end example
@item constant
Return an operand that will have a constant value (first popped operand) in all its elements.
The number of elements is read from the second popped operand.
The second popped operand is only used for its number of elements; its numeric data type and values are fully ignored (and it is later freed).
@cindex Provenance
Here is one useful scenario for this operator in tables: you want to merge the objects/rows of some catalogs together, but you first want to give each source catalog a label/counter that distinguishes the source of each row in the merged/final catalog (using @ref{Invoking asttable}).
The steps below show this usage.
@example
## Add label 1 to the RA, Dec, magnitude and magnitude error
## rows of the first catalog.
$ asttable cat-1.fits -cRA,DEC,MAG,MAG_ERR \
-c'arith $1 1 constant' --output=tab-1.fits
## Similar to above, but for the second catalog.
$ asttable cat-2.fits -cRA,DEC,MAG,MAG_ERR \
-c'arith $1 2 constant' --output=tab-2.fits
## Concatenate (merge/blend) the rows of the two tables into
## one for the 5 columns, but also add a counter for each
## object or row in the final catalog.
$ asttable tab-1.fits --catrowfile=tab-2.fits \
-c'arith $1 counteronly' \
-cRA,DEC,MAG,MAG_ERR,5 --output=merged.fits \
--colmetadata=1,ID_MERGED,counter,"Merged ID." \
--colmetadata=6,SOURCE-CAT,counter,"Source ID."
## Add keyword information on each input. It is very important
## to preserve this within the merged catalog. If the tables
## came from public databases (for example on VizieR), give
## their public identifier as the value.
$ astfits merged.fits --write=/,"Source catalogs" \
--write=CATSRC1,"I/355/gaiadr3","VizieR ID." \
--write=CATSRC2,"Jane Doe","Name of source."
## Check the metadata in 'merged.fits' and clean the
## temporary files.
$ rm tab-1.fits tab-2.fits
$ astfits merged.fits -h1
@end example
Like most operators, @code{constant} is not limited to tables; you can also apply it to images.
In the example below, we'll use @code{constant} to set all the pixels of the input image to NaN (which is necessary in scenarios where you need to include an image in an analysis, but do not want its pixels to affect the processing):
@example
$ astarithmetic image.fits nan constant
@end example
@end table
@node Operand storage in memory or a file, , New operands, Arithmetic operators
@subsubsection Operand storage in memory or a file
In your early days of using Gnuastro, to do multiple operations, it is likely that you will simply call Arithmetic (or Table, with column arithmetic) multiple times: feed the output file of the first call to the second call.
But as you get more proficient in the reverse polish notation, you will find yourself combining many operations into one call.
This greatly speeds up your operation, because instead of writing the dataset to a file in one command, and reading it in the next command, it will just keep the intermediate dataset in memory!
But adding more complexity to your operations can make them much harder to debug, or extend even further.
Therefore in this section we have some special operators that behave differently from the rest: they do not touch the contents of the data, only where/how they are stored.
They are designed to do complex operations, without necessarily having a complex command.
@table @command
@item swap
Swap the top two operands on the stack.
For example, the @code{index} operator doesn't pop the top operand (the input to @code{index}); it just adds the index image to the stack.
In case you want your next operation to be on the input to @code{index}, you can simply call @code{swap} and continue the operations on that image, while keeping the indexed pixels for later steps.
In the example below we are using the @option{--writeall} option to write the full stack and if you open the outputs you will see that the stack order has changed.
@example
## Index image is written in HDU 1.
$ astarithmetic image.fits index --writeall \
--output=ind-first.fits
## image.fits in HDU 1.
$ astarithmetic image.fits index swap --writeall \
--output=img-first.fits
@end example
@item repeat
Add N copies of the second popped operand to the stack of operands.
N is the first popped operand.
For example, let's assume @file{image.fits} is a @mymath{100\times100} image.
The output of the command below will be a 3D data cube of size @mymath{100\times100\times20} voxels (volume-pixels):
@example
$ astarithmetic image.fits 20 repeat 20 add-dimension-slow
@end example
@item free
Free the top operand from the stack and memory.
This is useful in cases where the operator adds more than one operand on the stack, but you only need one of them (not all!).
For an example, see the clipping operators of @ref{Coadding operators}.
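As another example, recall that the @code{box-around-ellipse} operator of @ref{Coordinate and border operators} leaves two operands on the stack, with the height on top.
If you only need the width, a minimal sketch like this will discard the height:
@example
$ astarithmetic a.fits b.fits pa.fits box-around-ellipse \
                free -g1 --output=width.fits
@end example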
@item set-AAA
Set the characters after the dash (@code{AAA} in the case shown here) as a name for the first popped operand on the stack.
The named dataset will be freed from memory as soon as it is no longer needed, or if the name is reset to refer to another dataset later in the command.
This operator thus enables reusability of a dataset without having to reread it from a file every time it is necessary during a process.
When a dataset is necessary more than once, this operator can thus help simplify reading/writing on the command-line (thus avoiding potential bugs), while also speeding up the processing.
Like all operators, this operator pops the top operand off of the main processing stack, but unlike other operators, it will not add anything back to the stack immediately.
It will keep the popped dataset in memory through a separate list of named datasets (not on the main stack).
That list will be used to add/copy any requested dataset to the main processing stack when the name is called.
The name to give the popped dataset is part of the operator's name.
For example, the @code{set-a} operator of the command below, gives the name ``@code{a}'' to the contents of @file{image.fits}.
This name is then used instead of the actual filename to multiply the dataset by two.
@example
$ astarithmetic image.fits set-a a 2 x
@end example
The name can be any string, but avoid strings ending with standard filename suffixes (for example, @file{.fits})@footnote{A dataset name like @file{a.fits} (which can be set with @command{set-a.fits}) will cause confusion in the initial parser of Arithmetic.
It will assume this name is a FITS file, and if it is used multiple times, Arithmetic will abort, complaining that you have not provided enough HDUs.}.
One example of the usefulness of this operator is in the @code{where} operator.
For example, let's assume you want to mask all pixels larger than @code{5} in @file{image.fits} (extension number 1) with a NaN value.
Without setting a name for the dataset, you have to read the file two times in a command like this:
@example
$ astarithmetic image.fits image.fits 5 gt nan where -g1
@end example
But with this operator you can simply give @file{image.fits} the name @code{i} and simplify the command above to the more readable one below (which greatly helps when the filename is long):
@example
$ astarithmetic image.fits set-i i i 5 gt nan where
@end example
@item tofile-AAA
Write the top operand on the operands stack into a file called @code{AAA} (can be any FITS file name) without changing the operands stack.
If you do not need the dataset any more and would like to free it, see the @code{tofilefree} operator below.
By default, any file that is given to this operator is deleted before Arithmetic actually starts working on the input datasets.
The deletion can be deactivated with the @option{--dontdelete} option (as in all Gnuastro programs, see @ref{Input output options}).
If the same FITS file is given to this operator multiple times, it will contain multiple extensions (in the same order that the operator was called).
For example, the operator @command{tofile-check.fits} will write the top operand to @file{check.fits}.
Since it does not modify the operands stack, this operator is very convenient when you want to debug, or understand, a string of operators and operands given to Arithmetic: simply put @command{tofile-AAA} anywhere in the process to see what is happening behind the scenes without modifying the overall process.
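For example, taking the masking command from the @code{set-} description above, a sketch like the following will also save the intermediate binary mask (the output of @code{gt}) into @file{check.fits} for inspection, without changing the final result:
@example
$ astarithmetic image.fits set-i i i 5 gt \
                tofile-check.fits nan where
@end example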
@item tofilefree-AAA
Similar to the @code{tofile} operator, with the only difference that the dataset that is written to a file is popped from the operand stack and freed from memory (cannot be used any more).
@end table
@node Invoking astarithmetic, , Arithmetic operators, Arithmetic
@subsection Invoking Arithmetic
Arithmetic will do pixel to pixel arithmetic operations on the individual pixels of input data and/or numbers.
For the full list of operators with explanations, please see @ref{Arithmetic operators}.
Any operand that only has a single element (a number, or a single-pixel FITS image) will be read as a number; the rest of the inputs must have the same dimensions.
The general template is:
@example
$ astarithmetic [OPTION...] ASTRdata1 [ASTRdata2] OPERATOR ...
@end example
@noindent
One line examples:
@example
## Calculate (10.32-3.84)^2.7 quietly (will just print 155.329):
$ astarithmetic -q 10.32 3.84 - 2.7 pow
## Invert the input image (1/pixel):
$ astarithmetic 1 image.fits / --out=inverse.fits
## Multiply each pixel in image by -1:
$ astarithmetic image.fits -1 x --out=negative.fits
## Subtract extension 4 from extension 1 (counting from zero):
$ astarithmetic image.fits image.fits - --out=skysub.fits \
--hdu=1 --hdu=4
## Add two images, then divide them by 2 (2 is read as floating point):
## Note that without the '.0', the '2' will be read/used as an integer.
$ astarithmetic image1.fits image2.fits + 2.0 / --out=average.fits
## Use Arithmetic's average operator:
$ astarithmetic image1.fits image2.fits average --out=average.fits
## Calculate the median of three images in three separate extensions:
$ astarithmetic img1.fits img2.fits img3.fits median \
-h0 -h1 -h2 --out=median.fits
@end example
Arithmetic's notation for giving operands to operators is fully described in @ref{Reverse polish notation}.
The output dataset is the last remaining operand on the stack.
When the output dataset is a single number, and @option{--output} is not called, it will be printed on the standard output (command-line).
When the output is an array, it will be stored as a file.
The name of the final file can be specified with the @option{--output} option, but if it is not given (and the output dataset has more than one element), Arithmetic will use ``automatic output'' on the name of the first FITS image encountered to generate an output file name, see @ref{Automatic output}.
By default, if the output file already exists, it will be deleted before Arithmetic starts operation.
However, this can be disabled with the @option{--dontdelete} option (see below).
At any point during Arithmetic's operation, you can also write the top operand on the stack to a file, using the @code{tofile} or @code{tofilefree} operators, see @ref{Arithmetic operators}.
By default, the world coordinate system (WCS) information of the output dataset will be taken from the first input image (that contains a WCS) on the command-line.
This can be modified with the @option{--wcsfile} and @option{--wcshdu} options described below.
When the @option{--quiet} option is not given, the name and extension of the dataset used for the output's WCS is printed on the command-line.
Through operators like those starting with @code{collapse-}, the dimensionality of the inputs may not be the same as that of the output.
By default, when the output is 1D, Arithmetic will write it as a table, not an image/array.
The format of the output table (plain text, or FITS ASCII or binary) can be set with the @option{--tableformat} option (see @ref{Input output options}).
You can disable this feature (write 1D arrays as FITS images/arrays, or to the standard output) with the @option{--onedasimage} or @option{--onedonstdout} options.
See @ref{Common options} for a review of the options in all Gnuastro programs.
Arithmetic just redefines the @option{--hdu} and @option{--dontdelete} options as explained below.
@table @option
@item --arguments=STR
A plain-text file containing the command-line arguments that will be used by Arithmetic.
This option is only relevant when no arguments are given on the command-line: if any arguments are given, this option is ignored.
@cindex Argument list too long
This is necessary when the set of input files and operators (arguments; see @ref{Arguments and options}) is very long (thousands of long file names for example; usually generated within large pipelines).
Such long arguments will cause the shell to abort with an @code{Argument list too long} error.
In such cases, you can put the list into a plain-text file and use this option like below.
Here we are assuming you want to coadd all the files in a certain directory with the @code{mean} operator but after masking outliers; see @ref{Coadding operators} and @ref{Statistical operators}:
@example
$ counter=0
$ for f in $(pwd)/*.fits; do \
echo $f; counter=$((counter+1)); \
done > arguments.txt; \
echo "$counter 4.5 0.01 madclip-maskfilled $counter mean" \
>> arguments.txt
$ astarithmetic --arguments=arguments.txt -g1
@end example
@item -h INT/STR
@itemx --hdu INT/STR
The header data unit of the input FITS images, see @ref{Input output options}.
Unlike most options in Gnuastro (which will ultimately only have one value for this option), Arithmetic allows @option{--hdu} to be called multiple times and the value of each invocation will be stored separately (for the unlimited number of input images you would like to use).
Recall that for other programs this (common) option only takes a single value.
So in other programs, if you specify it multiple times on the command-line, only the last value will be used and in the configuration files, it will be ignored if it already has a value.
The order of the values to @option{--hdu} has to be in the same order as input FITS images.
Options are first read from the command-line (from left to right), then top-down in each configuration file, see @ref{Configuration file precedence}.
If the number of HDUs is less than the number of input images, Arithmetic will abort and notify you.
However, if there are more HDUs than FITS images, there is no problem: they will be used in the given order (every time a FITS image comes up on the stack) and the extra HDUs will be ignored in the end.
So there is no problem with having extra HDUs in the configuration files and by default several HDUs with a value of @option{0} are kept in the system-wide configuration file when you install Gnuastro.
@item -g INT/STR
@itemx --globalhdu INT/STR
Use the value to this option as the HDU of all input FITS files.
This option is very convenient when you have many input files and the dataset of interest is in the same HDU of all the files.
When this option is called, any values given to the @option{--hdu} option (explained above) are ignored and will not be used.
@item -w FITS
@itemx --wcsfile FITS
FITS filename containing the WCS structure that must be written to the output.
The HDU/extension should be specified with @option{--wcshdu}.
When this option is used, the respective WCS will be read before any processing is done on the command-line and directly used in the final output.
If the given file does not have any WCS, then the default WCS (first file on the command-line with WCS) will be used in the output.
This option will mostly be used when the default file (first of the set of inputs) is not the one containing your desired WCS.
But with this option, you can also use Arithmetic to rewrite/change the WCS of an existing FITS dataset from another file:
@example
$ astarithmetic data.fits --wcsfile=other.fits -ofinal.fits
@end example
@item -W STR
@itemx --wcshdu STR
HDU/extension to read the WCS within the file given to @option{--wcsfile}.
For more, see the description of @option{--wcsfile}.
@item --envseed
Use the environment for the random number generator settings in operators that need them (for example, @code{mknoise-sigma}).
This is very important for obtaining reproducible results, for more see @ref{Generating random numbers}.
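For example, a minimal sketch of a reproducible noise realization (@code{GSL_RNG_SEED} is the GNU Scientific Library's environment variable for the seed; the value here is arbitrary, and @file{raw.fits} is the image from the @code{mknoise-sigma} example):
@example
$ export GSL_RNG_SEED=1599251212
$ astarithmetic raw.fits 2 mknoise-sigma --envseed \
                --output=noised.fits
@end example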
@item --append
If the output file already exists, do not delete it; add the output data to new HDUs at the end of that file.
You can use the @option{--meta*} options below to give a name, unit or comments to these HDUs (to easily distinguish them from other HDUs).
When this option is given, the 0th HDU of the existing file will not be updated to add Arithmetic's option values at run-time (because the existing file must already have values there).
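For example, a minimal sketch that appends the result as a new, named HDU (through @option{--metaname}, described below) at the end of a hypothetical existing @file{archive.fits}:
@example
$ astarithmetic image.fits 2 x --append \
                --output=archive.fits --metaname=DOUBLED
@end example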
@item -n STR[,STR,[...]]
@itemx --metaname=STR[,STR,[...]]
Metadata (name) of the output dataset(s).
Multiple strings can be given (separated by a comma), or in multiple calls to this option, when you have multiple datasets in the output (in which case @option{--writeall} is necessary).
For a FITS image or table, the string given to this option is written in the @code{EXTNAME} or @code{TTYPE1} keyword (respectively).
In the case of tables, recall that the Arithmetic program only outputs a single column; you should use column arithmetic in Table for more than one column (see @ref{Column arithmetic}).
If this keyword is present in a FITS extension, it will be printed in the table output of a command like @command{astfits image.fits} (for images) or @command{asttable table.fits -i} (for tables).
This metadata can be very helpful for yourself in the future (when you have forgotten the details), so it is recommended to use this option for files that should be archived or shared with colleagues.
@item -u STR[,STR,[...]]
@itemx --metaunit=STR[,STR,[...]]
Metadata (unit) of the output dataset(s).
Multiple strings can be given (separated by a comma), or in multiple calls to this option, when you have multiple datasets in the output (and @option{--writeall} is necessary).
For a FITS image or table, the string given to this option is written in the @code{BUNIT} or @code{TUNIT1} keyword respectively.
For more on the importance of metadata, see the description of @option{--metaname}.
@item -c STR[,STR,[...]]
@itemx --metacomment=STR[,STR,[...]]
Metadata (comments) of the output dataset(s).
Multiple strings can be given (separated by a comma), or in multiple calls to this option, when you have multiple datasets in the output (and @option{--writeall} is necessary).
In case your comment has a comma within it, be sure to escape it with a `@key{\}', for example @option{--metacomment="My comment\, with a comma"}.
For a FITS image or table, the string given to this option is written in the @code{COMMENT} or @code{TCOMM1} keyword respectively.
For more on the importance of metadata, see the description of @option{--metaname}.
@item -O
@itemx --onedasimage
Write final dataset as a FITS image/array even if it has a single dimension.
By default, if the output is 1D, it will be written as a table, see above.
If the output has more than one dimension, this option is redundant.
@item -s
@itemx --onedonstdout
Write final dataset (only when it is 1D) to standard output, not as a file.
By default 1D datasets will be written as a table, see above.
If the output has more than one dimension, this option is redundant.
@item -D
@itemx --dontdelete
Do not delete the output file, or files given to the @code{tofile} or @code{tofilefree} operators, if they already exist.
Instead append the desired datasets to the extensions that already exist in the respective file.
Note it does not matter if the final output file name is given with the @option{--output} option, or determined automatically.
Arithmetic treats this option differently from its default operation in other Gnuastro programs (see @ref{Input output options}).
If the output file exists, when other Gnuastro programs are called with @option{--dontdelete}, they simply complain and abort.
But when Arithmetic is called with @option{--dontdelete}, it will append the dataset(s) to the existing extension(s) in the file.
@item -a
@itemx --writeall
Write all datasets on the stack as separate HDUs in the output file.
This only affects datasets with multiple dimensions (or single-dimension datasets when @option{--onedasimage} is called).
This option is useful to debug Arithmetic calls: to check all the images on the stack while you are designing your operation.
The top dataset on the stack will be on HDU number 1 of the output, the second dataset will be on HDU number 2 and so on.
@end table
Arithmetic accepts two kinds of input: images and numbers.
Images are considered to be any of the inputs that is a file name of a recognized type (see @ref{Arguments}) and has more than one element/pixel.
Numbers on the command-line will be read into the smallest type (see @ref{Numeric data types}) that can store them, so @command{-2} will be read as a @code{char} type (which is signed on most systems and can thus keep negative values), @command{2500} will be read as an @code{unsigned short} (all positive numbers will be read as unsigned), while @code{3.1415926535897} will be read as a @code{double} and @code{3.14} will be read as a @code{float}.
To force a number to be read as float, put a @code{.} after it (possibly followed by a zero for easier readability), or add an @code{f} after it.
Hence while @command{5} will be read as an integer, @command{5.}, @command{5.0} or @command{5f} will be added to the stack as @code{float} (see @ref{Reverse polish notation}).
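For example, when the type rules above are combined with C's division semantics (discussed below), the two illustrative commands below behave differently: in the first, both operands are read as integers, so the division is an integer division; in the second, the @code{f} suffix forces a floating point division.

@example
$ astarithmetic 5 2 /     ## Integer division (C semantics).
$ astarithmetic 5f 2 /    ## Floating point division.
@end example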
Unless otherwise stated (in @ref{Arithmetic operators}), the operators can deal with multiple numeric data types (see @ref{Numeric data types}).
For example, in ``@command{a.fits b.fits +}'', the image types can be @code{long} and @code{float}.
In such cases, C's internal type conversion will be used.
The output type will be set to the higher-ranking type of the two inputs.
Unsigned integer types have smaller ranking than their signed counterparts and floating point types have higher ranking than the integer types.
So the internal C type conversions done in the example above are equivalent to this piece of C:
@example
size_t i;
long a[100];
float b[100], out[100];
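/* In a[i]+b[i] below, a[i] is automatically converted to float. */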
for(i=0;i<100;++i) out[i]=a[i]+b[i];
@end example
@noindent
Relying on the default C type conversion significantly speeds up the processing and also requires less RAM (when using very large images).
Some operators can only work on integer types (of any length, for example, the bitwise operators) while others only work on floating point types (currently only the @code{pow} operator).
In such cases, if the operand type(s) are not appropriate, an error will be printed.
Arithmetic also comes with internal type conversion operators which you can use to convert the data into the appropriate type, see @ref{Arithmetic operators}.
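For example, assuming the @code{float32} conversion operator and the @code{pow} operator listed in @ref{Arithmetic operators}, an integer image can be converted before a floating-point-only operation:

@example
$ astarithmetic image.fits float32 3f pow
@end example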
@cindex Options
The hyphen (@command{-}) can be used both to specify options (see @ref{Options}) and also to specify a negative number which might be necessary in your arithmetic.
In order to enable you to do this, Arithmetic first parses all the input tokens: if the first character after a hyphen is a digit, then that hyphen is temporarily replaced by the vertical tab character (which is not commonly used), so the token will not be mistaken for an option.
After the command-line options have been parsed, the vertical tabs are replaced back with hyphens, so these tokens can be read as negative numbers.
Therefore, as long as the names of the files you want to work on do not start with a vertical tab followed by a digit, there is no problem.
An important consequence of this implementation is that you should not write negative fractions like this: @command{-.3}, instead write them as @command{-0.3}.
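For example, with the rule above:

@example
$ astarithmetic image.fits -0.3 +   ## Adds -0.3 to every pixel.
$ astarithmetic image.fits -.3 +    ## `-.3' is parsed as an option!
@end example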
@cindex AWK
@cindex GNU AWK
Without any images, Arithmetic will act like a simple calculator and print the resulting output number on the standard output like the first example above.
If you really want such calculator operations on the command-line, AWK (GNU AWK is the most common implementation) is much faster, easier and much more powerful.
For example, the numerical one-line example above can be done with the following command.
In general AWK is a fantastic tool and GNU AWK has a wonderful manual (@url{https://www.gnu.org/software/gawk/manual/}).
So if you often confront situations like this, or have to work with large text tables/catalogs, be sure to check out AWK and simplify your life.
@example
$ echo "" | awk '@{print (10.32-3.84)^2.7@}'
155.329
@end example
@node Convolve, Warp, Arithmetic, Data manipulation
@section Convolve
@cindex Convolution
@cindex Neighborhood
@cindex Weighted average
@cindex Average, weighted
@cindex Kernel, convolution
On an image, convolution can be thought of as a process that blurs the image, or reduces its contrast.
If you are already familiar with the concept and just want to run Convolve, you can jump to @ref{Convolution kernel} and @ref{Invoking astconvolve} and skip the lengthy introduction on the basic definitions and concepts of convolution.
There are generally two methods to convolve an image.
The first and more intuitive one is in the ``spatial domain'' or using the actual image pixel values, see @ref{Spatial domain convolution}.
The second method is when we manipulate the ``frequency domain'', or work on the magnitudes of the different frequencies that constitute the image, see @ref{Frequency domain and Fourier operations}.
Understanding convolution in the spatial domain is more intuitive and thus recommended if you are just starting to learn about convolution.
Getting a good grasp of the frequency domain, however, is a little more involved and needs some concentration and some mathematical proofs.
Its reward is a faster operation and, more importantly, a very fundamental understanding of this very important operation.
@cindex Detection
@cindex Atmosphere
@cindex Blur image
@cindex Cosmic rays
@cindex Pixel mixing
@cindex Mixing pixel values
Convolution of an image will generally result in blurring the image because it mixes pixel values.
In other words, if the image has sharp differences in neighboring pixel values@footnote{In astronomy, the only major time we confront such sharp borders in signal are cosmic rays.
All other sources of signal in an image are already blurred by the atmosphere or the optics of the instrument.}, those sharp differences will become smoother.
This has very good consequences, for example, in the detection of signal in noise.
In an actual observed image, the variation in neighboring pixel values due to noise can be very high.
But after convolution, those variations will decrease and we have a better hope in detecting the possible underlying signal.
Another case where convolution is extensively used is in mock images and modeling in general: convolution can be used to simulate the effect of the atmosphere or the optical system on the mock profiles that we create, see @ref{PSF}.
Convolution is a very interesting and important topic in any form of signal analysis (including astronomical observations).
So we have thoroughly@footnote{A mathematician will certainly consider this explanation incomplete and inaccurate.
However, this text is written for an understanding of the operations that are done on a real (not complex, discrete and noisy) astronomical image, not any general form of abstract function.} explained the concepts behind it in the following sub-sections.
@menu
* Spatial domain convolution:: Only using the input image values.
* Frequency domain and Fourier operations:: Using frequencies in input.
* Spatial vs. Frequency domain:: When to use which?
* Convolution kernel:: How to specify the convolution kernel.
* Invoking astconvolve:: Options and argument to Convolve.
@end menu
@node Spatial domain convolution, Frequency domain and Fourier operations, Convolve, Convolve
@subsection Spatial domain convolution
The pixels in an input image represent different ``spatial'' positions, therefore when convolution is done only using the actual input pixel values, we name the process as being done in the ``Spatial domain''.
In particular this is in contrast to the ``frequency domain'' that we will discuss later in @ref{Frequency domain and Fourier operations}.
In the spatial domain (and in realistic situations where the image and the convolution kernel do not extend to infinity), convolution is the process of changing the value of one pixel to the @emph{weighted} average of all the pixels in its @emph{neighborhood}.
The `neighborhood' of each pixel (how many pixels in which direction) and the `weight' function (how much each neighboring pixel should contribute depending on its position) are given through a second image which is known as a ``kernel''@footnote{Also known as filter, here we will use `kernel'.}.
@menu
* Convolution process:: More basic explanations.
* Edges in the spatial domain:: Dealing with the edges of an image.
@end menu
@node Convolution process, Edges in the spatial domain, Spatial domain convolution, Spatial domain convolution
@subsubsection Convolution process
In convolution, the kernel specifies the weight and positions of the neighbors of each pixel.
To find the convolved value of a pixel, the central pixel of the kernel is placed on that pixel.
The values of each overlapping pixel in the kernel and image are multiplied by each other and summed for all the kernel pixels.
To have one pixel in the center, the sides of the convolution kernel have to be an odd number.
This process effectively mixes the pixel values of each pixel with its neighbors, resulting in a blurred image compared to the sharper input image.
@cindex Linear spatial filtering
Formally, convolution is one kind of linear `spatial filtering' in image processing texts.
If we assume that the kernel has @mymath{2a+1} pixels along the first dimension and @mymath{2b+1} pixels along the second, the convolved value of a pixel placed at @mymath{x} and @mymath{y} (@mymath{C_{x,y}}) can be calculated from the neighboring pixel values in the input image (@mymath{I}) and the kernel (@mymath{K}) from
@dispmath{C_{x,y}=\sum_{s=-a}^{a}\sum_{t=-b}^{b}K_{s,t}\times{}I_{x+s,y+t}.}
@cindex Correlation
@cindex Convolution
In the equation above, any pixel that falls outside of the image is considered to be zero (although, see @ref{Edges in the spatial domain}).
When the kernel is symmetric about its center the blurred image has the same orientation as the original image.
However, if the kernel is not symmetric, the image will be affected in the opposite manner; this is a natural consequence of the definition of spatial filtering.
In order to avoid this, we can rotate the kernel about its center by 180 degrees, so the convolved output retains the original orientation (this is done by default in the Convolve program).
Technically speaking, the process is only known as @emph{convolution} when the kernel is flipped; otherwise, it is known as @emph{correlation}.
To be a weighted average, the sum of the weights (the pixels in the kernel) has to be unity.
This has the consequence that the convolved and unconvolved images of an object will have the same brightness (see @ref{Brightness flux magnitude}), which is natural: convolution should not eat up the object's photons, it only disperses them.
The convolution of each pixel is independent of the other pixels, and in some cases, it may be necessary to convolve different parts of an image separately (for example, when you have different amplifiers on the CCD).
Therefore, to speed up spatial convolution, Gnuastro first defines a tessellation over the input; assigning each group of pixels to ``tiles''.
It then does the convolution in parallel on each tile.
For more on how Gnuastro's programs create the tile grid (tessellation), see @ref{Tessellation}.
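To make this process concrete, here is a minimal C sketch of the equation above (only an illustration of the definition, not Gnuastro's actual implementation, which is tiled, multi-threaded, and also handles blank pixels and the edge correction discussed below):

@example
#include <stddef.h>

/* Pixels outside the image are taken to be zero and the kernel is
   flipped, making this convolution (not correlation).  The kernel
   sides ('kw' and 'kh') must be odd numbers. */
void
convolve(float *in, float *out, size_t w, size_t h,
         float *kern, size_t kw, size_t kh)
@{
  size_t x, y;
  long s, t, ix, iy, a=(kw-1)/2, b=(kh-1)/2;

  for(y=0; y<h; ++y)
    for(x=0; x<w; ++x)
      @{
        double sum=0.0;
        for(t=-b; t<=b; ++t)
          for(s=-a; s<=a; ++s)
            @{
              ix=(long)x+s;    iy=(long)y+t;
              if(ix>=0 && iy>=0 && ix<(long)w && iy<(long)h)
                sum += kern[ (b-t)*kw + (a-s) ] * in[ iy*w + ix ];
            @}
        out[y*w + x]=sum;
      @}
@}
@end example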
@node Edges in the spatial domain, , Convolution process, Spatial domain convolution
@subsubsection Edges in the spatial domain
In purely `linear' spatial filtering (convolution), there are problems with the edges of the input image.
Here we will explain the problem in the spatial domain.
For a discussion of this problem from the frequency domain perspective, see @ref{Edges in the frequency domain}.
The problem originates from the fact that on the edges, in practice, the sum of the weights we use on the actual image pixels is not unity@footnote{Because we assumed the overlapping pixels outside the input image have a value of zero.}.
For example, as discussed above, a profile in the center of an image will have the same brightness before and after convolution.
However, for a partially imaged profile on the edge of the image, the brightness (sum of its pixel fluxes within the image, see @ref{Brightness flux magnitude}) will not be equal; some of the flux is going to be `eaten' by the edges.
If you run @command{$ make check} on the source files of Gnuastro, you can see this effect by comparing the @file{convolve_frequency.fits} with @file{convolve_spatial.fits} in the @file{./tests/} directory.
In the spatial domain, by default, no assumption will be made about pixels outside of the image or any blank pixels in the image.
The problem explained above will also occur on the sides of blank regions (see @ref{Blank pixels}).
The solution to this edge effect problem is only possible in the spatial domain.
For pixels near the edge, we have to abandon the assumption that the sum of the kernel pixels is unity during the convolution process@footnote{Of course, the sum of the kernel pixels still has to be unity in general.}.
So taking @mymath{W} as the sum of the kernel pixels that overlapped with non-blank and in-image pixels, the equation in @ref{Convolution process} will become:
@dispmath{C_{x,y}= { \sum_{s=-a}^{a}\sum_{t=-b}^{b}K_{s,t}\times{}I_{x+s,y+t} \over W}.}
@noindent
In this manner, objects which are near the edges of the image or blank pixels will also have the same brightness (within the image) before and after convolution.
This correction is applied by default in Convolve when convolving in the spatial domain.
To disable it, you can use the @option{--noedgecorrection} option.
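For example, assuming the kernel and domain are given with the @option{--kernel} and @option{--domain} options (see @ref{Invoking astconvolve}), the two illustrative calls below would convolve with and without the edge correction respectively:

@example
$ astconvolve image.fits --kernel=psf.fits --domain=spatial
$ astconvolve image.fits --kernel=psf.fits --domain=spatial \
              --noedgecorrection
@end example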
In the frequency domain, there is no way to avoid this loss of flux near the edges of the image, see @ref{Edges in the frequency domain} for an interpretation from the frequency domain perspective.
Note that the edge effect discussed here is different from the one in @ref{If convolving afterwards}.
In making mock images we want to simulate a real observation.
In a real observation, the images of the galaxies on the sides of the CCD are first blurred by the atmosphere and instrument, then imaged.
So light from the parts of a galaxy which are immediately outside the CCD will affect the parts of the galaxy which are covered by the CCD.
Therefore, in modeling the observation, we have to convolve an image that is larger than the desired output by at least half the width of the convolution kernel on each side.
We can hence conclude that this correction for the edges is only useful when working on actual observed images (where we do not have any more data on the edges) and not in modeling.
@node Frequency domain and Fourier operations, Spatial vs. Frequency domain, Spatial domain convolution, Convolve
@subsection Frequency domain and Fourier operations
Getting a good grip on the frequency domain is usually not an easy job! So we have decided to give the issue a complete review here.
Convolution in the frequency domain (see @ref{Convolution theorem}) heavily relies on the concepts of Fourier transform (@ref{Fourier transform}) and Fourier series (@ref{Fourier series}) so we will be investigating these important operations first.
It has become something of a clich@'e for people to say that the Fourier series ``is a way to represent a (wave-like) function as the sum of simple sine waves'' (from Wikipedia).
However, sines themselves are abstract functions, so this statement really adds no extra layer of physical insight.
Before jumping head-first into the equations and proofs, we will begin with a historical background to see how the importance of frequencies actually roots in our ancient desire to see everything in terms of circles.
A short review of how the complex plane should be interpreted is then given.
Having paved the way with these two basics, we define the Fourier series and subsequently the Fourier transform.
The final aim is to explain the discrete Fourier transform, however some very important concepts need to be solidified first: the Dirac comb, the convolution theorem and the sampling theorem.
So each of these topics is explained in its own separate sub-sub-section before going on to the discrete Fourier transform.
Finally we revisit (after @ref{Edges in the spatial domain}) the problem of convolution on the edges, but this time in the frequency domain.
Understanding the sampling theorem and the discrete Fourier transform is very important in order to be able to pull out valuable science from the discrete image pixels.
Therefore we have included the mathematical proofs and figures so you can have a clear understanding of these very important concepts.
@menu
* Fourier series historical background:: Historical background.
* Circles and the complex plane:: Interpreting complex numbers.
* Fourier series:: Fourier Series definition.
* Fourier transform:: Fourier Transform definition.
* Dirac delta and comb:: Dirac delta and Dirac comb.
* Convolution theorem:: Derivation of Convolution theorem.
* Sampling theorem:: Sampling theorem (Nyquist frequency).
* Discrete Fourier transform:: Derivation and explanation of DFT.
* Fourier operations in two dimensions:: Extend to 2D images.
* Edges in the frequency domain:: Interpretation of edge effects.
@end menu
@node Fourier series historical background, Circles and the complex plane, Frequency domain and Fourier operations, Frequency domain and Fourier operations
@subsubsection Fourier series historical background
Ever since ancient times, the circle has been (and still is) the simplest shape for abstract comprehension.
All you need is a center point and a radius and you are done.
All the points on a circle are at a fixed distance from the center.
However, the moment you try to connect this elegantly simple and beautiful abstract construct (the circle) with the real world (for example, compute its area or its circumference), things become really hard (in exact terms, impossible) because the irrational number @mymath{\pi} gets involved.
The key to understanding the Fourier series (thus the Fourier transform and finally the discrete Fourier transform) is our ancient desire to express everything in terms of circles, the most exceptionally simple and elegant abstract human construct.
Most people prefer to say the same thing in a more ahistorical manner: to break a function into sines and cosines.
As the term ``ancient'' in the previous sentence implies, Jean-Baptiste Joseph Fourier (1768 -- 1830 A.D.) was not the first person to do this.
The main reason we know this process by his name today is that he came up with an ingenious method to find the necessary coefficients (radii of the circles) and frequencies (``speeds'' of rotation on them) for any generic (integrable) function.
@float Figure,epicycle
@c Since these links are long, we had to write them like this so they do not
@c jump out of the text width.
@cindex Qutb al-Din al-Shirazi
@cindex al-Shirazi, Qutb al-Din
@image{gnuastro-figures/epicycles, 15.2cm, , Middle ages epicycles along with two demonstrations of breaking a generic function using epicycles.}
@caption{Epicycles and the Fourier series.
Left: A demonstration of Mercury's epicycles relative to the ``center of the world'' by Qutb al-Din al-Shirazi (1236 -- 1311 A.D.) retrieved @url{https://commons.wikimedia.org/wiki/File:Ghotb2.jpg, from Wikipedia}.
@url{https://commons.wikimedia.org/wiki/File:Fourier_series_square_wave_circles_animation.gif, Middle} and
Right: How adding more epicycles (or terms in the Fourier series) will approximate functions.
The @url{https://commons.wikimedia.org/wiki/File:Fourier_series_sawtooth_wave_circles_animation.gif, right} animation is also available.}
@end float
Like most aspects of mathematics, this process of interpreting everything in terms of circles began for astronomical purposes: astronomers noticed that the orbits of Mars and the other outer planets did not appear to be simple circles (as everything in the heavens should have been).
At some point during their orbit, the revolution of these planets would become slower, stop, go back a little (in what is known as the retrograde motion) and then continue going forward again.
The correction proposed by Ptolemy (90 -- 168 A.D.) was the most agreed upon.
He put the planets on Epicycles or circles whose center itself rotates on a circle whose center is the earth.
Eventually, as observations became more and more precise, it was necessary to add more and more epicycles in order to explain the complex motions of the planets@footnote{See the Wikipedia page on ``Deferent and epicycle'' for a more complete historical review.}.
@ref{epicycle}(Left) shows an example depiction of the epicycles of Mercury in the late 13th century.
@cindex Aristarchus of Samos
Of course, we now know that if they had removed the Earth from its throne in the center of the heavens and allowed the Sun to take its place, everything would become much simpler and true.
But there was not enough observational evidence for changing the ``professional consensus'' of the time to this radical view suggested by a small minority@footnote{Aristarchus of Samos (310 -- 230 B.C.) appears to be one of the first people to suggest the Sun being in the center of the universe.
This approach to science (that the standard model is defined by consensus) and the fact that this consensus might be completely wrong still applies equally well to our models of particle physics and cosmology today.}.
So the pre-Galilean astronomers chose to keep Earth in the center and find a correction to the models (while keeping the heavens a purely ``circular'' order).
The main reason we are giving this historical background, which might appear off topic, is to give historical evidence that while such ``approximations'' do work and are very useful for pragmatic reasons (like keeping the calendar from the movement of astronomical bodies), they offer no physical insight.
The astronomers who were involved with the Ptolemaic world view had to add a huge number of epicycles during the centuries after Ptolemy in order to explain more accurate observations.
Finally the death knell of this world-view was Galileo's observations with his new instrument (the telescope).
So the physical insight, which is what astronomers and physicists are interested in (as opposed to mathematicians and engineers who just like proving and optimizing or calculating!), comes from being creative and not limiting ourselves to such approximations, even when they work.
@node Circles and the complex plane, Fourier series, Fourier series historical background, Frequency domain and Fourier operations
@subsubsection Circles and the complex plane
Before going onto the derivation, it is also useful to review how the complex numbers and their plane relate to the circles we talked about above.
The two schematics in the middle and right of @ref{epicycle} show how a 1D function of time can be made using the 2D real and imaginary surface.
Seeing the animation on Wikipedia will really help in understanding this important concept.
At each point in time, we take the vertical coordinate of the point and use it to find the value of the function at that point in time.
@ref{iandtime} shows this relation with the axes marked.
@cindex Roger Cotes
@cindex Cotes, Roger
@cindex Caspar Wessel
@cindex Wessel, Caspar
@cindex Leonhard Euler
@cindex Euler, Leonhard
@cindex Abraham de Moivre
@cindex de Moivre, Abraham
Leonhard Euler@footnote{Other forms of this equation were known before Euler.
For example, in 1707 A.D. (the year of Euler's birth) Abraham de Moivre (1667 -- 1754 A.D.) showed that @mymath{(\cos{x}+i\sin{x})^n=\cos(nx)+i\sin(nx)}.
In 1714 A.D., Roger Cotes (1682 -- 1716 A.D., a colleague of Newton who proofread the second edition of Principia) showed that: @mymath{ix=\ln(\cos{x}+i\sin{x})}.} (1707 -- 1783 A.D.) showed that the complex exponential (@mymath{e^{iv}} where @mymath{v} is real) is periodic and can be written as: @mymath{e^{iv}=\cos{v}+i\sin{v}}.
Therefore @mymath{e^{i(v+2\pi)}=e^{iv}}.
Later, Caspar Wessel (mathematician and cartographer 1745 -- 1818 A.D.) showed how complex numbers can be displayed as vectors on a plane.
Euler's identity might seem counter intuitive at first, so we will try to explain it geometrically (for deeper physical insight).
On the real-imaginary 2D plane (like the left hand plot in each box of @ref{iandtime}), multiplying a number by @mymath{i} can be interpreted as rotating the point by @mymath{90} degrees (for example, the value @mymath{3} on the real axis becomes @mymath{3i} on the imaginary axis).
On the other hand, @mymath{e\equiv\lim_{n\rightarrow\infty}(1+{1\over n})^n}, therefore, defining @mymath{m\equiv nu}, we get:
@dispmath{e^{u}=\lim_{n\rightarrow\infty}\left(1+{1\over n}\right)^{nu}
=\lim_{n\rightarrow\infty}\left(1+{u\over nu}\right)^{nu}
=\lim_{m\rightarrow\infty}\left(1+{u\over m}\right)^{m}}
@noindent
Taking @mymath{u\equiv iv} the result can be written as a generic complex number (a function of @mymath{v}):
@dispmath{e^{iv}=\lim_{m\rightarrow\infty}\left(1+i{v\over
m}\right)^{m}=a(v)+ib(v)}
@noindent
For @mymath{v=\pi}, a nice geometric animation of going to the limit can be seen @url{https://commons.wikimedia.org/wiki/File:ExpIPi.gif, on Wikipedia}.
We see that @mymath{\lim_{m\rightarrow\infty}a(\pi)=-1}, while @mymath{\lim_{m\rightarrow\infty}b(\pi)=0}, which gives the famous @mymath{e^{i\pi}=-1} equation.
The final value is the real number @mymath{-1}, however the distance of the polygon points traversed as @mymath{m\rightarrow\infty} is half the circumference of a circle or @mymath{\pi}, showing how @mymath{v} in the equation above can be interpreted as an angle in units of radians and therefore how @mymath{a(v)=\cos(v)} and @mymath{b(v)=\sin(v)}.
Since @mymath{e^{iv}} is periodic in @mymath{v} (with a period of @mymath{2\pi}), it is clearer to write it as @mymath{v\equiv{2{\pi}n\over T}t} (where @mymath{n} is an integer and @mymath{t} has the same units as the period @mymath{T}), so @mymath{e^{iv}=e^{i{2{\pi}n\over T}t}}.
The advantage of this notation is that the period (@mymath{T}) is clearly visible and the (angular) frequency (@mymath{2{\pi}n \over T}, in radians per unit of @mymath{t}) is defined through the integer @mymath{n}.
As we see from the examples in @ref{epicycle} and @ref{iandtime}, for each constituting frequency, we need a respective `magnitude' or the radius of the circle in order to accurately approximate the desired 1D function.
The concepts of ``period'' and ``frequency'' are relatively easy to grasp when using temporal units like time because this is how we define them in every-day life.
However, in an image (astronomical data), we are dealing with spatial units like distance.
Therefore, by one ``period'' we mean the @emph{distance} at which the signal is identical and frequency is defined as the inverse of that spatial ``period''.
The complex circle of @ref{iandtime} can be thought of as the Moon revolving around the Earth, which in turn revolves around the Sun; so the ``Real (signal)'' axis shows the Moon's position as seen by a distant observer on the Sun as time goes by.
Because of the scalar (not having any direction or vector) nature of time, @ref{iandtime} is easier to understand in units of time.
When thinking about spatial units, mentally replace the ``Time (sec)'' axis with ``Distance (meters)''.
Because length has direction and is a vector, visualizing the rotation of the imaginary circle and the advance along the ``Distance (meters)'' axis is not as simple as temporal units like time.
@float Figure,iandtime
@image{gnuastro-figures/iandtime, 15.2cm, , }
@caption{Relation between the real (signal), imaginary (@mymath{i\equiv\sqrt{-1}}) and time axes at two snapshots of time.}
@end float
@node Fourier series, Fourier transform, Circles and the complex plane, Frequency domain and Fourier operations
@subsubsection Fourier series
In astronomical images, our variable (brightness, or number of photo-electrons, or signal to be more generic) is recorded over the 2D spatial surface of a camera pixel.
However to make things easier to understand, here we will assume that the signal is recorded in 1D (assume one row of the 2D image pixels).
Also for this section and the next (@ref{Fourier transform}) we will be talking about the signal before it is digitized or pixelated.
Let's assume that we have the continuous function @mymath{f(l)} which is integrable in the interval @mymath{[l_0, l_0+L]} (always true in practical cases like images).
Take @mymath{l_0} as the position of the first pixel in the assumed row of the image and @mymath{L} as the width of the image along that row.
The units of @mymath{l_0} and @mymath{L} can be in any spatial units (for example, meters) or an angular unit (like radians) multiplied by a fixed distance which is more common.
To approximate @mymath{f(l)} over this interval, we need to find a set of frequencies and their corresponding `magnitude's (see @ref{Circles and the complex plane}).
Therefore our aim is to show @mymath{f(l)} as the following sum of periodic functions:
@dispmath{
f(l)=\displaystyle\sum_{n=-\infty}^{\infty}c_ne^{i{2{\pi}n\over L}l} }
@noindent
Note that the different frequencies (@mymath{2{\pi}n/L}, in units of cycles per meters for example) are not arbitrary.
They are all integer multiples of the fundamental frequency of @mymath{\omega_0=2\pi/L}.
Recall that @mymath{L} was the length of the signal we want to model.
Therefore, we see that the smallest possible frequency (or the frequency resolution) in the end, depends on the length we observed the signal or @mymath{L}.
In the case of each dimension on an image, this is the size of the image in the respective dimension.
The frequencies have been defined in this ``harmonic'' fashion to ensure that the final sum is also periodic outside of the @mymath{[l_0, l_0+L]} interval.
At this point, you might be thinking that the sky is not periodic with the same period as your camera's view angle.
You are absolutely right! The important thing is that since your camera's observed region is the only region we are ``observing'' and will be using, the rest of the sky is irrelevant; so we can safely assume the sky is periodic outside of it.
However, this working assumption will haunt us later in @ref{Edges in the frequency domain}.
The frequencies are thus determined by definition.
So all we need to do is to find the coefficients (@mymath{c_n}), or magnitudes, or radii of the circles for each frequency which is identified with the integer @mymath{n}.
Fourier's approach was to multiply both sides with a fixed term:
@dispmath{
f(l)e^{-i{2{\pi}m\over L}l}=\displaystyle\sum_{n=-\infty}^{\infty}c_ne^{i{2{\pi}(n-m)\over L}l}
}
@noindent
where @mymath{m>0}@footnote{ We could have assumed @mymath{m<0} and set the exponential to positive, but this is more clear.}.
We can then integrate both sides over the observation period:
@dispmath{
\int_{l_0}^{l_0+L}f(l)e^{-i{2{\pi}m\over L}l}dl
=\int_{l_0}^{l_0+L}\displaystyle\sum_{n=-\infty}^{\infty}c_ne^{i{2{\pi}(n-m)\over L}l}dl=\displaystyle\sum_{n=-\infty}^{\infty}c_n\int_{l_0}^{l_0+L}e^{i{2{\pi}(n-m)\over L}l}dl
}
@noindent
Here, @mymath{m} is a fixed integer and @mymath{n} runs over all integers.
Also, we know that a complex exponential is periodic so after one period (@mymath{L}) it comes back to its starting point.
Therefore @mymath{\int_{l_0}^{l_0+L}e^{i{2{\pi}k\over L}l}dl=0} for any integer @mymath{k\neq0}.
However, when @mymath{k=0}, this integral becomes: @mymath{\int_{l_0}^{l_0+L}e^0dl=\int_{l_0}^{l_0+L}dl=L}.
Hence since the integral will be zero for all @mymath{n{\neq}m}, we get:
@dispmath{
\displaystyle\sum_{n=-\infty}^{\infty}c_n\int_{l_0}^{l_0+L}e^{i{2{\pi}(n-m)\over L}l}dl=Lc_m }
@noindent
The origin of the axis is fundamentally an arbitrary position.
So let's set it to the start of the image such that @mymath{l_0=0}.
So we can find the ``magnitude'' of the frequency @mymath{2{\pi}m/L} within @mymath{f(l)} through the relation:
@dispmath{ c_m={1\over L}\int_{0}^{L}f(l)e^{-i{2{\pi}m\over L}l}dl }
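As a simple check of this relation, consider @mymath{f(l)=\cos({2{\pi}l\over L})}, a single full period over the observed length.
Writing the cosine as the sum of two complex exponentials, @mymath{f(l)={1\over2}(e^{i{2\pi\over L}l}+e^{-i{2\pi\over L}l})}, and inserting it in the integral above, every term integrates to zero except when @mymath{m=\pm1}:

@dispmath{c_{1}=c_{-1}={1\over2}, \quad\quad c_m=0 \quad {\rm for} \quad m\neq\pm1}

@noindent
In other words, a pure cosine is fully described by only two ``circles'' (frequencies), as expected.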
@node Fourier transform, Dirac delta and comb, Fourier series, Frequency domain and Fourier operations
@subsubsection Fourier transform
In @ref{Fourier series}, we had to assume that the function is periodic outside of the desired interval with a period of @mymath{L}.
Therefore, assuming that @mymath{L\rightarrow\infty} will allow us to work with any function.
However, with this approximation, the fundamental frequency (@mymath{\omega_0}) or the frequency resolution that we discussed in @ref{Fourier series} will tend to zero: @mymath{\omega_0\rightarrow0}.
In the equation to find @mymath{c_m}, every @mymath{m} represented a frequency (multiple of @mymath{\omega_0}) and the integration on @mymath{l} removes the dependence of the right side of the equation on @mymath{l}, making it only a function of @mymath{m} or frequency.
Let's define the following two variables:
@dispmath{\omega{\equiv}m\omega_0={2{\pi}m\over L}}
@dispmath{F(\omega){\equiv}Lc_m}
@noindent
The equation to find the coefficients of each frequency in
@ref{Fourier series} thus becomes:
@dispmath{ F(\omega)=\int_{-\infty}^{\infty}f(l)e^{-i{\omega}l}dl.}
@noindent
The function @mymath{F(\omega)} is thus the @emph{Fourier transform} of @mymath{f(l)} in the frequency domain.
So through this transformation, we can find (analyze) the magnitudes of the constituting frequencies or the value in the frequency space@footnote{As we discussed before, this `magnitude' can be interpreted as the radius of the circle rotating at this frequency in the epicyclic interpretation of the Fourier series, see @ref{epicycle} and @ref{iandtime}.} of our spatial input function.
The great thing is that we can also do the reverse and later synthesize the input function from its Fourier transform.
Let's do it: with the approximations above, multiply the right side of the definition of the Fourier Series (@ref{Fourier series}) with @mymath{1=L/L=({\omega_0}L)/(2\pi)}:
@dispmath{ f(l)={1\over
2\pi}\displaystyle\sum_{n=-\infty}^{\infty}Lc_ne^{{2{\pi}in\over
L}l}\omega_0={1\over
2\pi}\displaystyle\sum_{n=-\infty}^{\infty}F(\omega)e^{i{\omega}l}\Delta\omega
}
@noindent
To find the rightmost side of this equation, we renamed @mymath{\omega_0} as @mymath{\Delta\omega} because it was our resolution, @mymath{2{\pi}n/L} was written as @mymath{\omega} and finally, @mymath{Lc_n} was written as @mymath{F(\omega)} as we defined above.
Now, as @mymath{L\rightarrow\infty}, @mymath{\Delta\omega\rightarrow0} so we can write:
@dispmath{ f(l)={1\over
2\pi}\int_{-\infty}^{\infty}F(\omega)e^{i{\omega}l}d\omega }
Together, these two equations provide us with a very powerful set of tools that we can use to process (analyze) and recreate (synthesize) the input signal.
Through the first equation, we can break up our input function into its constituent frequencies and analyze it, hence it is also known as @emph{analysis}.
Using the second equation, we can synthesize or make the input function from the known frequencies and their magnitudes.
Thus it is known as @emph{synthesis}.
Here, we symbolize the Fourier transform (analysis) and its inverse (synthesis) of a function @mymath{f(l)} and its Fourier Transform @mymath{F(\omega)} as @mymath{{\cal F}[f]} and @mymath{{\cal F}^{-1}[F]}.
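As a worked example that will be useful later (see @ref{samplingfreq}), take @mymath{f(l)} to be a Gaussian of width @mymath{\sigma}.
Applying the analysis equation above, its Fourier transform is also a Gaussian, but with the inverse width, so the wider the input, the narrower its range of constituent frequencies:

@dispmath{f(l)=e^{-l^2/2\sigma^2}
\quad\quad\rightarrow\quad\quad
F(\omega)=\sigma\sqrt{2\pi}e^{-\sigma^2\omega^2/2}}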
@node Dirac delta and comb, Convolution theorem, Fourier transform, Frequency domain and Fourier operations
@subsubsection Dirac delta and comb
The Dirac @mymath{\delta} (delta) function (also known as an impulse) is the way that we convert a continuous function into a discrete one.
It is defined to satisfy the following integral:
@dispmath{\int_{-\infty}^{\infty}\delta(l)dl=1}
@noindent
When integrated with another function, it gives that function's value at @mymath{l=0}:
@dispmath{\int_{-\infty}^{\infty}f(l)\delta(l)dl=f(0)}
@noindent
An impulse positioned at another point (say @mymath{l_0}) is written as @mymath{\delta(l-l_0)}:
@dispmath{\int_{-\infty}^{\infty}f(l)\delta(l-l_0)dl=f(l_0)}
@noindent
The Dirac @mymath{\delta} function also operates similarly if we use summations instead of integrals.
The Fourier transform of the delta function is:
@dispmath{{\cal F}[\delta(l)]=\int_{-\infty}^{\infty}\delta(l)e^{-i{\omega}l}dl=e^{-i{\omega}0}=1}
@dispmath{{\cal F}[\delta(l-l_0)]=\int_{-\infty}^{\infty}\delta(l-l_0)e^{-i{\omega}l}dl=e^{-i{\omega}l_0}}
@noindent
From the definition of the Dirac @mymath{\delta} we can also define a
Dirac comb (@mymath{{\rm III}_P}) or an impulse train with infinite
impulses separated by @mymath{P}:
@dispmath{
{\rm III}_P(l)\equiv\displaystyle\sum_{k=-\infty}^{\infty}\delta(l-kP) }
@noindent
@mymath{P} is chosen to represent ``pixel width'' later in @ref{Sampling theorem}.
Therefore the Dirac comb is periodic with a period of @mymath{P}.
We have intentionally used a different name for the period of the Dirac comb compared to the input signal's length of observation that we showed with @mymath{L} in @ref{Fourier series}.
This difference is highlighted here to avoid confusion later when these two periods are needed together in @ref{Discrete Fourier transform}.
The Fourier transform of the Dirac comb will be necessary in @ref{Sampling theorem}, so let's derive it.
By its definition, it is periodic, with a period of @mymath{P}, so the Fourier coefficients of its Fourier Series (@ref{Fourier series}) can be calculated within one period:
@dispmath{{\rm III}_P=\displaystyle\sum_{n=-\infty}^{\infty}c_ne^{i{2{\pi}n\over
P}l}}
@noindent
We can now find the @mymath{c_n} from @ref{Fourier series}:
@dispmath{
c_n={1\over P}\int_{-P/2}^{P/2}\delta(l)e^{-i{2{\pi}n\over P}l}dl
={1\over P}\quad\quad \rightarrow \quad\quad
{\rm III}_P={1\over P}\displaystyle\sum_{n=-\infty}^{\infty}e^{i{2{\pi}n\over P}l}
}
@noindent
So we can write the Fourier transform of the Dirac comb as:
@dispmath{
{\cal F}[{\rm III}_P]=\int_{-\infty}^{\infty}{\rm III}_Pe^{-i{\omega}l}dl
={1\over P}\displaystyle\sum_{n=-\infty}^{\infty}\int_{-\infty}^{\infty}e^{-i(\omega-{2{\pi}n\over P})l}dl={1\over P}\displaystyle\sum_{n=-\infty}^{\infty}\delta\left(\omega-{2{\pi}n\over P}\right)
}
@noindent
In the last step, we used the fact that the complex exponential is a periodic function, that @mymath{n} is an integer and that as we defined in @ref{Fourier transform}, @mymath{\omega{\equiv}m\omega_0}, where @mymath{m} was an integer.
The integral will be zero for any @mymath{\omega} that is not equal to @mymath{2{\pi}n/P}, a more complete explanation can be seen in @ref{Fourier series}.
Therefore, while in the spatial domain the impulses had a spacing of @mymath{P} (meters, for example), in the frequency domain the spacing between the different impulses is @mymath{2\pi/P} (cycles per meter, in this example).
@node Convolution theorem, Sampling theorem, Dirac delta and comb, Frequency domain and Fourier operations
@subsubsection Convolution theorem
The convolution (shown with the @mymath{\ast} operator) of the two
functions @mymath{f(l)} and @mymath{h(l)} is defined as:
@dispmath{
c(l)\equiv[f{\ast}h](l)=\int_{-\infty}^{\infty}f(\tau)h(l-\tau)d\tau
}
@noindent
See @ref{Convolution process} for a more detailed physical (pixel based) interpretation of this definition.
The Fourier transform of convolution (@mymath{C(\omega)}) can be written as:
@dispmath{
C(\omega)=\int_{-\infty}^{\infty}[f{\ast}h](l)e^{-i{\omega}l}dl=
\int_{-\infty}^{\infty}f(\tau)\left[\int_{-\infty}^{\infty}h(l-\tau)e^{-i{\omega}l}dl\right]d\tau
}
@noindent
To solve the inner integral, let's define @mymath{s{\equiv}l-\tau}, so
that @mymath{ds=dl} and @mymath{l=s+\tau} then the inner integral
becomes:
@dispmath{
\int_{-\infty}^{\infty}h(l-\tau)e^{-i{\omega}l}dl=
\int_{-\infty}^{\infty}h(s)e^{-i{\omega}(s+\tau)}ds=e^{-i{\omega}\tau}\int_{-\infty}^{\infty}h(s)e^{-i{\omega}s}ds=H(\omega)e^{-i{\omega}\tau}
}
@noindent
where @mymath{H(\omega)} is the Fourier transform of @mymath{h(l)}.
Substituting this result for the inner integral above, we get:
@dispmath{
C(\omega)=H(\omega)\int_{-\infty}^{\infty}f(\tau)e^{-i{\omega}\tau}d\tau=H(\omega)F(\omega)=F(\omega)H(\omega)
}
@noindent
where @mymath{F(\omega)} is the Fourier transform of @mymath{f(l)}.
So, by multiplying the Fourier transforms of the two functions, we get the Fourier transform of their convolution.
The convolution theorem also proves the analogous relation for a convolution done in frequency space.
Let's define:
@dispmath{D(\omega){\equiv}F(\omega){\ast}H(\omega)}
@noindent
Applying the inverse Fourier Transform or synthesis equation (@ref{Fourier transform}) to both sides and following the same steps above, we get:
@dispmath{d(l)=f(l)h(l)}
@noindent
Where @mymath{d(l)} is the inverse Fourier transform of @mymath{D(\omega)}.
We can therefore re-write the two equations above formally as the convolution theorem:
@dispmath{
{\cal F}[f{\ast}h]={\cal F}[f]{\cal F}[h]
}
@dispmath{
{\cal F}[fh]={\cal F}[f]\ast{\cal F}[h]
}
Besides its usefulness in blurring an image by convolving it with a given kernel, the convolution theorem also enables us to do another very useful operation in data analysis: to match the blur (or PSF) between two images taken with different telescopes/cameras or under different atmospheric conditions.
This process is also known as deconvolution.
Let's take @mymath{f(l)} as the image with a narrower PSF (less blurry) and @mymath{c(l)} as the image with a wider PSF which appears more blurred.
Also let's take @mymath{h(l)} to represent the kernel that should be convolved with the sharper image to create the more blurry image.
Above, we proved the relation between these three images through the convolution theorem.
But there, we assumed that @mymath{f(l)} and @mymath{h(l)} are known (given) and the convolved image is desired.
In deconvolution, we have @mymath{f(l)} --the sharper image-- and @mymath{[f{\ast}h](l)} --the more blurry image-- and we want to find the kernel @mymath{h(l)}.
The solution is a direct result of the convolution theorem:
@dispmath{
{\cal F}[h]={{\cal F}[f{\ast}h]\over {\cal F}[f]}
\quad\quad
{\rm or}
\quad\quad
h(l)={\cal F}^{-1}\left[{{\cal F}[f{\ast}h]\over {\cal F}[f]}\right]
}
While this works very nicely, it has two problems:
@itemize
@item
If @mymath{{\cal F}[f]} has any zero values, then the division (and thus the inverse Fourier transform) will not be defined at those frequencies!
@item
If there is significant noise in the image, then the high frequencies of the noise are going to significantly reduce the quality of the final result.
@end itemize
A standard solution to both these problems is the Wiener deconvolution
algorithm@footnote{@url{https://en.wikipedia.org/wiki/Wiener_deconvolution}}.
@node Sampling theorem, Discrete Fourier transform, Convolution theorem, Frequency domain and Fourier operations
@subsubsection Sampling theorem
Our mathematical functions are continuous, however, our data collecting and measuring tools are discrete.
Here we want to give a mathematical formulation for digitizing the continuous mathematical functions so that later, we can retrieve the continuous function from the digitized recorded input.
Assuming that we have a continuous function @mymath{f(l)}, then we can define @mymath{f_s(l)} as the `sampled' @mymath{f(l)} through the Dirac comb (see @ref{Dirac delta and comb}):
@dispmath{
f_s(l)=f(l){\rm III}_P=\displaystyle\sum_{n=-\infty}^{\infty}f(l)\delta(l-nP)
}
@noindent
The discrete data-element @mymath{f_k} (for example, a pixel in an
image), where @mymath{k} is an integer, can thus be represented as:
@dispmath{f_k=\int_{-\infty}^{\infty}f(l)\delta(l-kP)dl=f(kP)}
Note that in practice, our discrete data points are not found in this fashion.
Each detector pixel (in an image for example) has an area and averages the signal it receives over that area, not a mathematical point as the Dirac @mymath{\delta} function defines.
However, as long as the variation in the signal over one detector pixel is not significant, this can be a good approximation.
Having put this issue to the side, we can now try to find the relation between the Fourier transforms of the un-sampled @mymath{f(l)} and the sampled @mymath{f_s(l)}.
For a clearer notation, let's define:
@dispmath{F_s(\omega)\equiv{\cal F}[f_s]}
@dispmath{D(\omega)\equiv{\cal F}[{\rm III}_P]}
@noindent
Then using the Convolution theorem (see @ref{Convolution theorem}),
@mymath{F_s(\omega)} can be written as:
@dispmath{F_s(\omega)={\cal F}[f(l){\rm III}_P]=F(\omega){\ast}D(\omega)}
@noindent
Finally, from the definition of convolution and the Fourier transform
of the Dirac comb (see @ref{Dirac delta and comb}), we get:
@dispmath{
\eqalign{
F_s(\omega) &= \int_{-\infty}^{\infty}F(\mu)D(\omega-\mu)d\mu \cr
&= {1\over P}\displaystyle\sum_{n=-\infty}^{\infty}\int_{-\infty}^{\infty}F(\mu)\delta\left(\omega-\mu-{2{\pi}n\over P}\right)d\mu \cr
&= {1\over P}\displaystyle\sum_{n=-\infty}^{\infty}F\left(
\omega-{2{\pi}n\over P}\right).\cr }
}
@mymath{F(\omega)} was only a simple function, see @ref{samplingfreq}(left).
However, from the sampled Fourier transform function we see that @mymath{F_s(\omega)} is the superposition of infinite copies of @mymath{F(\omega)} that have been shifted, see @ref{samplingfreq}(right).
From the equation, it is clear that the shift in each copy is @mymath{2\pi/P}.
@float Figure,samplingfreq
@image{gnuastro-figures/samplingfreq, 15.2cm, , } @caption{Sampling causes infinite repetition in the frequency domain.
FT is an abbreviation for `Fourier transform'.
@mymath{\omega_m} represents the maximum frequency present in the input.
@mymath{F(\omega)} is only symmetric on both sides of 0 when the input is real (not complex).
In general @mymath{F(\omega)} is complex and thus cannot be simply plotted like this.
Here we have assumed a real Gaussian @mymath{f(t)} which has produced a Gaussian @mymath{F(\omega)}.}
@end float
The input @mymath{f(l)} can have any distribution of frequencies in it.
In the example of @ref{samplingfreq}(left), the input consisted of a range of frequencies equal to @mymath{\Delta\omega=2\omega_m}.
Fortunately as @ref{samplingfreq}(right) shows, the assumed pixel size (@mymath{P}) we used to sample this hypothetical function was such that @mymath{2\pi/P>\Delta\omega}.
The consequence is that each copy of @mymath{F(\omega)} has become completely separate from the surrounding copies.
Such a digitized (sampled) data set is thus called @emph{over-sampled}.
When @mymath{2\pi/P=\Delta\omega}, @mymath{P} is just small enough to separate even the largest frequencies in the input signal, and the dataset is known as @emph{critically-sampled}.
Finally if @mymath{2\pi/P<\Delta\omega} we are dealing with an @emph{under-sampled} data set.
In an under-sampled data set, the separate copies of @mymath{F(\omega)} are going to overlap and this will deprive us of recovering high constituent frequencies of @mymath{f(l)}.
The effects of under-sampling in an image with high rates of change (for example, a brick wall imaged from a distance) can clearly be seen visually and are known as @emph{aliasing}.
When the input @mymath{f(l)} is composed of a finite range of frequencies, @mymath{f(l)} is known as a @emph{band-limited} function.
The example in @ref{samplingfreq}(left) was a nice demonstration of such a case: for all @mymath{\omega<-\omega_m} or @mymath{\omega>\omega_m}, we have @mymath{F(\omega)=0}.
Therefore, when the input function is band-limited and our detector's pixels are placed such that we have critically (or over-) sampled it, then we can exactly reproduce the continuous @mymath{f(l)} from the discrete or digitized samples.
To do that, we just have to isolate one copy of @mymath{F(\omega)} from the infinite copies and take its inverse Fourier transform.
This ability to exactly reproduce the continuous input from the sampled or digitized data leads us to the @emph{sampling theorem} which connects the inherent property of the continuous signal (its maximum frequency) to that of the detector (the spacing between its pixels).
The sampling theorem states that the full (continuous) signal can be recovered when the pixel size (@mymath{P}) and the maximum constituent frequency in the signal (@mymath{\omega_m}) have the following relation@footnote{This equation is also shown in some places without the @mymath{2\pi}.
Whether @mymath{2\pi} is included or not depends on how you define the frequency.}:
@dispmath{{2\pi\over P}>2\omega_m}
@noindent
This relation was first formulated by Harry Nyquist (1889 -- 1976 A.D.) in 1928 and formally proved in 1949 by Claude E. Shannon (1916 -- 2001 A.D.) in what is now known as the Nyquist-Shannon sampling theorem.
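In other words, rearranging the relation above, the pixels must be smaller than half the spatial period of the highest constituent frequency:

@dispmath{P<{\pi\over\omega_m}={1\over2}\left({2\pi\over\omega_m}\right)}

@noindent
where @mymath{2\pi/\omega_m} is the period (in spatial units) corresponding to @mymath{\omega_m}; so there must be at least two pixels within one period of the highest frequency.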
In signal processing, the signal is produced (synthesized) by a transmitter and is received and de-coded (analyzed) by a receiver.
Therefore producing a band-limited signal is necessary.
In astronomy, we do not produce the shapes of our targets, we are only observers.
Galaxies can have any shape and size, therefore ideally, our signal is not band-limited.
However, since we are always confined to observing through an aperture, the aperture will cause a point source (for which @mymath{\omega_m=\infty}) to be spread over several pixels.
This spread is quantitatively known as the point spread function or PSF.
This spread does blur the image which is undesirable; however, for this analysis it produces the positive outcome that there will be a finite @mymath{\omega_m}.
Though we should caution that any detector will have noise which will add lots of very high frequency (ideally infinite) changes between the pixels.
However, the coefficients of those noise frequencies are usually exceedingly small.
@node Discrete Fourier transform, Fourier operations in two dimensions, Sampling theorem, Frequency domain and Fourier operations
@subsubsection Discrete Fourier transform
As we have stated several times so far, the input image is a digitized, pixelated or discrete array of values (@mymath{f_s(l)}, see @ref{Sampling theorem}).
The input is not a continuous function.
Also, all our numerical calculations can only be done on a sampled, or discrete Fourier transform.
Note that @mymath{F_s(\omega)} is not discrete, it is continuous.
One way would be to find the analytic @mymath{F_s(\omega)}, then sample it at any desired ``freq-pixel''@footnote{We are using the made-up word ``freq-pixel'' so they are not confused with spatial domain ``pixels''.} spacing.
However, this process would involve two steps of operations and computers in particular are not too good at analytic operations for the first step.
So here, we will derive a method to directly find the `freq-pixel'ated @mymath{F_s(\omega)} from the pixelated @mymath{f_s(l)}.
Let's start with the definition of the Fourier transform (see @ref{Fourier transform}):
@dispmath{F_s(\omega)=\int_{-\infty}^{\infty}f_s(l)e^{-i{\omega}l}dl }
@noindent
From the definition of @mymath{f_s(l)} (using @mymath{x} instead of @mymath{n}) we get:
@dispmath{
\eqalign{
F_s(\omega) &= \displaystyle\sum_{x=-\infty}^{\infty}
\int_{-\infty}^{\infty}f(l)\delta(l-xP)e^{-i{\omega}l}dl \cr
&= \displaystyle\sum_{x=-\infty}^{\infty}
f_xe^{-i{\omega}xP}
}
}
@noindent
Where @mymath{f_x} is the value of @mymath{f(l)} on the point @mymath{x} or the value of the @mymath{x}th pixel.
As shown in @ref{Sampling theorem} this function is infinitely periodic with a period of @mymath{2\pi/P}.
So all we need is the values within one period: @mymath{0<\omega<2\pi/P}, see @ref{samplingfreq}.
We want @mymath{X} samples within this interval, so the frequency difference between each frequency sample or freq-pixel is @mymath{2\pi/XP}.
Hence we will evaluate the equation above on the points at:
@dispmath{\omega={2{\pi}u\over XP} \quad\quad u = 0, 1, 2, ..., X-1}
@noindent
Therefore the value of the freq-pixel @mymath{u} in the frequency
domain is:
@dispmath{F_u=\displaystyle\sum_{x=0}^{X-1} f_xe^{-i{2{\pi}ux\over X}} }
@noindent
Therefore, we see that for each freq-pixel in the frequency domain, we are going to need all the pixels in the spatial domain@footnote{So even if one pixel is a blank pixel (see @ref{Blank pixels}), all the pixels in the frequency domain will also be blank.}.
If the input (spatial) pixel row is also @mymath{X} pixels wide, then we can exactly recover the @mymath{x}th pixel with the following summation:
@dispmath{f_x={1\over X}\displaystyle\sum_{u=0}^{X-1} F_ue^{i{2{\pi}ux\over X}} }
When the input pixel row (we are still only working on 1D data) has @mymath{X} pixels, then it is @mymath{L=XP} spatial units wide.
@mymath{L}, or the length of the input data was defined in @ref{Fourier series} and @mymath{P} or the space between the pixels in the input was defined in @ref{Dirac delta and comb}.
As we saw in @ref{Sampling theorem}, the input (spatial) pixel spacing (@mymath{P}) specifies the range of frequencies that can be studied and in @ref{Fourier series} we saw that the length of the (spatial) input, (@mymath{L}) determines the resolution (or size of the freq-pixels) in our discrete Fourier transformed image.
Both result from the fact that the frequency domain is the inverse of the spatial domain.
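As an illustration of the analysis equation above, the short C program below evaluates it directly for a hypothetical row of @mymath{X=8} pixels (a direct @mymath{O(X^2)} evaluation; practical implementations use fast Fourier transform algorithms):

@example
#include <stdio.h>
#include <math.h>
#include <complex.h>

#define X 8

int
main(void)
@{
  size_t u, x;
  double f[X]=@{0, 1, 2, 3, 3, 2, 1, 0@};      /* One row of pixels. */
  double complex F;

  for(u=0; u<X; ++u)              /* F_u = sum_x f_x e^(-i2PIux/X). */
    @{
      F=0;
      for(x=0; x<X; ++x)
        F += f[x] * cexp( -I * 2 * M_PI * (double)(u*x) / X );
      printf("F_%zu = %+8.3f%+8.3fi\n", u, creal(F), cimag(F));
    @}
  return 0;
@}
@end example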
@node Fourier operations in two dimensions, Edges in the frequency domain, Discrete Fourier transform, Frequency domain and Fourier operations
@subsubsection Fourier operations in two dimensions
Once all the relations in the previous sections have been clearly understood in one dimension, it is very easy to generalize them to two or even more dimensions since each dimension is by definition independent.
Previously we defined @mymath{l} as the continuous variable in 1D and the inverse of the period in its direction to be @mymath{\omega}.
Let's show the second spatial direction with @mymath{m}, and the inverse of the period in the second dimension with @mymath{\nu}.
The Fourier transform in 2D (see @ref{Fourier transform}) can be written as:
@dispmath{F(\omega, \nu)=\int_{-\infty}^{\infty}\int_{-\infty}^{\infty}
f(l, m)e^{-i({\omega}l+{\nu}m)}dl\,dm}
@dispmath{f(l, m)={1\over 4\pi^2}\int_{-\infty}^{\infty}\int_{-\infty}^{\infty}
F(\omega, \nu)e^{i({\omega}l+{\nu}m)}d\omega\,d\nu}
The 2D Dirac @mymath{\delta(l,m)} is non-zero only when @mymath{l=m=0}.
The 2D Dirac comb (or Dirac brush! See @ref{Dirac delta and comb}) can be written in units of the 2D Dirac @mymath{\delta}.
For most image detectors, the sides of a pixel are equal in both dimensions, so @mymath{P} remains unchanged.
If a specific device with non-square pixels is used, then a different value should be used for each dimension.
@dispmath{{\rm III}_P(l, m)\equiv\displaystyle\sum_{j=-\infty}^{\infty}
\displaystyle\sum_{k=-\infty}^{\infty}
\delta(l-jP, m-kP) }
The two-dimensional sampling theorem (see @ref{Sampling theorem}) is thus very easily derived as before, since the frequencies in each dimension are independent.
Let's take @mymath{\nu_m} as the maximum frequency along the second dimension.
Therefore the two dimensional sampling theorem says that a 2D band-limited function can be recovered when the following conditions hold@footnote{If the pixels are not square, then each dimension has to use the respective pixel size, but since most detectors have square pixels, we assume so here too.}:
@dispmath{ {2\pi\over P} > 2\omega_m \quad\quad\quad {\rm and}
\quad\quad\quad {2\pi\over P} > 2\nu_m}
Finally, let's represent the pixel counter on the second dimension in the spatial and frequency domains with @mymath{y} and @mymath{v} respectively.
Also let's assume that the input image has @mymath{Y} pixels on the second dimension.
Then the two dimensional discrete Fourier transform and its inverse (see @ref{Discrete Fourier transform}) can be written as:
@dispmath{F_{u,v}=\displaystyle\sum_{x=0}^{X-1}\displaystyle\sum_{y=0}^{Y-1}
f_{x,y}e^{-i2\pi({ux\over X}+{vy\over Y})} }
@dispmath{f_{x,y}={1\over XY}\displaystyle\sum_{u=0}^{X-1}\displaystyle\sum_{v=0}^{Y-1}
F_{u,v}e^{i2\pi({ux\over X}+{vy\over Y})} }
@node Edges in the frequency domain, , Fourier operations in two dimensions, Frequency domain and Fourier operations
@subsubsection Edges in the frequency domain
With a good grasp of the frequency domain, we can revisit the problem of convolution on the image edges, see @ref{Edges in the spatial domain}.
When we apply the convolution theorem (see @ref{Convolution theorem}) to convolve an image, we first take the discrete Fourier transforms (DFT, @ref{Discrete Fourier transform}) of both the input image and the kernel, then we multiply them with each other and then take the inverse DFT to construct the convolved image.
Of course, in order to multiply them with each other in the frequency domain, the two images have to be the same size, so let's assume that we pad the kernel (it is usually smaller than the input image) with zero valued pixels in both dimensions so it becomes the same size as the input image before the DFT.
Having multiplied the two DFTs, we now apply the inverse DFT which is where the problem is usually created.
If the DFT of the kernel were 1 at all frequencies (an unrealistic condition!) then there would be no problem and the inverse DFT of the multiplication would be identical to the input.
However, in real situations the kernel's DFT has a maximum of 1 (because the sum of the kernel has to be one, see @ref{Convolution process}) and decreases towards higher frequencies, something like the hypothetical profile of @ref{samplingfreq}.
So when multiplied with the input image's DFT, the magnitude (see @ref{Circles and the complex plane}) of the smallest frequency (the sum of the input image pixels) remains unchanged, while the magnitudes of the higher frequencies are significantly reduced.
As we saw in @ref{Sampling theorem}, the Fourier transform of a discrete input will be infinitely repeated.
In the final inverse DFT step, the input is in the frequency domain (the multiplied DFT of the input image and the kernel DFT).
So the result (our output convolved image) will be infinitely repeated in the spatial domain.
In order to accurately reconstruct the input image, we need all the frequencies with the correct magnitudes.
However, when the magnitudes of higher frequencies are decreased, longer periods (lower frequencies) will dominate in the reconstructed pixel values.
Therefore, when constructing a pixel on the edge of the image, the newly empowered longer periods will look beyond the input image edges and will find the repeated input image there.
So if you convolve an image in this fashion using the convolution theorem, when a bright object exists on one edge of the image, its blurred wings will be present on the other side of the convolved image.
This is often termed circular or cyclic convolution.
So, as long as we are dealing with convolution in the frequency domain, there is nothing we can do about the image edges.
The least we can do is to eliminate the ghosts from the other side of the image.
So we add zero-valued pixels to both the input image and the kernel in both dimensions, so the image that will be convolved has a size equal to the sum of both images in each dimension.
Of course, the effect of this zero-padding is that the sides of the output convolved image will become dark.
To put it another way, the edges are going to drain the flux from nearby objects.
But at least it is consistent across all the edges of the image and is predictable.
In Convolve, you can see the padded images when inspecting the frequency domain convolution steps with the @option{--checkfreqsteps} option.
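For example, with a hypothetical input named @file{image.fits}, a command like the one below should create a file suffixed with @file{_freqsteps.fits} (see the @option{--checkfreqsteps} option in @ref{Invoking astconvolve}); its extensions can then be listed with Gnuastro's Fits program:
@example
$ astconvolve image.fits --kernel=psf.fits --domain=frequency \
              --checkfreqsteps
$ astfits image_freqsteps.fits
@end example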
@node Spatial vs. Frequency domain, Convolution kernel, Frequency domain and Fourier operations, Convolve
@subsection Spatial vs. Frequency domain
With the discussions above it might not be clear when to choose the spatial domain and when to choose the frequency domain.
Here we will try to list the benefits of each.
@noindent
The spatial domain,
@itemize
@item
Can correct for the edge effects of convolution, see @ref{Edges in the spatial domain}.
@item
Can operate on blank pixels.
@item
Can be faster than frequency domain when the kernel is small (in terms of the number of pixels on the sides).
@end itemize
@noindent
The frequency domain,
@itemize
@item
Will be much faster when the image and kernel are both large.
@end itemize
@noindent
As a general rule of thumb, when working on an image of modeled profiles, use the frequency domain; when working on an image of real (observed) objects, use the spatial domain (corrected for the edges).
The reason is that if you apply a frequency domain convolution to a real image, you are going to lose information on the edges, and generally you do not want large kernels.
But when you have made the profiles in the image yourself, you can just make a larger input image and crop the central parts to completely remove the edge effect, see @ref{If convolving afterwards}.
Also due to oversampling, both the kernels and the images can become very large and the speed boost of frequency domain convolution will significantly improve the processing time, see @ref{Oversampling}.
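As a hypothetical demonstration of this rule of thumb (the file names are only for illustration):
@example
## Observed image: spatial domain (edges corrected by default).
$ astconvolve observed.fits --kernel=psf.fits --domain=spatial
## Large mock image of modeled profiles: frequency domain.
$ astconvolve mock.fits --kernel=psf.fits --domain=frequency
@end example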
@node Convolution kernel, Invoking astconvolve, Spatial vs. Frequency domain, Convolve
@subsection Convolution kernel
All the programs that need convolution will need to be given a convolution kernel file and extension.
In most cases (other than Convolve, see @ref{Convolve}) the kernel file name is optional.
However, the extension is necessary and must be specified either on the command-line or in at least one of the configuration files (see @ref{Configuration files}).
Within Gnuastro, there are two ways to create a kernel image:
@itemize
@item
MakeProfiles: You can use MakeProfiles to create a parametric (based on a radial function) kernel, see @ref{MakeProfiles}.
By default MakeProfiles will make the Gaussian and Moffat profiles in a separate file so you can feed it into any of the programs; see the example after this list.
@item
ConvertType: You can write your own desired kernel into a text file table and convert it to a FITS file with ConvertType, see @ref{ConvertType}.
Just be careful that the kernel has to have an odd number of pixels along its two axes, see @ref{Convolution process}.
All the programs that do convolution will normalize the kernel internally, so if you choose this option, you do not have to worry about normalizing the kernel.
Only within Convolve is there an option to disable normalization, see @ref{Invoking astconvolve}.
@end itemize
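@noindent
As a minimal sketch of the MakeProfiles approach above (the numbers are only for demonstration), the command below should create a Gaussian kernel with a FWHM of 2 pixels, truncated at 5 times the FWHM:
@example
$ astmkprof --kernel=gaussian,2,5 --oversample=1
@end example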
@noindent
The two options to specify a kernel file name and its extension are shown below.
These are common between all the programs that will do convolution.
@table @option
@item -k FITS
@itemx --kernel=FITS
The convolution kernel file name.
The @code{BITPIX} (data type) value of this file can be any standard type and it does not necessarily have to be normalized.
Several operations will be done on the kernel image prior to the program's processing:
@itemize
@item
It will be converted to floating point type.
@item
All blank pixels (see @ref{Blank pixels}) will be set to zero.
@item
It will be normalized so the sum of its pixels equals unity.
@item
It will be flipped so the convolved image has the same orientation.
This is only relevant if the kernel is not circular. See @ref{Convolution process}.
@end itemize
@item -U STR
@itemx --khdu=STR
The convolution kernel HDU.
Although the kernel file name is optional, the programs need a value for @option{--khdu} before running, even if the default kernel is to be used.
So be sure to keep its value in at least one of the configuration files (see @ref{Configuration files}).
By default, the system configuration file has a value.
@end table
@node Invoking astconvolve, , Convolution kernel, Convolve
@subsection Invoking Convolve
Convolve an input dataset (2D image or 1D spectrum for example) with a known kernel, or make the kernel necessary to match two PSFs.
The general template for Convolve is:
@example
$ astconvolve [OPTION...] ASTRdata
@end example
@noindent
One line examples:
@example
## Convolve mockimg.fits with psf.fits:
$ astconvolve --kernel=psf.fits mockimg.fits
## Convolve in the spatial domain:
$ astconvolve observedimg.fits --kernel=psf.fits --domain=spatial
## Convolve a 3D cube (only spatial domain is supported in 3D).
## It is also necessary to define 3D tiles and channels for
## parallelization (see the Tessellation section for more).
$ astconvolve cube.fits --kernel=kernel3d.fits --domain=spatial \
--tilesize=30,30,30 --numchannels=1,1,1
## Find the kernel to convolve with a sharper PSF to become similar
## to a broader PSF (they both have to have the same pixel size).
$ astconvolve --kernel=sharper-psf.fits --makekernel=10 \
broader-psf.fits
## Convolve a Spectrum (column 14 in the FITS table below) with a
## custom kernel (the kernel will be normalized internally, so only
## the ratios are important). Sed is used to replace the spaces with
## new line characters so Convolve sees them as values in one column.
$ echo "1 3 10 3 1" | sed 's/ /\n/g' | astconvolve spectra.fits -c14
@end example
The only argument accepted by Convolve is the input file (an image, or a table containing the spectrum for 1D convolution).
Some of the options are the same between Convolve and some other Gnuastro programs.
Therefore, to avoid repetition, they are not explained here.
For the full list of options shared by all Gnuastro programs, please see @ref{Common options}.
In particular, in the spatial domain, on multi-dimensional datasets, Convolve uses Gnuastro's tessellation to speed up the run, see @ref{Tessellation}.
Common options related to tessellation are described in @ref{Processing options}.
1-dimensional datasets (for example, spectra) are only read as columns within a table (see @ref{Tables} for more on how Gnuastro programs read tables).
Note that currently 1D convolution is only implemented in the spatial domain and thus kernel-matching is also not supported.
Here we will only explain the options particular to Convolve.
Run Convolve with @option{--help} in order to see the full list of options Convolve accepts, irrespective of where they are explained in this book.
@table @option
@item --kernelcolumn
Column containing the 1D kernel.
When the input dataset is a 1-dimensional column, and the host table has more than one column, use this option to specify which column should be used.
@item --nokernelflip
Do not flip the kernel after reading; only for spatial domain convolution.
This can be useful if the flipping has already been applied to the kernel.
By default, the input kernel is flipped to avoid the output getting flipped; see @ref{Convolution process}.
@item --nokernelnorm
Do not normalize the kernel after reading it (normalization makes the sum of its pixels equal unity).
As described in @ref{Convolution process}, the kernel is normalized by default.
@item --conv-on-blank
Do not ignore blank pixels in the convolution.
The output pixels that were originally non-blank are not affected by this option (they will have the same value whether this option is called or not).
This option just expands/dilates the non-blank regions of your dataset into the blank regions and only works in spatial domain convolution.
Therefore, with this option convolution can be used as a proxy for interpolation or dilation.
By default, blank pixels are ignored during spatial domain convolution; so the input and output have exactly the same number of blank pixels.
With this option, the blank pixels that are sufficiently close to non-blank pixels (based on the kernel) will be given a value based on the non-blank elements that overlap with the kernel for that blank pixel (see @ref{Edges in the spatial domain}).
@item -d STR
@itemx --domain=STR
@cindex Discrete Fourier transform
The domain to use for the convolution.
The acceptable values are `@code{spatial}' and `@code{frequency}', corresponding to the respective domain.
For large images, the frequency domain process will be more efficient than convolving in the spatial domain.
However, the edges of the image will lose some flux (see @ref{Edges in the spatial domain}) and the image must not contain any blank pixels, see @ref{Spatial vs. Frequency domain}.
@item --checkfreqsteps
With this option, a file with the initial name of the output file, suffixed with @file{_freqsteps.fits}, will be created; all the steps done to arrive at the final convolved image are saved as extensions in this file.
The extensions in order are:
@enumerate
@item
The padded input image.
In frequency domain convolution the two images (input and kernel) have to be the same size and both should be padded by zeros.
@item
The padded kernel, similar to the above.
@item
@cindex Phase angle
@cindex Complex numbers
@cindex Numbers, complex
@cindex Fourier spectrum
@cindex Spectrum, Fourier
The Fourier spectrum of the forward Fourier transform of the input image.
Note that the Fourier transform is a complex operation (and not viewable in one image!), so we either have to show the `Fourier spectrum' or the `Phase angle'.
For the complex number @mymath{a+ib}, the Fourier spectrum is defined as @mymath{\sqrt{a^2+b^2}} while the phase angle is defined as @mymath{\arctan(b/a)}.
@item
The Fourier spectrum of the forward Fourier transform of the kernel image.
@item
The Fourier spectrum of the multiplied (through complex arithmetic) transformed images.
@item
@cindex Round-off error
@cindex Floating point round-off error
@cindex Error, floating point round-off
The inverse Fourier transform of the multiplied image.
If you open it, you will see that the convolved image is now in the center, not on one side of the image as it started with (in the padded image of the first extension).
If you are working on a mock image which originally had pixels of precisely 0.0, you will notice that in those parts that your convolved profile(s) did not cover, the values are now @mymath{\sim10^{-18}}; this is due to floating-point round-off errors.
Therefore in the final step (when cropping the central parts of the image), we also remove any pixel with a value less than @mymath{10^{-17}}.
@end enumerate
@item --noedgecorrection
Do not correct the edge effect in spatial domain convolution (this correction is done in spatial domain convolution by default).
For a full discussion, please see @ref{Edges in the spatial domain}.
@item -m INT
@itemx --makekernel=INT
If this option is called, Convolve will do PSF-matching: the output will be the kernel that you should convolve with the sharper image to obtain the blurry one (see @ref{Convolution theorem}).
The two images must have the same size (number of pixels).
This option is not yet supported in 1-dimensional datasets.
In effect, it is only necessary to give the two PSFs of your two datasets, find the matching kernel based on them, then apply that kernel to the higher-resolution (sharper) image.
The image given to the @option{--kernel} option is assumed to be the sharper (less blurry) image and the input image (with no option) is assumed to be the more blurry image.
The value given to this option will be used as the maximum radius of the kernel.
Any pixel in the final kernel that is larger than this distance from the center will be set to zero.
Noise contains strong high-frequency components, which can make the result less reliable for the higher frequencies of the final result.
So all the frequencies which have a spectrum smaller than the value given to the @option{--minsharpspec} option in the sharper input image are set to zero and not divided.
This will cause the wings of the final kernel to be flatter than they would ideally be, which will make the convolved image unreliable if the given value is too high.
There is a complete tutorial in Gnuastro on how to build the (extended) PSF: @ref{Building the extended PSF}.
Since the very extended PSF wings can be subtracted before matching (as described in that tutorial), for PSF-matching, you may not need the full extended PSF.
It is good to validate how large the PSF to match should be based on the size of the sources you want to study: if it is a large nearby galaxy, you need a larger PSF, but if they are high-redshift galaxies, only the inner part of the PSF (from the early steps of that tutorial) is enough.
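Putting these steps together with hypothetical file names, a sketch of a full PSF-matching run might look like this:
@example
## Find the kernel matching the sharper PSF to the broader one.
$ astconvolve --kernel=sharper-psf.fits --makekernel=10 \
              broader-psf.fits --output=match-kernel.fits
## Convolve the sharper image with it to match the blurrier data.
$ astconvolve sharp-image.fits --kernel=match-kernel.fits
@end example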
@item -c FLT
@itemx --minsharpspec=FLT
The minimum frequency spectrum (or coefficient, or pixel value in the frequency domain image) to use in deconvolution; see the explanations under the @option{--makekernel} option for more information.
@end table
@node Warp, , Convolve, Data manipulation
@section Warp
Image warping is the process of mapping the pixels of one image onto a new pixel grid.
This process is sometimes known as transformation, however following the discussion of Heckbert 1989@footnote{Paul S. Heckbert. 1989. @emph{Fundamentals of Texture mapping and Image Warping}, Master's thesis at University of California, Berkeley.} we will not be using that term because it can be confused with only pixel value or flux transformations.
Here we specifically mean the pixel grid transformation which is better conveyed with `warp'.
@cindex Gravitational lensing
Image warping is a very important step in astronomy, both in observational data analysis and in simulating modeled images.
In modeling, warping an image is necessary when we want to apply grid transformations to the initial models, for example, in simulating gravitational lensing.
Observational reasons for warping an image are listed below:
@itemize
@cindex Signal to noise ratio
@item
@strong{Noise:} Most scientifically interesting targets are inherently faint (have a very low Signal to noise ratio).
Therefore one short exposure is not enough to detect such objects that are drowned deeply in the noise.
We need multiple exposures so we can add them together and increase the objects' signal to noise ratio.
Keeping the telescope fixed on one field of the sky is practically impossible.
Therefore the exposures of very deep observations have to be put on the same grid before adding them.
@cindex Mosaicing
@cindex Image mosaic
@item
@strong{Resolution:} If we have multiple images of one patch of the sky (hopefully at multiple orientations) we can warp them to the same grid.
The multiple orientations will allow us to `guess' the values of pixels on an output pixel grid that has smaller pixel sizes and thus increase the resolution of the output.
This process of merging multiple observations is known as Mosaicing.
@cindex Cosmic rays
@item
@strong{Cosmic rays:} Cosmic rays can randomly fall on any part of an image.
If they collide vertically with the camera, they are going to create a very sharp and bright spot that in most cases can be separated easily@footnote{All astronomical targets are blurred with the PSF, see @ref{PSF}, however a cosmic ray is not and so it is very sharp (it suddenly stops at one pixel).}.
However, depending on the depth of the camera pixels, and the angle with which a cosmic ray collides with them, it can cover a line-like larger area on the CCD which makes detection using their sharp edges very hard and error-prone.
One of the best methods to remove cosmic rays is to compare multiple images of the same field.
To do that, we need all the images to be on the same pixel grid.
@cindex Optical distortion
@cindex Distortion, optical
@item
@strong{Optical distortion:} In wide field images, the optical distortion that occurs on the outer parts of the focal plane will make accurate comparison of the objects at various locations impossible.
It is therefore necessary to warp the image and correct for those distortions prior to the analysis.
@cindex ACS
@cindex CCD
@cindex WFC3
@cindex Wide Field Camera 3
@cindex Charge-coupled device
@cindex Advanced camera for surveys
@cindex Hubble Space Telescope (HST)
@item
@strong{Detector not on focal plane:} In some cases (like the Hubble Space Telescope ACS and WFC3 cameras), the CCD might be tilted compared to the focal plane, therefore the recorded CCD pixels have to be projected onto the focal plane before further analysis.
@end itemize
@menu
* Linear warping basics:: Basics of coordinate transformation.
* Merging multiple warpings:: How to merge multiple matrices.
* Resampling:: Warping an image is re-sampling it.
* Invoking astwarp:: Arguments and options for Warp.
@end menu
@node Linear warping basics, Merging multiple warpings, Warp, Warp
@subsection Linear warping basics
@cindex Scaling
@cindex Coordinate transformation
Let's take @mymath{\left[\matrix{u&v}\right]} as the coordinates of a point in the input image and @mymath{\left[\matrix{x&y}\right]} as the coordinates of that same point in the output image@footnote{These can be any real number, we are not necessarily talking about integer pixels here.}.
The simplest form of coordinate transformation (or warping) is the scaling of the coordinates.
Let's assume we want to scale the first axis by @mymath{M} and the second by @mymath{N}; the output coordinates of that point can be calculated by
@dispmath{\left[\matrix{x\cr y}\right]=
\left[\matrix{Mu\cr Nv}\right]=
\left[\matrix{M&0\cr0&N}\right]\left[\matrix{u\cr v}\right]}
@cindex Matrix
@cindex Multiplication, Matrix
@cindex Rotation of coordinates
@noindent
Note that these are matrix multiplications.
We thus see that we can represent any such grid warping as a matrix.
Another thing we can do with this @mymath{2\times2} matrix is to rotate the output coordinate around the common center of both coordinates.
If the output is rotated anticlockwise by @mymath{\theta} degrees from the positive (to the right) horizontal axis, then the warping matrix should become:
@dispmath{\left[\matrix{x\cr y}\right]=
\left[\matrix{u\cos\theta-v\sin\theta\cr u\sin\theta+v\cos\theta}\right]=
\left[\matrix{\cos\theta&-\sin\theta\cr \sin\theta&\cos\theta}\right]
\left[\matrix{u\cr v}\right]
}
@cindex Flip coordinates
@noindent
We can also flip the coordinates around the first axis, the second axis and the coordinate center with the following three matrices respectively:
@dispmath{\left[\matrix{1&0\cr0&-1}\right]\quad\quad
\left[\matrix{-1&0\cr0&1}\right]\quad\quad
\left[\matrix{-1&0\cr0&-1}\right]}
@cindex Shear
@noindent
The final thing we can do with this definition of a @mymath{2\times2} warping matrix is shear.
If we want the output to be sheared along the first axis with @mymath{A} and along the second with @mymath{B}, then we can use the matrix:
@dispmath{\left[\matrix{1&A\cr B&1}\right]}
@noindent
To have one matrix representing any combination of these steps, you use matrix multiplication, see @ref{Merging multiple warpings}.
So any combination of these transformations can be represented with one @mymath{2\times2} matrix:
@dispmath{\left[\matrix{a&b\cr c&d}\right]}
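@noindent
In Warp (described later in this chapter), these elementary warps are available as modular options, see @ref{Linear warps to be called explicitly}.
For example, the hypothetical command below scales the first axis by 2 and then rotates the result by 45 degrees:
@example
$ astwarp image.fits --scale=2,1 --rotate=45
@end example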
@cindex Wide Field Camera 3
@cindex Advanced Camera for Surveys
@cindex Hubble Space Telescope (HST)
The transformations above can cover a lot of the needs of most coordinate transformations.
However they are limited to mapping the point @mymath{[\matrix{0&0}]} to @mymath{[\matrix{0&0}]}.
Therefore they are useless if you want one coordinate to be shifted compared to the other one.
They are also space invariant, meaning that all the coordinates in the image will receive the same transformation.
In other words, all the pixels in the output image will have the same area if placed over the input image.
So transformations which require varying output pixel sizes like projections cannot be applied through this @mymath{2\times2} matrix either (for example, for the tilted ACS and WFC3 camera detectors on board the Hubble space telescope).
@cindex M@"obius, August. F.
@cindex Homogeneous coordinates
@cindex Coordinates, homogeneous
To add these further capabilities, namely translation and projection, we use the homogeneous coordinates.
They were defined about 200 years ago by August Ferdinand M@"obius (1790 -- 1868).
For simplicity, we will only discuss points on a 2D plane and avoid the complexities of higher dimensions.
We cannot provide a deep mathematical introduction here, interested readers can get a more detailed explanation from Wikipedia@footnote{@url{http://en.wikipedia.org/wiki/Homogeneous_coordinates}} and the references therein.
By adding an extra coordinate to a point we can add the flexibility we need.
The point @mymath{[\matrix{x&y}]} can be represented as @mymath{[\matrix{xZ&yZ&Z}]} in homogeneous coordinates.
Therefore multiplying all the coordinates of a point in the homogeneous coordinates with a constant will give the same point.
Put another way, the point @mymath{[\matrix{x&y&Z}]} corresponds to the point @mymath{[\matrix{x/Z&y/Z}]} on the constant @mymath{Z} plane.
Setting @mymath{Z=1}, we get the input image plane, so @mymath{[\matrix{u&v&1}]} corresponds to @mymath{[\matrix{u&v}]}.
With this definition, the transformations above can be generally written as:
@dispmath{\left[\matrix{x\cr y\cr 1}\right]=
\left[\matrix{a&b&0\cr c&d&0\cr 0&0&1}\right]
\left[\matrix{u\cr v\cr 1}\right]}
@noindent
@cindex Affine Transformation
@cindex Transformation, affine
We thus acquired 4 extra degrees of freedom.
By giving non-zero values to the zero valued elements of the last column we can have translation (try the matrix multiplication!).
In general, any coordinate transformation that is represented by the matrix below is known as an affine transformation@footnote{@url{http://en.wikipedia.org/wiki/Affine_transformation}}:
@dispmath{\left[\matrix{a&b&c\cr d&e&f\cr 0&0&1}\right]}
@cindex Homography
@cindex Projective transformation
@cindex Transformation, projective
We can now consider translation, but the affine transform is still spatially invariant.
Giving non-zero values to the other two elements in the matrix above gives us the projective transformation or Homography@footnote{@url{http://en.wikipedia.org/wiki/Homography}} which is the most general type of transformation with the @mymath{3\times3} matrix:
@dispmath{\left[\matrix{x'\cr y'\cr w}\right]=
\left[\matrix{a&b&c\cr d&e&f\cr g&h&1}\right]
\left[\matrix{u\cr v\cr 1}\right]}
@noindent
So the output coordinates can be calculated from:
@dispmath{x={x' \over w}={au+bv+c \over gu+hv+1}\quad\quad\quad\quad
y={y' \over w}={du+ev+f \over gu+hv+1}}
Thus with Homography we can change the sizes of the output pixels on the input plane, giving a `perspective'-like visual impression.
This can be quantitatively seen in the two equations above.
When @mymath{g=h=0}, the denominator is independent of @mymath{u} or @mymath{v} and thus we have spatial invariance.
Homography preserves lines at all orientations.
A very useful fact about Homography is that its inverse is also a Homography.
These two properties play a very important role in the implementation of this transformation.
A short but instructive and illustrated review of affine, projective and also bi-linear mappings is provided in Heckbert 1989@footnote{
Paul S. Heckbert. 1989. @emph{Fundamentals of Texture mapping and Image Warping}, Master's thesis at University of California, Berkeley.
Note that since points are defined as row vectors there, the matrix is the transpose of the one discussed here.}.
@node Merging multiple warpings, Resampling, Linear warping basics, Warp
@subsection Merging multiple warpings
@cindex Commutative property
@cindex Matrix multiplication
@cindex Multiplication, matrix
@cindex Non-commutative operations
@cindex Operations, non-commutative
In @ref{Linear warping basics} we saw how a basic warp/transformation can be represented with a matrix.
To make more complex warpings (for example, to define a translation, rotation and scale as one warp) the individual matrices have to be multiplied through matrix multiplication.
However, matrix multiplication is not commutative, so the order of the matrices in the multiplication is very important.
Since the matrices operate on column vectors (as in @ref{Linear warping basics}), the warping that is to be applied first should be placed as the right-most matrix.
The second warping goes to the left of that and so on.
The second transformation is going to occur on the warped coordinates of the first.
As an example for merging a few transforms into one matrix, the multiplication below represents the rotation of an image about a point @mymath{[\matrix{U&V}]} anticlockwise from the horizontal axis by an angle of @mymath{\theta}.
To do this, first we take the origin to @mymath{[\matrix{U&V}]} through translation.
Then we rotate the image, then we translate it back to where it was initially.
These three operations can be merged in one operation by calculating the matrix multiplication below:
@dispmath{\left[\matrix{1&0&U\cr0&1&V\cr{}0&0&1}\right]
\left[\matrix{\cos\theta&-\sin\theta&0\cr \sin\theta&\cos\theta&0\cr 0&0&1}\right]
\left[\matrix{1&0&-U\cr0&1&-V\cr{}0&0&1}\right]}
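@noindent
With Warp's modular options (which are applied in the order given, see @ref{Linear warps to be called explicitly}), the same merged warp can be requested as in the hypothetical command below (rotating by 37 degrees about the point at pixel coordinates 123.4 and 56.7):
@example
$ astwarp image.fits --translate=-123.4,-56.7 --rotate=37 \
          --translate=123.4,56.7
@end example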
@node Resampling, Invoking astwarp, Merging multiple warpings, Warp
@subsection Resampling
@cindex Pixel
@cindex Camera
@cindex Detector
@cindex Sampling
@cindex Resampling
@cindex Pixel mixing
@cindex Photoelectrons
@cindex Picture element
@cindex Mixing pixel values
A digital image is composed of discrete `picture elements' or `pixels'.
When a real image is created from a camera or detector, each pixel's area is used to store the number of photo-electrons that were created when incident photons collided with that pixel's surface area.
This process is called the `sampling' of continuous or analog data into digital data.
When we change the pixel grid of an image, or ``warp'' it, we have to calculate the flux value of each pixel on the new grid based on the old grid, or resample it.
Because the new pixel values are calculated (as opposed to observed), any form of warping on the data is going to degrade the image and mix the original pixel values with each other.
So if an analysis can be done on an unwarped data image, it is best to leave the image untouched and pursue the analysis.
However as discussed in @ref{Warp} this is not possible in some scenarios and re-sampling is necessary.
@cindex Point pixels
@cindex Interpolation
@cindex Sampling theorem
@cindex Bicubic interpolation
@cindex Signal to noise ratio
@cindex Bi-linear interpolation
@cindex Interpolation, bicubic
@cindex Interpolation, bi-linear
When the FWHM of the PSF of the camera is much larger than the pixel scale (see @ref{Sampling theorem}) we are sampling the signal in a much higher resolution than the camera can offer.
This is usually the case in many applications of image processing (non-astronomical imaging).
In such cases, we can consider each pixel to be a point and not an area: the PSF doesn't vary much over a single pixel.
Approximating a pixel's area with a point can significantly speed up the resampling and simplify the code, because resampling then becomes a problem of interpolation: points of the input grid need to be interpolated at certain other points (over the output grid).
To increase the accuracy, you might also sample more than one point from within a pixel giving you more points for a more accurate interpolation in the output grid.
@cindex Image edges
@cindex Edges, image
However, interpolation has several problems.
The first one is that it will depend on the type of function you want to assume for the interpolation.
For example, you can choose a bi-linear or bi-cubic (the `bi's are for the 2 dimensional nature of the data) interpolation method.
For the latter there are various ways to set the constants@footnote{see @url{http://entropymine.com/imageworsener/bicubic/} for a nice introduction.}.
Such parametric interpolation functions can fail seriously on the edges of an image, or when there is a sharp change in value (for example, the bleeding saturation of bright stars in astronomical CCDs).
They will also need normalization so that the flux of the objects before and after the warping is comparable.
The parametric nature of these methods adds a level of subjectivity to the data (it makes more assumptions through the functions than the data can handle).
For most applications this is fine (as discussed above: when the PSF is over-sampled), but in scientific applications where we push our instruments to the limit and the aim is the detection of the faintest possible galaxies or fainter parts of bright galaxies, we cannot afford this loss.
For these reasons, Warp will not use parametric interpolation techniques.
@cindex Drizzle
@cindex Pixel mixing
@cindex Exact area resampling
Warp will do interpolation based on ``pixel mixing''@footnote{For a graphic demonstration see @url{http://entropymine.com/imageworsener/pixelmixing/}.} or ``area resampling''.
This is also similar to what the Hubble Space Telescope pipeline calls ``Drizzling''@footnote{@url{http://en.wikipedia.org/wiki/Drizzle_(image_processing)}}.
This technique requires no functions; it is thus non-parametric.
It is also the closest we can get (make least assumptions) to what actually happens on the detector pixels.
In pixel mixing, the basic idea is that you reverse-transform each output pixel to find which pixels of the input image it covers, and what fraction of the area of the input pixels are covered by that output pixel.
We then multiply each input pixel's value by the fraction of its area that overlaps with the output pixel (between 0 and 1).
The output's pixel value is derived by summing all these multiplications for the input pixels that it covers.
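Schematically, if @mymath{v_i} is the value of input pixel @mymath{i}, and @mymath{f_i} (from 0 to 1) is the fraction of that pixel's area that overlaps with the output pixel, the output pixel's value @mymath{o} is simply:
@dispmath{o=\displaystyle\sum_{i}f_iv_i}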
Through this process, pixels are treated as an area not as a point (which is how detectors create the image), also the brightness (see @ref{Brightness flux magnitude}) of an object will be fully preserved.
Since it involves the mixing of the input's pixel values, this pixel mixing method is a form of @ref{Spatial domain convolution}.
Therefore, after comparing the input and output, you will notice that the output is slightly smoothed, thus boosting the more diffuse signal, but creating correlated noise.
In astronomical imaging the correlated noise will be decreased later when you coadd many exposures@footnote{If you are working on a single exposure image and see pronounced Moir@'e patterns after Warping, check @ref{Moire pattern in coadding and its correction} for a possible way to reduce them}.
If there are very high spatial-frequency signals in the image (for example, fringes) which vary on a scale @emph{smaller than} your output image pixel size (this is rarely the case in astronomical imaging), pixel mixing can cause aliasing@footnote{@url{http://en.wikipedia.org/wiki/Aliasing}}.
Therefore, in case such fringes are present, they have to be calculated and removed separately (which would naturally be done in any astronomical reduction pipeline).
Because of the PSF, no astronomical target has a sharp change in its signal.
Thus this issue is less important for astronomical applications, see @ref{PSF}.
To find the overlap area of the output pixel over the input pixels, we need to define polygons and clip them (find the overlap).
Usually, it is sufficient to define a pixel with a four-vertex polygon.
However, when a non-linear distortion (for example, @code{SIP} or @code{TPV}) is present and the distortion is significant over an output pixel's size (usually far from the reference point), the shadow of the output pixel on the input grid can be curved.
To account for such cases (which can only happen when correcting for non-linear distortions), Warp has the @option{--edgesampling} option to sample the output pixel over more vertices.
For more, see the description of this option in @ref{Align pixels with WCS considering distortions}.
@node Invoking astwarp, , Resampling, Warp
@subsection Invoking Warp
Warp will warp an input image into a new pixel grid by pixel mixing (see @ref{Resampling}).
Without any options, Warp will remove any non-linear distortions from the image and align the output pixel coordinates to its WCS coordinates.
Any homographic warp (for example, scaling, rotation, translation, projection, see @ref{Linear warping basics}) can also be done by calling the relevant option explicitly.
The general template for invoking Warp is:
@example
$ astwarp [OPTIONS...] InputImage
@end example
@noindent
One line examples:
@example
## Align image with celestial coordinates and remove any distortion
$ astwarp image.fits
## Align four exposures to same pixel grid and coadd them with
## Arithmetic program's sigma-clipped mean operator (out of many
## coadding operators, see Arithmetic's documentation).
$ grid="--center=1.234,5.678 --width=1001,1001 --widthinpix --cdelt=0.2/3600"
$ astwarp a.fits $grid --output=A.fits
$ astwarp b.fits $grid --output=B.fits
$ astwarp c.fits $grid --output=C.fits
$ astwarp d.fits $grid --output=D.fits
$ astarithmetic A.fits B.fits C.fits D.fits 4 5 0.2 sigclip-mean \
-g1 --writeall --output=coadd.fits
## Warp a previously created mock image to the same pixel grid as the
## real image (including any distortions).
$ astwarp mock.fits --gridfile=real.fits
## Rotate and then scale input image:
$ astwarp --rotate=37.92 --scale=0.8 image.fits
## Scale, then translate the input image:
$ astwarp --scale 8/3 --translate 2.1 image.fits
## Directly input a custom warping matrix (using fraction):
$ astwarp --matrix=1/5,0,4/10,0,1/5,4/10,0,0,1 image.fits
## Directly input a custom warping matrix, with final numbers:
$ astwarp --matrix="0.7071,-0.7071, 0.7071,0.7071" image.fits
@end example
If any processing is to be done, Warp needs to be given a 2D FITS image.
As in all Gnuastro programs, when an output is not explicitly set with the @option{--output} option, the output filename will be set automatically based on the operation, see @ref{Automatic output}.
For the full list of general options to all Gnuastro programs (including Warp), please see @ref{Common options}.
Warp uses pixel mixing to derive the pixel values of the output image, see @ref{Resampling}.
To be the most accurate, the input image will be read as a 64-bit double precision floating point dataset and all internal processing is done in this format.
Upon writing, by default it will be converted to 32-bit single precision floating point type (actual observational data rarely have such precision!).
In case you want a different output type, you can use the @option{--type} option that is common to several Gnuastro programs.
For example, if your input is a mock image without noise, and you want to preserve the 64-bit precision, use @option{--type=float64}.
Just note that the file size will also be double!
For more on the precision of various types, see @ref{Numeric data types}.
By default (if no linear operation is requested), Warp will align the pixel grid of the input image to the WCS coordinates it contains.
This operation and the options that govern it are described in @ref{Align pixels with WCS considering distortions}.
You can warp an input image to the same pixel grid as a reference FITS file using the @option{--gridfile} option.
In this case, the output image will take all the information needed from the reference WCS file and HDU/extension specified with @option{--gridhdu}, thus it will discard any other resampling options given.
If you need any custom linear warping (independent of the WCS, see @ref{Linear warping basics}), you need to call the respective operation manually.
These are described in @ref{Linear warps to be called explicitly}.
Please note that you may not use both linear and non-linear modes simultaneously.
For example, you cannot scale or rotate the image while removing its non-linear distortions at the same time.
The following options are shared between both modes:
@table @option
@item --hstartwcs=INT
Specify the first header keyword number (line) that should be used to read the WCS information, see the full explanation in @ref{Invoking astcrop}.
@item --hendwcs=INT
Specify the last header keyword number (line) that should be used to read the WCS information, see the full explanation in @ref{Invoking astcrop}.
@item -C FLT
@itemx --coveredfrac=FLT
Depending on the warp, the output pixels that cover pixels on the edge of the input image, or blank pixels in the input image, are not going to be fully covered by input data.
With this option, you can specify the acceptable covered fraction of such pixels (any value between 0 and 1).
If you only want output pixels that are fully covered by the input image area (and are not blank), then you can set @option{--coveredfrac=1} (which is the default!).
Alternatively, a value of @code{0} will keep output pixels that are even infinitesimally covered by the input.
As a result, with @option{--coveredfrac=0}, the sum of the pixels in the input and output images will be exactly the same.
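For example, the hypothetical command below will keep any output pixel that is at least half-covered by the input data:
@example
$ astwarp image.fits --coveredfrac=0.5
@end example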
@end table
@menu
* Align pixels with WCS considering distortions:: Default operation.
* Linear warps to be called explicitly:: Other warps.
@end menu
@node Align pixels with WCS considering distortions, Linear warps to be called explicitly, Invoking astwarp, Invoking astwarp
@subsubsection Align pixels with WCS considering distortions
@cindex Resampling
@cindex WCS distortion
@cindex TPV distortion
@cindex SIP distortion
@cindex Non-linear distortion
@cindex Align pixel and WCS coordinates
When none of the linear warps@footnote{For linear warps, see @ref{Linear warps to be called explicitly}.} are requested, Warp will align the input's pixel axes with its WCS axes.
In the process, any possibly existing distortion is also removed (such as @code{TPV} and @code{SIP}).
Usually, the WCS axes are the Right Ascension and Declination in equatorial coordinates.
The output image's pixel grid is highly customizable through the options in this section.
To learn about Warp's strategy to build the new pixel grid, see @ref{Resampling}.
For strong distortions (that produce strong curvatures), you can fine-tune the area-based resampling with @option{--edgesampling}, as described below.
On the other hand, sometimes you need to Warp an input image to the exact same grid of an already available reference FITS image with an existing WCS.
If that image is already aligned, finding its center, number of pixels and pixel scale can be annoying (and just increase the complexity of your script).
On the other hand, if that image is not aligned (for example, has a certain rotation in the sky, and has a different distortion), there are too many WCS parameters to set (some are not yet available explicitly in the options here)!
For such scenarios, Warp has the @option{--gridfile} option.
When @option{--gridfile} is called, the options below that are used to define the output's WCS will be ignored (these options: @option{--center}, @option{--widthinpix}, @option{--cdelt}, @option{--ctype}).
In this case, the output's WCS and pixel grid will exactly match the image given to @option{--gridfile} (including any rotation, pixel scale, or distortion or projection).
@cartouche
@noindent
@cindex Coadding
@cindex Coaddition
@strong{Set @option{--cdelt} explicitly when you plan to coadd many warped images:}
To align some images and later coadd them, it is necessary to be sure the pixel sizes of all the images are exactly the same.
Most of the time the pixel scales measured (during astrometry) for the separate exposures will differ in the second or third digit after the decimal point.
This is a normal statistical error in the astrometry measurement.
On a large image, these slight differences can cause different output sizes (of one or two pixels on a very large image).
You can fix this by explicitly setting the pixel scale of each warped exposure with Warp's @option{--cdelt} option that is described below.
For good strategies of setting the pixel scale, see @ref{Moire pattern in coadding and its correction}.
@end cartouche
Another problem that may arise when aligning images to new pixel grids is the aliasing or visible Moir@'e patterns on the output image.
This artifact should be removed if you are coadding several exposures, especially with a pointing pattern.
If not, see @ref{Moire pattern in coadding and its correction} for ways to mitigate the visible patterns.
See the description of @option{--gridfile} below for more.
@cartouche
@noindent
@cindex WCSLIB
@strong{Known issue:} Warp's WCS-based aligning works best with WCSLIB version 7.12 (released in September 2022) and above.
If you have an older version of WCSLIB, you might get a @code{wcss2p} error.
@end cartouche
@table @option
@item -c FLT,FLT
@itemx --center=FLT,FLT
@cindex CRVALi
@cindex Aligning an image
WCS coordinates of the center of the central pixel of the output image.
Since a central pixel is only defined with an odd number of pixels along both dimensions, the output will always have an odd number of pixels.
When @option{--center} or @option{--gridfile} aren't given, the output will have the same central WCS coordinate as the input.
Usually, the WCS coordinates are Right Ascension and Declination (when the first three characters of @code{CTYPE1} and @code{CTYPE2} are respectively @code{RA-} and @code{DEC}).
For more on the @code{CTYPEi} keyword values, see @code{--ctype} below.
@item -w INT[,INT]
@itemx --width=INT[,INT]
Width and height of the output image in units of WCS (usually degrees).
If you want the values to be read as pixels, also call the @option{--widthinpix} option with @option{--width}.
If a single value is given, Warp will use the same value for the second dimension (creating a square output).
When @option{--width} or @option{--gridfile} aren't given, Warp will calculate the necessary size of the output pixel grid to fully contain the input image.
Usually the WCS coordinates are in units of degrees (defined by the @code{CUNITi} keywords of the FITS standard).
But entering a certain number of arcseconds or arcminutes for the width can be annoying (you will usually need to go to the calculator!).
To simplify such situations, this option also accepts division.
For example @option{--width=1/60,2/60} will make an aligned warp that is 1 arcmin along Right Ascension and 2 arcminutes along the Declination.
With the @option{--widthinpix} option the values will be interpreted as numbers of pixels.
In this scenario, this option should be given @emph{odd} integer(s) that are greater than 1.
This ensures that the output image can have a @emph{central} pixel.
Recall that through the @option{--center} option, you specify the WCS coordinate of the center of the central pixel.
The central coordinate of an image with an even number of pixels will be on the edge of two pixels, so a ``central'' pixel is not well defined.
If any of the given values are even, Warp will automatically add a single pixel (to make it an odd integer) and print a warning message.
@item --widthinpix
When called, the values given to the @option{--width} option will be interpreted as the number of pixels along each dimension.
See the description of @option{--width} for more.
@item -x FLT[,FLT]
@itemx --cdelt=FLT[,FLT]
@cindex CDELTi
@cindex Pixel scale
Coordinate deltas or increments (@code{CDELTi} in the FITS standard), or the pixel scale in both dimensions.
If a single value is given, it will be used for both axes.
In this way, the output's pixels will be squares on the sky at the reference point (as is usually expected!).
When @option{--cdelt} or @option{--gridfile} aren't given, Warp will read the input's pixel scale and choose the larger of @code{CDELT1} or @code{CDELT2} so the output pixels are square.
Usually (when dealing with RA and Dec, and the @code{CUNITi}s have a value of @code{deg}), the units of the given values are degrees/pixel.
Warp allows you to easily convert from @emph{arcsec} to @emph{degrees} by simply appending a @code{/3600} to the value.
For example, for an output image of pixel scale @code{0.27} arcsec/pixel, you can use @code{--cdelt=0.27/3600}.
@item --ctype=STR,STR
@cindex Align
@cindex CTYPEi
@cindex Resampling
The coordinate types of the output (@code{CTYPE1} and @code{CTYPE2} keywords in the FITS standard), separated by a comma.
By default the value to this option is `@code{RA---TAN,DEC--TAN}'.
However, if @option{--gridfile} is given, this option is ignored.
If you don't call @option{--ctype} or @option{--gridfile}, the output WCS coordinates will be Right Ascension and Declination, while the output's projection will be @url{https://en.wikipedia.org/wiki/Gnomonic_projection,Gnomonic}, also known as Tangential (TAN).
This combination is the most common in extra-galactic imaging surveys.
For other coordinates and projections in your output use other values, as described below.
According to the FITS standard version 4.0@footnote{FITS standard version 4.0: @url{https://fits.gsfc.nasa.gov/standard40/fits_standard40aa-le.pdf}}: @code{CTYPEi} is the
``type for the Intermediate-coordinate Axis @mymath{i}.
Any coordinate type that is not covered by this Standard or an officially recognized FITS convention shall be taken to be linear.
All non-linear coordinate system names must be expressed in `4–3' form: the first four characters specify the coordinate type, the fifth character is a hyphen (@code{-}), and the remaining three characters specify an algorithm code for computing the world coordinate value.
Coordinate types with names of fewer than four characters are padded on the right with hyphens, and algorithm codes with fewer than three characters are padded on the right with SPACE.
Algorithm codes should be three characters''
(see list of algorithm codes below).
@cindex WCS Projections
@cindex Projections (world coordinate system)
You can use any of the projection algorithms (last three characters of each coordinate's type) provided by your host WCSLIB (a mandatory dependency of Gnuastro; see @ref{WCSLIB}).
For a very elaborate and complete description of projection algorithms in the FITS WCS standard, see @url{https://doi.org/10.1051/0004-6361:20021327, Calabretta and Greisen 2002}.
Wikipedia also has a nice article on @url{https://en.wikipedia.org/wiki/Map_projection, Map projections}.
As an example, WCSLIB 7.12 (released in September 2022) has the following projection algorithms:
@table @code
@item AZP
@cindex Zenithal/azimuthal projection
Zenithal/azimuthal perspective
@item SZP
@cindex Slant zenithal projection
Slant zenithal perspective
@item TAN
@cindex Gnomonic (tangential) projection
Gnomonic (tangential)
@item STG
@cindex Stereographic projection
Stereographic
@item SIN
@cindex Orthographic/synthesis projection
Orthographic/synthesis
@item ARC
@cindex Zenithal/azimuthal equidistant projection
Zenithal/azimuthal equidistant
@item ZPN
@cindex Zenithal/azimuthal polynomial projection
Zenithal/azimuthal polynomial
@item ZEA
@cindex Zenithal/azimuthal equal area projection
Zenithal/azimuthal equal area
@item AIR
@cindex Airy projection
Airy
@item CYP
@cindex Cylindrical perspective projection
Cylindrical perspective
@item CEA
@cindex Cylindrical equal area projection
Cylindrical equal area
@item CAR
@cindex Plate carree projection
Plate carree
@item MER
@cindex Mercator projection
Mercator
@item SFL
@cindex Sanson-Flamsteed projection
Sanson-Flamsteed
@item PAR
@cindex Parabolic projection
Parabolic
@item MOL
@cindex Mollweide projection
Mollweide
@item AIT
@cindex Hammer-Aitoff projection
Hammer-Aitoff
@item COP
@cindex Conic perspective projection
Conic perspective
@item COE
@cindex Conic equal area projection
Conic equal area
@item COD
@cindex Conic equidistant projection
Conic equidistant
@item COO
@cindex Conic orthomorphic projection
Conic orthomorphic
@item BON
@cindex Bonne projection
Bonne
@item PCO
@cindex Polyconic projection
Polyconic
@item TSC
@cindex Tangential spherical cube projection
Tangential spherical cube
@item CSC
@cindex COBE spherical cube projection
COBE spherical cube
@item QSC
@cindex Quadrilateralized spherical cube projection
Quadrilateralized spherical cube
@item HPX
@cindex HEALPix projection
HEALPix
@item XPH
@cindex Butterfly projection
@cindex HEALPix polar projection
HEALPix polar, aka "butterfly"
@end table
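For example, the hypothetical command below requests the Hammer-Aitoff projection instead of the default Gnomonic/TAN (note the hyphen-padding of the `4--3' form described above):
@example
$ astwarp image.fits --ctype=RA---AIT,DEC--AIT
@end example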
@item -G FITS
@itemx --gridfile=FITS
FITS filename containing the final pixel grid and WCS for the output image.
The HDU/extension containing the reference grid should be specified with @option{--gridhdu} or its short option @option{-H}.
The HDU should contain a WCS, otherwise, Warp will abort with a crash.
When this option is used, Warp will read the respective WCS and the size of the image to resample the input.
Since the WCS of this HDU contains everything needed to construct the output's WCS, the options above will be ignored when @option{--gridfile} is called: @option{--cdelt}, @option{--center}, and @option{--widthinpix}.
In the example below, let's use this option to put the image of M51 in one survey (J-PLUS) into the pixel grid of another survey (SDSS) containing M51.
The J-PLUS field of view is very large (almost @mymath{1.5\times1.5} deg@mymath{^2}, in @mymath{9500\times9500} pixels), while the field of view of SDSS in each filter is small (almost @mymath{0.3\times0.25} deg@mymath{^2} in @mymath{2048\times1489} pixels).
With the first two commands, we'll first download the two images, then we'll extract the portion of the J-PLUS image that overlaps with the SDSS image and align it exactly to SDSS's pixel grid.
Note that these are the two images that were used in two of Gnuastro's tutorials: @ref{Building the extended PSF} and @ref{Detecting large extended targets}.
@example
## Download the J-PLUS DR2 image of M51 in the r filter.
$ jplusbase="http://archive.cefca.es/catalogues/vo/siap"
$ wget $jplusbase/jplus-dr2/get_fits?id=67510 \
-O jplus.fits.fz
## Download the SDSS image in r filter and decompress it
## (Bzip2 is not a standard FITS compression algorithm).
$ sdssbase=https://dr12.sdss.org/sas/dr12/boss/photoObj/frames
$ wget $sdssbase/301/3716/6/frame-r-003716-6-0117.fits.bz2 \
-O sdss.fits.bz2
$ bunzip2 sdss.fits.bz2
## Warp and crop the J-PLUS image so the output exactly
## matches the SDSS pixel grid.
$ astwarp jplus.fits.fz --gridfile=sdss.fits --gridhdu=0 \
--output=jplus-on-sdss.fits
## View the two images side-by-side:
$ astscript-fits-view sdss.fits jplus-on-sdss.fits
@end example
As the example above shows, this option can therefore be very useful when comparing images from multiple surveys.
But there are other very interesting use cases also.
For example, when you are making a mock dataset and need to add distortion to the image so it matches the distortion of your camera.
Through @option{--gridfile} (and @option{--gridhdu}), you can easily insert that distortion over the mock image and put the mock image in the pixel grid of an exposure.
@item -H STR
@itemx --gridhdu=STR
The HDU/extension of the reference grid file specified with @option{--gridfile} (see the description of @option{--gridfile} for more).
@item --edgesampling=INT
Number of extra samplings along the edge of a pixel.
By default the value is @code{0}: the output pixel's polygon over the input will be a quadrilateral (a polygon with four edges/vertices).
Warp uses pixel mixing to derive the output pixel values.
For a complete introduction, see @ref{Resampling}, and in particular its later part on distortions.
To account for this possible curvature due to distortion, you can use this option.
For example, @option{--edgesampling=1} will add one extra vertex in the middle of each edge of the output pixel, producing an 8-vertex polygon.
Similarly, @option{--edgesampling=5} will put 5 extra vertices along each edge, thus sampling the shape (and possible curvature) of the output pixel over an input pixel with a @mymath{4+5\times4=24} vertex polygon.
Since the polygon clipping will happen for every output pixel, a higher value to this option can significantly reduce the running speed and increase the RAM usage of Warp; so use it with caution: in most cases the default @option{--edgesampling=0} is sufficient.
To visually inspect the curvature effect on pixel area of the input image, see option @option{--pixelareaonwcs} in @ref{Pixel information images}.
@item --checkmaxfrac
Check each output pixel's maximum coverage on the input data and append as the `@code{MAX-FRAC}' HDU/extension to the output aligned image.
This option provides an easy visual inspection for possible recurring patterns or fringes caused by aligning to a new pixel grid.
For more detail about the origin of these patterns and how to mitigate them see @ref{Moire pattern in coadding and its correction}.
Note that the `@code{MAX-FRAC}' HDU/extension does not show the patterns themselves;
it represents the largest area coverage on the input data for that particular pixel.
The values can be in the range between 0 and 1, where 1 means the pixel is covering at least one complete pixel of the input data.
On the other hand, 0 means that the pixel is not covering any pixels of the input at all.
@end table
@node Linear warps to be called explicitly, , Align pixels with WCS considering distortions, Invoking astwarp
@subsubsection Linear warps to be called explicitly
Linear warps include operations like rotation, scaling, shear, etc.
For an introduction, see @ref{Linear warping basics}.
These are warps that don't depend on the WCS of the image and should be explicitly requested.
To align the input pixel coordinates with the WCS coordinates, see @ref{Align pixels with WCS considering distortions}.
While they will correct any existing WCS based on the warp, they can also operate on images without any WCS.
For example, you have a mock image that doesn't (yet!) have its mock WCS, and it has been created on an over-sampled grid and convolved with an over-sampled PSF.
In this scenario, you can use the @option{--scale} option to under-sample it to your desired resolution.
This is similar to the @ref{Sufi simulates a detection} tutorial.
Linear warps must be specified as command-line options, either as (possibly multiple) modular warpings (for example, @option{--rotate}, or @option{--scale}), or directly as a single raw matrix (with @option{--matrix}).
If specified together, the latter (direct matrix) will take precedence and all the modular warpings will be ignored.
Any number of modular warpings can be specified on the command-line and configuration files.
If more than one modular warping is given, all will be merged to create one warping matrix.
As described in @ref{Merging multiple warpings}, matrix multiplication is not commutative, so the order of specifying the modular warpings on the command-line, and/or configuration files makes a difference (see @ref{Configuration file precedence}).
The full list of modular warpings and the other options particular to Warp are described below.
The values to the warping options (modular warpings as well as @option{--matrix}), are a sequence of at least one number.
Each number in this sequence is separated from the next by a comma (@key{,}).
Each number can also be written as a single fraction (with a forward-slash @key{/} between the numerator and denominator).
Space and Tab characters are permitted between any two numbers; just do not forget to quote the whole value.
Otherwise, the value will not be fully passed onto the option.
See the examples above as a demonstration.
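For instance, in the hedged sketch below (with a hypothetical @file{image.fits}), the fractions contain a space after the comma, so the whole value is quoted:

@example
$ astwarp image.fits --scale="1/5, 1/5"
@end example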
@cindex FITS standard
Based on the FITS standard, integer values are assigned to the center of a pixel and the coordinate [1.0, 1.0] is the center of the first pixel (bottom left of the image when viewed in SAO DS9).
So the coordinate center [0.0, 0.0] is half a pixel away (in each axis) from the bottom left vertex of the first pixel.
The resampling that is done in Warp (see @ref{Resampling}) is done on the coordinate axes and thus directly depends on the coordinate center.
In some situations this is fine; for example, when rotating/aligning a real image, all the edge pixels will be similarly affected.
But in other situations (for example, when scaling an over-sampled mock image to its intended resolution), this is not desired: you want the center of the coordinates to be on the corner of the pixel.
In such cases, you can use the @option{--centeroncorner} option which will shift the center by @mymath{0.5} before the main warp, then shift it back by @mymath{-0.5} after the main warp.
@table @option
@item -r FLT
@itemx --rotate=FLT
Rotate the input image by the given angle in degrees: @mymath{\theta} in @ref{Linear warping basics}.
Note that commonly, the WCS structure of the image is set such that the RA is the inverse of the image horizontal axis which increases towards the right in the FITS standard and as viewed by SAO DS9.
So the default center for rotation is on the right of the image.
If you want to rotate about other points, you have to translate the warping center first (with @option{--translate}), then apply your rotation, and then return the center back to the original position (with another call to @option{--translate}, see @ref{Merging multiple warpings}).
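For example, to rotate by 20 degrees around the point at pixel coordinates (50, 50) of a hypothetical @file{image.fits}, a sketch of the three modular warps could look like this (the order matters, so check the sign convention on your own data):

@example
$ astwarp image.fits --translate=-50,-50 --rotate=20 \
          --translate=50,50
@end example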
@item -s FLT[,FLT]
@itemx --scale=FLT[,FLT]
Scale the input image by the given factor(s): @mymath{M} and @mymath{N} in @ref{Linear warping basics}.
If only one value is given, then both image axes will be scaled with the given value.
When two values are given (separated by a comma), the first will be used to scale the first axis and the second will be used for the second axis.
If you only need to scale one axis, use @option{1} for the axis you do not need to scale.
The value(s) can also be written (on the command-line or in configuration files) as a fraction.
@item -f FLT[,FLT]
@itemx --flip=FLT[,FLT]
Flip the input image around the given axis (or axes).
If only one value is given, then both image axes are flipped.
When two values are given (separated by a comma), you can choose which axis to flip over.
@option{--flip} only takes values @code{0} (for no flip), or @code{1} (for a flip).
Hence, if you want to flip by the second axis only, use @option{--flip=0,1}.
@item -e FLT[,FLT]
@itemx --shear=FLT[,FLT]
Shear the input image by the given value(s): @mymath{A} and @mymath{B} in @ref{Linear warping basics}.
If only one value is given, then both image axes will be sheared with the given value.
When two values are given (separated by a comma), the first will be used to shear the first axis and the second will be used for the second axis.
If you only need to shear along one axis, use @option{0} for the axis that must be untouched.
The value(s) can also be written (on the command-line or in configuration files) as a fraction.
@item -t FLT[,FLT]
@itemx --translate=FLT[,FLT]
Translate (move the center of coordinates) the input image by the given value(s): @mymath{c} and @mymath{f} in @ref{Linear warping basics}.
If only one value is given, then both image axes will be translated by the given value.
When two values are given (separated by a comma), the first will be used to translate the first axis and the second will be used for the second axis.
If you only need to translate along one axis, use @option{0} for the axis that must be untouched.
The value(s) can also be written (on the command-line or in configuration files) as a fraction.
@item -p FLT[,FLT]
@itemx --project=FLT[,FLT]
Apply a projection to the input image by the given value(s): @mymath{g} and @mymath{h} in @ref{Linear warping basics}.
If only one value is given, then projection will apply to both axes with the given value.
When two values are given (separated by a comma), the first will be used to project the first axis and the second will be used for the second axis.
If you only need to project along one axis, use @option{0} for the axis that must be untouched.
The value(s) can also be written (on the command-line or in configuration files) as a fraction.
@item -m STR
@itemx --matrix=STR
The warp/transformation matrix.
All the elements in this matrix must be separated by comma (@key{,}) characters and, as described above, you can also use fractions (a forward-slash between two numbers).
The transformation matrix can be either a 2 by 2 (4 numbers), or a 3 by 3 (9 numbers) array.
In the former case (if a 2 by 2 matrix is given), it is put into a 3 by 3 matrix (see @ref{Linear warping basics}).
@cindex NaN
The determinant of the matrix has to be non-zero and it must not contain any non-number values (for example, infinities or NaNs).
The elements of the matrix have to be written row by row.
So for the general Homography matrix of @ref{Linear warping basics}, it should be called with @command{--matrix=a,b,c,d,e,f,g,h,1}.
The raw matrix takes precedence over all the modular warping options listed above, so if it is called with any number of modular warps, the latter are ignored.
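For example, the 2 by 2 matrix in the sketch below (on a hypothetical @file{image.fits}) is equivalent to @option{--scale=2,2}:

@example
$ astwarp image.fits --matrix=2,0,0,2
@end example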
@item --centeroncorner
Put the center of coordinates on the corner of the first (bottom-left when viewed in SAO DS9) pixel.
This option is applied after the final warping matrix has been finalized: either through modular warpings or the raw matrix.
See the explanation above for coordinates in the FITS standard to better understand this option and when it should be used.
@item -k
@itemx --keepwcs
@cindex WCSLIB
@cindex World Coordinate System
Do not correct the WCS information of the input image and save it untouched to the output image.
By default the WCS (World Coordinate System) information of the input image is going to be corrected in the output image so the objects in the image are at the same WCS coordinates.
But in some cases it might be useful to keep it unchanged (for example, to correct alignments).
@end table
@node Data analysis, Data modeling, Data manipulation, Top
@chapter Data analysis
Astronomical datasets (images or tables) contain very valuable information; the tools in this section can help in analyzing, extracting, and quantifying that information.
For example, getting general or specific statistics of the dataset (with @ref{Statistics}), detecting signal within a noisy dataset (with @ref{NoiseChisel}), or creating a catalog from an input dataset (with @ref{MakeCatalog}).
@menu
* Statistics:: Calculate dataset statistics.
* NoiseChisel:: Detect objects in an image.
* Segment:: Segment detections based on signal structure.
* MakeCatalog:: Catalog from input and labeled images.
* Match:: Match two datasets.
@end menu
@node Statistics, NoiseChisel, Data analysis, Data analysis
@section Statistics
The distribution of values in a dataset can provide valuable information about it.
For example, in an image, a positively skewed distribution indicates the presence of significant signal.
If the distribution is roughly symmetric, we can tell that there is no significant signal in the image.
In a table, when we need to select a sample of objects, it is important to first get a general view of the whole sample.
On the other hand, you might need to know certain statistical parameters of the dataset.
For example, if we have run a detection algorithm on an image, and we want to see how accurate it was, one method is to calculate the average of the undetected pixels and see how reasonable it is (if detection is done correctly, the average of undetected pixels should be approximately equal to the background value, see @ref{Sky value}).
In a table, you might have calculated the magnitudes of a certain class of objects and want to get some general characteristics of the distribution immediately on the command-line (very fast!), to possibly change some parameters.
The Statistics program is designed for such situations.
@menu
* Histogram and Cumulative Frequency Plot:: Basic definitions.
* 2D Histograms:: Plotting the distribution of two variables.
* Least squares fitting:: Fitting with various parametric functions.
* Sky value:: Definition and derivation of the Sky value.
* Invoking aststatistics:: Arguments and options to Statistics.
@end menu
@node Histogram and Cumulative Frequency Plot, 2D Histograms, Statistics, Statistics
@subsection Histogram and Cumulative Frequency Plot
@cindex Histogram
Histograms and cumulative frequency plots are both used to visually study the distribution of a dataset.
A histogram shows the number of data points which lie within pre-defined intervals (bins).
So on the horizontal axis we have the bin centers and on the vertical, the number of points that are in that bin.
You can use it to get a general view of the distribution: Which values have been repeated the most? How close or far apart are the most significant bins? Are there more values in the larger part of the range of the dataset, or in the lower part? Many other important properties of the dataset can similarly be deduced from a visual inspection of the histogram.
In the Statistics program, the histogram can be either output to a table to plot with your favorite plotting program@footnote{
We recommend @url{http://pgfplots.sourceforge.net/,PGFPlots} which generates your plots directly within @TeX{} (the same tool that generates your document).},
or it can be shown with ASCII characters on the command-line, which is very crude, but good enough for a fast and on-the-go analysis, see the example in @ref{Invoking aststatistics}.
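For example, a first, quick look at the distribution of a hypothetical @file{image.fits} can be as simple as:

@example
$ aststatistics image.fits --asciihist
@end example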
@cindex Intervals, histogram
@cindex Bin width, histogram
@cindex Normalizing histogram
@cindex Probability density function
The width of the bins is the only necessary parameter for a histogram.
In the limiting case that the bin-widths tend to zero (while assuming the number of points in the dataset tends to infinity), the histogram will tend to the @url{https://en.wikipedia.org/wiki/Probability_density_function, probability density function} of the distribution.
When the absolute number of points in each bin is not relevant to the study (only the shape of the histogram is important), you can @emph{normalize} a histogram so that, like the probability density function, the sum of all its bins will be one.
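For example, a sketch of requesting such a normalized histogram as a table (the input file and column name are hypothetical; see @ref{Invoking aststatistics} for the options) could be:

@example
$ aststatistics table.fits -cCOLUMN --histogram --normalize \
                --output=hist-normalized.fits
@end example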
@cindex Cumulative Frequency Plot
In the cumulative frequency plot of a distribution, the horizontal axis is the sorted data values and the vertical axis is the index of each data element in the sorted distribution.
Unlike a histogram, a cumulative frequency plot does not involve intervals or bins.
This makes it less prone to any sort of bias or error that a given bin-width would have on the analysis.
When a large number of the data points have roughly the same value, the cumulative frequency plot becomes steep in that vicinity.
This occurs because on the horizontal axis, there is little change while on the vertical axis, the indexes constantly increase.
Normalizing a cumulative frequency plot means to divide each index (y axis) by the total number of data points (or the last value).
Unlike the histogram which has a limited number of bins, ideally the cumulative frequency plot should have one point for every data element.
Even in small datasets (for example, a @mymath{200\times200} image) this will result in an unreasonably large number of points to plot (40,000)! As a result, for practical reasons, it is common to only store its value at a certain number of points (intervals) in the input range rather than the whole dataset, so you should determine the number of bins you want when asking for a cumulative frequency plot.
In Gnuastro (and thus the Statistics program), the number reported for each bin is the total number of data points until the larger interval value for that bin.
You can see an example histogram and cumulative frequency plot of a single dataset under the @option{--asciihist} and @option{--asciicfp} options of @ref{Invoking aststatistics}.
So as a summary, both the histogram and cumulative frequency plot in Statistics will work with bins.
Within each bin/interval, the lower value is considered to be within the bin (it is inclusive), but its larger value is not (it is exclusive).
Formally, an interval/bin between a and b is represented by [a, b).
When the over-all range of the dataset is specified (with the @option{--greaterequal}, @option{--lessthan}, or @option{--qrange} options), the acceptable values of the dataset are also defined with a similar inclusive-exclusive manner.
But when the range is determined from the actual dataset (none of these options is called), the last element in the dataset is included in the last bin's count.
@node 2D Histograms, Least squares fitting, Histogram and Cumulative Frequency Plot, Statistics
@subsection 2D Histograms
@cindex 2D histogram
@cindex Histogram, 2D
In @ref{Histogram and Cumulative Frequency Plot} the concept of histograms was introduced for a single dataset.
But they are only useful for viewing the distribution of a single variable (column in a table).
In many contexts, the distribution of two variables in relation to each other may be of interest.
For example, the color-magnitude diagrams in astronomy, where the horizontal axis is the luminosity or magnitude of an object, and the vertical axis is the color.
Scatter plots are useful to see these relations between the objects of interest when the number of the objects is small.
As the density of points in the scatter plot increases, the points fall over each other, just making one large connected region that hides potentially interesting behaviors/correlations in the densest regions.
This is where 2D histograms can become very useful.
A 2D histogram is composed of 2D bins (boxes or pixels), just as a 1D histogram consists of 1D bins (lines).
The number of points falling within each box/pixel will then be the value of that box.
Combined with a color-bar, you can now clearly see the distribution independent of the density of points (for example, you can even visualize it in log-scale if you want).
Gnuastro's Statistics program has the @option{--histogram2d} option for this task.
It takes a single argument (either @code{table} or @code{image}) that specifies the format of the output 2D histogram.
The two formats will be reviewed separately in the sub-sections below.
But let's start with the generalities that are common to both (related to the input, not the output).
You can specify the two columns to be shown using the @option{--column} (or @option{-c}) option.
So if you want to plot the color-magnitude diagram from a table with the @code{MAG-R} column on the horizontal axis and @code{COLOR-G-R} on the vertical, you can use @option{--column=MAG-R,COLOR-G-R}.
The number of bins along each dimension can be set with @option{--numbins} (for first input column) and @option{--numbins2} (for second input column).
Without specifying any range, the full range of values will be used in each dimension.
If you only want to focus on a certain interval of the values in the columns in any dimension, you can use the @option{--greaterequal} and @option{--lessthan} options to limit the values along the first/horizontal dimension, and the @option{--greaterequal2} and @option{--lessthan2} options for the second/vertical dimension.
@menu
* 2D histogram as a table for plotting:: Format and usage in table format.
* 2D histogram as an image:: Format and usage in image format
@end menu
@node 2D histogram as a table for plotting, 2D histogram as an image, 2D Histograms, 2D Histograms
@subsubsection 2D histogram as a table for plotting
When called with the @option{--histogram2d=table} option, Statistics will output a table file with three columns, where each row has the information of one box/pixel.
If you asked for @option{--numbins=N} and @option{--numbins2=M}, all three columns will have @mymath{M\times N} rows (one row for every box/pixel of the 2D histogram).
The first and second columns are the position of the box along the first and second dimensions.
The third column has the number of input points that fall within that box/pixel.
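For instance, a sketch of producing such a table (the input catalog @file{cat.fits} and its column names are hypothetical) could be:

@example
$ aststatistics cat.fits -cMAG,COLOR --histogram2d=table \
                --numbins=50 --numbins2=50 \
                --output=hist-2d.txt
@end example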
With this table, you can make high-quality plots within your paper (using the same @LaTeX{} engine, thus blending very nicely with your text) using @url{https://ctan.org/pkg/pgfplots, PGFPlots}.
Below you can see one such minimal example.
Using your favorite text editor, save it into a file, make the two small corrections in it, then run the commands shown at the top.
This assumes that you have @LaTeX{} installed; if not, for the steps to install a minimally sufficient @LaTeX{} package on your system, see the respective section in @ref{Bootstrapping dependencies}.
The two parts that need to be corrected are marked with '@code{%% <--}': the first one (@code{XXXXXXXXX}) should be replaced by the value to the @option{--numbins} option which is the number of bins along the first dimension.
The second one (@code{FILE.txt}) should be replaced with the name of the file generated by Statistics.
@example
%% Replace 'XXXXXXXXX' with your selected number of bins in the first
%% dimension.
%%
%% Then run these commands to build the plot in a LaTeX command.
%% mkdir tikz
%% pdflatex --shell-escape --halt-on-error report.tex
\documentclass@{article@}
%% Load PGFPlots and set it to build the figure separately in a 'tikz'
%% directory (which has to exist before LaTeX is run). This
%% "externalization" is very useful to include the commands of multiple
%% plots in the middle of your paper/report, but also have the plots
%% separately to use in slides or other scenarios.
\usepackage@{pgfplots@}
\usetikzlibrary@{external@}
\tikzexternalize
\tikzsetexternalprefix@{tikz/@}
%% Define colormap for the PGFPlots 2D histogram
\pgfplotsset@{
/pgfplots/colormap=@{hsvwhitestart@}@{
rgb255(0cm)=(255,255,255)
rgb255(0.10cm)=(128,0,128)
rgb255(0.5cm)=(0,0,230)
rgb255(1.cm)=(0,255,255)
rgb255(2.5cm)=(0,255,0)
rgb255(3.5cm)=(255,255,0)
rgb255(6cm)=(255,0,0)
@}
@}
%% Start the printable document
\begin@{document@}
You can write a full paper here and include many figures!
Describe what the two axes are, and how you measured them.
Also, do not forget to explain what it shows and how to interpret it.
You also have separate PDFs for every figure in the `tikz' directory.
Feel free to change this text.
%% Draw the plot.
\begin@{tikzpicture@}
\small
\begin@{axis@}[
width=\linewidth,
view=@{0@}@{90@},
colorbar horizontal,
xlabel=X axis,
ylabel=Y axis,
ylabel shift=-0.1cm,
colorbar style=@{at=@{(0,1.01)@}, anchor=south west,
xticklabel pos=upper@},
]
\addplot3[
surf,
shader=flat corner,
mesh/ordering=rowwise,
mesh/cols=XXXXXXXXX, %% <-- Number of bins in 1st column.
] file @{FILE.txt@}; %% <-- Name of aststatistics output.
\end@{axis@}
\end@{tikzpicture@}
%% End the printable document.
\end@{document@}
@end example
Let's assume you have put the @LaTeX{} source above, into a plain-text file called @file{report.tex}.
The PGFPlots call above is configured to build the plots as separate PDF files in a @file{tikz/} directory@footnote{@url{https://www.ctan.org/pkg/pgf, TikZ} is the name of the lower-level engine behind PGFPlots.}.
This allows you to directly load those PDFs in your slides or other reports.
Therefore, before building the PDF report, you should first make a @file{tikz/} directory:
@example
$ mkdir tikz
@end example
To build the final PDF, you should run @command{pdflatex} with the @option{--shell-escape} option, so it can build the separate PDF(s) in the @file{tikz/} directory.
We are also adding @option{--halt-on-error} so it immediately aborts in the case of an error (by default, @LaTeX{} will not abort on an error; it stops and asks for your input to temporarily change things and try fixing the error, through a special interface that can be hard to master).
@example
$ pdflatex --shell-escape --halt-on-error report.tex
@end example
@noindent
You can now open @file{report.pdf} to see your very high quality 2D histogram within your text.
And if you need the plots separately (for example, for slides), you can take the PDF inside the @file{tikz/} directory.
@node 2D histogram as an image, , 2D histogram as a table for plotting, 2D Histograms
@subsubsection 2D histogram as an image
When called with the @option{--histogram2d=image} option, Statistics will output a FITS file with an image/array extension.
If you asked for @option{--numbins=N} and @option{--numbins2=M} the image will have a size of @mymath{N\times M} pixels (one pixel per 2D bin).
Also, the FITS image will have a linear WCS that is scaled to the 2D bin size along each dimension.
So when you hover your mouse over any part of the image with a FITS viewer (for example, SAO DS9), besides the number of points in each pixel, you can also directly see the ``coordinates'' of that pixel along the two axes.
You can also use the optimized and fast FITS viewer features for many aspects of visually inspecting the distributions (which we will not go into further).
@cindex Color-magnitude diagram
@cindex Diagram, Color-magnitude
For example, let's assume you want to derive the color-magnitude diagram (CMD) of the @url{http://uvudf.ipac.caltech.edu, UVUDF survey}.
You can run the first command below to download the table with magnitudes of objects in many filters and run the second command to see general column metadata after it is downloaded.
@example
$ wget http://asd.gsfc.nasa.gov/UVUDF/uvudf_rafelski_2015.fits.gz
$ asttable uvudf_rafelski_2015.fits.gz -i
@end example
Let's assume you want the color to be the difference between the @code{F606W} and @code{F775W} filters (roughly corresponding to the g and r filters in ground-based imaging).
However, the original table does not have color columns (there would be too many combinations!).
Therefore you can use the @ref{Column arithmetic} feature of Gnuastro's Table program to derive a new table with the @code{F775W} magnitude in one column and the difference between the @code{F606W} and @code{F775W} magnitudes in the other.
With the second command, you can see the actual values if you like.
@example
$ asttable uvudf_rafelski_2015.fits.gz -cMAG_F775W \
-c'arith MAG_F606W MAG_F775W -' \
--colmetadata=ARITH_1,F606W-F775W,"AB mag" -ocmd.fits
$ asttable cmd.fits
@end example
@noindent
You can now construct your 2D histogram as a @mymath{100\times100} pixel FITS image with this command (assuming you want @code{F775W} magnitudes between 22 and 30, colors between -1 and 3 and 100 bins in each dimension).
Note that without the @option{--manualbinrange} option the range of each axis will be determined by the values within the columns (which may be larger or smaller than your desired range).
@example
$ aststatistics cmd.fits -cMAG_F775W,F606W-F775W --histogram2d=image \
--numbins=100 --greaterequal=22 --lessthan=30 \
--numbins2=100 --greaterequal2=-1 --lessthan2=3 \
--manualbinrange --output=cmd-2d-hist.fits
@end example
@noindent
If you have SAO DS9, you can now open this FITS file as a normal FITS image, for example, with the command below.
Try hovering/zooming over the pixels: not only will you see the number of objects in the UVUDF catalog that fall in each bin, but you will also see the @code{F775W} magnitude and color of that pixel.
@example
$ ds9 cmd-2d-hist.fits -cmap sls -zoom to fit
@end example
@noindent
With the first command below, you can activate the grid feature of DS9 to actually see the coordinate grid, as well as values on each line.
With the second command, DS9 will even read the labels of the axes and use them to generate an almost publication-ready plot.
@example
$ ds9 cmd-2d-hist.fits -cmap sls -zoom to fit -grid yes
$ ds9 cmd-2d-hist.fits -cmap sls -zoom to fit -grid yes \
-grid type publication
@end example
If you are happy with the grid, coloring and the rest, you can also use DS9 to save this as a JPEG image to directly use in your documents/slides with these extra DS9 options (DS9 will write the image to @file{cmd-2d.jpeg} and quit immediately afterwards):
@example
$ ds9 cmd-2d-hist.fits -cmap sls -zoom 4 -grid yes \
-grid type publication -saveimage cmd-2d.jpeg -quit
@end example
@cindex PGFPlots (@LaTeX{} package)
This is good for a fast progress update.
But for your paper or more official report, you want to show something with higher quality.
For that, you can use the PGFPlots package in @LaTeX{} to add axes in the same font as your text, sharp grids and many other elegant/powerful features (like over-plotting interesting points and lines).
But to load the 2D histogram into PGFPlots, you first need to convert the FITS image into a more standard format, for example, PDF.
We will use Gnuastro's @ref{ConvertType} for this, and use the @code{sls-inverse} color map (which will map the pixels with a value of zero to white):
@example
$ astconvertt cmd-2d-hist.fits --colormap=sls-inverse \
--borderwidth=0 -ocmd-2d-hist.pdf
@end example
@noindent
Below you can see a minimally working example of how to add axis numbers, labels and a grid to the PDF generated above.
Copy and paste the @LaTeX{} code below into a plain-text file called @file{cmd-report.tex}.
Notice the @code{xmin}, @code{xmax}, @code{ymin}, @code{ymax} values and how they are the same as the range specified above.
@example
\documentclass@{article@}
\usepackage@{pgfplots@}
\dimendef\prevdepth=0
\begin@{document@}
You can write all you want here...
\begin@{tikzpicture@}
\begin@{axis@}[
enlargelimits=false,
grid,
axis on top,
width=\linewidth,
height=\linewidth,
xlabel=@{Magnitude (F775W)@},
ylabel=@{Color (F606W-F775W)@}]
\addplot graphics[xmin=22, xmax=30, ymin=-1, ymax=3]
@{cmd-2d-hist.pdf@};
\end@{axis@}
\end@{tikzpicture@}
\end@{document@}
@end example
@noindent
Run this command to build your PDF (assuming you have @LaTeX{} and PGFPlots).
@example
$ pdflatex cmd-report.tex
@end example
The improved quality, blending in with the text, vector-graphics resolution and other features make this plot pleasing to the eye, and let your readers focus on the main point of your scientific argument.
PGFPlots can also build the PDF of the plot separately from the rest of the paper/report; see @ref{2D histogram as a table for plotting} for the necessary changes in the preamble.
@node Least squares fitting, Sky value, 2D Histograms, Statistics
@subsection Least squares fitting
@cindex Radial profile
@cindex Least squares fitting
@cindex Fitting (least squares)
@cindex Star formation main sequence
After completing a good observation, doing robust data reduction and finalizing the measurements, it is commonly necessary to parameterize the derived correlations.
For example, you have derived the radial profile of the PSF of your image (see @ref{Building the extended PSF}).
You may want to parameterize the radial profile to estimate the slope.
Alternatively, you may have found the star formation rate and stellar mass of your sample of galaxies.
Now, you want to derive the star formation main sequence as a parametric relation between the two.
The fitting functions below can be used for such purposes.
@cindex GSL
@cindex GNU Scientific Library
Gnuastro's least squares fitting features are just wrappers over the least squares fitting methods of the @url{https://www.gnu.org/software/gsl/doc/html/lls.html, linear} and @url{https://www.gnu.org/software/gsl/doc/html/nls.html, nonlinear} least-squares fitting functions of the GNU Scientific Library (GSL).
For the low-level details and equations of the methods, please see the GSL documentation.
The names have been preserved here in Gnuastro to simplify the connection with GSL and make it easy to follow the details in GSL's documentation.
GSL is a very low-level library, designed for power and maximal portability to many scenarios.
Therefore calling GSL's functions directly for a fast operation requires a good knowledge of the C programming language and many lines of code.
As a low-level library, GSL is designed to be the back-end of higher-level programs (like Gnuastro).
Through the Statistics program, Gnuastro provides a high-level interface to GSL's very powerful least squares fitting engine, reading from and writing to standard data formats in astronomy.
A fully working example is shown below.
@cindex Gaussian noise
@cindex Noise (Gaussian)
To activate fitting in Statistics, simply give your desired fitting method to the @option{--fit} option (for the full list of acceptable methods, see @ref{Fitting options}).
Statistics accepts both 1-dimensional and 2-dimensional inputs which we'll describe in the sub-sections below.
@menu
* One dimensional polynomial fitting:: Independent variable is a column.
* Two dimensional polynomial fitting:: Independent variable is an image or two columns.
@end menu
@node One dimensional polynomial fitting, Two dimensional polynomial fitting, Least squares fitting, Least squares fitting
@subsubsection One dimensional polynomial fitting
In @ref{Least squares fitting} an introduction was given for fitting in Gnuastro.
Here, we'll go through a hands-on review of 1D fitting.
For example, with the command below, we'll build a fake measurement table (including noise) from the polynomial @mymath{y=1.23-4.56x+7.89x^2}.
To understand how this equation translates to the command below (part before @code{set-y}), see @ref{Reverse polish notation} and @ref{Column arithmetic}.
We will set the X axis to have values from 0.1 to 2, with steps of 0.01, and add random Gaussian noise to each @mymath{y} measurement: @mymath{\sigma_y=0.1y}.
To make the random number generation exactly reproducible, we are also setting the seed (see @ref{Generating random numbers}, which also uses GSL as a backend).
To learn more about the @code{mknoise-sigma} operator, see the Arithmetic program's @ref{Random number generators}.
@example
$ export GSL_RNG_SEED=1664015492
$ seq 0.1 0.01 2 \
| asttable --output=noisy.fits --envseed -c1 \
-c'arith 1.23 -4.56 $1 x + 7.89 $1 x $1 x + set-y \
0.1 y x set-yerr \
y yerr mknoise-sigma yerr' \
--colmetadata=1,X --colmetadata=2,Y \
--colmetadata=3,Yerr
@end example
@noindent
Let's have a look at the output plot with TOPCAT using the command below.
@example
$ astscript-fits-view noisy.fits
@end example
@noindent
To see the error-bars, after opening the scatter plot, go into the ``Form'' tab for that plot.
Click on the button with a green ``+'' sign followed by ``Forms'' and select ``XYError''.
On the side-menu, in front of ``Y Positive Error'', select the @code{Yerr} column of the input table.
As you see, the error bars do indeed increase for higher X axis values.
Since we have error bars in this example (as in any measurement), we can use weighted fitting.
Also, this isn't a linear relation, so we'll use a polynomial to second order (a maximum power of 2 in the form of @mymath{Y=c_0+c_1X+c_2X^2}):
@example
$ aststatistics noisy.fits -cX,Y,Yerr --fit=polynomial-weighted \
--fitmaxpower=2
Statistics (GNU Astronomy Utilities) @value{VERSION}
-------
Fitting results (remove extra info with '--quiet' or '-q')
Input file: noisy.fits (hdu: 1) with 191 non-blank rows.
X column: X
Y column: Y
Weight column: Yerr [Standard deviation of Y in each row]
Fit function: Y = c0 + (c1 * X^1) + (c2 * X^2) + ... (cN * X^N)
N: 2
c0: +1.2286211608
c1: -4.5127796636
c2: +7.8435883943
Covariance matrix:
+0.0010496001 -0.0039928488 +0.0028367390
-0.0039928488 +0.0175244127 -0.0138030778
+0.0028367390 -0.0138030778 +0.0128129806
Reduced chi^2 of fit:
+0.9740670090
@end example
As you see from the elaborate message, the weighted polynomial fitting has found the @mymath{c_0}, @mymath{c_1} and @mymath{c_2} of @mymath{Y=c_0+c_1X+c_2X^2} that best represent the data we inserted.
Our input values were @mymath{c_0=1.23}, @mymath{c_1=-4.56} and @mymath{c_2=7.89}, and the fitted values are @mymath{c_0\approx1.2286}, @mymath{c_1\approx-4.5128} and @mymath{c_2\approx7.8436} (statistically a very good fit, given that we knew the original values a priori!).
The covariance matrix is also calculated; it is necessary for estimating error bars on the estimations and contains a lot of information (e.g., possible correlations between parameters).
Finally, the reduced @mymath{\chi^2} (or @mymath{\chi_{red}^2}) of the fit is also printed (which was the measure to minimize).
A @mymath{\chi_{red}^2\approx1} shows a good fit.
This is good for real-world scenarios when you don't know the original values a-priori.
For more on interpreting @mymath{\chi_{red}^2\approx1}, see Andrae et al. @url{https://arxiv.org/abs/1012.3754,2010}.
The comparison of fitted and input values look pretty good, but nothing beats visual inspection!
To see how this looks compared to the data, let's open the table again:
@example
$ astscript-fits-view noisy.fits
@end example
Repeat the steps above to show the scatter plot and error-bars.
Then, go to the ``Layers'' menu and select ``Add Function Control''.
Use the results above to fill the box in front of ``Function Expression'': @code{1.2286+(-4.5128*x)+(7.8436*x*x)}.
You will see that the second order polynomial falls very nicely over the points@footnote{After plotting, you will notice that the legend made the plot too thin.
Fortunately you have a lot of empty space within the plot.
To bring the legend in, click on the ``Legend'' item on the bottom-left menu, in the ``Location'' tab, click on ``Internal'' and hold and move it to the top-left in the box below.
To make the functional fit more clear, you can click on the ``Function'' item of the bottom-left menu.
In the ``Style'' tab, change the color and thickness.}.
But this fit is not perfect: it also has errors (inherited from the measurement errors).
We need the covariance matrix to estimate the errors on each point, and that can be complex to do by hand.
Fortunately GSL has the tools to easily estimate the function at any point and also calculate its corresponding error.
To access this feature within Gnuastro's Statistics program, you should use the @option{--fitestimate} option.
You can either give an independent table file name (with @option{--fitestimatehdu} and @option{--fitestimatecol} to specify the HDU and column in that file), or just @code{self} so it uses the same X axis column that was used in this fit.
Let's use the easier case:
@example
$ aststatistics noisy.fits -cX,Y,Yerr --fit=polynomial-weighted \
--fitmaxpower=2 --fitestimate=self --output=est.fits
...[[truncated; same as above]]...
Requested estimation:
Written to: est.fits
@end example
The first lines of the printed text are the same as before.
Afterwards, you will see a new line printed in the output, saying that the estimation was written in @file{est.fits}.
You can now inspect the two tables with TOPCAT again with the command below.
After TOPCAT opens, plot both scatter plots:
@example
$ astscript-fits-view noisy.fits est.fits
@end example
It is clear that they fall nicely on top of each other.
The @file{est.fits} table also has a third column with error bars.
You can follow the same steps before and draw the error bars to see how they compare with the scatter of the measured data.
They are much smaller than the error in each point because we had a very good sampling of the function in our noisy data.
Another useful point with the estimated output file is that it contains all the fitting outputs as keywords in the header:
@example
$ astfits est.fits -h1
...[[truncated]]...
/ Fit results
FITTYPE = 'polynomial-weighted' / Functional form of the fitting.
FITMAXP = 2 / Maximum power of polynomial.
FITIN = 'noisy.fits' / Name of file with input columns.
FITINHDU= '1 ' / Name or Number of HDU with input cols.
FITXCOL = 'X ' / Name or Number of independent (X) col.
FITYCOL = 'Y ' / Name or Number of measured (Y) column.
FITWCOL = 'Yerr ' / Name or Number of weight column.
FITWNAT = 'Standard deviation' / Nature of weight column.
FRDCHISQ= 0.974067008958516 / Reduced chi^2 of fit.
FITC0 = 1.22862116084727 / C0: multiple of x^0 in polynomial
FITC1 = -4.51277966356177 / C1: multiple of x^1 in polynomial
FITC2 = 7.84358839431161 / C2: multiple of x^2 in polynomial
FCOV11 = 0.00104960011629718 / Covariance matrix element (1,1).
FCOV12 = -0.00399284880859776 / Covariance matrix element (1,2).
FCOV13 = 0.00283673901863388 / Covariance matrix element (1,3).
FCOV21 = -0.00399284880859776 / Covariance matrix element (2,1).
FCOV22 = 0.0175244126670659 / Covariance matrix element (2,2).
FCOV23 = -0.0138030778380786 / Covariance matrix element (2,3).
FCOV31 = 0.00283673901863388 / Covariance matrix element (3,1).
FCOV32 = -0.0138030778380786 / Covariance matrix element (3,2).
FCOV33 = 0.0128129806394559 / Covariance matrix element (3,3).
...[[truncated]]...
@end example
In scenarios where you do not want the estimation, but only the fitted parameters, all that verbose, human-friendly text or FITS keywords can be an annoying extra step.
For such cases, you should use the @option{--quiet} option like below.
It will print the parameters, rows of the covariance matrix and @mymath{\chi_{red}^2} on separate lines with nothing extra.
This allows you to parse the values in any way that you would like.
@example
$ aststatistics noisy.fits -cX,Y,Yerr --fit=polynomial-weighted \
--fitmaxpower=2 --quiet
+1.2286211608 -4.5127796636 +7.8435883943
+0.0010496001 -0.0039928488 +0.0028367390
-0.0039928488 +0.0175244127 -0.0138030778
+0.0028367390 -0.0138030778 +0.0128129806
+0.9740670090
@end example
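For example, a sketch of extracting only the first line (the fitted coefficients) into a shell variable for later use could be:

@example
$ coeffs=$(aststatistics noisy.fits -cX,Y,Yerr \
                         --fit=polynomial-weighted \
                         --fitmaxpower=2 --quiet | sed -n 1p)
$ echo $coeffs
@end example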
As a final example, because real data usually have outliers, let's look at the ``robust'' polynomial fit which has special features to remove outliers.
First, we need to add some outliers to the table.
To do this, we'll make a plain-text table with @command{echo}, and use Table's @option{--catrowfile} to concatenate (or append) those two rows to the original table.
Finally, we'll run the same fitting step above:
@example
$ echo "0.6 20 0.01" > outliers.txt
$ echo "0.8 20 0.01" >> outliers.txt
$ asttable noisy.fits --catrowfile=outliers.txt \
--output=with-outlier.fits
$ aststatistics with-outlier.fits -cX,Y,Yerr --fit=polynomial-weighted \
--fitmaxpower=2 --fitestimate=self \
--output=est-out.fits
Statistics (GNU Astronomy Utilities) @value{VERSION}
-------
Fitting results (remove extra info with '--quiet' or '-q')
Input file: with-outlier.fits (hdu: 1) with 193 non-blank rows.
X column: X
Y column: Y
Weight column: Yerr [Standard deviation of Y in each row]
Fit function: Y = c0 + (c1 * X^1) + (c2 * X^2) + ... (cN * X^N)
N: 2
c0: -13.6446036899
c1: +66.8463258547
c2: -30.8746303591
Covariance matrix:
+0.0007889160 -0.0027706310 +0.0022208939
-0.0027706310 +0.0113922468 -0.0100306732
+0.0022208939 -0.0100306732 +0.0094087226
Reduced chi^2 of fit:
+4501.8356719150
Requested estimation:
Written to: est-out.fits
@end example
We see that the coefficient values have changed significantly and that @mymath{\chi_{red}^2} has increased to @mymath{4501}!
Recall that a good fit should have @mymath{\chi_{red}^2\approx1}.
These numbers clearly show that the fit was bad, but again, nothing beats a visual inspection.
To visually see the effect of those outliers, let's plot them with the command below.
You see that those two points have clearly caused a turn in the fitted result which is terrible.
@example
$ astscript-fits-view with-outlier.fits est-out.fits
@end example
For such cases, GSL has @url{https://www.gnu.org/software/gsl/doc/html/lls.html#robust-linear-regression, Robust linear regression}.
In Gnuastro's Statistics, you can access it with @option{--fit=polynomial-robust}, like the example below.
Just note that the robust method doesn't take an error column (because it estimates the errors internally while rejecting outliers, based on the method).
@example
$ aststatistics with-outlier.fits -cX,Y --fit=polynomial-robust \
--fitmaxpower=2 --fitestimate=self \
--output=est-out.fits --quiet
$ astfits est-out.fits -h1 | grep ^FITC
FITC0 = 1.20422691185238 / C0: multiple of x^0 in polynomial
FITC1 = -4.4779253576348 / C1: multiple of x^1 in polynomial
FITC2 = 7.84986153686548 / C2: multiple of x^2 in polynomial
$ astscript-fits-view with-outlier.fits est-out.fits
@end example
It is clear that the coefficients are very similar to the no-outlier scenario above and if you run the second command to view the scatter plots on TOPCAT, you also see that the fit nicely follows the curve and is not affected by those two points.
GSL provides many methods to reject outliers.
For their full list, see the description of @option{--fitrobust} in @ref{Fitting options}.
For a description of the outlier rejection methods, see the @url{https://www.gnu.org/software/gsl/doc/html/lls.html#c.gsl_multifit_robust_workspace, GSL manual}.
You may have noticed that, unlike the previous cases, the last Statistics command above did not print anything on the standard output.
This is because @option{--quiet} and @option{--fitestimate} were called together.
In this case, all the fitting parameters are written as FITS keywords, and because of the @option{--quiet} option, they are no longer printed on standard output.
@node Two dimensional polynomial fitting, , One dimensional polynomial fitting, Least squares fitting
@subsubsection Two dimensional polynomial fitting
In @ref{Least squares fitting} an introduction was given for fitting in Gnuastro, and one-dimensional fitting was reviewed in @ref{One dimensional polynomial fitting} (which is recommended reading before this section).
For a 2D polynomial fit, two independent variables are needed.
The two independent variables can be given either as an image or as table columns.
We'll start with an image here, and end with using table columns.
Let's use the Arithmetic program to generate an image from the following 2D polynomial: @mymath{Y=5+10X_2+X_1X_2}.
In a FITS image, @mymath{X_1} is the first FITS axis (horizontal) and @mymath{X_2} is the second FITS axis (vertical).
For the details of the operators, see @ref{Arithmetic operators}.
@example
$ astarithmetic 100 100 2 makenew indexonly set-i \
i 100 % 1 + set-X1 \
i 100 / 1 + set-X2 \
5 10 X2 x + X1 X2 x + f64 \
--output=pixels.fits
$ astscript-fits-view pixels.fits
@end example
The gradient of the 2D polynomial is clearly visible in the image.
You can directly give such an image to the Statistics program for fitting in a very similar way to @ref{One dimensional polynomial fitting}, as shown below (with some outputs printed):
@example
$ aststatistics pixels.fits --fit=polynomial --fitmaxpower=2
Statistics (GNU Astronomy Utilities) @value{VERSION}
-------
Fitting results (remove extra info with '--quiet' or '-q')
Input file: pixels.fits (hdu: 1) with 10000 non-blank rows.
X column: Horizontal position of non-NAN pixels.
Y column: Vertical position of non-NAN pixels.
Fit function [2d, Y=f(X1,X2)]: Y = c0 + c1.X1 + c2.X2 + c3.X1^2
+ c4.X1.X2 + c5.X2^2 + c6.X1^3 + c7.X1^2.X2 + c8.X1.X2^2 + c9.X2^3
+ ... + cn.X1^(j).X2^(d-j)
N: 2
c0: +5.000000000073776e+00
c1: +2.460254222569347e-13
c2: +9.999999999995385e+00
c3: -2.773822838086915e-15
c4: +1.000000000000002e+00
c5: +4.474025316891783e-14
Covariance matrix:
+3.318964094598024e-24 -6.837076956939867e-26 -6.837076956940090e-26
+3.786011000755381e-28 +4.497322158473032e-28 +3.786011000755588e-28
...
@end example
@noindent
Except for the functional form of the 2D fit and the number of constants, all the outputs are similar to those introduced in @ref{One dimensional polynomial fitting}.
In this context, the extremely small values (@mymath{\sim10^{-14}}) are just floating point errors and can be considered equal to zero.
We therefore see that the three constants that we had originally set are nicely determined.
But we are rarely lucky enough to have a value for the full plane like above.
Let's simulate such a situation by adding a noisy dataset (@code{n}, with each pixel having a random value from a Gaussian distribution with @mymath{\sigma=100}).
In the command below, we'll use that noisy dataset to randomly remove (set to NaN/blank) all pixels that have a value below 50 in it.
@example
$ astarithmetic 100 100 2 makenew indexonly set-i \
i 100 % 1 + set-X1 \
i 100 / 1 + set-X2 \
5 10 X2 x + X1 X2 x + f64 set-p \
i 0 constant 100 mknoise-sigma set-n \
p n 50 lt nan where --output=pixels.fits
$ astscript-fits-view pixels.fits
$ aststatistics pixels.fits --fit=polynomial --fitmaxpower=2
@end example
With the second command, you can see that many of the pixels have become NaN and from the third command, you can see that the fit is still good.
You can try decreasing the number of useful (non-NaN) pixels (by increasing the @code{50} threshold above) and see how robust the fit is to fewer pixels.
@cartouche
@noindent
@strong{@option{--fitestimate} not yet implemented in 2D}: as of this version, this option is not yet implemented in 2D.
We will try to implement it as soon as time allows.
Until then, the 2D image of a polynomial can be created with a formalism like the above: the @code{X1} and @code{X2} of every pixel are extracted and the constants are multiplied into each term of the polynomial.
@end cartouche
As mentioned at the start of this section, you can also feed your input data as table columns.
First, let's generate some table columns from the @file{pixels.fits} image we made above; we will then feed the resulting table to Statistics instead of the image.
The top command below will generate a 3 column table with the same number of rows as pixels in the image (one row per pixel).
The first and second columns will be the first (horizontal) and second (vertical) positions of the pixel and the third column will be the pixel value.
@example
$ astarithmetic pixels.fits set-i \
i 1 size set-width \
i indexonly set-ind \
ind width % 1 + u32 set-X1 \
ind width / 1 + u32 set-X2 \
i to-1d X2 to-1d X1 to-1d \
--writeall --output=pixels-tab-raw.fits
@end example
@noindent
If you inspect this table, you will notice that many rows are NaN (as expected!).
To remove the NaN rows, you can use Table's @option{--noblank} option:
@example
$ asttable pixels-tab-raw.fits --noblank=_all \
--output=for-2d-fit.fits
@end example
Now that we have the two independent variables (horizontal and vertical positions) along with the function/pixel value, we are ready to call Statistics for a 2D polynomial fit:
@example
$ aststatistics for-2d-fit.fits --fit=polynomial -c1,2,3 \
--fitmaxpower=2
@end example
@noindent
You will notice that the fitted parameters are the same as the case where you gave an image.
In fact, when given an image, Statistics does a similar thing to what we did here to extract the positions and values of all the non-NaN pixels into a table and then feed them to Gnuastro's internal fitting library.
@node Sky value, Invoking aststatistics, Least squares fitting, Statistics
@subsection Sky value
@cindex Sky
One of the most important aspects of a dataset is its reference value: the value of the dataset where there is no signal.
Without knowing, and thus removing the effect of, this value, it is impossible to compare the derived results of many high-level analyses over the dataset with other datasets (in the attempt to associate our results with the ``real'' world).
In astronomy, this reference value is known as the ``Sky'' value: the value that the noise fluctuates around; where there is no signal from detectable objects or artifacts (for example, galaxies, stars, planets or comets, star spikes or internal optical ghosts).
Depending on the dataset, the Sky value may be a fixed value over the whole dataset, or it may vary based on location.
For an example of the latter case, see Figure 11 in Akhlaghi and Ichikawa @url{https://arxiv.org/abs/1505.01664,2015}.
Because of the significance of the Sky value in astronomical data analysis, we have devoted this subsection to it for a thorough review.
We start with a thorough discussion on its definition (@ref{Sky value definition}).
In the astronomical literature, researchers use a variety of methods to estimate the Sky value, so in @ref{Sky value misconceptions} we review those and discuss their biases.
From the definition of the Sky value, the most accurate way to estimate the Sky value is to run a detection algorithm (for example, @ref{NoiseChisel}) over the dataset and use the undetected pixels.
However, when good direct detection is not initially possible (for example, due to too many cosmic rays in a shallow image), a more crude (but simpler) method may be useful; it is discussed in @ref{Quantifying signal in a tile}.
@menu
* Sky value definition:: Definition of the Sky/reference value.
* Sky value misconceptions:: Wrong methods to estimate the Sky value.
* Quantifying signal in a tile:: Method to estimate the presence of signal.
@end menu
@node Sky value definition, Sky value misconceptions, Sky value, Sky value
@subsubsection Sky value definition
@cindex Sky value
This analysis is taken from Akhlaghi and Ichikawa @url{https://arxiv.org/abs/1505.01664,2015}.
Let's assume that all instrument defects -- bias, dark and flat -- have been corrected and the magnitude (see @ref{Brightness flux magnitude}) of a detected object, @mymath{O}, is desired.
The sources of flux on pixel@footnote{For this analysis the dimension of the data (image) is irrelevant.
So if the data is an image (2D) with width of @mymath{w} pixels, then a pixel located on column @mymath{x} and row @mymath{y} (where all counting starts from zero and (0, 0) is located on the bottom left corner of the image), would have an index: @mymath{i=x+y\times{}w}.} @mymath{i} of the image can be written as follows:
@itemize
@item
Contribution from the target object (@mymath{O_i}).
@item
Contribution from other detected objects (@mymath{D_i}).
@item
Undetected objects or the fainter undetected regions of bright objects (@mymath{U_i}).
@item
@cindex Cosmic rays
A cosmic ray (@mymath{C_i}).
@item
@cindex Background flux
The background flux, which is defined to be the count if none of the others exists on that pixel (@mymath{B_i}).
@end itemize
@noindent
The total flux in this pixel (@mymath{T_i}) can thus be written as:
@dispmath{T_i=B_i+D_i+U_i+C_i+O_i.}
@cindex Cosmic ray removal
@noindent
By definition, @mymath{D_i} is detected and it can be assumed that it is correctly estimated (deblended) and subtracted; we can thus set @mymath{D_i=0}.
There are also methods to detect and remove cosmic rays, for example, the method described in van Dokkum (2001)@footnote{van Dokkum, P. G. (2001).
Publications of the Astronomical Society of the Pacific. 113, 1420.}, or by comparing multiple exposures.
This allows us to set @mymath{C_i=0}.
Note that in practice, @mymath{D_i} and @mymath{U_i} are correlated, because they both directly depend on the detection algorithm and its input parameters.
Also note that no detection or cosmic ray removal algorithm is perfect.
With these limitations in mind, the observed Sky value for this pixel (@mymath{S_i}) can be defined as
@cindex Sky value
@dispmath{S_i\equiv{}B_i+U_i.}
@noindent
Therefore, as the detection process (algorithm and input parameters) becomes more accurate, or @mymath{U_i\to0}, the Sky value will tend to the background value or @mymath{S_i\to B_i}.
Hence, we see that while @mymath{B_i} is an inherent property of the data (pixel in an image), @mymath{S_i} depends on the detection process.
Over a group of pixels, for example, in an image or part of an image, this equation translates to the average of the undetected pixels (Sky@mymath{=\sum{S_i}/N}, where @mymath{N} is the number of undetected pixels).
With this definition of Sky, the object flux in the data can be calculated, per pixel, with
@dispmath{ T_{i}=S_{i}+O_{i} \quad\rightarrow\quad
O_{i}=T_{i}-S_{i}.}
@cindex photo-electrons
In the fainter outskirts of an object, a very small fraction of the photo-electrons in a pixel actually belongs to objects; the rest is caused by random factors (noise), see Figure 1b in Akhlaghi and Ichikawa @url{https://arxiv.org/abs/1505.01664,2015}.
Therefore even a small over-estimation of the Sky value will result in the loss of a very large portion of most galaxies.
Besides the lost area/brightness, this will also cause an over-estimation of the Sky value and thus even more under-estimation of the object's magnitude.
It is thus very important to detect the diffuse flux of objects, even if they are not your primary target.
In summary, the more accurately the Sky is measured, the more accurately the magnitude (calculated from the sum of pixel values) of the target object can be measured (photometry).
Any under/over-estimation in the Sky will directly translate to an over/under-estimation of the measured object's magnitude.
@cartouche
@noindent
The @strong{Sky value} is only correctly found when all the detected
objects (@mymath{D_i} and @mymath{C_i}) have been removed from the data.
@end cartouche
@node Sky value misconceptions, Quantifying signal in a tile, Sky value definition, Sky value
@subsubsection Sky value misconceptions
As defined in @ref{Sky value}, the Sky value is only accurately defined when the detection algorithm, in particular its detection threshold, is not significantly reliant on the Sky value.
However, most signal-based detection tools@footnote{According to Akhlaghi and Ichikawa (2015), signal-based detection is a detection process that relies heavily on assumptions about the to-be-detected objects.
This method was the most heavily used technique prior to the introduction of NoiseChisel in that paper.} use the sky value as a reference to define the detection threshold.
These older techniques therefore had to rely on approximations based on other assumptions about the data.
A review of those other techniques can be seen in Appendix A of Akhlaghi and Ichikawa @url{https://arxiv.org/abs/1505.01664,2015}.
These methods were extensively used in astronomical data analysis for several decades, therefore they have given rise to a lot of misconceptions, ambiguities and disagreements about the sky value and how to measure it.
As a summary, the major methods used until now were an approximation of the mode of the image pixel distribution and @mymath{\sigma}-clipping.
@itemize
@cindex Histogram
@cindex Distribution mode
@cindex Mode of a distribution
@cindex Probability density function
@item
To find the mode of a distribution those methods would either have to assume (or find) a certain probability density function (PDF) or use the histogram.
But astronomical datasets can have any distribution, making it almost impossible to define a generic function.
Also, histogram-based results are very inaccurate (there is a large dispersion) and it depends on the histogram bin-widths.
Generally, the mode of a distribution also shifts as signal is added.
Therefore, even if it is accurately measured, the mode is a biased measure for the Sky value.
@cindex Sigma-clipping
@item
Another approach was to iteratively clip the brightest pixels in the image (which is known as @mymath{\sigma}-clipping).
See @ref{Sigma clipping} for a complete explanation.
@mymath{\sigma}-clipping is useful when there are clear outliers (an object with a sharp edge in an image for example).
However, real astronomical objects have diffuse and faint wings that penetrate deeply into the noise, see Figure 1 in Akhlaghi and Ichikawa @url{https://arxiv.org/abs/1505.01664,2015}.
@end itemize
As discussed in @ref{Sky value}, the sky value can only be correctly defined as the average of undetected pixels.
Therefore all such approaches that try to approximate the sky value prior to detection are ultimately poor approximations.
@node Quantifying signal in a tile, , Sky value misconceptions, Sky value
@subsubsection Quantifying signal in a tile
In order to define detection thresholds on the image, or calibrate it for measurements (subtract the signal of the background sky and define errors), we need some basic measurements.
For example, the quantile threshold in NoiseChisel (@option{--qthresh} option), or the mean of the undetected regions (Sky) and the Sky standard deviation (Sky STD) which are the output of NoiseChisel and Statistics.
But astronomical images will contain a lot of stars and galaxies that will bias those measurements if not properly accounted for.
Quantifying where signal is present is thus a very important step in the usage of a dataset; for example, if the Sky level is over-estimated, your target object's magnitude will be under-estimated.
@cindex Data
@cindex Noise
@cindex Signal
@cindex Gaussian distribution
Let's start by clarifying some definitions:
@emph{Signal} is defined as the non-random source of flux in each pixel (you can think of this as the mean in a Gaussian or Poisson distribution).
In astronomical images, signal is mostly the photons coming from a star or galaxy that are counted in each pixel.
@emph{Noise} is defined as the random source of flux in each pixel (or the standard deviation of a Gaussian or Poisson distribution).
Noise is mainly due to counting errors in the detector electronics upon data collection.
@emph{Data} is defined as the combination of signal and noise (so a noisy image of a galaxy is one @emph{data}set).
When a dataset does not have any signal (for example, you take an image with a closed shutter, producing an image that only contains noise), the mean, median and mode of the distribution are equal within statistical errors.
Signal from emitting objects, like astronomical targets, always has a positive value and will never become negative, see Figure 1 in Akhlaghi and Ichikawa @url{https://arxiv.org/abs/1505.01664,2015}.
Therefore, when signal is added to the data (you take an image with an open shutter pointing to a galaxy for example), the mean, median and mode of the dataset shift to the positive, creating a positively skewed distribution.
The shift of the mean is the largest.
The median shifts less, since it is defined after ordering all the elements/pixels (the median is the value at a quantile of 0.5), so it is less affected by outliers.
Finally, the mode's shift to the positive is the smallest.
@cindex Mean
@cindex Median
@cindex Quantile
Inverting the argument above gives us a robust method to quantify the significance of signal in a dataset: when the mean and median of a distribution are approximately equal we can argue that there is no significant signal.
In other words: when the quantile of the mean (@mymath{q_{mean}}) is around 0.5.
This definition of skewness through the quantile of the mean is further introduced with a real image in the tutorials; see @ref{Skewness caused by signal and its measurement}.
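For example, you can directly measure the quantile of the mean of a whole dataset with the @option{--quantofmean} option of Statistics (described in @ref{Single value measurements}); the file name below is only illustrative:
@example
$ aststatistics image.fits --quantofmean
@end example
A value close to 0.5 suggests no significant signal, while values closer to 1 indicate a positively skewed (signal-containing) distribution.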
@cindex Signal-to-noise ratio
However, in an astronomical image, some of the pixels will contain more signal than the rest, so we cannot simply check @mymath{q_{mean}} on the whole dataset.
For example, if we only look at the patch of pixels that are placed under the central parts of the brightest stars in the field of view, @mymath{q_{mean}} will be very high.
The signal in other parts of the image will be weaker, and in some parts it will be much smaller than the noise (for example, 1/100-th of the noise level).
When the signal-to-noise ratio is very small, we can generally assume there is no signal (because it is effectively impossible to measure) and @mymath{q_{mean}} will be approximately 0.5.
To address this problem, we break the image into a grid of tiles@footnote{The options to customize the tessellation are discussed in @ref{Processing options}.} (see @ref{Tessellation}).
For example, a tile can be a square box of size @mymath{30\times30} pixels.
By measuring @mymath{q_{mean}} on each tile, we can find the tiles that contain significant signal and ignore them.
Technically, if a tile's @mymath{|q_{mean}-0.5|} is larger than the value given to the @option{--meanmedqdiff} option, that tile will be ignored for the next steps.
You can read this option as ``mean-median-quantile-difference''.
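For example, the illustrative Statistics call below would only use tiles whose mean's quantile lies between 0.49 and 0.51 for the initial Sky estimation:
@example
$ aststatistics image.fits --sky --meanmedqdiff=0.01
@end example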
@cindex Skewness
@cindex Convolution
The raw dataset's pixel distribution (in each tile) is noisy; to decrease the noise/error in estimating @mymath{q_{mean}}, we convolve the image before tessellation (see @ref{Convolution process}).
Convolution decreases the range of the dataset and enhances its skewness; see Section 3.1.1 and Figure 4 in Akhlaghi and Ichikawa @url{https://arxiv.org/abs/1505.01664,2015}.
This enhanced skewness can be interpreted as an increase in the signal-to-noise ratio of the objects buried in the noise.
Therefore, to obtain an even better measure of the presence of signal in a tile, the mean and median discussed above are measured on the convolved image.
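For example, in Statistics you can specify the kernel for this convolution yourself (the kernel file name here is hypothetical):
@example
$ aststatistics image.fits --sky --kernel=kernel.fits
@end example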
@cindex Cosmic rays
There is one final hurdle: raw astronomical datasets are commonly peppered with Cosmic rays.
Images of Cosmic rays are not smoothed by the atmosphere or telescope aperture, so they have sharp boundaries.
Also, since they do not occupy many pixels, they do not affect the mode and median calculations.
But their very high values can greatly bias the calculation of the mean (recall how the mean shifts the fastest in the presence of outliers), for example, see Figure 15 in Akhlaghi and Ichikawa @url{https://arxiv.org/abs/1505.01664,2015}.
The effect of outliers like cosmic rays on the mean and standard deviation can be removed through @mymath{\sigma}-clipping, see @ref{Sigma clipping} for a complete explanation.
Therefore, after asserting that the mean and median are approximately equal in a tile (see @ref{Tessellation}), the Sky and its STD are measured on each tile after @mymath{\sigma}-clipping with the @option{--sigmaclip} option (see @ref{Sigma clipping}).
In the end, some of the tiles will pass the test and will be given a value.
Others (that had signal in them) will just be assigned a NaN (not-a-number) value.
But we need a measurement over each tile (and thus pixel).
We will therefore use interpolation to assign a value to the NaN tiles.
However, prior to interpolating over the failed tiles, another point should be considered: large and extended galaxies, or bright stars, have wings which sink into the noise very gradually.
In some cases, the gradient over these wings can be on scales larger than the tiles (for example, the pixel value changes by @mymath{0.1\sigma} over 100 pixels, while a tile is only 30 pixels wide).
In such cases, the @mymath{q_{mean}} test will be successful, even though there is signal.
Recall that @mymath{q_{mean}} is a measure of skewness.
If we do not identify (and thus set to NaN) such outlier tiles before the interpolation, the photons of the outskirts of the objects will leak into the detection thresholds or Sky and Sky STD measurements and bias our result, see @ref{Detecting large extended targets}.
Therefore, the final step of ``quantifying signal in a tile'' is to look at this distribution of successful tiles and remove the outliers.
@mymath{\sigma}-clipping is a good solution for removing a few outliers, but the problem with outliers of this kind is that there may be many such tiles (depending on the large/bright stars/galaxies in the image).
We therefore apply the following local outlier rejection strategy.
For each tile, we find the nearest @mymath{N_{ngb}} tiles that had a usable value (@mymath{N_{ngb}} is the value given to @option{--outliernumngb}).
We then sort them and find the difference between the largest and second-to-smallest elements (the minimum itself is not used because its scatter can be large).
Let's call this the tile's @emph{slope} (measured from its neighbors).
All the tiles that are on a region of flat noise will have similar slope values, but if a few tiles fall on the wings of a bright star or large galaxy, their slope will be significantly larger than the tiles with no signal.
We just have to find the smallest tile slope value that is an outlier compared to the rest, and reject all tiles with a slope larger than that.
@cindex Outliers
@cindex Identifying outliers
To identify the smallest outlier, we will use the distribution of distances between sorted elements.
Let's assume the total number of tiles with a good mean-median quantile difference is @mymath{N}.
They are first sorted, and the search for the outlier starts at element @mymath{N/3} (integer division).
Let's take @mymath{v_i} to be the @mymath{i}-th element of the sorted input (with no blank values), and @mymath{m} and @mymath{\sigma} to be the @mymath{\sigma}-clipped median and standard deviation of the distances between the previous @mymath{N/3-1} elements (not including @mymath{v_i}).
If the value given to @option{--outliersigma} is denoted by @mymath{s}, the @mymath{i}-th element is considered an outlier when the condition below holds.
@dispmath{{(v_i-v_{i-1})-m\over \sigma}>s}
@noindent
Since @mymath{i} begins from the @mymath{N/3}-th element in the sorted array (a quantile of @mymath{1/3=0.33}), the outlier has to be larger than the @mymath{0.33} quantile value of the dataset (this is usually the case; otherwise, it is hard to define it as an ``outlier''!).
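As a rough sketch of how these options combine in practice (all values below are only illustrative), a Sky measurement with Statistics could configure the outlier rejection like this, using the 15 nearest good tiles for the slope measurement and rejecting slopes that are more than 10 units of the clipped scatter beyond the rest:
@example
$ aststatistics image.fits --sky --outliernumngb=15 \
                --outliersigma=10
@end example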
@cindex Bicubic interpolation
@cindex Interpolation, bicubic
@cindex Nearest-neighbor interpolation
@cindex Interpolation, nearest-neighbor
Once the outlying tiles have been successfully identified and set to NaN, we use nearest-neighbor interpolation to give a value to all tiles in the image.
We do not use parametric interpolation methods (like bicubic), because they will effectively extrapolate on the edges, creating strong artifacts.
Nearest-neighbor interpolation is very simple: for each tile, we find the @mymath{N_{ngb}} nearest tiles that had a good value and take their median as the tile's value.
You can set @mymath{N_{ngb}} through the @option{--interpnumngb} option.
Once all the tiles are given a value, a smoothing step is implemented to remove the sharp value contrast that can happen on the edges of tiles.
The size of the smoothing box is set with the @option{--smoothwidth} option.
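For example, the illustrative call below would interpolate each blank tile with the median of its 15 nearest good tiles, then smooth the tile grid with a 3-element-wide flat kernel:
@example
$ aststatistics image.fits --sky --interpnumngb=15 \
                --smoothwidth=3
@end example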
As mentioned earlier, this process is used for any of the basic measurements (for example, identifying the quantile-based thresholds in NoiseChisel, or the Sky value in Statistics).
You can use the check-image feature of NoiseChisel or Statistics to inspect each step visually (all the options that start with @option{--check}).
For example, as mentioned in the @ref{NoiseChisel optimization} tutorial, when given a dataset from a new instrument (with differing noise properties), we highly recommend using @option{--checkqthresh} in your first call and visually inspecting how the parameters above affect the final quantile threshold (e.g., have the wings of bright sources leaked into the threshold?).
The same goes for the @option{--checksky} option of Statistics or NoiseChisel.
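For example, the two calls below (on a hypothetical @file{image.fits}) will each produce a multi-extension FITS file that you can open in a FITS viewer to follow every step:
@example
$ astnoisechisel image.fits --checkqthresh
$ aststatistics  image.fits --sky --checksky
@end example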
@node Invoking aststatistics, , Sky value, Statistics
@subsection Invoking Statistics
Statistics will print statistical measures of an input dataset (table column or image).
The executable name is @file{aststatistics} with the following general template
@example
$ aststatistics [OPTION ...] InputImage.fits
@end example
@noindent
One line examples:
@example
## Print some general statistics of input image:
$ aststatistics image.fits
## Print some general statistics of column named MAG_F160W:
$ aststatistics catalog.fits -h1 --column=MAG_F160W
## Make the histogram of the column named MAG_F160W:
$ aststatistics table.fits -cMAG_F160W --histogram
## Find the Sky value on image with a given kernel:
$ aststatistics image.fits --sky --kernel=kernel.fits
## Print Sigma-clipped results of records with a MAG_F160W
## column value between 26 and 27:
$ aststatistics cat.fits -cMAG_F160W -g26 -l27 --sigmaclip=3,0.2
## Find the polynomial (to third order) that best fits the X and Y
## columns of 'table.fits'. Robust fitting will be used to reject
## outliers. Also, estimate the fitted polynomial on the same input
## column (with errors).
$ aststatistics table.fits --fit=polynomial-robust --fitmaxpower=3 \
-cX,Y --fitestimate=self --output=estimated.fits
## Print the median value of all records in column MAG_F160W that
## have a value larger than 3 in column PHOTO_Z:
$ aststatistics tab.txt -rPHOTO_Z -g3 -cMAG_F160W --median
## Calculate the median of the third column in the input table, but only
## for rows where the mean of the first and second columns is >5.
$ awk '($1+$2)/2 > 5 @{print $3@}' table.txt | aststatistics --median
@end example
@noindent
@cindex Standard input
Statistics can take its input dataset either from a file (image or table) or the Standard input (see @ref{Standard input}).
If any output file is to be created, the value given to the @option{--output} option is used as the base name for the generated files.
Without @option{--output}, the input name will be used to generate an output name, see @ref{Automatic output}.
The options described below are particular to Statistics, but for general operations, it shares a large collection of options with the other Gnuastro programs, see @ref{Common options} for the full list.
For more on reading from standard input, please see the description of @code{--stdintimeout} option in @ref{Input output options}.
Options can also be given in configuration files, for more, please see @ref{Configuration files}.
The input dataset may have blank values (see @ref{Blank pixels}), in this case, all blank pixels are ignored during the calculation.
Initially, the full dataset will be read, but it is possible to select a specific range of data elements to use in the analysis of each run.
You can either directly specify a minimum and maximum value for the range of data elements to use (with @option{--greaterequal} or @option{--lessthan}), or specify the range using quantiles (with @option{--qrange}).
If a range is specified, all pixels outside of it are ignored before any processing.
@cindex ASCII plot
When no operation is requested, Statistics will print some general basic properties of the input dataset on the command-line like the example below (run on one of the output images of @command{make check}@footnote{You can try it by running the command in the @file{tests} directory; open the image with a FITS viewer and have a look at it to get a sense of how these statistics relate to the input image/dataset.}).
This default behavior is designed to help give you a general feeling of how the data are distributed and help in narrowing down your analysis.
@example
$ aststatistics convolve_spatial_scaled_noised.fits \
--greaterequal=9500 --lessthan=11000
Statistics (GNU Astronomy Utilities) X.X
-------
Input: convolve_spatial_scaled_noised.fits (hdu: 0)
Range: from (inclusive) 9500, upto (exclusive) 11000.
Unit: counts
-------
Number of elements: 9074
Minimum: 9622.35
Maximum: 10999.7
Mode: 10055.45996
Mode quantile: 0.4001983908
Median: 10093.7
Mean: 10143.98257
Standard deviation: 221.80834
-------
Histogram:
| **
| ******
| *******
| *********
| *************
| **************
| ******************
| ********************
| *************************** *
| ***************************************** ***
|* **************************************************************
|-----------------------------------------------------------------
@end example
Gnuastro's Statistics is a very general purpose program, so to easily understand this diversity of operations (and how to possibly run them together), we will divide them into two types: those that do not respect the position of the elements and those that do (by tessellating the input on a tile grid, see @ref{Tessellation}).
The former treat the whole dataset as one and can re-arrange all the elements (for example, sort them), while the latter do their processing on each tile independently.
First, we will review the operations that work on the whole dataset.
@cindex AWK
@cindex GNU AWK
The group of options below can be used to get single value measurement(s) of the whole dataset.
They will print only the requested value as one field in a line/row, like the @option{--mean} and @option{--median} options.
These options can be called any number of times and in any order.
The outputs of all such options will be printed on one line following each other (with a space character between them).
This feature makes these options very useful in scripts, or to redirect into programs like GNU AWK for higher-level processing.
These are some of the most basic measures; Gnuastro is still under heavy development and this list will grow.
If you want another statistical parameter, please contact us and we will do our best to add it to this list, see @ref{Suggest new feature}.
@menu
* Input to Statistics:: How to specify the inputs to Statistics.
* Single value measurements:: Can be used together (like --mean, or --maximum).
* Generating histograms and cumulative frequency plots:: Histogram and CFP tables.
* Fitting options:: Least squares fitting.
* Contour options:: Table of contours.
* Statistics on tiles:: Possible to do single-valued measurements on tiles.
@end menu
@node Input to Statistics, Single value measurements, Invoking aststatistics, Invoking aststatistics
@subsubsection Input to Statistics
The following set of options are for specifying the input/outputs of Statistics.
There are many other input/output options that are common to all Gnuastro programs including Statistics, see @ref{Input output options} for those.
@table @option
@item -c STR/INT
@itemx --column=STR/INT
The column to use when the input file is a table with more than one column.
See @ref{Selecting table columns} for a full description of how to use this option.
For more on how tables are read in Gnuastro, please see @ref{Tables}.
@item -g FLT
@itemx --greaterequal=FLT
Limit the range of inputs into those with values greater and equal to what is given to this option.
None of the values below this value will be used in any of the processing steps below.
@item -l FLT
@itemx --lessthan=FLT
Limit the range of inputs into those with values less-than what is given to this option.
None of the values greater or equal to this value will be used in any of the processing steps below.
@item -Q FLT[,FLT]
@itemx --qrange=FLT[,FLT]
Specify the range of usable inputs using the quantile.
This option can take one or two quantiles to specify the range.
When only one number is input (let's call it @mymath{Q}), the range will be those values in the quantile range @mymath{Q} to @mymath{1-Q}.
So when only one value is given, it must be less than 0.5.
When two values are given, the first is used as the lower quantile and the second as the upper quantile of the range.
@cindex Quantile
The quantile of a given element in a dataset is defined by the fraction of its index to the total number of values in the sorted input array.
So the smallest and largest values in the dataset have a quantile of 0.0 and 1.0.
The quantile is a very useful non-parametric (making no assumptions about the input) relative measure to specify a range.
It can best be understood in terms of the cumulative frequency plot, see @ref{Histogram and Cumulative Frequency Plot}.
The quantile of each horizontal axis value in the cumulative frequency plot is the vertical axis value associated with it.
@end table
@node Single value measurements, Generating histograms and cumulative frequency plots, Input to Statistics, Invoking aststatistics
@subsubsection Single value measurements
@table @option
@item -n
@itemx --number
Print the number of all used (non-blank and in range) elements.
@item --minimum
Print the minimum value of all used elements.
@item --maximum
Print the maximum value of all used elements.
@item --sum
Print the sum of all used elements.
@item -m
@itemx --mean
Print the mean (average) of all used elements.
@item -t
@itemx --std
Print the standard deviation of all used elements.
@item --mad
Print the median absolute deviation (MAD) of all used elements.
@item -E
@itemx --median
Print the median of all used elements.
@item -u FLT[,FLT[,...]]
@itemx --quantile=FLT[,FLT[,...]]
Print the values at the given quantiles of the input dataset.
Any number of quantiles may be given and one number will be printed for each.
Values can either be written as a single number or as fractions, but must be between zero and one (inclusive).
Hence, in effect @option{--quantile=0.25 --quantile=0.75} is equivalent to @option{--quantile=0.25,3/4}, or @option{-u1/4,3/4}.
The returned value is one of the elements from the dataset.
Taking @mymath{q} to be your desired quantile, and @mymath{N} to be the total number of used (non-blank and within the given range) elements, the returned value is at the following position in the sorted array: @mymath{round(q\times{}N)}.
@item --quantfunc=FLT[,FLT[,...]]
Print the quantiles of the given values in the dataset.
This option is the inverse of @option{--quantile} and operates similarly, except that the acceptable values must be within the range of the dataset, not between 0 and 1.
Formally, it is known as the ``quantile function''.
Since the dataset is not continuous, this function will find the nearest element of the dataset and use its position to estimate the quantile function.
@item --quantofmean
@cindex Quantile of the mean
Print the quantile of the mean in the dataset.
This is a very good measure of detecting skewness or outliers.
The concept is used by programs like NoiseChisel to identify the presence of signal in a tile of the image (because signal in noise causes skewness).
For example, take this simple array: @code{1 2 20 4 5 6 3}.
The mean is @code{5.85}.
The nearest element to this mean is @code{6} and the quantile of @code{6} in this distribution is 0.8333.
Here is how we got to this: in the sorted dataset (@code{1 2 3 4 5 6 20}), @code{6} is the 5-th element (counting from zero, since a quantile of zero corresponds to the minimum, by definition) and the maximum is the 6-th element (again, counting from zero).
So the quantile of the mean in this case is @mymath{5/6=0.8333}.
In the example above, if we had @code{7} instead of @code{20} (which was an outlier), then the mean would be @code{4} and the quantile of the mean would be 0.5 (which by definition, is the quantile of the median), showing no outliers.
As the number of elements increases, the mean itself is less affected by a small number of outliers, but skewness can be nicely identified by the quantile of the mean.
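If you would like to try the toy example above yourself, one way (sketched below) is to feed the values to Statistics through the standard input (see @ref{Standard input}):
@example
$ printf "1\n2\n20\n4\n5\n6\n3\n" | aststatistics --quantofmean
@end example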
@item -O
@itemx --mode
Print the mode of all used elements.
The mode is found through the mirror distribution which is fully described in Appendix C of Akhlaghi and Ichikawa @url{https://arxiv.org/abs/1505.01664,2015}.
See that section for a full description.
This mode calculation algorithm is non-parametric, so when the dataset is not large enough (roughly, fewer than 1000 elements), or does not have a clear mode, it can fail.
In such cases, this option will return a value of @code{nan} (for the floating point NaN value).
As described in that paper, the easiest way to assess the quality of this mode calculation method is to use its symmetricity (see @option{--modesym} below).
A better way would be to use the @option{--mirror} option to generate the histogram and cumulative frequency tables for any given mirror value (the mode in this case) as a table.
If you generate plots like those shown in Figure 21 of that paper, then your mode is accurate.
@item --modequant
Print the quantile of the mode.
You can get the actual mode value from the @option{--mode} described above.
In many cases, the absolute value of the mode is irrelevant, but its position within the distribution is important.
In such cases, this option will become handy.
@item --modesym
Print the symmetricity of the calculated mode.
See the description of @option{--mode} for more.
This mode algorithm finds the mode based on how symmetric it is, so if the symmetricity returned by this option is too low, the mode is not too accurate.
See Appendix C of Akhlaghi and Ichikawa @url{https://arxiv.org/abs/1505.01664,2015} for a full description.
In practice, symmetricity values larger than 0.2 are mostly good.
@item --modesymvalue
Print the value in the distribution where the mirror and input distributions are no longer symmetric, see @option{--mode} and Appendix C of Akhlaghi and Ichikawa @url{https://arxiv.org/abs/1505.01664,2015} for more.
@item --sigclip-std
@itemx --sigclip-mad
@itemx --sigclip-mean
@itemx --sigclip-number
@itemx --sigclip-median
Calculate the desired statistic after applying @mymath{\sigma}-clipping (see @ref{Sigma clipping}, part of the tutorial @ref{Clipping outliers}).
@mymath{\sigma}-clipping configuration is done with the @option{--sclipparams} option.
@cindex Outlier
Here is one scenario where this can be useful: assume you have a table and you would like to remove the rows that are outliers (not within the @mymath{\sigma}-clipping range).
Let's assume your table is called @file{table.fits} and you only want to keep the rows that have a value in @code{COLUMN} within the @mymath{\sigma}-clipped range (to @mymath{3\sigma}, with a tolerance of 0.1).
This command will return the @mymath{\sigma}-clipped median and standard deviation (used to define the range later).
@example
$ aststatistics table.fits -cCOLUMN --sclipparams=3,0.1 \
--sigclip-median --sigclip-std
@end example
@cindex GNU AWK
You can then use the @option{--range} option of Table (see @ref{Table}) to select the proper rows.
But for that, you need the actual starting and ending values of the range (@mymath{m\pm s\sigma}; where @mymath{m} is the median and @mymath{s} is the multiple of sigma to define an outlier).
Therefore, the raw outputs of Statistics in the command above are not enough.
To get the starting and ending values of the non-outlier range (and put a `@key{,}' between them, ready to be used in @option{--range}), pipe the result into AWK.
But in AWK, we will also need the multiple of @mymath{\sigma}, so we will define it as a shell variable (@code{s}) before calling Statistics (note how @code{$s} is used two times now):
@example
$ s=3
$ aststatistics table.fits -cCOLUMN --sclipparams=$s,0.1 \
--sigclip-median --sigclip-std \
| awk '@{s='$s'; printf("%f,%f\n", $1-s*$2, $1+s*$2)@}'
@end example
To pass it onto Table, we will need to keep the printed output from the command above in another shell variable (@code{r}), not print it.
In Bash, we can do this by putting the whole statement within @code{$()}:
@example
$ s=3
$ r=$(aststatistics table.fits -cCOLUMN --sclipparams=$s,0.1 \
--sigclip-median --sigclip-std \
| awk '@{s='$s'; printf("%f,%f\n", $1-s*$2, $1+s*$2)@}')
$ echo $r # Just to confirm.
@end example
Now you can use Table with the @option{--range} option to only print the rows that have a value in @code{COLUMN} within the desired range:
@example
$ asttable table.fits --range=COLUMN,$r
@end example
To save the resulting table (that is clean of outliers) in another file (for example, named @file{cleaned.fits}; it can also have a @file{.txt} suffix), just add @option{--output=cleaned.fits} to the command above.
@item --madclip-std
@itemx --madclip-mad
@itemx --madclip-mean
@itemx --madclip-number
@itemx --madclip-median
Calculate the desired statistic after applying median absolute deviation (MAD) clipping (see @ref{MAD clipping}, part of the tutorial @ref{Clipping outliers}).
MAD-clipping configuration is done with the @option{--mclipparams} option.
This option behaves similarly to @option{--sigclip-*} options, read their description for usage examples.
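For example, a MAD-clipped median and MAD of a hypothetical @code{COLUMN} (the clipping parameters here are only illustrative) can be measured like this:
@example
$ aststatistics table.fits -cCOLUMN --mclipparams=7,0.1 \
                --madclip-median --madclip-mad
@end example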
@item --concentration=FLT[,FLT[,...]]
Return the ``concentration'' around the median (see rest of this description for the definition); the input value(s) are the quantile width(s) where it is measured.
For a uniform distribution, the output of this operation will be approximately @mymath{1.0}.
The denser the values around the median, the larger the output: it will be larger for a Gaussian distribution, and even larger for distributions that are more concentrated than a Gaussian.
This is the algorithm used to measure this value:
@enumerate
@item
Sort the input dataset and remove all blank values.
If there is one non-blank value or less, then return NaN.
@item
The minimum and maximum are respectively selected to be the second and second-to-last elements in the sorted array.
The first and last elements are not selected as minimum and maximum because they are affected too strongly by scatter.
@item
Subtract the minimum from each element and divide the result by the difference between the maximum and minimum.
After this operation, the input's values@footnote{Technically, the second sorted value will be 0 and the second-to-last value will be 1.} will be between 0 and 1.
This scaling does not change the order of the input elements; instead, each element's value now shows its relation to the range of the whole distribution's values (the minimum and maximum values above).
@item
Calculate the scaled values corresponding to quantiles that are defined by the width above.
For example, if the given width (the value to this option) is 0.2, the values at the quantiles of @mymath{0.5-(0.2/2)=0.4} and @mymath{0.5+(0.2/2)=0.6} will be measured.
@item
The width is divided by the difference between the quantiles and returned as the concentration.
@end enumerate
In a uniform distribution, the scaling step will convert each input into its quantile: the spacing between scaled values will be uniform.
As a result, the difference between the quantiles measured around the median will be equal to the input width and the result will be approximately @mymath{1.0}.
However, if the distribution is concentrated around the median, the spacing between the scaled values will be much less around the median and the quantile difference will be less than the width.
Therefore, when we divide the width by the quantile difference, the value will be larger than one.
The example commands below create two randomly distributed ``noisy'' images, one with a Gaussian distribution and one with a uniform distribution.
We will then run this option on both to see the different concentrations@footnote{The values you get will be slightly different because of the different random seeds.
To get a reproducible result, see @ref{Generating random numbers}.}.
See @ref{Generating histograms and cumulative frequency plots} on how you can generate the histogram of these two images on the command-line to visualize the distribution.
@example
$ astarithmetic 1000 1000 2 makenew 10 mknoise-sigma \
--output=gaussian.fits
$ astarithmetic 1000 1000 2 makenew 10 mknoise-uniform \
--output=uniform.fits
$ aststatistics gaussian.fits --concentration=0.25
3.71347573489440e+00
$ aststatistics uniform.fits --concentration=0.25
9.99988794452348e-01
@end example
Note that this option is primarily designed for symmetric distributions, not skewed ones (where the mode and median will be distant).
Here, we define the ``center'' in ``concentration'' as the median, not the mode.
To check if the distribution is symmetric (that the mode and median are similar), you can use the @option{--quantofmean} option described above.
Recall that you can call all the options in this section in one call to the Statistics program like below:
@example
$ aststatistics gaussian.fits \
--quantofmean --concentration=0.25
5.00260500260500e-01 3.71347573489440e+00
@end example
From the quantile-of-mean value of approximately 0.5, we see that the distribution is symmetric and from the concentration, we see that it is not a uniform one.
@end table
@node Generating histograms and cumulative frequency plots, Fitting options, Single value measurements, Invoking aststatistics
@subsubsection Generating histograms and cumulative freq.
The list of options below are for those statistical operations that output more than one value.
So while they can be called together in one run, their outputs will be distinct (each one's output will usually be printed in more than one line).
@table @option
@item -A
@itemx --asciihist
Print an ASCII histogram of the usable values within the input dataset along with some basic information like the example below (from the UVUDF catalog@footnote{@url{https://asd.gsfc.nasa.gov/UVUDF/uvudf_rafelski_2015.fits.gz}}).
The width and height of the histogram (in units of character widths and heights on your command-line terminal) can be set with the @option{--numasciibins} (for the width) and @option{--asciiheight} options.
For a full description of the histogram, please see @ref{Histogram and Cumulative Frequency Plot}.
An ASCII plot is certainly very crude and cannot be used in any publication, but it is very useful for getting a general feeling of the input dataset very fast and easily on the command-line without having to take your hands off the keyboard (which is a major distraction!).
If you want to try it out, you can write it all in one line and ignore the @key{\} and extra spaces.
@example
$ aststatistics uvudf_rafelski_2015.fits.gz --hdu=1 \
--column=MAG_F160W --lessthan=40 \
--asciihist --numasciibins=55
ASCII Histogram:
Number: 8593
Y: (linear: 0 to 660)
X: (linear: 17.7735 -- 31.4679, in 55 bins)
| ****
| *****
| ******
| ********
| *********
| ***********
| **************
| *****************
| ***********************
| ********************************
|*** ***************************************************
|-------------------------------------------------------
@end example
@item --asciicfp
Print the cumulative frequency plot of the usable elements in the input dataset.
Please see descriptions under @option{--asciihist} for more, the example below is from the same input table as that example.
To better understand the cumulative frequency plot, please see @ref{Histogram and Cumulative Frequency Plot}.
@example
$ aststatistics uvudf_rafelski_2015.fits.gz --hdu=1 \
--column=MAG_F160W --lessthan=40 \
--asciicfp --numasciibins=55
ASCII Cumulative frequency plot:
Y: (linear: 0 to 8593)
X: (linear: 17.7735 -- 31.4679, in 55 bins)
| *******
| **********
| ***********
| *************
| **************
| ***************
| *****************
| *******************
| ***********************
| ******************************
|*******************************************************
|-------------------------------------------------------
@end example
@item -H
@itemx --histogram
Save the histogram of the usable values in the input dataset into a table.
The first column is the value at the center of the bin and the second is the number of points in that bin.
If the @option{--cumulative} option is also called with this option in a run, then the table will have three columns (the third is the cumulative frequency plot).
Through the @option{--numbins}, @option{--onebinstart}, or @option{--manualbinrange} options, you can modify the first column's values, and with @option{--normalize} and @option{--maxbinone} you can modify the second column's.
See below for the description of each.
By default (when no @option{--output} is specified) a plain text table will be created, see @ref{Gnuastro text table format}.
If a FITS name is specified, you can use the common option @option{--tableformat} to have it as a FITS ASCII or FITS binary format, see @ref{Common options}.
This table can then be fed into your favorite plotting tool and get a much more clean and nice histogram than what the raw command-line can offer you (with the @option{--asciihist} option).
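For example, the illustrative command below saves a 100-bin histogram of the @code{MAG_F160W} column into a plain-text table, ready for your plotting tool:
@example
$ aststatistics table.fits -cMAG_F160W --histogram \
                --numbins=100 --output=hist.txt
@end example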
@item --histogram2d
Save the 2D histogram of two input columns into an output file, see @ref{2D Histograms}.
The output will have three columns: the first two are the coordinates of each box's center in the first and second dimensions/columns.
The third will be the number of input points that fall within that box.
@item -C
@itemx --cumulative
Save the cumulative frequency plot of the usable values in the input dataset into a table, similar to @option{--histogram}.
@item --madclip
Do median absolute deviation (MAD) clipping on the usable pixels of the input dataset.
See @ref{MAD clipping} for a description on MAD-clipping and @ref{Clipping outliers} for a complete tutorial on clipping of outliers.
The MAD-clipping parameters can be set through the @option{--mclipparams} option (see below).
@item -s
@itemx --sigmaclip
Do @mymath{\sigma}-clipping on the usable pixels of the input dataset.
See @ref{Sigma clipping} for a full description on @mymath{\sigma}-clipping and @ref{Clipping outliers} for a complete tutorial on clipping of outliers.
The @mymath{\sigma}-clipping parameters can be set through the @option{--sclipparams} option (see below).
@item --mirror=FLT
Make a histogram and cumulative frequency plot of the mirror distribution for the given dataset when the mirror is located at the value to this option.
The mirror distribution is fully described in Appendix C of Akhlaghi and Ichikawa @url{https://arxiv.org/abs/1505.01664,2015} and currently it is only used to calculate the mode (see @option{--mode}).
Just note that the mirror distribution is a discrete distribution like the input, so while you may give any number as the value to this option, the actual mirror value is the closest number in the input dataset to this value.
If the two numbers are different, Statistics will warn you of the actual mirror value used.
This option will make a table as output.
Depending on your selected name for the output, it will be either a FITS table or a plain text table (which is the default).
It contains three columns: the first is the center of the bins, the second is the histogram (with the largest value set to 1) and the third is the normalized cumulative frequency plot of the mirror distribution.
The bins will be positioned such that the mode is on the starting interval of one of the bins to make it symmetric around the mirror.
With this output file and the input's histogram (that you can generate in another run of Statistics, using @option{--onebinstart}), it is possible to make plots like Figure 21 of Akhlaghi and Ichikawa @url{https://arxiv.org/abs/1505.01664,2015}.
@end table
The list of options below allow customization of the histogram and cumulative frequency plots (for the @option{--histogram}, @option{--cumulative}, @option{--asciihist}, and @option{--asciicfp} options).
@table @option
@item --numbins
The number of bins (rows) to use in the histogram and the cumulative frequency plot tables (outputs of @option{--histogram} and @option{--cumulative}).
@item --numasciibins
The number of bins (characters) to use in the ASCII plots when printing the histogram and the cumulative frequency plot (outputs of @option{--asciihist} and @option{--asciicfp}).
@item --asciiheight
The number of lines to use when printing the ASCII histogram and cumulative frequency plot on the command-line (outputs of @option{--asciihist} and @option{--asciicfp}).
@item --normalize
Normalize the histogram or cumulative frequency plot tables (outputs of @option{--histogram} and @option{--cumulative}).
For a histogram, the sum of all bins will become one and for a cumulative frequency plot the last bin value will be one.
@item --maxbinone
Divide all the histogram values by the maximum bin value so it becomes one and the rest are similarly scaled.
In some situations (for example, if you want to plot the histogram and cumulative frequency plot in one plot) this can be very useful.
@item --onebinstart=FLT
Make sure that one bin starts with the value to this option.
In practice, this will shift the bins used to find the histogram and cumulative frequency plot such that one bin's lower interval becomes this value.
For example, when a histogram range includes negative and positive values and zero has a special significance in your analysis, zero might fall somewhere inside one bin.
As a result, that bin would mix counts of negative and positive values.
By setting @option{--onebinstart=0}, you can make sure that one bin will only count negative values in the vicinity of zero and the next bin will only count positive ones in that vicinity.
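For example, the illustrative command below makes a histogram where one bin starts exactly on zero:
@example
$ aststatistics image.fits --histogram --onebinstart=0
@end example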
@cindex NaN
Note that by default, the first column of the histogram and cumulative frequency plot shows the central value of each bin.
So in the example above you will not see 0.000 in the first column; you will see two symmetric values.
If the value is not within the usable input range, this option will be ignored.
When it is, this option is the last operation before the bins are finalized, therefore it has a higher priority than options like @option{--manualbinrange}.
@item --manualbinrange
Use the values given to the @option{--greaterequal} and @option{--lessthan} to define the range of all bin-based calculations like the histogram.
This option itself does not take any value; it just tells the program to use the values of those two options instead of the minimum and maximum values of the input.
If either of those two options is not given, the minimum or maximum will be used respectively.
Therefore, if neither of them is called, calling this option is redundant.
The @option{--onebinstart} option has a higher priority than this option.
In other words, @option{--onebinstart} takes effect after the range has been finalized and the initial bins have been defined, therefore it has the power to (possibly) shift the bins.
If you want to manually set the range of the bins @emph{and} have one bin on a special value, it is thus better to avoid @option{--onebinstart}.
@item --numbins2=INT
Similar to @option{--numbins}, but for the second column when a 2D histogram is requested, see @option{--histogram2d}.
@item --greaterequal2=FLT
Similar to @option{--greaterequal}, but for the second column when a 2D histogram is requested, see @option{--histogram2d}.
@item --lessthan2=FLT
Similar to @option{--lessthan}, but for the second column when a 2D histogram is requested, see @option{--histogram2d}.
@item --onebinstart2=FLT
Similar to @option{--onebinstart}, but for the second column when a 2D histogram is requested, see @option{--histogram2d}.
@end table
@node Fitting options, Contour options, Generating histograms and cumulative frequency plots, Invoking aststatistics
@subsubsection Fitting options
With the options below, you can customize the least squares fitting features of Statistics.
For a tutorial of the usage of least squares fitting in Statistics, please see @ref{Least squares fitting}.
Here, we will just review the details of each option.
To activate least squares fitting in Statistics, it is necessary to use the @option{--fit} option to specify the type of fit you want to do.
See the description of @option{--fit} for the various available fitting models.
The fitting models that account for weights require three input columns, while the non-weighted ones only take two input columns.
Here is a summary of the input columns:
@enumerate
@item
The first input column is assumed to be the independent variable (on the horizontal axis of a plot, or @mymath{X} in the equations of each fit).
@item
The second input column is assumed to be the measured value (on the vertical axis of a plot, or @mymath{Y} in the equation above).
@item
The third input column is only for fittings with a weight.
It is assumed to be the ``weight'' of the measurement column.
The nature of the ``weight'' can be set with the @option{--fitweight} option, for example, if you have the standard deviation of the error in @mymath{Y}, you can use @option{--fitweight=std} (which is the default, so unless the default value has been changed, you will not need to set this).
@end enumerate
If three columns are given to a model without weight, or two columns are given to a model that requires weights, Statistics will abort and inform you.
Below you can see an example of fitting with the same linear model, once weighted and once without weights.
@example
$ aststatistics table.fits --column=X,Y --fit=linear
$ aststatistics table.fits --column=X,Y,Yerr --fit=linear-weighted
@end example
The output of the fitting can be in three modes listed below.
For a complete example, see the tutorial in @ref{Least squares fitting}.
@table @asis
@item Human friendly format
By default (for example, the commands above) the output is an elaborate description of the model parameters.
For example, @mymath{c_0} and @mymath{c_1} in the linear model (@mymath{Y=c_0+c_1X}).
Their covariance matrix and the reduced @mymath{\chi^2} of the fit are also printed on the output.
@item Raw numbers
If you do not need the human friendly components of the output (which are annoying when you want to parse the outputs in some scenarios), you can use the @option{--quiet} option.
Only the raw output numbers will be printed.
@item Estimate on a custom X column
Through the @option{--fitestimate} option, you can specify an independent table column to estimate the fit (it can also take a single value).
See the description of this option for more.
@end table
@table @option
@item -f STR
@itemx --fit=STR
The name of the fitting method to use.
They are based on the @url{https://www.gnu.org/software/gsl/doc/html/lls.html, linear} and @url{https://www.gnu.org/software/gsl/doc/html/nls.html, nonlinear} least-squares fitting functions of the GNU Scientific Library (GSL).
@table @code
@item linear
@mymath{Y=c_0+c_1X}
@item linear-weighted
@mymath{Y=c_0+c_1X}; accounting for ``weights'' in @mymath{Y}.
@item linear-no-constant
@mymath{Y=c_1X}.
@item linear-no-constant-weighted
@mymath{Y=c_1X}; accounting for ``weights'' in @mymath{Y}.
@item polynomial
@mymath{Y=c_0+c_1X+c_2X^2+\cdots+c_nX^n}; the maximum required power (@mymath{n}) is specified by @option{--fitmaxpower}.
@item polynomial-weighted
@mymath{Y=c_0+c_1X+c_2X^2+\cdots+c_nX^n}; accounting for ``weights'' in @mymath{Y}.
The maximum required power (@mymath{n}) is specified by @option{--fitmaxpower}.
@item polynomial-robust
@cindex Robust polynomial fit
@cindex Polynomial fit (robust)
@mymath{Y=c_0+c_1X+c_2X^2+\cdots+c_nX^n}; rejects outliers.
The function to use for outlier removal can be specified with the @option{--fitrobust} option described below.
This model does not take weights, since they are calculated internally based on the outlier removal function (it therefore requires only two input columns).
The maximum required power (@mymath{n}) is specified by @option{--fitmaxpower}.
For a comprehensive review of ``robust'' fitting and the available functions, please see the @url{https://www.gnu.org/software/gsl/doc/html/lls.html#robust-linear-regression, Robust linear regression} section of the GNU Scientific Library.
@end table
@item --fitweight=STR
The nature of the ``weight'' column (when a weight is necessary for the model).
It can take one of the following values:
@table @code
@item std
Standard deviation of each @mymath{Y} axis measurement: this is the usual ``error'' associated with a measurement (for example, in @ref{MakeCatalog}) and is the default value to this option.
@item var
Variance of each @mymath{Y} axis measurement.
Assuming a Gaussian distribution with standard deviation @mymath{\sigma}, the variance is @mymath{\sigma^2}.
@item inv-var
Inverse variance of each @mymath{Y} axis measurement.
Assuming a Gaussian distribution with standard deviation @mymath{\sigma}, the inverse variance is @mymath{1/\sigma^2}.
@end table
@item --fitmaxpower=INT
The maximum power (an integer) in a polynomial (@mymath{n} in @mymath{Y=c_0+c_1X+c_2X^2+\cdots+c_nX^n}).
This is only relevant when one of the polynomial models is given to @option{--fit}.
The fit will return @mymath{n+1} coefficients.
@item --fitrobust=STR
The function for rejecting outliers in the @code{polynomial-robust} fitting model.
For a comprehensive review of ``robust'' fitting and the available functions, please see the @url{https://www.gnu.org/software/gsl/doc/html/lls.html#robust-linear-regression, Robust linear regression} section of the GNU Scientific Library.
This function can take the following values:
@table @code
@item bisquare
@cindex Tukey’s biweight (bisquare) function
@cindex Biweight function of Tukey
@cindex Bisquare function of Tukey
Tukey’s biweight (bisquare) function; this is the default function.
According to the GSL manual, this is a good general purpose weight function.
@item cauchy
@cindex Cauchy's function (robust weight)
@cindex Lorentzian function (robust weight)
Cauchy’s function (also known as the Lorentzian function).
It doesn't guarantee a unique solution, so it should be used with care.
@item fair
@cindex Fair function (robust weight)
The fair function.
It guarantees a unique solution and has continuous derivatives to three orders.
@item huber
@cindex Huber function (robust weight)
Huber's @mymath{\rho} function.
This is also a good general purpose weight function for rejecting outliers, but can cause difficulty in some special scenarios.
@item ols
Ordinary Least Squares (OLS) solution with a constant weight of unity.
@item welsch
@cindex Welsch function (robust weight)
Welsch function, which is useful when the residuals follow an exponential distribution.
@end table
@item --fitestimate=STR/FLT
Estimate the fitted function at a single point or a complete column of points.
Note that this option is currently only available for a 1D fit (due to lack of time); we will try our best to implement it in 2D fitting as soon as time allows.
The input @mymath{X} axis positions to estimate the function can be specified in the following ways:
@itemize
@item
A real number: the fitted function will be estimated at that @mymath{X} position and the corresponding @mymath{Y} and its error will be printed to standard output.
@item
@code{self}: in this mode, the same X axis column that was used in the fit will be used for estimating the fitted function.
This can be useful to visually/easily check the fit, see @ref{Least squares fitting}.
@item
A file name: If the value is none of the above, Statistics expects it to be a file name containing a table.
If the file is a FITS file, the HDU containing the table should be specified with the @option{--fitestimatehdu} option.
The column of the table to use for the @mymath{X} axis points should be specified with the @option{--fitestimatecol} option.
@end itemize
The output in this mode can be customized in the following ways:
@itemize
@item
If a single floating point value is given to @option{--fitestimate}, the fitted function will be estimated at that point and printed to standard output.
@item
When nothing is given to @option{--output}, the independent column and the estimated values and errors are printed on the standard output.
@item
If a file name is given to @option{--output}, the estimated table above is saved in that file.
It can have any of the formats in @ref{Recognized table formats}.
As a FITS file, all the fit outputs (coefficients, covariance matrix and reduced @mymath{\chi^2}) are kept as FITS keywords in the same HDU of the estimated table.
For a complete example, see @ref{Least squares fitting}.
When the covariance matrix (and thus the @mymath{\chi^2}) cannot be calculated (for example if you only have two rows!), the printed values on the terminal will be NaN.
However, the FITS standard does not allow NaN values in keyword values!
Therefore, when writing the @mymath{\chi^2} and covariance matrix elements into the output FITS keywords, the largest value of the 64-bit floating point type will be written: @mymath{1.79769313486232\times10^{308}}; see @ref{Numeric data types}.
@item
When @option{--quiet} is given with @option{--fitestimate}, the fitted parameters are no longer printed on the standard output; they are available as FITS keywords in the file given to @option{--output}.
@end itemize
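Putting the pieces above together, an illustrative fit-and-estimate run (the column names are hypothetical) would look like this:
@example
$ aststatistics table.fits -cX,Y --fit=linear \
                --fitestimate=self --output=est.fits
@end example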
@item --fitestimatehdu=STR/INT
HDU name or counter (counting from zero) of the extension that contains the table to be used for estimating the fitted function over many points through @option{--fitestimate}.
For more on selecting a HDU, see the description of @option{--hdu} in @ref{Input output options}.
@item --fitestimatecol=STR/INT
Column name or counter (counting from one) in the table given to @option{--fitestimate}, containing the @mymath{X} axis points over which to estimate the fitted function.
See @ref{Selecting table columns}.
@end table
@node Contour options, Statistics on tiles, Fitting options, Invoking aststatistics
@subsubsection Contour options
Contours are useful to highlight the 2D shape of a certain flux level over an image.
To derive contours in Statistics, you can use the option below:
@table @option
@item -R FLT[,FLT[,FLT...]]
@itemx --contour=FLT[,FLT[,FLT...]]
@cindex Contour
@cindex Plot: contour
Write the contours for the requested levels in a file ending with @file{_contour.txt}.
It will have three columns: the first two are the coordinates of each point and the third is the level it belongs to (one of the input values).
Each disconnected contour region will be separated by a blank line.
This is the requested format for adding contours with PGFPlots in @LaTeX{}.
If any other format can be useful for your work please let us know so we can add it.
If the image has World Coordinate System information, the written coordinates will be in RA and Dec, otherwise, they will be in pixel coordinates.
Note that currently, this is a very crude/simple implementation, please let us know if you find problematic situations so we can fix it.
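For example, the illustrative command below writes the contours of three levels into a file ending in @file{_contour.txt}:
@example
$ aststatistics image.fits --contour=0.01,0.05,0.1
@end example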
@end table
@node Statistics on tiles, , Contour options, Invoking aststatistics
@subsubsection Statistics on tiles
All the options described until now were from the first class of operations discussed above: those that treat the whole dataset as one.
However, it often happens that the relative position of the elements within the dataset is significant.
For example, you do not want one median value for the whole input image, you want to know how the median changes over the image.
For such operations, the input has to be tessellated (see @ref{Tessellation}).
Thus this class of options cannot currently be called along with the options above in one run of Statistics.
@table @option
@item -t
@itemx --ontile
Do the respective single-valued calculation on each tile of the input dataset, not the whole dataset.
This option must be called with at least one of the single valued options discussed above (for example, @option{--mean} or @option{--quantile}).
The output will be a file in the same format as the input.
If the @option{--oneelempertile} option is called, then one element/pixel will be used for each tile (see @ref{Processing options}).
Otherwise, the output will have the same size as the input, but each element will have the value corresponding to that tile's value.
If multiple single valued operations are called, then for each operation there will be one extension in the output FITS file.
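For example, the illustrative command below measures the median on each tile, writing one output pixel per tile:
@example
$ aststatistics image.fits --ontile --median --oneelempertile
@end example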
@item -y
@itemx --sky
Estimate the Sky value on each tile as fully described in @ref{Quantifying signal in a tile}.
As described in that section, several options are necessary to configure the Sky estimation which are listed below.
The output file will have two extensions: the first is the Sky value and the second is the Sky standard deviation on each tile.
Similar to @option{--ontile}, if the @option{--oneelempertile} option is called, then one element/pixel will be used for each tile (see @ref{Processing options}).
@end table
The parameters for estimating the Sky value can be set with the following options; except for the @option{--sclipparams} option (which is also used by @option{--sigmaclip}), the rest are only used for the Sky value estimation.
@table @option
@item -k FITS
@itemx --kernel=FITS
File name of kernel to help in estimating the significance of signal in a
tile, see @ref{Quantifying signal in a tile}.
@item --khdu=STR
Kernel HDU to help in estimating the significance of signal in a tile, see
@ref{Quantifying signal in a tile}.
@item --meanmedqdiff=FLT
The maximum acceptable distance between the quantiles of the mean and median, see @ref{Quantifying signal in a tile}.
The initial Sky and its standard deviation estimates are measured on tiles where the quantiles of their mean and median are less distant than the value given to this option.
For example, @option{--meanmedqdiff=0.01} means that only tiles where the mean's quantile is between 0.49 and 0.51 (recall that the median's quantile is 0.5) will be used.
@item --sclipparams=FLT,FLT
The @mymath{\sigma}-clipping parameters, see @ref{Sigma clipping}.
This option takes two values which are separated by a comma (@key{,}).
Each value can either be written as a single number or as a fraction of two numbers (for example, @code{3,1/10}).
The first value to this option is the multiple of @mymath{\sigma} that will be clipped (@mymath{\alpha} in that section).
The second value is the exit criteria.
If it is less than 1, then it is interpreted as tolerance and if it is larger than one it is a specific number.
Hence, in the latter case the value must be an integer.
@item --mclipparams=FLT,FLT
The MAD-clipping parameters.
This is very similar to @option{--sclipparams} above, see there for more.
@item --outliersclip=FLT,FLT
@mymath{\sigma}-clipping parameters for the outlier rejection of the Sky
value (similar to @option{--sclipparams}).
Outlier rejection is useful when the dataset contains a large and diffuse (almost flat within each tile) signal.
The flatness of the profile will cause it to successfully pass the mean-median quantile difference test, so we need to use the distribution of successful tiles to remove these false positives.
For more, see the latter half of @ref{Quantifying signal in a tile}.
@item --outliernumngb=INT
Number of neighboring tiles to use for outlier rejection (mostly the wings of bright stars or galaxies).
If this option is given a value of zero, no outlier rejection will take place.
For more see the latter half of @ref{Quantifying signal in a tile}.
@item --outliersigma=FLT
Multiple of sigma to define an outlier in the Sky value estimation.
If this option is given a value of zero, no outlier rejection will take place.
For more see @option{--outliersclip} and the latter half of @ref{Quantifying signal in a tile}.
@item --smoothwidth=INT
Width of a flat kernel to convolve the interpolated tile values.
Tile interpolation is done using the median of the @option{--interpnumngb} neighbors of each tile (see @ref{Processing options}).
If this option is given a value of zero or one, no smoothing will be done.
Without smoothing, strong boundaries will probably be created between the values estimated for each tile.
It is thus good to smooth the interpolated image so strong discontinuities do not show up in the final Sky values.
The smoothing is done through convolution (see @ref{Convolution process}) with a flat kernel, so the value to this option must be an odd number.
@item --ignoreblankintiles
Do not set the input's blank pixels to blank in the tiled outputs (for example, Sky and Sky standard deviation extensions of the output).
This is only applicable when the tiled output has the same size as the input, in other words, when @option{--oneelempertile} is not called.
By default, blank values in the input (commonly on the edges which are outside the survey/field area) will be set to blank in the tiled outputs also.
But in other scenarios this default behavior is not desired; for example, if you have masked something in the input, but still want the tiled output to have values under the masked region.
@item --checksky
Create a multi-extension FITS file showing the steps that were used to estimate the Sky value over the input, see @ref{Quantifying signal in a tile}.
The file will have two extensions for each step (one for the Sky and one for the Sky standard deviation).
@item --checkskynointerp
Similar to @option{--checksky}, but it will stop as soon as the outlier tiles have been identified, before interpolating the values to cover the whole image.
This is useful when you want the good tile values before interpolation, and do not want to slow down your pipeline with the extra computation that interpolation and smoothing require.
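For example, assuming your image is in @file{image.fits} (an illustrative name), the sketch below estimates the Sky and stops once the pre-interpolation steps have been written to the check file:
@example
$ aststatistics image.fits --sky --checkskynointerp
@end example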
@end table
@node NoiseChisel, Segment, Statistics, Data analysis
@section NoiseChisel
@cindex Labeling
@cindex Detection
@cindex Segmentation
Once instrumental signatures are removed from the raw data (image) in the initial reduction process (see @ref{Data manipulation}), you are naturally eager to start answering the scientific questions that motivated the data collection in the first place.
However, the raw dataset/image is just an array of values/pixels, that is all! These raw values cannot directly be used to answer your scientific questions; for example, ``How many galaxies are there in the image?'' or ``What are their magnitudes?''.
The first high-level step of your analysis will therefore be to classify, or label, the dataset elements (pixels) into two classes:
1) Noise, where random effects are the major contributor to the value, and
2) Signal, where non-random factors (for example, light from a distant galaxy) are present.
This classification of the elements in a dataset is formally known as @emph{detection}.
In an observational/experimental dataset, signal is always buried in noise: only mock/simulated datasets are free of noise.
Therefore detection, or the process of separating signal from noise, determines the number of objects you study and the accuracy of any higher-level measurement you do on them.
Detection is thus the most important step of any analysis and is not trivial.
In particular, the most scientifically interesting astronomical targets are faint and show a large variety of morphologies, along with a large distribution in magnitude and size.
Therefore when noise is significant, proper detection of your targets is a uniquely decisive step in your final scientific analysis/result.
@cindex Erosion
NoiseChisel is Gnuastro's program for detection of targets that do not have a sharp border (almost all astronomical objects).
When the targets have sharp edges/borders (for example, cells in biological imaging), a simple threshold is enough to separate them from noise and each other (if they are not touching).
To detect such sharp-edged targets, you can use Gnuastro's Arithmetic program in a command like below (assuming the threshold is @code{100}, see @ref{Arithmetic}):
@example
$ astarithmetic in.fits 100 gt 2 connected-components
@end example
Since almost no astronomical target has such sharp edges, we need a more advanced detection methodology.
NoiseChisel uses a new noise-based paradigm for detection of very extended and diffuse targets that are drowned deeply in the ocean of noise.
It was initially introduced in Akhlaghi and Ichikawa @url{https://arxiv.org/abs/1505.01664,2015} and improvements after the first four years were published in Akhlaghi @url{https://arxiv.org/abs/1909.11230,2019}.
Please take the time to go through these papers to most effectively understand the need of NoiseChisel and how best to use it.
The name of NoiseChisel is derived from the first thing it does after thresholding the dataset: to erode it.
In mathematical morphology, erosion on pixels can be pictured as carving-off boundary pixels.
Hence, what NoiseChisel does is similar to what a wood chisel or stone chisel does.
It is just software rather than hardware.
In fact, looking at it as a chisel and your dataset as a solid cube of rock will greatly help in effectively understanding and optimally using it: with NoiseChisel you literally carve your targets out of the noise.
Try running it with the @option{--checkdetection} option, and open the temporary output as a multi-extension cube, to see each step of the carving process on your input dataset (see @ref{Viewing FITS file contents with DS9 or TOPCAT}).
@cindex Segmentation
NoiseChisel's primary output is a binary detection map with the same size as the input but its pixels only have two values: 0 (background) and 1 (foreground).
Pixels that do not harbor any detected signal (noise) are given a label (or value) of zero and those with a value of 1 have been identified as hosting signal.
Segmentation is the process of classifying the signal into higher-level constructs.
For example, if you have two separate galaxies in one image, NoiseChisel will give a value of 1 to the pixels of both (each forming an ``island'' of touching foreground pixels).
After segmentation, the connected foreground pixels will get separate labels, enabling you to study them individually.
NoiseChisel is only focused on detection (separating signal from noise); to @emph{segment} the signal (into separate galaxies, for example), Gnuastro has a separate, specialized program: @ref{Segment}.
NoiseChisel's output can be directly/readily fed into Segment.
For more on NoiseChisel's output format and its benefits (especially in conjunction with @ref{Segment} and later @ref{MakeCatalog}), please see Akhlaghi @url{https://arxiv.org/abs/1611.06387,2016}.
Just note that when that paper was published, Segment had not yet been spun-off into a separate program, and NoiseChisel did both detection and segmentation.
NoiseChisel's output is designed to be generic enough to be easily used in any higher-level analysis.
If your targets are not touching after running NoiseChisel and you are not interested in their sub-structure, you do not need the Segment program at all.
You can ask NoiseChisel to find the connected pixels in the output with the @option{--label} option.
In this case, the output will no longer be a binary image: the signal will have counters/labels starting from 1 for each connected group of pixels.
You can then directly feed NoiseChisel's output into MakeCatalog for measurements over the detections and the production of a catalog (see @ref{MakeCatalog}).
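For example, the (purely illustrative) commands below first label the connected detections, then feed the output into MakeCatalog for some simple measurements (the chosen measurement columns are only examples):
@example
## Label connected detections instead of a binary map.
$ astnoisechisel input.fits --label --output=nc.fits

## Measure the position and sum of pixel values of each label.
$ astmkcatalog nc.fits -hDETECTIONS --ids --x --y --sum
@end example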
Thanks to the published papers mentioned above, there is no need to provide a more complete introduction to NoiseChisel in this book.
However, while published papers cannot be updated any more, the software has continued to evolve/change.
The changes since publication are documented in @ref{NoiseChisel changes after publication}.
In @ref{Invoking astnoisechisel}, the details of running NoiseChisel and its options are discussed.
As discussed above, detection is one of the most important steps for your scientific result.
It is therefore very important to obtain a good understanding of NoiseChisel (and afterwards @ref{Segment} and @ref{MakeCatalog}).
We strongly recommend reviewing two tutorials: @ref{General program usage tutorial} and @ref{Detecting large extended targets}.
They are designed to show how to most effectively use NoiseChisel for the detection of small faint objects and large extended objects.
In the meantime, they also show the modular principle behind Gnuastro's programs and how they are built to complement, and build upon, each other.
@ref{General program usage tutorial} culminates in using NoiseChisel to detect galaxies and use its outputs to find the galaxy colors.
Defining colors is a very common process in most science-cases.
Therefore it is also recommended to (patiently) complete that tutorial for optimal usage of NoiseChisel in conjunction with all the other Gnuastro programs.
@ref{Detecting large extended targets} shows how you can optimize NoiseChisel's settings for very extended objects, successfully carving out signal down to signal-to-noise ratio levels below 1/10.
After going through those tutorials, play a little with the settings (in the order presented in the paper and @ref{Invoking astnoisechisel}) on a dataset you are familiar with and inspect all the check images (options starting with @option{--check}) to see the effect of each parameter.
Below, in @ref{Invoking astnoisechisel}, we will review NoiseChisel's input, detection, and output options in @ref{NoiseChisel input}, @ref{Detection options}, and @ref{NoiseChisel output}.
If you have used NoiseChisel within your research, please run it with @option{--cite} to list the papers you should cite and how to acknowledge its funding sources.
@menu
* NoiseChisel changes after publication:: Updates since published papers.
* Invoking astnoisechisel:: Options and arguments for NoiseChisel.
@end menu
@node NoiseChisel changes after publication, Invoking astnoisechisel, NoiseChisel, NoiseChisel
@subsection NoiseChisel changes after publication
NoiseChisel was initially introduced in Akhlaghi and Ichikawa @url{https://arxiv.org/abs/1505.01664,2015} and updates after the first four years were published in Akhlaghi @url{https://arxiv.org/abs/1909.11230,2019}.
To help in understanding how it works, those papers have many figures showing every step on multiple mock and real examples.
We recommend reading these papers for a good understanding of what it does and how each parameter influences the output.
However, while the papers cannot be updated anymore, NoiseChisel has evolved (and will continue to do so): better algorithms or steps have been found and implemented, and some options have been added, removed, or changed behavior.
This book is thus the definitive guide to NoiseChisel.
The aim of this section is to make the transition from the papers above to the installed version on your system, as smooth as possible with the list below.
For a more detailed list of changes in each Gnuastro version, please see the @file{NEWS} file@footnote{The @file{NEWS} file is present in the released Gnuastro tarball, see @ref{Release tarball}.}.
@itemize
@item
An improved outlier rejection for identifying tiles without any signal has been implemented in the quantile-threshold phase:
Prior to version 0.14, outliers were defined globally: the distribution of all tiles with an acceptable @option{--meanmedqdiff} was inspected and outliers were found and rejected.
However, this caused problems when there are strong gradients over the image (for example, an image prior to flat-fielding, or in the presence of a large foreground galaxy).
In these cases, the faint wings of galaxies/stars could be mistakenly identified as Sky (leaving a footprint of the object on the Sky output) and wrongly subtracted.
It was possible to play with the parameters to correct this for that particular dataset, but that was frustrating.
Therefore from version 0.14, instead of finding outliers from the full tile distribution, we now measure the @emph{slope} of the tile's nearby tiles and find outliers locally.
Three options have been added to configure this part of NoiseChisel: @option{--outliernumngb}, @option{--outliersclip} and @option{--outliersigma}.
For more on the local outlier-by-distance algorithm and the definition of @emph{slope} mentioned above, see @ref{Quantifying signal in a tile}.
In our tests, this gave a much improved estimate of the quantile thresholds and final Sky values with default values.
@end itemize
@node Invoking astnoisechisel, , NoiseChisel changes after publication, NoiseChisel
@subsection Invoking NoiseChisel
NoiseChisel will detect signal in noise, producing a multi-extension dataset containing a binary detection map which is the same size as the input.
Its output can be readily used for input into @ref{Segment}, for higher-level segmentation, or @ref{MakeCatalog} to do measurements and generate a catalog.
The executable name is @file{astnoisechisel} with the following general template
@example
$ astnoisechisel [OPTION ...] InputImage.fits
@end example
@noindent
One line examples:
@example
## Detect signal in input.fits.
$ astnoisechisel input.fits
## Inspect all the detection steps after changing a parameter.
$ astnoisechisel input.fits --qthresh=0.4 --checkdetection
## Detect signal assuming input has 4 amplifier channels along first
## dimension and 1 along the second. Also set the regular tile size
## to 100 along both dimensions:
$ astnoisechisel --numchannels=4,1 --tilesize=100,100 input.fits
@end example
@cindex Gaussian
@noindent
If NoiseChisel is to do any processing (for example, you do not just want to get help, or see the values of each input parameter), an input image should be provided with the recognized extensions (see @ref{Arguments}).
NoiseChisel shares a large set of common operations with other Gnuastro programs, mainly regarding input/output, general processing steps, and general operating modes.
To help in a unified experience between all of Gnuastro's programs, these operations have the same command-line options, see @ref{Common options} for a full list/description (they are not repeated here).
As in all Gnuastro programs, options can also be given to NoiseChisel in configuration files.
For a thorough description on Gnuastro's configuration file parsing, please see @ref{Configuration files}.
All of NoiseChisel's options with a short description are also always available on the command-line with the @option{--help} option, see @ref{Getting help}.
To inspect the option values without actually running NoiseChisel, append your command with @option{--printparams} (or @option{-P}).
NoiseChisel's input image may contain blank elements (see @ref{Blank pixels}).
Blank elements will be ignored in all steps of NoiseChisel.
Hence if your dataset has bad pixels which should be masked with a mask image, please use Gnuastro's @ref{Arithmetic} program (in particular its @command{where} operator) to convert those pixels to blank pixels before running NoiseChisel.
Gnuastro's Arithmetic program has bitwise operators helping you select specific kinds of bad-pixels when necessary.
A convolution kernel can also be optionally given.
If a value (file name) is given to @option{--kernel} on the command-line or in a configuration file (see @ref{Configuration files}), then that file will be used to convolve the image prior to thresholding.
Otherwise a default kernel will be used.
For a 2D image, the default kernel is a 2D Gaussian with a FWHM of 2 pixels truncated at 5 times the FWHM.
This choice of the default kernel is discussed in Section 3.1.1 of Akhlaghi and Ichikawa @url{https://arxiv.org/abs/1505.01664,2015}.
For a 3D cube, it is a Gaussian with FWHM of 1.5 pixels in the first two dimensions and 0.75 pixels in the third dimension.
See @ref{Convolution kernel} for kernel related options.
Passing @code{none} to @option{--kernel} will disable convolution.
On the other hand, through the @option{--convolved} option, you may provide an already convolved image, see descriptions below for more.
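For example, if your input is already sufficiently smooth, the internal convolution can be disabled with a sketch like this:
@example
$ astnoisechisel input.fits --kernel=none
@end example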
NoiseChisel defines two tessellations over the input (see @ref{Tessellation}).
This enables it to deal with possible gradients in the input dataset and also significantly improve speed by processing each tile on different threads simultaneously.
Tessellation related options are discussed in @ref{Processing options}.
In particular, NoiseChisel uses two tessellations (with everything between them identical except the tile sizes): a fine-grained one with smaller tiles (used in thresholding and Sky value estimations) and another with larger tiles which is used for pseudo-detections over non-detected regions of the image.
The common Tessellation options described in @ref{Processing options} define all parameters of both tessellations.
The large tile size for the latter tessellation is set through the @option{--largetilesize} option.
To inspect the tessellations on your input dataset, run NoiseChisel with @option{--checktiles}.
@cartouche
@noindent
@strong{Usage TIP:} Frequently use the options starting with @option{--check}.
Since the noise properties differ between different datasets, you can often play with the parameters/options for a better result than the default parameters.
You can start with @option{--checkdetection} for the main steps.
For the full list of NoiseChisel's checking options please run:
@example
$ astnoisechisel --help | grep check
@end example
@end cartouche
@cartouche
@noindent
@strong{Not detecting wings of bright galaxies:} In such cases, probably the best solution is to increase @option{--outliernumngb} (to reject tiles that are affected by very flat diffuse signal).
For more, see @ref{Quantifying signal in a tile}.
@end cartouche
When working on 3D data cubes, the tessellation options need three values, and updating them every time can be annoying and error-prone.
To simplify the job, NoiseChisel also installs an @file{astnoisechisel-3d.conf} configuration file (see @ref{Configuration files}).
You can use this for default values on data cubes.
For example, if you installed Gnuastro with the prefix @file{/usr/local} (the default location, see @ref{Installation directory}), you can benefit from this configuration file by running NoiseChisel like the example below.
@example
$ astnoisechisel cube.fits \
--config=/usr/local/etc/gnuastro/astnoisechisel-3d.conf
@end example
@cindex Shell alias
@cindex Alias (shell)
@cindex Shell startup
@cindex Startup, shell
To further simplify the process, you can define a shell alias in any startup file (for example, @file{~/.bashrc}, see @ref{Installation directory}).
Assuming that you installed Gnuastro in @file{/usr/local}, you can add this line to the startup file (you may put it all in one line; it is broken into two lines here to fit within the page limits).
@example
alias astnoisechisel-3d="astnoisechisel \
--config=/usr/local/etc/gnuastro/astnoisechisel-3d.conf"
@end example
@noindent
Using this alias, you can call NoiseChisel with the name @command{astnoisechisel-3d} (instead of @command{astnoisechisel}).
It will automatically load the 3D specific configuration file first, and then parse any other arguments, options or configuration files.
You can change the default values in this 3D configuration file by calling them on the command-line as you do with @command{astnoisechisel}@footnote{Recall that for single-invocation options, the last command-line invocation takes precedence over all previous invocations (including those in the 3D configuration file).
See the description of @option{--config} in @ref{Operating mode options}.}.
For example:
@example
$ astnoisechisel-3d --numchannels=3,3,1 cube.fits
@end example
Below, we will discuss NoiseChisel's options, classified into separate sub-sections to help in easy navigation.
@ref{NoiseChisel input} discusses the basic options relating to input file(s) and data; these have no effect on the detection process.
Afterwards, @ref{Detection options} fully describes every configuration parameter (option) related to detection and how they affect the final result.
The order of options in this section follows the logical order within NoiseChisel.
On first reading (while you are still new to NoiseChisel), it is therefore strongly recommended to read the options in the given order below.
The output of @option{--printparams} (or @option{-P}) also has this order.
However, the output of @option{--help} is sorted alphabetically.
Finally, in @ref{NoiseChisel output} the format of NoiseChisel's output is discussed.
@menu
* NoiseChisel input:: NoiseChisel's input options.
* Detection options:: Configure detection in NoiseChisel.
* NoiseChisel output:: NoiseChisel's output options and format.
@end menu
@node NoiseChisel input, Detection options, Invoking astnoisechisel, Invoking astnoisechisel
@subsubsection NoiseChisel input
The options here can be used to configure the inputs and output of NoiseChisel, along with some general processing options.
Recall that you can always see the full list of Gnuastro's options with the @option{--help} (see @ref{Getting help}), or @option{--printparams} (or @option{-P}) to see their values (see @ref{Operating mode options}).
@table @option
@item -k FITS
@itemx --kernel=FITS
File name of kernel to smooth the image before applying the threshold, see @ref{Convolution kernel}.
If no convolution is needed, give this option a value of @option{none}.
The first step of NoiseChisel is to convolve/smooth the image and use the convolved image in multiple steps including the finding and applying of the quantile threshold (see @option{--qthresh}).
The @option{--kernel} option is not mandatory.
If not called, for a 2D image, a 2D Gaussian profile with a FWHM of 2 pixels truncated at 5 times the FWHM is used.
This choice of the default kernel is discussed in Section 3.1.1 of Akhlaghi and Ichikawa (2015).
For a 3D cube, when no file name is given to @option{--kernel}, a Gaussian with FWHM of 1.5 pixels in the first two dimensions and 0.75 pixels in the third dimension will be used.
The reason for this particular configuration is that, commonly in astronomical applications, 3D datasets do not have the same nature in all three dimensions: the first two dimensions are usually spatial (RA and Dec) while the third is spectral (for example, wavelength).
The samplings are also different: in the default case, the spatial sampling is assumed to be larger than the spectral sampling, hence a wider FWHM in the spatial directions, see @ref{Sampling theorem}.
You can use MakeProfiles to build a kernel with any of its recognized profile types and parameters.
For more details, please see @ref{MakeProfiles output dataset}.
For example, the command below will make a Moffat kernel (with @mymath{\beta=2.8}) with FWHM of 2 pixels truncated at 10 times the FWHM.
@example
$ astmkprof --oversample=1 --kernel=moffat,2,2.8,10
@end example
Since convolution can be the slowest step of NoiseChisel, for large datasets, you can convolve the image once with Gnuastro's Convolve (see @ref{Convolve}), and use the @option{--convolved} option to feed it directly to NoiseChisel.
This can help getting faster results when you are playing/testing the higher-level options.
@item --khdu=STR
HDU containing the kernel in the file given to the @option{--kernel}
option.
@item --convolved=FITS
Use this file as the convolved image and do not do convolution (ignore @option{--kernel}).
NoiseChisel will just check that the size of the given dataset is the same as the input's size.
If a wrong image (with the same size) is given to this option, the results (errors, bugs, etc.) are unpredictable.
So please use this option with care and in a highly controlled environment, for example, in the scenario discussed below.
In almost all situations, as the input gets larger, the single most CPU (and time) consuming step in NoiseChisel (and other programs that need a convolved image) is convolution.
Therefore minimizing the number of convolutions can save a significant amount of time in some scenarios.
One such scenario is when you want to segment NoiseChisel's detections using the same kernel (with @ref{Segment}, which also supports this @option{--convolved} option).
This scenario would require two convolutions of the same dataset: once by NoiseChisel and once by Segment.
Using this option in both programs, only one convolution (prior to running NoiseChisel) is enough.
Another common scenario where this option can be convenient is when you are testing NoiseChisel (or Segment) for the best parameters.
You have to run NoiseChisel multiple times and see the effect of each change.
However, once you are happy with the kernel, re-convolving the input on every change of higher-level parameters will greatly hinder, or discourage, further testing.
With this option, you can convolve the input image with your chosen kernel once before running NoiseChisel, then feed it to NoiseChisel on each test run and thus save valuable time for better/more tests.
To build your desired convolution kernel, you can use @ref{MakeProfiles}.
To convolve the image with a given kernel you can use @ref{Convolve}.
Spatial domain convolution is mandatory: in the frequency domain, blank pixels (if present) will cover the whole image and gradients will appear on the edges, see @ref{Spatial vs. Frequency domain}.
Below you can see an example of the second scenario: you want to see how variation of the growth level (through the @option{--detgrowquant} option) will affect the final result.
Recall that you can ignore all the extra spaces, new lines, and backslashes (`@code{\}') if you are typing in the terminal.
In a shell script, remove the @code{$} signs at the start of the lines.
@example
## Make the kernel to convolve with.
$ astmkprof --oversample=1 --kernel=gaussian,2,5
## Convolve the input with the given kernel.
$ astconvolve input.fits --kernel=kernel.fits \
--domain=spatial --output=convolved.fits
## Run NoiseChisel with seven growth quantile values.
$ for g in 60 65 70 75 80 85 90; do \
astnoisechisel input.fits --convolved=convolved.fits \
--detgrowquant=0.$g --output=$g.fits; \
done
@end example
@item --chdu=STR
The HDU/extension containing the convolved image in the file given to @option{--convolved}.
@item -w FITS
@itemx --widekernel=FITS
File name of a wider kernel to use in estimating the difference of the mode and median in a tile (this difference is used to identify the significance of signal in that tile, see @ref{Quantifying signal in a tile}).
As displayed in Figure 4 of Akhlaghi and Ichikawa @url{https://arxiv.org/abs/1505.01664,2015}, a wider kernel will help in identifying the skewness caused by signal buried in the noise.
The image that is convolved with this kernel is @emph{only} used for this purpose.
Once the mode is found to be sufficiently close to the median, the quantile threshold is found on the image convolved with the sharper kernel (@option{--kernel}, see @option{--qthresh}).
Since convolution will significantly slow down the processing, this feature is optional.
When it is not given, the image that is convolved with @option{--kernel} will be used to identify good tiles @emph{and} apply the quantile threshold.
This option is mainly useful in conditions where you have a very large, extended, diffuse signal that is still present in the usable tiles when using @option{--kernel}.
See @ref{Detecting large extended targets} for a practical demonstration on how to inspect the tiles used in identifying the quantile threshold.
@item --whdu=STR
HDU containing the kernel file given to the @option{--widekernel} option.
@item -L INT[,INT]
@itemx --largetilesize=INT[,INT]
The size of each tile for the tessellation with the larger tile sizes.
Except for the tile size, all the other parameters for this tessellation are taken from the common options described in @ref{Processing options}.
The format is identical to that of the @option{--tilesize} option that is discussed in that section.
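For example, the sketch below sets regular tiles of 100 pixels and large tiles of 250 pixels on each side (both values are purely illustrative):
@example
$ astnoisechisel input.fits --tilesize=100,100 \
                 --largetilesize=250,250
@end example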
@end table
@node Detection options, NoiseChisel output, NoiseChisel input, Invoking astnoisechisel
@subsubsection Detection options
Detection is the process of separating the pixels in the image into two groups: 1) Signal, and 2) Noise.
Through the parameters below, you can customize the detection process in NoiseChisel.
Recall that you can always see the full list of NoiseChisel's options with the @option{--help} (see @ref{Getting help}), or @option{--printparams} (or @option{-P}) to see their values (see @ref{Operating mode options}).
@table @option
@item -Q FLT
@itemx --meanmedqdiff=FLT
The maximum acceptable distance between the quantiles of the mean and median in each tile, see @ref{Quantifying signal in a tile}.
The quantile threshold estimates are measured on tiles where the quantiles of their mean and median are less distant than the value given to this option.
For example, @option{--meanmedqdiff=0.01} means that only tiles where the mean's quantile is between 0.49 and 0.51 (recall that the median's quantile is 0.5) will be used.
@item -a INT
@itemx --outliernumngb=INT
Number of neighboring tiles to use for outlier rejection (mostly the wings of bright stars or galaxies).
For optimal detection of the wings of bright stars or galaxies, this is @strong{the most important} option in NoiseChisel.
This is because the extended wings of bright galaxies or stars (the PSF) can become flat over the tile.
In this case, they will satisfy the @option{--meanmedqdiff} condition and pass that step.
Therefore, to correctly identify such bad tiles, we need to look at the neighboring nearby tiles.
A tile that is on the wing of a bright galaxy/star will clearly be an outlier when looking at the neighbors.
For more on the details of the outlier rejection algorithm, see the latter half of @ref{Quantifying signal in a tile}.
If this option is given a value of zero, no outlier rejection will take place.
@item --outliersclip=FLT,FLT
@mymath{\sigma}-clipping parameters for the outlier rejection of the quantile threshold.
The format of the given values is similar to @option{--sigmaclip} below.
In NoiseChisel, outlier rejection on tiles is used when identifying the quantile thresholds (@option{--qthresh}, @option{--noerodequant}, and @option{--detgrowquant}).
Outlier rejection is useful when the dataset contains a large and diffuse (almost flat within each tile) signal.
The flatness of the profile will cause it to successfully pass the mean-median quantile difference test, so we will need to use the distribution of successful tiles for removing these false positives.
For more, see the latter half of @ref{Quantifying signal in a tile}.
@item --outliersigma=FLT
Multiple of sigma to define an outlier.
If this option is given a value of zero, no outlier rejection will take place.
For more see @option{--outliersclip} and the latter half of @ref{Quantifying signal in a tile}.
@item -t FLT
@itemx --qthresh=FLT
The quantile threshold to apply to the convolved image.
The detection process begins with applying a quantile threshold to each of the tiles in the small tessellation.
The quantile is only calculated for tiles that do not have any significant signal within them, see @ref{Quantifying signal in a tile}.
Interpolation is then used to give a value to the unsuccessful tiles and it is finally smoothed.
@cindex Quantile
@cindex Binary image
@cindex Foreground pixels
@cindex Background pixels
The quantile value is a floating point value between 0 and 1.
Assume that we have sorted the @mymath{N} data elements of a distribution (the pixels in each mesh on the convolved image).
The quantile (@mymath{q}) of this distribution is the value of the element with an index of (the nearest integer to) @mymath{q\times{N}} in the sorted data set.
For example, with @mymath{N=1000} sorted pixels and @mymath{q=0.4}, the quantile value is that of the 400th element.
After thresholding is complete, we will have a binary (two valued) image.
The pixels above the threshold are known as foreground pixels (have a value of 1) while those which lie below the threshold are known as background (have a value of 0).
@item --smoothwidth=INT
Width of flat kernel used to smooth the interpolated quantile thresholds, see @option{--qthresh} for more.
@cindex NaN
@item --checkqthresh
Check the quantile threshold values on the mesh grid.
A multi-extension FITS file, suffixed with @file{_qthresh.fits} will be created showing each step of how the final quantile threshold is found.
With this option, NoiseChisel will abort as soon as quantile estimation has been completed, allowing you to inspect the steps leading to the final quantile threshold; this behavior can be disabled with @option{--continueaftercheck}.
By default the output will have the same pixel size as the input, but with the @option{--oneelempertile} option, only one pixel will be used for each tile (see @ref{Processing options}).
The key things to remember are:
@itemize
@item
The measurements to find the thresholds are done on tiles that cover the whole image in a tessellation.
Recall that you can set the size of tiles with @option{--tilesize} and check them with @option{--checktiles}.
Therefore except for the first and last extensions, the rest only show tiles.
@item
NoiseChisel ultimately has three thresholds: the quantile threshold (that you set with @option{--qthresh}), the no-erode quantile (set with @option{--noerodequant}) and the growth quantile (set with @option{--detgrowquant}).
Therefore for each step, we have three extensions.
@end itemize
The output file will have the following extensions.
Below, the extensions are put in the same order as you see in the file, with their name.
@table @code
@item CONVOLVED
This is the input image after convolution with the kernel (by default a Gaussian with FWHM of 2 pixels, which you can change with @option{--kernel}).
Recall that the thresholds are defined on the convolved image.
@item QTHRESH_ERODE
@itemx QTHRESH_NOERODE
@itemx QTHRESH_EXPAND
In these three extensions, the tiles that have a quantile-of-mean more/less than 0.5 (quantile of median) @mymath{\pm d} are set to NaN (@mymath{d} is the value given to @option{--meanmedqdiff}, see @ref{Quantifying signal in a tile}).
Therefore the non-NaN tiles that you see here are the tiles where there is no significant skewness (changing signal) within that tile.
The only differing thing between the three extensions is the values of the non-NaN tiles.
These values will be used to construct the final threshold map over the whole image.
@item VALUE1_NO_OUTLIER
@itemx VALUE2_NO_OUTLIER
@itemx VALUE3_NO_OUTLIER
All outlier tiles have been masked.
The reason for removing outliers is that the quantile-of-mean is only sensitive to signal that varies on a scale that is smaller than the tile size.
Therefore the extended wings of large galaxies or bright stars (which vary on scales much larger than the tile size) will pass that test.
As described in @ref{Quantifying signal in a tile} outlier rejection is customized through @option{--outliernumngb}, @option{--outliersclip} and @option{--outliersigma}.
@item THRESH1_INTERP
@itemx THRESH2_INTERP
@itemx THRESH3_INTERP
Using the successful values that remain after the previous step, all the tiles in the image are given a value through interpolation.
The interpolation is done using the nearest-neighbor method: for each tile, the N nearest neighbors are found and the median of their values is used to fill it.
You can set the value of N through the @option{--interpnumngb} option.
@item THRESH1_SMOOTH
@itemx THRESH2_SMOOTH
@itemx THRESH3_SMOOTH
Smooth the interpolated image to remove the strong differences between touching tiles.
Because we used the median value of the N nearest neighbors in the previous step, there can be strong discontinuities on the edges of tiles (which can directly show in the image after applying the threshold).
The scale of the smoothing (number of nearby tiles to smooth with) is set with the @option{--smoothwidth} option.
@item QTHRESH-APPLIED
The pixels in this image can only have three values:
@table @code
@item 0
These pixels had a value below the quantile threshold.
@item 1
These pixels had a value above the quantile threshold, but below the threshold for no erosion.
Therefore in the next step, NoiseChisel will erode these pixels (set them to 0) if they are touching a 0-valued pixel.
@item 2
These pixels had a value above the no-erosion threshold.
So NoiseChisel will not erode these pixels, it will only apply Opening to them afterwards.
Recall that this was done to avoid losing sharp point-sources (like stars in space-based imaging).
@end table
@end table
@item --blankasforeground
In the erosion and opening steps below, treat blank elements as foreground (regions above the threshold).
By default, blank elements in the dataset are considered to be background, so if a foreground pixel is touching it, it will be eroded.
This option is irrelevant if the dataset contains no blank elements.
When there are many blank elements in the dataset, treating them as foreground will systematically erode their regions less, therefore systematically creating more false positives.
So use this option (when blank values are present) with care.
@item -e INT
@itemx --erode=INT
@cindex Erosion
The number of erosions to apply to the binary thresholded image.
Erosion is simply the process of flipping (from 1 to 0) any of the foreground pixels that neighbor a background pixel.
In a 2D image, there are two kinds of neighbors, 4-connected and 8-connected neighbors.
In a 3D dataset, there are three: 6-connected, 18-connected, and 26-connected.
You can specify which class of neighbors should be used for erosion with the @option{--erodengb} option, see below.
Erosion has the effect of shrinking the foreground pixels.
To put it another way, it expands the holes.
This is a founding principle in NoiseChisel: it exploits the fact that with very low thresholds, the holes within regions harboring very low surface brightness signal will be smaller than the holes within regions that have no signal.
Therefore by expanding those holes, we are able to separate the regions harboring signal.
@item --erodengb=INT
The type of neighborhood (structuring element) used in erosion, see @option{--erode} for an explanation on erosion.
If the input is a 2D image, only two integer values are acceptable: 4 or 8.
For a 3D input data cube, the acceptable values are: 6, 18 and 26.
In 2D 4-connectivity, the neighbors of a pixel are defined as the four pixels on the top, bottom, right and left of a pixel that share an edge with it.
The 8-connected neighbors on the other hand include the 4-connected neighbors along with the other 4 pixels that share a corner with this pixel.
See Figure 6 (a) and (b) in Akhlaghi and Ichikawa (2015) for a demonstration.
A similar argument applies to 3D data cubes.
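For example, the sketch below applies two erosions, each using only the 4-connected neighbors (both values are illustrative, not recommendations):
@example
$ astnoisechisel input.fits --erode=2 --erodengb=4
@end example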
@item --noerodequant
Pure erosion is going to carve off sharp and small objects completely out of the detected regions.
This option can be used to avoid missing such sharp and small objects (which have significant pixels, but not over a large area).
All pixels with a value larger than the significance level specified by this option will not be eroded during the erosion step above.
However, they will undergo the erosion and dilation of the opening step below.
Like the @option{--qthresh} option, the significance level is determined using the quantile (a value between 0 and 1).
Just as a reminder, in the normal distribution, @mymath{1\sigma}, @mymath{1.5\sigma}, and @mymath{2\sigma} are approximately on the 0.84, 0.93, and 0.98 quantiles.
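For example, to protect all pixels above roughly @mymath{1.5\sigma} (approximately the 0.93 quantile in a pure normal distribution, as noted above) from the initial erosion, a sketch would be:
@example
$ astnoisechisel input.fits --noerodequant=0.93
@end example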
@item -p INT
@itemx --opening=INT
Depth of opening to be applied to the eroded binary image.
Opening is a composite operation.
When opening a binary image with a depth of @mymath{n}, @mymath{n} erosions (explained in @option{--erode}) are followed by @mymath{n} dilations.
Simply put, dilation is the inverse of erosion.
When dilating an image, any background pixel touching a foreground pixel is flipped (from 0 to 1) to become a foreground pixel.
Dilation has the effect of fattening the foreground.
Note that in NoiseChisel, the erosion which is part of opening is independent of the initial erosion that is done on the thresholded image (explained in @option{--erode}).
The structuring element for the opening can be specified with the @option{--openingngb} option.
Opening has the effect of removing the thin foreground connections (mostly noise) between separate foreground `islands' (detections) thereby completely isolating them.
Once opening is complete, we have @emph{initial} detections.
@item --openingngb=INT
The structuring element used for opening, see @option{--erodengb} for more information about a structuring element.
@item --skyfracnoblank
Ignore blank pixels when estimating the fraction of undetected pixels for Sky estimation.
NoiseChisel only measures the Sky over the tiles that have a sufficiently large fraction of undetected pixels (value given to @option{--minskyfrac}).
By default this fraction is found by dividing the number of undetected pixels in a tile by the tile's area.
But this default behavior ignores the possibility of blank pixels.
In situations where blank/masked pixels are scattered across the image and are large enough, all the tiles can fail the @option{--minskyfrac} test, thus not allowing NoiseChisel to proceed.
With this option, such scenarios can be fixed: the denominator of the fraction will be the number of non-blank elements in the tile, not the total tile area.
@item -B FLT
@itemx --minskyfrac=FLT
Minimum fraction (value between 0 and 1) of Sky (undetected) areas in a tile.
Only tiles with a fraction of undetected pixels (Sky) larger than this value will be used to estimate the Sky value.
NoiseChisel uses this option value twice to estimate the Sky value: after initial detections and in the end when false detections have been removed.
Because of the PSF and their intrinsic amorphous properties, astronomical objects (except cosmic rays) never have a clear cutoff and commonly sink into the noise very slowly, even below the very low thresholds used by NoiseChisel.
So when a large fraction of the area of one mesh is covered by detections, it is very plausible that their faint wings are present in the undetected regions (hence causing a bias in any measurement).
To get an accurate measurement of the Sky and its standard deviation over the tessellation, tiles that harbor too many detected regions should be excluded.
The used tiles are visible in the respective @option{--check} option of the given step.
@item --checkdetsky
Check the initial approximation of the sky value and its standard deviation in a FITS file ending with @file{_detsky.fits}.
With this option, NoiseChisel will abort as soon as the sky value used for defining pseudo-detections has been estimated.
This allows you to inspect the steps leading to this initial Sky value; this behavior can be disabled with @option{--continueaftercheck}.
By default the output will have the same pixel size as the input, but with the @option{--oneelempertile} option, only one pixel will be used for each tile (see @ref{Processing options}).
@item -s FLT,FLT
@itemx --sigmaclip=FLT,FLT
The @mymath{\sigma}-clipping parameters for measuring the initial and final Sky values from the undetected pixels, see @ref{Sigma clipping}.
This option takes two values which are separated by a comma (@key{,}).
Each value can either be written as a single number or as a fraction of two numbers (for example, @code{3,1/10}).
The first value to this option is the multiple of @mymath{\sigma} that will be clipped (@mymath{\alpha} in that section).
The second value is the exit criteria.
If it is less than 1, then it is interpreted as tolerance and if it is larger than one it is assumed to be the fixed number of iterations.
Hence, in the latter case the value must be an integer.
@item -R FLT
@itemx --dthresh=FLT
The detection threshold: a multiple of the initial Sky standard deviation added with the initial Sky approximation (which you can inspect with @option{--checkdetsky}).
This flux threshold is applied to the initially undetected regions on the unconvolved image.
The background pixels that are completely engulfed in a 4-connected foreground region are converted to background (holes are filled) and one opening (depth of 1) is applied over both the initially detected and undetected regions.
The signal-to-noise ratio of the resulting `pseudo-detections' is used to identify true vs. false detections.
See Section 3.1.5 and Figure 7 in Akhlaghi and Ichikawa (2015) for a very complete explanation.
@item --dopening=INT
The number of openings to do after applying @option{--dthresh}.
@item --dopeningngb=INT
The connectivity used in the opening of @option{--dopening}.
In a 2D image this must be either 4 or 8.
The stronger the connectivity, the more small regions will be discarded.
@item --holengb=INT
The connectivity (defined by the number of neighbors) to fill holes after applying @option{--dthresh} (above) to find pseudo-detections.
For example, in a 2D image it must be 4 (the neighbors that are most strongly connected) or 8 (all neighbors).
The stronger the connectivity, the more strongly the hole is enclosed.
So setting a value of 8 in a 2D image means that the walls of the hole are 4-connected.
If standard (near Sky level) values are given to @option{--dthresh}, setting @option{--holengb=4} might fill the complete dataset and thus not create enough pseudo-detections.
@item --pseudoconcomp=INT
The connectivity (defined by the number of neighbors) to find individual pseudo-detections.
If it is a weaker connectivity (4 in a 2D image), then pseudo-detections that are connected on the corners will be treated as separate.
@item -m INT
@itemx --snminarea=INT
The minimum area to calculate the Signal to noise ratio on the pseudo-detections of both the initially detected and undetected regions.
When the area in a pseudo-detection is too small, the Signal to noise ratio measurements will not be accurate and their distribution will be heavily skewed to the positive.
So it is best to ignore any pseudo-detection that is smaller than this area.
Use @option{--checksn} (below) to check whether this value is reasonable or not.
@item --checksn
Save the S/N values of the pseudo-detections (and possibly grown detections if @option{--cleangrowndet} is called) into separate tables.
If @option{--tableformat} is a FITS table, each table will be written into a separate extension of one file suffixed with @file{_detsn.fits}.
If it is plain text, a separate file will be made for each table (ending in @file{_detsn_sky.txt}, @file{_detsn_det.txt} and @file{_detsn_grown.txt}).
For more on @option{--tableformat} see @ref{Input output options}.
You can use these to inspect the S/N values and their distribution (in combination with the @option{--checkdetection} option to see where the pseudo-detections are).
You can use Gnuastro's @ref{Statistics} to make a histogram of the distribution or any other analysis you would like for better understanding of the distribution (for example, through a histogram).
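For example, the sketch below saves the pseudo-detection S/N values into plain-text tables, then inspects the column metadata (assuming the input is called @file{input.fits}, so the S/N of the undetected regions' pseudo-detections goes into @file{input_detsn_sky.txt}); the S/N column can then be fed into Statistics' @option{--histogram} option:
@example
$ astnoisechisel input.fits --checksn --tableformat=txt
$ asttable input_detsn_sky.txt --info
@end example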
@item --minnumfalse=INT
The minimum number of `pseudo-detections' over the undetected regions to identify a Signal-to-Noise ratio threshold.
The Signal to noise ratio (S/N) of false pseudo-detections in each tile is found using the quantile of the S/N distribution of the pseudo-detections over the undetected pixels in each mesh.
If the number of S/N measurements is not large enough, the quantile will not be accurate (can have large scatter).
For example, if you set @option{--snquant=0.99} (or the top 1 percent), then it is best to have at least 100 S/N measurements.
@item -c FLT
@itemx --snquant=FLT
The quantile of the Signal to noise ratio distribution of the pseudo-detections in each mesh to use for filling the large mesh grid.
Note that this is only calculated for the large mesh grids that satisfy the minimum fraction of undetected pixels (value of @option{--minskyfrac}) and minimum number of pseudo-detections (value of @option{--minnumfalse}).
@item --snthresh=FLT
Manually set the signal-to-noise ratio of true pseudo-detections.
With this option, NoiseChisel will not attempt to find pseudo-detections over the noisy regions of the dataset, but will directly apply the manually given value.
This option is useful in crowded images where there is no blank sky to find the sky pseudo-detections.
You can get this value from a similarly reduced dataset (for example, from another region of the sky with more undetected regions).
@item -d FLT
@itemx --detgrowquant=FLT
Quantile limit to ``grow'' the final detections.
As discussed in the previous options, after applying the initial quantile threshold, layers of pixels are carved off the objects to identify true signal.
With this step you can return those low surface brightness layers that were carved off back to the detections.
To disable growth, set the value of this option to @code{1}.
The process is as follows: after the true detections are found, all the non-detected pixels above this quantile will be put in a list and used to ``grow'' the true detections (seeds of the growth).
Like all quantile thresholds, this threshold is defined and applied to the convolved dataset.
Afterwards, the dataset is dilated once (with minimum connectivity) to connect very thin regions on the boundary: imagine building a dam at the point rivers spill into an open sea/ocean.
Finally, all holes are filled.
In the geography metaphor, holes can be seen as the closed (by the dams) rivers and lakes, so this process is like turning the water in all such rivers and lakes into soil.
See @option{--detgrowmaxholesize} for configuring the hole filling.
Note that since the growth occurs on all neighbors of a data element, the quantile for 3D detection must be much larger than that of 2D detection.
Recall that in 2D each element has 8 neighbors while in 3D there are 26 neighbors.
@item --detgrowmaxholesize=INT
The maximum hole size to fill during the final expansion of the true detections as described in @option{--detgrowquant}.
This is necessary when the input contains many smaller objects and can be used to avoid marking blank sky regions as detections.
For example, multiple galaxies can be positioned such that they surround an empty region of sky.
If all the holes are filled, the Sky region in between them will be taken as a detection which is not desired.
To avoid such cases, the integer given to this option must be smaller than the hole between such objects.
However, we should caution that unless the ``hole'' is very large, the combined faint wings of the galaxies might actually be present in between them, so be very careful in not filling such holes.
On the other hand, if you have a very large (and extended) galaxy, the diffuse wings of the galaxy may create very large holes over the detections.
In such cases, a large enough value to this option will cause all such holes to be detected as part of the large galaxy and thus help in detecting it to extremely low surface brightness limits.
Therefore, especially when large and extended objects are present in the image, it is recommended to give this option (very) large values.
For one real-world example, see @ref{Detecting large extended targets}.
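For example, when a single very extended galaxy dominates the image, a sketch like the one below (with purely illustrative values) will fill even large holes over its diffuse wings:
@example
$ astnoisechisel input.fits --detgrowquant=0.75 \
                 --detgrowmaxholesize=10000
@end example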
@item --cleangrowndet
After dilation, if the signal-to-noise ratio of a detection is less than the derived pseudo-detection S/N limit, that detection will be discarded.
In an ideal/clean noise, a true detection's S/N should be larger than its constituent pseudo-detections because its area is larger and it also covers more signal.
However, for false detections (especially at lower @option{--snquant} values), the increase in size can cause a decrease in S/N below that threshold.
This will improve purity without changing completeness (a true detection will not be discarded): since a true detection has flux in its vicinity, dilation will catch more of that flux and increase the S/N.
So the final S/N of a true detection cannot be less than that of its pseudo-detections.
However, in many real images bad processing creates artifacts that cannot be accurately removed by the Sky subtraction.
In such cases, this option will decrease the completeness (will artificially discard true detections).
So this feature is not activated by default and should be explicitly called when you know the noise is clean.
@item --checkdetection
Every step of the detection process will be added as an extension to a file with the suffix @file{_det.fits}.
Going through each would just be a repeat of the explanations above and also of those in Akhlaghi and Ichikawa (2015).
The extension label should be sufficient to recognize which step you are observing.
Viewing all the steps can be the best guide in choosing the best set of parameters.
With this option, NoiseChisel will abort as soon as the snapshot of the whole detection process has been saved.
This behavior can be disabled with @option{--continueaftercheck}.
@item --checksky
Check the derivation of the final sky and its standard deviation values on the mesh grid.
With this option, NoiseChisel will abort as soon as the sky value is estimated over the image (on each tile).
This behavior can be disabled with @option{--continueaftercheck}.
By default the output will have the same pixel size as the input, but with the @option{--oneelempertile} option, only one pixel will be used for each tile (see @ref{Processing options}).
@end table
@node NoiseChisel output, , Detection options, Invoking astnoisechisel
@subsubsection NoiseChisel output
NoiseChisel's output is a multi-extension FITS file.
The main extension/dataset is a (binary) detection map.
It has the same size as the input but with only two possible values for all pixels: 0 (for pixels identified as noise) and 1 (for those identified as signal/detections).
The detection map is followed by a Sky and Sky standard deviation dataset (which are calculated from the binary image).
By default (when @option{--rawoutput} is not called), NoiseChisel will also subtract the Sky value from the input and save the sky-subtracted input as the first extension in the output with data.
The zero-th extension (that contains no data), contains NoiseChisel's configuration as FITS keywords, see @ref{Output FITS files}.
The name of the output file can be set by giving a value to @option{--output} (this is a common option between all programs and is therefore discussed in @ref{Input output options}).
If @option{--output} is not used, the input name will be suffixed with @file{_detected.fits} and used as output, see @ref{Automatic output}.
If any of the options starting with @option{--check*} are given, NoiseChisel will not complete and will abort as soon as the respective check images are created.
For more information on the different check images, see the description for the @option{--check*} options in @ref{Detection options} (this can be disabled with @option{--continueaftercheck}).
The last two extensions of the output are the Sky and its Standard deviation, see @ref{Sky value} for a complete explanation.
They are calculated on the tile grid that you defined for NoiseChisel.
By default these datasets will have the same size as the input, but with all the pixels in one tile given one value.
To be more space-efficient (keep only one pixel per tile), you can use the @option{--oneelempertile} option, see @ref{Tessellation}.
@cindex GNOME
To inspect any of NoiseChisel's output files, assuming you use SAO DS9, you can configure your Graphical User Interface (GUI) to open NoiseChisel's output as a multi-extension data cube.
This will allow you to flip through the different extensions and visually inspect the results.
This process has been described for the GNOME GUI (most common GUI in GNU/Linux operating systems) in @ref{Viewing FITS file contents with DS9 or TOPCAT}.
NoiseChisel's output configuration options are described in detail below.
@table @option
@item --continueaftercheck
Continue NoiseChisel after any of the options starting with @option{--check} (see @ref{Detection options}).
NoiseChisel involves many steps and as a result, there are many checks, allowing you to inspect the status of the processing.
The results of each step affect the next steps of processing.
Therefore, when you want to check the status of the processing at one step, the time spent to complete NoiseChisel is just wasted/distracting time.
To encourage easier experimentation with the option values, when you use any of the NoiseChisel options that start with @option{--check}, NoiseChisel will abort once its desired extensions have been written.
With the @option{--continueaftercheck} option, you can disable this behavior and ask NoiseChisel to continue with the rest of the processing, even after the requested check files are complete.
@item --ignoreblankintiles
Do not set the input's blank pixels to blank in the tiled outputs (for example, Sky and Sky standard deviation extensions of the output).
This is only applicable when the tiled output has the same size as the input, in other words, when @option{--oneelempertile} is not called.
By default, blank values in the input (commonly on the edges which are outside the survey/field area) will be set to blank in the tiled outputs also.
But in other scenarios this default behavior is not desired; for example, if you have masked something in the input, but still want the tiled output values under the masked region.
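For example, assuming a hypothetical input called @file{masked.fits} that contains blank (masked) pixels you want tiled values under:

@example
$ astnoisechisel masked.fits --ignoreblankintiles
@end example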
@item -l
@itemx --label
Run a connected-components algorithm on the finally detected pixels to identify which pixels are connected to which.
By default the main output is a binary dataset with only two values: 0 (for noise) and 1 (for signal/detections).
See @ref{NoiseChisel output} for more.
The purpose of NoiseChisel is to detect targets that are extended and diffuse, with outer parts that sink into the noise very gradually (galaxies and stars for example).
Since NoiseChisel digs down to extremely low surface brightness values, many such targets will commonly be detected together as a single large body of connected pixels.
To properly separate connected objects, sophisticated segmentation methods are commonly necessary on NoiseChisel's output.
Gnuastro has the dedicated @ref{Segment} program for this job.
Since input images are commonly large and can take a significant volume, the extra volume necessary to store the labels of the connected components in the detection map (which will be created with this @option{--label} option, in 32-bit signed integer type) can be a major waste of space.
Since the default output is just a binary dataset, an 8-bit unsigned dataset is enough.
The binary output will also encourage users to segment the result separately prior to doing higher-level analysis.
As an alternative to @option{--label}, if you have the binary detection image, you can use the @code{connected-components} operator in Gnuastro's Arithmetic program to identify regions that are connected with each other.
For example, with this command (assuming NoiseChisel's output is called @file{nc.fits}):
@example
$ astarithmetic nc.fits 2 connected-components -hDETECTIONS
@end example
@item --rawoutput
Do not include the Sky-subtracted input image as the first extension of the output.
By default, the Sky-subtracted input is put in the first extension of the output.
The next extensions are NoiseChisel's main outputs described above.
The extra Sky-subtracted input can be convenient in checking NoiseChisel's output and comparing the detection map with the input: visually see if everything you expected is detected (reasonable completeness) and that you do not have too many false detections (reasonable purity).
This visual inspection is simplified if you use SAO DS9 to view NoiseChisel's output as a multi-extension data-cube, see @ref{Viewing FITS file contents with DS9 or TOPCAT}.
When you are satisfied with your NoiseChisel configuration (therefore you do not need to check on every run), or you want to archive/transfer the outputs, or the datasets become large, or you are running NoiseChisel as part of a pipeline, this Sky-subtracted input image can be a significant burden (take up a large volume).
The fact that the input is noisy also makes it hard to compress efficiently.
In such cases, the @option{--rawoutput} option can be used to avoid the extra sky-subtracted input in the output.
It is always possible to easily produce the Sky-subtracted dataset from the input (assuming it is in extension @code{1} of @file{in.fits}) and the @code{SKY} extension of NoiseChisel's output (let's call it @file{nc.fits}) with a command like below (assuming NoiseChisel was not run with @option{--oneelempertile}, see @ref{Tessellation}):
@example
$ astarithmetic in.fits nc.fits - -h1 -hSKY
@end example
@end table
@cartouche
@noindent
@cindex Compression
@strong{Save space:} with the @option{--rawoutput} and @option{--oneelempertile} options, NoiseChisel's output will only be one binary detection map and two much smaller arrays with one value per tile.
Since none of these have noise they can be compressed very effectively (without any loss of data) with exceptionally high compression ratios.
This makes it easy to archive, or transfer, NoiseChisel's output even on huge datasets.
To compress it with the most efficient method (take up less volume), run the following command:
@cindex GNU Gzip
@example
$ gzip --best noisechisel_output.fits
@end example
@noindent
The resulting @file{.fits.gz} file can then be fed into any of Gnuastro's programs directly, or viewed in viewers like SAO DS9, without having to decompress it separately (they will just take a little longer, because they have to internally decompress it before starting).
See @ref{NoiseChisel optimization for storage} for an example on a real dataset.
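For example, the compressed file can be given directly to Gnuastro's Statistics program to summarize the Sky extension (a minimal sketch, assuming the @code{SKY} extension name of NoiseChisel's output):

@example
$ aststatistics noisechisel_output.fits.gz -hSKY
@end example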
@end cartouche
@node Segment, MakeCatalog, NoiseChisel, Data analysis
@section Segment
Once signal is separated from noise (for example, with @ref{NoiseChisel}), you have a binary dataset: each pixel is either signal (1) or noise (0).
Signal (for example, every galaxy in your image) has been ``detected'', but all detections have a label of 1.
Therefore while we know which pixels contain signal, we still cannot find out how many galaxies they contain or which detected pixels correspond to which galaxy.
At the lowest (most generic) level, detection is a kind of segmentation (segmenting the whole dataset into signal and noise, see @ref{NoiseChisel}).
Here, we will define segmentation only on signal: to separate sub-structure within the detections.
@cindex Connected component labeling
If the targets are clearly separated, or their detected regions are not touching, a simple connected components@footnote{@url{https://en.wikipedia.org/wiki/Connected-component_labeling}} algorithm (very basic segmentation) is enough to separate the regions that are touching/connected.
This is such a basic and simple form of segmentation that Gnuastro's Arithmetic program has an operator for it: see @code{connected-components} in @ref{Arithmetic operators}.
Assuming the binary dataset is called @file{binary.fits}, you can use it with a command like this:
@example
$ astarithmetic binary.fits 2 connected-components
@end example
@noindent
You can even do a very basic detection (a threshold, say at value
@code{100}) @emph{and} segmentation in Arithmetic with a single command
like below:
@example
$ astarithmetic in.fits 100 gt 2 connected-components
@end example
However, in most astronomical situations our targets are not nicely separated or have a sharp boundary/edge (for a threshold to suffice): they touch (for example, merging galaxies), or are simply in the same line-of-sight (which is much more common).
This causes their images to overlap.
In particular, when you do your detection with NoiseChisel, you will detect signal to very low surface brightness limits: deep into the faint wings of galaxies or bright stars (which can extend very far and irregularly from their center).
Therefore, it often happens that several galaxies are detected as one large detection.
Since they are touching, a simple connected components algorithm will not suffice.
It is therefore necessary to do a more sophisticated segmentation and break up the detected pixels (even those that are touching) into multiple target objects as accurately as possible.
Segment will use a detection map and its corresponding dataset to find sub-structure over the detected areas and use them for its segmentation.
Until Gnuastro version 0.6 (released in 2018), Segment was part of @ref{NoiseChisel}.
Therefore, similar to NoiseChisel, the best place to start reading about Segment and understanding what it does (with many illustrative figures) is Section 3.2 of Akhlaghi and Ichikawa @url{https://arxiv.org/abs/1505.01664,2015}, and continue with Akhlaghi @url{https://arxiv.org/abs/1909.11230,2019}.
@cindex river
@cindex Watershed algorithm
As a summary, Segment first finds true @emph{clump}s over the detections.
Clumps are associated with local maxima/minima@footnote{By default the maximum is used as the first clump pixel; to define clumps based on local minima, use the @option{--minima} option.} and extend over the neighboring pixels until they reach a local minimum/maximum (@emph{river}/@emph{watershed}).
By default, Segment will use the distribution of clump signal-to-noise ratios over the undetected regions as reference to find ``true'' clumps over the detections.
Using the undetected regions can be disabled by directly giving a signal-to-noise ratio to @option{--clumpsnthresh}.
The true clumps are then grown to a certain threshold over the detections.
Based on the strength of the connections (rivers/watersheds) between the grown clumps, they are considered parts of one @emph{object} or as separate @emph{object}s.
See Section 3.2 of Akhlaghi and Ichikawa @url{https://arxiv.org/abs/1505.01664,2015} for more.
Segment's main outputs are thus two labeled datasets: 1) clumps, and 2) objects.
See @ref{Segment output} for more.
To start learning about Segment, especially in relation to detection (@ref{NoiseChisel}) and measurement (@ref{MakeCatalog}), the recommended references are Akhlaghi and Ichikawa @url{https://arxiv.org/abs/1505.01664,2015}, Akhlaghi @url{https://arxiv.org/abs/1611.06387,2016} and Akhlaghi @url{https://arxiv.org/abs/1909.11230,2019}.
If you have used Segment within your research, please run it with @option{--cite} to list the papers you should cite and how to acknowledge its funding sources.
Those papers cannot be updated any more but the software will evolve.
For example, Segment became a separate program (from NoiseChisel) in 2018 (after those papers were published).
Therefore this book is the definitive reference.
@c To help in the transition from those papers to the software you are using, see @ref{Segment changes after publication}.
Finally, in @ref{Invoking astsegment}, we will discuss Segment's inputs, outputs and configuration options.
@menu
* Invoking astsegment:: Inputs, outputs and options to Segment
@end menu
@c @node Segment changes after publication, Invoking astsegment, Segment, Segment
@c @subsection Segment changes after publication
@c Segment's main algorithm and working strategy were initially defined and introduced in Section 3.2 of Akhlaghi and Ichikawa @url{https://arxiv.org/abs/1505.01664,2015} and @url{https://arxiv.org/abs/1909.11230,2019}.
@c It is strongly recommended to read those papers for a good understanding of what Segment does, how it relates to detection, and how each parameter influences the output.
@c They have many figures showing every step on multiple mock and real examples.
@c However, the papers cannot be updated anymore, but Segment has evolved (and will continue to do so): better algorithms or steps have been (and will be) found.
@c This book is thus the final and definitive guide to Segment.
@c The aim of this section is to make the transition from the paper to your installed version, as smooth as possible through the list below.
@c For a more detailed list of changes in previous Gnuastro releases/versions, please follow the @file{NEWS} file@footnote{The @file{NEWS} file is present in the released Gnuastro tarball, see @ref{Release tarball}.}.
@node Invoking astsegment, , Segment, Segment
@subsection Invoking Segment
Segment will identify substructure within the detected regions of an input image.
Segment's output labels can be directly used for measurements (for example, with @ref{MakeCatalog}).
The executable name is @file{astsegment} with the following general template
@example
$ astsegment [OPTION ...] InputImage.fits
@end example
@noindent
One line examples:
@example
## Segment NoiseChisel's detected regions.
$ astsegment default-noisechisel-output.fits
## Use a hand-input S/N value for keeping true clumps
## (avoid finding the S/N using the undetected regions).
$ astsegment nc-out.fits --clumpsnthresh=10
## Inspect all the segmentation steps after changing a parameter.
$ astsegment input.fits --snquant=0.9 --checksegmentation
## Use the fixed value of 0.01 for the input's Sky standard deviation
## (in the units of the input), and assume all the pixels are a
## detection (for example, a large structure extending over the whole
## image), and only keep clumps with S/N>10 as true clumps.
$ astsegment in.fits --std=0.01 --detection=all --clumpsnthresh=10
@end example
@cindex Gaussian
@noindent
If Segment is to do processing (for example, you do not want to get help, or see the values of each option), at least one input dataset is necessary along with detection and error information, either as separate datasets (per-pixel) or fixed values, see @ref{Segment input}.
Segment shares a large set of common operations with other Gnuastro programs, mainly regarding input/output, general processing steps, and general operating modes.
To provide a unified experience between all of Gnuastro's programs, these common operations have the same names and are defined in @ref{Common options}.
As in all Gnuastro programs, options can also be given to Segment in configuration files.
For a thorough description of Gnuastro's configuration file parsing, please see @ref{Configuration files}.
All of Segment's options with a short description are also always available on the command-line with the @option{--help} option, see @ref{Getting help}.
To inspect the option values without actually running Segment, append your command with @option{--printparams} (or @option{-P}).
To help in easy navigation between Segment's options, they are separately discussed in the three sub-sections below: @ref{Segment input} discusses how you can customize the inputs to Segment.
@ref{Segmentation options} is devoted to options specific to the high-level segmentation process.
Finally, in @ref{Segment output}, we will discuss options that affect Segment's output.
@menu
* Segment input:: Input files and options.
* Segmentation options:: Parameters of the segmentation process.
* Segment output:: Outputs of Segment
@end menu
@node Segment input, Segmentation options, Invoking astsegment, Invoking astsegment
@subsubsection Segment input
Besides the input dataset (for example, astronomical image), Segment also needs to know the Sky standard deviation and the regions of the dataset that it should segment.
The values dataset is assumed to be Sky subtracted by default.
If it is not, you can ask Segment to subtract the Sky internally by calling @option{--sky}.
For the rest of this discussion, we will assume it is already sky subtracted.
The Sky and its standard deviation can be a single value (to be used for the whole dataset) or a separate dataset (for a separate value per pixel).
If a dataset is used for the Sky and its standard deviation, they must either be the size of the input image, or have a single value per tile (generated with @option{--oneelempertile}, see @ref{Processing options} and @ref{Tessellation}).
The detected regions/pixels can be specified as a detection map (for example, see @ref{NoiseChisel output}).
If @option{--detection=all}, Segment will not read any detection map and assume the whole input is a single detection.
For example, when the dataset is fully covered by a large nearby galaxy/globular cluster.
When datasets are to be used for any of the inputs, Segment will by default (when @option{--std} or @option{--detection} are not called) assume they are multiple extensions of a single file; for example, NoiseChisel's default output (see @ref{NoiseChisel output}).
When the Sky-subtracted values are in one file, and the detection and Sky standard deviation are in another, you just need to use @option{--detection}: in the absence of @option{--std}, Segment will look for both the detection labels and Sky standard deviation in the file given to @option{--detection}.
Ultimately, if all three are in separate files, you need to call both @option{--detection} and @option{--std}.
The extensions of the three mandatory inputs can be specified with @option{--hdu}, @option{--dhdu}, and @option{--stdhdu}.
For a full discussion on what to give to these options, see the description of @option{--hdu} in @ref{Input output options}.
To see their default values (along with all the other options), run Segment with the @option{--printparams} (or @option{-P}) option.
Just recall that in the absence of @option{--detection} and @option{--std}, all three are assumed to be in the same file.
If you only want to see Segment's default values for HDUs on your system, run this command:
@example
$ astsegment -P | grep hdu
@end example
By default Segment will convolve the input with a kernel to improve the signal-to-noise ratio of true peaks.
If you already have the convolved input dataset, you can pass it directly to Segment for faster processing (using the @option{--convolved} and @option{--chdu} options).
Just do not forget that the convolved image must also be Sky-subtracted before calling Segment.
If a value/file is given to @option{--sky}, the convolved values will also be Sky subtracted internally.
Alternatively, if you prefer to give a kernel (with @option{--kernel} and @option{--khdu}), Segment can do the convolution internally.
To disable convolution, use @option{--kernel=none}.
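For example, a minimal sketch with hypothetical file names, re-using an already convolved (and Sky-subtracted) image that is in HDU 1 of @file{conv.fits}:

@example
$ astsegment nc.fits --convolved=conv.fits --chdu=1
@end example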
@table @option
@item --sky=STR/FLT
The Sky value(s) to subtract from the input.
This option can either be given a constant number or a file name containing a dataset (multiple values, per pixel or per tile).
By default, Segment will assume the input dataset is Sky subtracted, so this option is not mandatory.
If the value cannot be read as a number, it is assumed to be a file name.
When the value is a file, the extension can be specified with @option{--skyhdu}.
When it is not a single number, the given dataset must either have the same size as the output or the same size as the tessellation (so there is one pixel per tile, see @ref{Tessellation}).
When this option is given, its value(s) will be subtracted from the input and the (optional) convolved dataset (given to @option{--convolved}) prior to starting the segmentation process.
@item --skyhdu=STR/INT
The HDU/extension containing the Sky values.
This is mandatory when the value given to @option{--sky} is not a number.
Please see the description of @option{--hdu} in @ref{Input output options} for the different ways you can identify a special extension.
@item --std=STR/FLT
The Sky standard deviation value(s) corresponding to the input.
The value can either be a constant number or a file name containing a dataset (multiple values, per pixel or per tile).
The Sky standard deviation is mandatory for Segment to operate.
If the value cannot be read as a number, it is assumed to be a file name.
When the value is a file, the extension can be specified with @option{--stdhdu}.
When it is not a single number, the given dataset must either have the same size as the output or the same size as the tessellation (so there is one pixel per tile, see @ref{Tessellation}).
When this option is not called, Segment will assume the standard deviation is a dataset in an HDU/extension (@option{--stdhdu}) of one of the other input file(s).
If a file is given to @option{--detection}, it will assume that file contains the standard deviation dataset; otherwise, it will look into the main input file (the first argument, without any option).
@item --stdhdu=INT/STR
The HDU/extension containing the Sky standard deviation values, when the value given to @option{--std} is a file name.
Please see the description of @option{--hdu} in @ref{Input output options} for the different ways you can identify a special extension.
@item --variance
The input Sky standard deviation value/dataset is actually the Sky variance.
When this option is called, the square root of the value/dataset given to @option{--std} is used internally, not its raw value(s).
@item -d FITS
@itemx --detection=FITS
Detection map to use for segmentation.
If given a value of @option{all}, Segment will assume the whole dataset must be segmented, see below.
If a detection map is given, the extension can be specified with @option{--dhdu}.
If not given, Segment will assume the desired HDU/extension is in the main input argument (input file specified with no option).
The final segmentation (clumps or objects) will only be over the non-zero pixels of this detection map.
The dataset must have the same size as the input image.
Only datasets with an integer type are acceptable for the labeled image, see @ref{Numeric data types}.
If your detection map only has integer values, but it is stored in a floating point container, you can use Gnuastro's Arithmetic program (see @ref{Arithmetic}) to convert it to an integer container, like the example below:
@example
$ astarithmetic float.fits int32 --output=int.fits
@end example
It may happen that the whole input dataset is covered by signal, for example, when working on parts of the Andromeda galaxy, or nearby globular clusters (that cover the whole field of view).
In such cases, segmentation is necessary over the complete dataset, not just specific regions (detections).
By default Segment will first use the undetected regions as a reference to find the proper signal-to-noise ratio of ``true'' clumps (at a purity level specified with @option{--snquant}).
Therefore, in such scenarios you also need to manually give a ``true'' clump signal-to-noise ratio with the @option{--clumpsnthresh} option to disable looking into the undetected regions, see @ref{Segmentation options}.
In such cases, it is possible to make a detection map that only has the value @code{1} for all pixels (for example, using @ref{Arithmetic}, as in the sketch below), but for convenience, you can also use @option{--detection=all}.
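A minimal sketch of building such an all-detection map with Arithmetic (hypothetical file names; the @code{isblank} and @code{not} operators give a value of 1 to every non-blank pixel of the input):

@example
$ astarithmetic in.fits isblank not --output=all-det.fits
@end example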
@item --dhdu
The HDU/extension containing the detection map given to @option{--detection}.
Please see the description of @option{--hdu} in @ref{Input output options} for the different ways you can identify a special extension.
@item -k FITS
@itemx --kernel=FITS
The name of the file containing the kernel that will be used to convolve the input image.
The usage of this option is identical to NoiseChisel's @option{--kernel} option (@ref{NoiseChisel input}).
Please see the descriptions there for more.
To disable convolution, you can give it a value of @option{none}.
@item --khdu
The HDU/extension containing the kernel used for convolution.
For acceptable values, please see the description of @option{--hdu} in @ref{Input output options}.
@item --convolved=FITS
The convolved image's file name to avoid internal convolution by Segment.
The usage of this option is identical to NoiseChisel's @option{--convolved} option.
Please see @ref{NoiseChisel input} for a thorough discussion of the usefulness and best practices of using this option.
If you want to use the same convolution kernel for detection (with @ref{NoiseChisel}) and segmentation, you can use this option to give Segment the same convolved image (which is also available in NoiseChisel) and thus avoid two convolutions.
However, be careful to use the input to NoiseChisel as the input to Segment also, then use @option{--sky} and @option{--std} to specify the Sky and its standard deviation (from NoiseChisel's output).
Recall that when NoiseChisel is not called with @option{--rawoutput}, the first extension of NoiseChisel's output is the @emph{Sky-subtracted} input (see @ref{NoiseChisel output}).
So if you use the same convolved image that you fed to NoiseChisel, but use NoiseChisel's output with Segment's @option{--convolved}, then the convolved image will not be Sky subtracted.
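For example, a hypothetical sketch of such a run, where @file{in.fits} was NoiseChisel's input, @file{conv.fits} (HDU 1) is the convolved image that was fed to NoiseChisel, and @file{nc.fits} is NoiseChisel's output (assuming its @code{DETECTIONS}, @code{SKY} and @code{SKY_STD} extension names):

@example
$ astsegment in.fits --convolved=conv.fits --chdu=1 \
             --detection=nc.fits --dhdu=DETECTIONS \
             --sky=nc.fits --skyhdu=SKY \
             --std=nc.fits --stdhdu=SKY_STD
@end example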
@item --chdu
The HDU/extension containing the convolved image (given to @option{--convolved}).
For acceptable values, please see the description of @option{--hdu} in @ref{Input output options}.
@item -L INT[,INT]
@itemx --largetilesize=INT[,INT]
The size of the large tiles to use for identifying the clump S/N threshold over the undetected regions.
The usage of this option is identical to NoiseChisel's @option{--largetilesize} option (@ref{NoiseChisel input}).
Please see the descriptions there for more.
The undetected regions can be a significant fraction of the dataset and finding clumps requires sorting of the desired regions, which can be slow.
To speed up the processing, Segment finds clumps in the undetected regions over separate large tiles.
This allows it to sort a much smaller set of pixels and to treat the tiles independently and in parallel.
Both of these greatly speed up the processing.
Just be sure to not decrease the large tile sizes too much (less than 100 pixels in each dimension).
It is important for them to be much larger than the clumps.
@end table
@node Segmentation options, Segment output, Segment input, Invoking astsegment
@subsubsection Segmentation options
The options below can be used to configure every step of the segmentation process in the Segment program.
For a more complete explanation (with figures to demonstrate each step), please see Section 3.2 of Akhlaghi and Ichikawa @url{https://arxiv.org/abs/1505.01664,2015}, and also @ref{Segment}.
By default, Segment will follow the procedure described in the paper to find the S/N threshold based on the noise properties.
This can be disabled by directly giving a trustable signal-to-noise ratio to the @option{--clumpsnthresh} option.
Recall that you can always see the full list of Gnuastro's options with the @option{--help} (see @ref{Getting help}), or @option{--printparams} (or @option{-P}) to see their values (see @ref{Operating mode options}).
@table @option
@item -B FLT
@itemx --minskyfrac=FLT
Minimum fraction (value between 0 and 1) of Sky (undetected) areas in a large tile.
Only (large) tiles with a fraction of undetected pixels (Sky) greater than this value will be used for finding clumps.
The clumps found in the undetected areas will be used to estimate a S/N threshold for true clumps.
Therefore, in crowded fields it can be important to decrease the value of this option.
Operationally, this is almost identical to NoiseChisel's @option{--minskyfrac} option (@ref{Detection options}).
Please see the descriptions there for more.
@item --minima
Build the clumps based on the local minima, not maxima.
By default, clumps are built starting from local maxima (see Figure 8 of Akhlaghi and Ichikawa @url{https://arxiv.org/abs/1505.01664,2015}).
Therefore, this option can be useful when you are searching for true local minima (for example, absorption features).
@item -m INT
@itemx --snminarea=INT
The minimum area which a clump in the undetected regions should have in order to be considered in the clump Signal to noise ratio measurement.
If this size is set to a small value, the Signal to noise ratio of false clumps will not be accurately found.
It is recommended that this value be larger than the value given to NoiseChisel's @option{--snminarea}, because the clumps are found on the convolved (smoothed) image while the pseudo-detections are found on the input image.
You can use @option{--checksn} and @option{--checksegmentation} to see if your chosen value is reasonable or not.
@item --checksn
Save the S/N values of the clumps over the sky and detected regions into separate tables.
If @option{--tableformat} is a FITS format, each table will be written into a separate extension of one file suffixed with @file{_clumpsn.fits}.
If it is plain text, a separate file will be made for each table (ending in @file{_clumpsn_sky.txt} and @file{_clumpsn_det.txt}).
For more on @option{--tableformat} see @ref{Input output options}.
You can use these tables to inspect the S/N values and their distribution (in combination with the @option{--checksegmentation} option to see where the clumps are).
You can use Gnuastro's @ref{Statistics} to make a histogram of the distribution (ready for plotting in a text file, or a crude ASCII-art demonstration on the command-line).
With this option, Segment will abort as soon as the two tables are created.
This allows you to inspect the steps leading to the final S/N quantile threshold; this behavior can be disabled with @option{--continueaftercheck}.
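For example, a hypothetical check run over NoiseChisel's output (combining the two check options mentioned above):

@example
$ astsegment nc.fits --checksn --checksegmentation
@end example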
@item --minnumfalse=INT
The minimum number of clumps over undetected (Sky) regions to identify the requested Signal-to-Noise ratio threshold.
Operationally, this is almost identical to NoiseChisel's @option{--minnumfalse} option (@ref{Detection options}).
Please see the descriptions there for more.
@item -c FLT
@itemx --snquant=FLT
The quantile of the signal-to-noise ratio distribution of clumps in undetected regions, used to define true clumps.
After identifying all the usable clumps in the undetected regions of the dataset, the given quantile of their signal-to-noise ratios is used to define the signal-to-noise ratio of a ``true'' clump.
Effectively, this can be seen as an inverse p-value measure.
See Figure 9 and Section 3.2.1 of Akhlaghi and Ichikawa @url{https://arxiv.org/abs/1505.01664,2015} for a complete explanation.
The full distribution of clump signal-to-noise ratios over the undetected areas can be saved into a table with @option{--checksn} option and visually inspected with @option{--checksegmentation}.
@item -v
@itemx --keepmaxnearriver
Keep a clump whose maximum (minimum if @option{--minima} is called) flux is 8-connected to a river pixel.
By default such clumps over detections are considered to be noise and are removed irrespective of their significance measure; see Akhlaghi @url{https://arxiv.org/abs/1909.11230,2019}.
Over large profiles that sink into the noise very slowly, noise can cause part of the profile (which was flat without noise) to become a very large clump with a very high signal-to-noise ratio.
In such cases, the pixel with the maximum flux in the clump will be immediately touching a river pixel.
@item -s FLT
@itemx --clumpsnthresh=FLT
The signal-to-noise threshold for true clumps.
If this option is given, then the segmentation options above will be ignored and the given value will be directly used to identify true clumps over the detections.
This can be useful if you have a large dataset with similar noise properties.
You can find a robust signal-to-noise ratio based on a (sufficiently large) smaller portion of the dataset.
Afterwards, with this option, you can speed up the processing on the whole dataset.
Another scenario where this option may be useful is when the image does not contain enough (or any) Sky regions.
@item -G FLT
@itemx --gthresh=FLT
Threshold (multiple of the sky standard deviation added with the sky) to stop growing true clumps.
Once true clumps are found, they are set as the basis to segment the detected region.
They are grown until the threshold specified by this option.
@item -y INT
@itemx --minriverlength=INT
The minimum length of a river between two grown clumps for it to be considered in signal-to-noise ratio estimations.
Similar to @option{--snminarea}, if the length of the river is too short, the signal-to-noise ratio can be noisy and unreliable.
Any existing rivers shorter than this length will be considered as non-existent, independent of their Signal to noise ratio.
The clumps are grown on the input image, therefore this value can be smaller than the value given to @option{--snminarea}.
Recall that the clumps were defined on the convolved image so @option{--snminarea} should be larger.
@item -O FLT
@itemx --objbordersn=FLT
The maximum Signal to noise ratio of the rivers between two grown clumps in order to consider them as separate `objects'.
If the Signal to noise ratio of the river between two grown clumps is larger than this value, they are defined to be part of one `object'.
Note that the physical reality of these `objects' can never be established with one image, or even multiple images from one broad-band filter.
Any method we devise to define `objects' over a detected region is ultimately subjective.
Two very distant galaxies or satellites in one halo might lie in the same line of sight and be detected as clumps on one detection.
On the other hand, the connection (through a spiral arm or tidal tail for example) between two parts of one galaxy might have such a low surface brightness that they are broken up into multiple detections or objects.
In fact, exactly for this purpose, this is the only signal-to-noise ratio that the user has to give to Segment.
The `true' detections and clumps can be objectively identified from the noise characteristics of the image, so you do not have to give any hand-input signal-to-noise ratio for them.
@item --checksegmentation
A file with the suffix @file{_seg.fits} will be created.
This file keeps all the relevant steps in finding true clumps and segmenting the detections into multiple objects in various extensions.
Having read the paper or the steps above, examining this file can be an excellent guide to choosing the best set of parameters.
Note that calling this option will significantly slow down Segment.
In verbose mode (without the @option{--quiet} option, see @ref{Operating mode options}) the important steps (along with their extension names) will also be reported.
With this option, Segment will abort as soon as the check file is created.
This behavior can be disabled with @option{--continueaftercheck}.
@end table
@node Segment output, , Segmentation options, Invoking astsegment
@subsubsection Segment output
The main outputs of Segment are two labeled datasets (with integer types, separating the dataset's elements into different classes).
They have HDU/extension names of @code{CLUMPS} and @code{OBJECTS}.
Similar to all of Gnuastro's FITS outputs, the zero-th extension/HDU of the main output file only contains header keywords (no image or table).
It contains the Segment input files and parameters (option names and values) as FITS keywords.
Note that if an option name is longer than 8 characters, the keyword name is the second word.
The first word is @code{HIERARCH}.
Also note that according to the FITS standard, the keyword names must be in capital letters, therefore, if you want to use Grep to inspect these keywords, use the @option{-i} option, like the example below.
@example
$ astfits image_segmented.fits -h0 | grep -i snquant
@end example
@cindex DS9
@cindex SAO DS9
By default, besides the @code{CLUMPS} and @code{OBJECTS} extensions, Segment's output will also contain the (technically redundant) sky-subtracted input dataset (@code{INPUT-NO-SKY}) and the sky standard deviation dataset (@code{SKY_STD}, if it was not a constant number).
This can help in visually inspecting the result when viewing the images as a ``Multi-extension data cube'' in SAO DS9 for example (see @ref{Viewing FITS file contents with DS9 or TOPCAT}).
You can simply flip through the extensions and see the same region of the image and its corresponding clumps/object labels.
It also makes it easy to feed the output (as one file) into MakeCatalog when you intend to make a catalog afterwards (see @ref{MakeCatalog}).
To remove these redundant extensions from the output (for example, when designing a pipeline), you can use @option{--rawoutput}.
The @code{OBJECTS} and @code{CLUMPS} extensions can be used as input into @ref{MakeCatalog} to generate a catalog for higher-level analysis.
If you want to treat each clump separately, you can give a very large value (or even a NaN, which will always fail) to the @option{--gthresh} option (for example, @code{--gthresh=1e10} or @code{--gthresh=nan}), see @ref{Segmentation options}.
For a complete definition of clumps and objects, please see Section 3.2 of Akhlaghi and Ichikawa @url{https://arxiv.org/abs/1505.01664,2015} and @ref{Segmentation options}.
The clumps are ``true'' local maxima (minima if @option{--minima} is called) and their surrounding pixels until a local minimum/maximum (caused by noise fluctuations, or another ``true'' clump).
Therefore it may happen that some of the input detections are not covered by clumps at all (very diffuse objects without any strong peak), while some objects may contain many clumps.
Even in those that have clumps, there will be regions that are too diffuse.
The diffuse regions (within the input detected regions) are given a negative label (-1) to help you separate them from the undetected regions (with a value of zero).
Each clump is labeled with respect to its host object.
Therefore, if an object has three clumps for example, the clumps within it have labels 1, 2 and 3.
As a result, if an initial detected region has multiple objects, each with a single clump, all the clumps will have a label of 1.
The total number of clumps in the dataset is stored in the @code{NCLUMPS} keyword of the @code{CLUMPS} extension and printed in the verbose output of Segment (when @option{--quiet} is not called).
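For example, you can inspect that keyword with Gnuastro's Fits program on a hypothetical output called @file{seg.fits}:

@example
$ astfits seg.fits -hCLUMPS | grep -i nclumps
@end example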
The @code{OBJECTS} extension of the output will give a positive counter/label to every detected pixel in the input.
As described in Akhlaghi and Ichikawa @url{https://arxiv.org/abs/1505.01664,2015}, the true clumps are grown until a certain threshold.
If the grown clumps touch other clumps and the connection is strong enough, they are considered part of the same @emph{object}.
Once objects (grown clumps) are identified, they are grown to cover the whole detected area.
The options to configure the output of Segment are listed below:
@table @option
@item --continueaftercheck
Do not abort Segment after producing the check image(s).
The usage of this option is identical to NoiseChisel's @option{--continueaftercheck} option (@ref{NoiseChisel input}).
Please see the descriptions there for more.
@item --noobjects
Abort Segment after finding true clumps and do not continue with finding objects.
Therefore, no @code{OBJECTS} extension will be present in the output.
Each true clump in @code{CLUMPS} will get a unique label, but diffuse regions will still have a negative value.
To make a catalog of the clumps, this output can be fed into @ref{MakeCatalog}, along with the input detection map to Segment (that only had a value of @code{1} for all detected pixels), using MakeCatalog's @option{--clumpscat} option.
In this way, MakeCatalog will assume all the clumps belong to a single ``object''.
@item --grownclumps
In the output @code{CLUMPS} extension, store the grown clumps.
If a detected region contains no clumps or only one clump, then it will be fully given a label of @code{1} (no negative valued pixels).
@item --rawoutput
Only write the @code{CLUMPS} and @code{OBJECTS} datasets in the output file.
Without this option (by default), the first and last extensions of the output will be the Sky-subtracted input dataset and the Sky standard deviation dataset (if it was not a number).
When the datasets are small, these redundant extensions can make it convenient to inspect the results visually or feed the output to @ref{MakeCatalog} for measurements.
Ultimately both the input and Sky standard deviation datasets are redundant (you had them before running Segment).
When the inputs are large/numerous, these extra datasets can be a burden.
@end table
@cartouche
@noindent
@cindex Compression
@strong{Save space:} with the @option{--rawoutput} option, Segment's output will only be two labeled datasets (only containing integers).
Since they have no noise, such datasets can be compressed very effectively (without any loss of data) with exceptionally high compression ratios.
You can use the following command to compress it with the best ratio:
@cindex GNU Gzip
@example
$ gzip --best segment_output.fits
@end example
@noindent
The resulting @file{.fits.gz} file can then be fed into any of Gnuastro's programs directly, without having to decompress it separately (it will just take them a little longer, because they have to decompress it internally before use).
@end cartouche
When the input is a 2D image, to inspect Segment's output you can configure SAO DS9 in your Graphical User Interface (GUI) to open it as a multi-extension data cube.
This will allow you to flip through the different extensions and visually inspect the results.
This process has been described for the GNOME GUI (most common GUI in GNU/Linux operating systems) in @ref{Viewing FITS file contents with DS9 or TOPCAT}.
@node MakeCatalog, Match, Segment, Data analysis
@section MakeCatalog
At the lowest level, a dataset (for example, an image) is just a collection of values, placed after each other in any number of dimensions (for example, an image is a 2D dataset).
Each data-element (pixel) just has two properties: its position (relative to the rest) and its value.
In higher-level analysis, an entire dataset (an image for example) is rarely treated as a singular entity@footnote{You can derive the over-all properties of a complete dataset (1D table column, 2D image, or 3D data-cube) treated as a single entity with Gnuastro's Statistics program (see @ref{Statistics}).}.
You usually want to know/measure the properties of the (separate) scientifically interesting targets that are embedded in it.
For example, the magnitudes, positions and elliptical properties of the galaxies that are in the image.
MakeCatalog is Gnuastro's program for localized measurements over a dataset.
In other words, MakeCatalog is Gnuastro's program to convert low-level datasets (like images), to high level catalogs.
The role of MakeCatalog in a scientific analysis and the benefits of its model (where detection/segmentation is separated from measurement) is discussed in Akhlaghi @url{https://arxiv.org/abs/1611.06387v1,2016}@footnote{A published paper cannot undergo any more change, so this manual is the definitive guide.} and summarized in @ref{Detection and catalog production}.
We strongly recommend reading this short paper for a better understanding of this methodology.
Understanding the effective usage of MakeCatalog, will thus also help effective use of other (lower-level) Gnuastro's programs like @ref{NoiseChisel} or @ref{Segment}.
Following the Unix philosophy@footnote{Make each program do one thing well. To do a new job, build afresh rather than complicate old programs by adding new "features".}, MakeCatalog is specialized in doing measurements accurately and efficiently, not to do detection, segmentation, or defining apertures on requested positions in your dataset.
It is therefore important to define the pixels (of each ``object'' in the image for example) @emph{before} running MakeCatalog.
There are separate, highly specialized and customizable programs in Gnuastro for these other jobs, as listed below; their outputs can be fed into MakeCatalog (for a usage example in a real-world analysis, see @ref{General program usage tutorial} and @ref{Detecting large extended targets}).
@itemize
@item
@ref{Arithmetic}: Detection with a simple threshold.
@item
@ref{NoiseChisel}: Advanced detection.
@item
@ref{Segment}: Segmentation (substructure over detections).
@item
@ref{MakeProfiles}: Aperture creation for known positions.
@end itemize
To identify which pixels should be measured together, a ``label'' dataset is necessary (which is the main input to MakeCatalog).
The number of labels in the labels dataset will be the number of rows in the output.
Having identified which pixels to use for each row of the output catalog, the ``values'' dataset will be used to read the value of each pixel to use in the measurement.
The labeled dataset should have the same size/dimensions as the values image, but with integer valued pixels.
All the pixels in each sub-set that must be measured together are given the same label/counter.
For example, all the pixels covering one galaxy in an image, get the same label.
The final result is a catalog where each row corresponds to the measurements on pixels with a specific label.
For example, the flux weighted average position of all the pixels with a label of 42 will be written into the 42nd row of the output catalog/table's central position column@footnote{See @ref{Morphology measurements elliptical} for a discussion on this and the derivation of positional parameters, which includes the center.}.
Similarly, the sum of all these pixels will be the 42nd row in the sum column, etc.
Pixels with labels equal to, or smaller than, zero will be ignored by MakeCatalog.
In other words, the number of rows in MakeCatalog's output is already known before running it (the maximum value of the labeled dataset).
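For example, a hypothetical preview (the options are fully described in @ref{Invoking astmkcatalog}), measuring the positions and magnitudes of all the labels in @file{seg.fits} with an assumed zero point of 22.5:

@example
$ astmkcatalog seg.fits --ids --ra --dec --magnitude \
               --zeropoint=22.5
@end example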
Before getting into the details of running MakeCatalog (in @ref{Invoking astmkcatalog}, the last sub-section), we will start with some basics that are good to review if you are new to optical/astronomical data analysis.
The basics of MakeCatalog's approach of separating detection/segmentation from measurements is first discussed in @ref{Detection and catalog production}.
We then introduce the core units/concepts of brightness measurements in optical astronomy (which inherits a lot from its multi-millennial history) in @ref{Brightness flux magnitude}.
Having reviewed the core concepts, @ref{MakeCatalog measurements on each label} provides a categorized list of all measurements that are currently available in MakeCatalog.
A very important factor in any measurement is understanding its validity range, or limits.
Therefore in @ref{Metameasurements on full input}, we will discuss how to estimate the reliability of the detection and basic measurements.
This section will continue with a derivation of elliptical parameters from the labeled datasets in @ref{Morphology measurements elliptical}.
For those who feel MakeCatalog's existing measurements/columns are not enough and would like to add further measurements, in @ref{Adding new columns to MakeCatalog}, a checklist of steps is provided for readily adding your own new measurements/columns.
@menu
* Detection and catalog production:: Discussing why/how to treat these separately.
* Brightness flux magnitude:: More on Magnitudes, surface brightness, etc.
* Standard deviation vs Standard error:: To avoid confusions with error measurements.
* MakeCatalog measurements on each label:: The columns that you can request in output catalog.
* Metameasurements on full input:: Measurements on/about measurements.
* Manual metameasurements:: These need custom runs of MakeCatalog and/or other programs.
* Adding new columns to MakeCatalog:: How to add new columns.
* Invoking astmkcatalog:: Options and arguments to MakeCatalog.
@end menu
@node Detection and catalog production, Brightness flux magnitude, MakeCatalog, MakeCatalog
@subsection Detection and catalog production
Most existing common tools in low-level astronomical data-analysis (for example, SExtractor@footnote{@url{https://www.astromatic.net/software/sextractor}}) merge the two processes of detection and measurement (catalog production) in one program.
However, in light of Gnuastro's modularized approach (modeled on the Unix system) detection is separated from measurements and catalog production.
This modularity is therefore new to many experienced astronomers and deserves a short review here.
Further discussion on the benefits of this methodology can be seen in Akhlaghi @url{https://arxiv.org/abs/1611.06387v1,2016}.
As discussed in the introduction of @ref{MakeCatalog}, detection (identifying which pixels to do measurements on) can be done with different programs.
Their outputs (a labeled dataset) can be directly fed into MakeCatalog to do the measurements and write the result as a catalog/table.
Beyond that, Gnuastro's modular approach has many benefits that will become clear as you get more experienced in astronomical data analysis and want to be more creative in using your valuable data for the exciting scientific project you are working on.
In short the reasons for this modularity can be classified as below:
@itemize
@item
Simplicity/robustness of independent, modular tools: making a catalog is a logically separate process from labeling (detection, segmentation, or aperture production).
A user might want to do certain operations on the labeled regions before creating a catalog for them.
Another user might want the properties of the same pixels/objects in another image (another filter for example) to measure the colors or SED fittings.
Here is an example of doing both: suppose you have images in various broad band filters at various resolutions and orientations.
The image of one color will thus not lie exactly on another or even be in the same scale.
However, it is imperative that the same pixels be used in measuring the colors of galaxies.
To solve the problem, NoiseChisel can be run on the reference image to generate the labeled detection image.
Afterwards, the labeled image can be warped into the grid of the other color (using @ref{Warp}).
MakeCatalog will then generate the same catalog for both colors (with the different labeled images).
It is currently customary to warp the images to the same pixel grid; however, modifying the scientific dataset is harmful (it creates correlated noise).
It is much more accurate to do the transformations on the labeled image.
@item
Complexity of a monolith: Adding catalog functionality to the detector program will add several more steps (and many more options) to its processing that can equally well be done outside of it.
This makes it harder for users and developers to follow what the program does, and can also potentially introduce many bugs.
As an example, if the parameter you want to measure over one profile is not provided by the developers of MakeCatalog, you can simply open this small program's source and easily add your desired calculation.
This process is discussed in @ref{Adding new columns to MakeCatalog}.
However, if making a catalog was part of NoiseChisel for example, adding a new column/measurement would require a lot of energy to understand all the steps and internal structures of that huge program.
It might even be so intertwined with its processing, that adding new columns might cause problems/bugs in its primary job (detection).
@end itemize
@node Brightness flux magnitude, Standard deviation vs Standard error, Detection and catalog production, MakeCatalog
@subsection Brightness, Flux, Magnitude and Surface brightness
@cindex cgs units (centimeter–gram–second)
@cindex centimeter–gram–second (cgs) units
After taking an image with your camera (or a large telescope), the value in each pixel of the output file is a proxy for the amount of @emph{energy} that accumulated in it (while it was exposed to light).
In astrophysics, the centimeter–gram–second (cgs) units are commonly used so energy is commonly reported in units of @mymath{erg}.
In an image, the energy of a galaxy for example will be distributed over many pixels.
Therefore, the collected energy of an object in the image is the total sum of the values in the pixels associated to the object.
To be able to compare our scientific data with other data, optical astronomers have a unique terminology based on the concept of ``magnitudes''.
But before getting to those, let's review the following basic physical concepts first:
@table @asis
@item Energy (@mymath{erg}; also in @emph{counts} or @emph{ADU}s)
Within the electromagnetic regime, we measure the received energy of an astronomical source by counting the number of photons that have been converted to electrons (electric potential) in our detectors.
@cindex Gain
@cindex Counts
@cindex ADU (Analog-to-digital unit)
@cindex Analog-to-digital unit (ADU)
When counting/measuring the electric potential changes, the optical (but also near ultra-violet and near infra-red) detectors do not actually count individual electrons but bundles/packages of electrons known as the analog-to-digital unit (ADU).
The number of electrons in each ADU is known as the @emph{gain} of the instrument and is measured as part of its calibration.
@item Power (@mymath{erg/s})
The amount of energy in a fixed interval of time (1 second) is known as power.
Power is used in two contexts within astronomy which are listed below.
Both have the same units of energy per time, but their difference is very important to understand in physical interpretation:
@table @asis
@item Brightness
@cindex Brightness
The @emph{received} power from a source (the thing we measure).
To be able to compare data taken with different exposure times, we define the received power of the source in a detector as the @emph{brightness}.
@cindex Luminosity
@item Luminosity
The total @emph{emitted} power of a source in @emph{all directions}.
@end table
Unlike brightness (a measured property), luminosity is an inherent property of the object that is calculated from the combination of multiple measurements (flux and distance; see below).
@item Flux (@mymath{erg/s/cm^2})
@cindex Flux
To be able to compare with data from different telescopes (with different collecting areas), we define the @emph{flux} by dividing the brightness by the exposed aperture (area) of our telescope.
Because we are using the cgs units, the collecting area is reported in @mymath{cm^2}.
Knowing the flux (@mymath{f}) and distance to the object (@mymath{r}), we can derive its @emph{luminosity}: @mymath{L=4{\pi}r^2f}.
@item Spectral flux density (@mymath{erg/s/cm^2/Hz} or @mymath{erg/s/cm^2/\AA})
@cindex Spectral Flux Density
@cindex Frequency Flux Density
@cindex Wavelength Flux Density
@cindex Flux density (spectral)
To take into account the different spectral coverage of filters and detectors, we define the @emph{spectral flux density}, in either of these units (based on context): @mymath{erg/s/cm^2/Hz} (frequency-based) or @mymath{erg/s/cm^2/\AA} (wavelength-based).
@cindex Bolometric luminosity
As with other objects in nature, astronomical objects do not emit or reflect the same flux at all wavelengths.
On the other hand, our detector technologies are different for different wavelength ranges.
Therefore, even if we wanted to, there is no way to measure the ``total'' (at all wavelengths; also known as ``bolometric'') luminosity of an object with a single tool.
To be able to analyze objects with different spectral features (compare measurements of the same object taken in different spectral regimes), it is therefore important to account for the wavelength (or frequency) range of the photons that we measured through the spectral flux density.
@table @asis
@item Jansky (@mymath{10^{-23}erg/s/cm^2/Hz})
@cindex Jansky (Jy)
A ``Jansky'' is a predefined/nominal level of frequency flux density that is commonly used in radio astronomy.
The AB magnitude system (see below; used in optical astronomy) is also frequency-based, so there is a simple conversion between the two.
Janskys can be converted to wavelength flux density using the @code{jy-to-wavelength-flux-density} operator of Gnuastro's Arithmetic program, see the derivation under this operator's description in @ref{Unit conversion operators}.
@end table
@end table
Having summarized the relevant basic physical concepts above, let's review the terminology that is used in optical astronomy.
The reason optical astronomers don't use modern physical terminology is that optical astronomy precedes modern physical concepts by thousands of years!
Once the modern physical concepts were mature enough, optical astronomers found the correct conversion factors to better define their own terminology (and easily use previous results) instead of abandoning them.
Other fields of astronomy (for example X-ray or radio) were discovered in the last century when modern physical concepts had already matured and were being extensively used, so for those fields, the concepts above are enough.
@table @asis
@item Magnitude
@cindex Magnitudes from flux
@cindex Flux to magnitude conversion
@cindex Astronomical Magnitude system
The observed spectral flux density of astronomical objects spans a very large range: the Sun (as the brightest object) is roughly @mymath{10^{24}} times brighter than the faintest galaxies we can currently detect in our deepest images.
Therefore the scale that has been used since ancient times to measure the incoming light (by Hipparchus of Nicaea; 190-120 BC) can be parameterized as a logarithmic function of the spectral flux density.
@cindex Hipparchus of Nicaea
But the logarithm is only usable with a value that is always positive and has no units.
Fortunately, brightness is always positive.
To remove the units, we divide the spectral flux density of the object (@mymath{F}) by a reference spectral flux density (@mymath{F_r}).
We then define a logarithmic scale through the relation below and call it the @emph{magnitude}.
The @mymath{-2.5} factor is also a legacy of our ancient origins: it was necessary to approximately match the magnitude system used by Hipparchus.
@dispmath{m-m_r=-2.5\log_{10} \left( F \over F_r \right)}
@noindent
In the equation above, @mymath{m} is the magnitude of the object and @mymath{m_r} is the pre-defined magnitude of the reference spectral flux density.
For estimating the error in measuring a magnitude, see @ref{Metameasurements on full input}.
The equation above is ultimately a relative relation.
To tie it to physical units, astronomers use the concept of a zero point which is discussed in the next item.
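For example, evaluating the relation above on the command-line with a hypothetical flux ratio of 100 (note that awk's @code{log} is the natural logarithm, so we divide by @code{log(10)}), a source that is 100 times brighter than the reference has a magnitude that is 5 @emph{smaller}:

@example
$ echo 100 | awk '@{print -2.5*log($1)/log(10)@}'
-5
@end example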
See the @option{mag-to-luminosity} operator of Arithmetic in @ref{Unit conversion operators} for more on the conversion of the observed magnitudes (described below) of an object to luminosity.
The received brightness of two objects with the same luminosity but at different distances will be different (the closer one will be brighter).
Therefore, astronomers have defined the following terminology to help avoid confusing distance-dependent and distance-independent magnitudes.
@table @asis
@item Apparent magnitude
The apparent magnitude is directly related (through the equation above) to the received spectral flux density (that we measure in our detectors).
Therefore, it depends on the distance to the object (and any absorption that may occur in the light path).
@item Absolute magnitude
Knowing the distance of an object and absorptions in the light path, we can obtain the luminosity.
From the luminosity, we can compute the apparent magnitude the object would have if it were at a nominal (or fixed, or standard, or absolute) distance.
The magnitude at this absolute distance is known as the absolute magnitude.
The standard (or abstract: just to help in comparisons) distance is historically defined as 10 parsecs.
By reporting the magnitude at a fixed distance for all objects, the absolute magnitude therefore helps to compare the intrinsic (independent of distance) magnitude of astronomical objects.
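Ignoring absorption, the apparent magnitude (@mymath{m}), the absolute magnitude (@mymath{M}) and the distance to the object (@mymath{d}, in parsecs) are therefore related through the well-known distance modulus:

@dispmath{m - M = 5\log_{10}\left( {d \over 10} \right)}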
@end table
@item Zero point
@cindex Zero point magnitude
@cindex Magnitude zero point
A unique situation in the magnitude equation above occurs when the reference spectral flux density is unity (@mymath{F_r=1}).
In other words, when @mymath{F_r} is the increase in spectral flux density that produces a single increment in the detector's native measurement units (analog-to-digital units, or ADUs).
The word ``increment'' is used intentionally: ADUs are discrete and measured as integers.
In other words, an increase in spectral flux density that is below @mymath{F_r} will not be measured by the device.
The reference magnitude (@mymath{m_r}) that corresponds to @mymath{F_r} is known as the @emph{Zero point} magnitude of that detector + filter + atmosphere (for on-ground observations).
@cindex ISM (inter-stellar medium)
@cindex Inter-stellar medium (ISM)
Therefore, the increase in spectral flux density (from an astrophysical source) that produces an increment in ADUs depends on all hardware and observational parameters that the image was taken in.
These include the quantum efficiency of the detector, the detector's coating, the filter transmission curve, the transmission of the optical path and the atmospheric absorption (for ground-based images; for example observations at different altitudes from the horizon where the thickness of the atmosphere is different).
The remaining absorptions (for example due to the interstellar medium, or ISM) are not considered in the zero point definition because for most purposes they are not related to our observing conditions, but to the position on the sky.
In other words, while ISM absorption should be taken into account when measuring the luminosity of the source for example, ISM absorption is not in the zero point.
If we can later observe the universe from outside the Milky Way, the ISM absorption should also be included (it would become like the atmosphere).
But the farthest we have gone so far for scientific observations beyond the Solar System is the second Sun-Earth Lagrange point (L2; for instruments like Euclid, Gaia or JWST).
The zero point therefore allows us to summarize all these ``observational'' (non-astrophysical) factors into a single number and compare different observations from different instruments at different observing conditions (which are critical to do science).
Defining the zero point magnitude as @mymath{m_r=Z} in the magnitude equation, we can write it in simpler format (recall that @mymath{F_r=1}):
@dispmath{m = -2.5\log_{10}(F) + Z}
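For example, with a hypothetical zero point of 22.5, a label with a total sum of 1000 counts has a magnitude of 15; you can check this on the command-line (awk's @code{log} is the natural logarithm):

@example
$ echo 1000 | awk '@{print -2.5*log($1)/log(10) + 22.5@}'
15
@end example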
The zero point is found through comparison of measurements with pre-defined standards (in other words, it is a calibration of the pixel values).
Gnuastro has an installed script with a complete tutorial to estimate the zero point of any image, see @ref{Zero point estimation}.
@cindex AB magnitude
@cindex Magnitude, AB
Historically, the reference was defined to be measurements of the star Vega, producing the @emph{vega magnitude} system.
In this system, the star Vega had a magnitude of zero (similar to the catalog of Hipparchus of Nicaea).
However, this caused many problems because Vega itself has its unique spectral features which are not in other stars and it is a variable star when measured precisely.
Therefore, based on previous efforts, in 1983 Oke & Gunn @url{https://ui.adsabs.harvard.edu/abs/1983ApJ...266..713O,proposed} the AB (absolute) magnitude system from accurate spectroscopy of Vega.
To avoid confusion with the ``absolute magnitude'' of a source (at a fixed distance), this magnitude system is always written as AB magnitude.
The AB magnitude zero point (when the input is frequency flux density; @mymath{F_\nu} with units of @mymath{erg/s/cm^2/Hz}) was defined such that a star with a flat spectrum around @mymath{5480\AA} has a similar magnitude in the AB and Vega-based systems:
@dispmath{m_{AB} = -2.5\log_{10}(F_\nu) + 48.60}
Reversing this equation and using Janskys, an object with a magnitude of zero (@mymath{m_{AB}=0}) has a spectral flux density of @mymath{3631Jy}.
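You can verify this number yourself: since one Jansky is @mymath{10^{-23}erg/s/cm^2/Hz}, setting @mymath{m_{AB}=0} in the equation above gives @mymath{10^{(23\times2.5-48.60)/2.5}} Janskys:

@example
$ awk 'BEGIN@{print 10^((23*2.5-48.60)/2.5)@}'
3630.78
@end example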
Once the AB magnitude zero point of an image is found, you can directly convert any measurement on it from instrument ADUs to Janskys.
In Gnuastro, the Arithmetic program has an operator called @code{counts-to-jy} which will do this through a given AB magnitude-based zero point like below (SDSS data have a fixed zero point of 22.5 in the AB magnitude system):
@example
$ astarithmetic sdss.fits 22.5 counts-to-jy
@end example
@cartouche
@noindent
@strong{Verify the zero point definition on new databases:} observational factors like the exposure time, the gain, telescope aperture, filter transmission curve and other factors are usually taken into account in the reduction pipeline that produces high-level science products.
But some reduction pipelines may not account for all of these for particular reasons (for example, the gain or the exposure time).
To avoid annoyingly strange results when using a new database, verify (in the documentation of the database) whether the zero points they provide directly convert pixel values to Janskys (in other words, whether they are AB magnitude zero points).
If they do not, you need to apply the corrections yourself.
@end cartouche
Let's look at one example where the given zero point does not account for the exposure time (in other words, it is only valid for a fixed exposure time: @mymath{Z_E}), but the pixel values (@mymath{p}) have been divided by the exposure time.
One solution would be to first multiply the pixels by the exposure time, use that zero point to get your desired measurement, and delete the temporary file.
But a more efficient way (in terms of storage, execution time and clean code) is to correct the zero point itself.
Let's take @mymath{t} to represent the exposure time in seconds and @mymath{p_E} to be the pixel value that would be measured with the fixed exposure time (in other words, @mymath{p_E=p\times t}).
We then have the following:
@dispmath{m = -2.5\log_{10}(p_E) + Z_E = -2.5\log_{10}(p\times t) + Z_E}
From the properties of logarithms, we can then derive the correct zero point (@mymath{Z}) to use directly (without touching the original pixels):
@dispmath{m = -2.5\log_{10}(p) + Z \quad\rm{where}\quad Z = Z_E - 2.5\log_{10}(t)}
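For example, with purely hypothetical numbers (a zero point of @mymath{Z_E=25.0} that is defined for 600 second exposures, while the pixels are in counts per second), the corrected zero point to use directly would be:

@example
$ awk 'BEGIN@{print 25.0 - 2.5*log(600)/log(10)@}'
18.0546
@end example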
@item Surface brightness
@cindex Steradian
@cindex Angular coverage
@cindex Celestial sphere
@cindex Surface brightness
@cindex SI (International System of Units)
The definition of magnitude above was for the total spectral flux density coming from an object (recall how we mentioned at the start of this section that the total energy of an object is calculated by summing all its pixels).
The total flux is (mostly!) independent of the angular size of your pixels, so we didn't need to account for the pixel area.
But when you want to study extended structures where the total magnitude is not desired (for example the sub-structure of a galaxy, or the brightness of the background sky), you need to report values that are independent of the area that total spectral flux density was measured on.
For this, we define the @emph{surface brightness} to be the magnitude of an object's brightness divided by its solid angle over the celestial sphere (or coverage in the sky, commonly in units of arcsec@mymath{^2}).
The solid angle is expressed in units of arcsec@mymath{^2} because astronomical targets are usually much smaller than one steradian.
Recall that the steradian is the dimension-less SI unit of a solid angle and 1 steradian covers @mymath{1/4\pi} (almost @mymath{8\%}) of the full celestial sphere.
Surface brightness is therefore most commonly expressed in units of mag/arcsec@mymath{^2}.
For example, when the spectral flux density is measured over an area of A arcsec@mymath{^2}, the surface brightness is calculated by:
@dispmath{S = -2.5\log_{10}(F/A) + Z = -2.5\log_{10}(F) + 2.5\log_{10}(A) + Z}
@noindent
In other words, the surface brightness (in units of mag/arcsec@mymath{^2}) is related to the object's magnitude (@mymath{m}) and area (@mymath{A}, in units of arcsec@mymath{^2}) through this equation:
@dispmath{S = m + 2.5\log_{10}(A)}
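For example, a hypothetical object with a magnitude of 20 whose flux is spread over an area of 100 arcsec@mymath{^2} has a surface brightness of 25 mag/arcsec@mymath{^2}:

@example
$ awk 'BEGIN@{print 20 + 2.5*log(100)/log(10)@}'
25
@end example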
A common mistake is to follow the mag/arcsec@mymath{^2} unit literally, and divide the object's magnitude by its area.
But this is wrong because magnitude is a logarithmic scale while area is linear.
It is the spectral flux density that should be divided by the solid angle because both have linear scales.
The magnitude of that ratio is then defined to be the surface brightness.
Besides applications in catalogs and the resulting scientific analysis, converting pixels to surface brightness is usually a good way to display a FITS file in a publication!
See @ref{FITS images in a publication} for a fully working tutorial on how to do this.
@end table
@cartouche
@noindent
@strong{Do not warp or convolve magnitude or surface brightness images:} Warping an image involves calculating new pixel values (of the new pixel grid) from the input grid's pixel values.
Convolution is also a process of finding the weighted mean of pixel values.
During these processes, many arithmetic operations are done on the original pixel values, for example, addition or multiplication.
However, @mymath{\log_{10}(a+b)\ne\log_{10}(a)+\log_{10}(b)}.
Therefore if you generate color, magnitude or surface brightness images (where pixels are in units of magnitudes), do not apply any such operations on them!
If you need to warp or convolve the image, do it @emph{before} the conversion to magnitude-based units.
@end cartouche
@node Standard deviation vs Standard error, MakeCatalog measurements on each label, Brightness flux magnitude, MakeCatalog
@subsection Standard deviation vs Standard error
The standard deviation and standard error are different concepts, conveying different aspects of a measurement, and they can easily be confused with each other: for example, the standard deviation is also called the ``error''.
A nice description of this difference is given in the following quote from Wikipedia.
In this section, we'll show the concept described in this quote with a hands-on and practical set of commands to clarify this important distinction.
@quotation
The standard deviation of the sample data is a description of the variation in measurements, while the standard error of the mean is a probabilistic statement about how the sample size will provide a better bound on estimates of the population mean, in light of the central limit theorem. Put simply, the @emph{standard error} of the sample mean is an estimate of how far the sample mean is likely to be from the population mean, whereas the @emph{standard deviation} of the sample is the degree to which individuals within the sample differ from the sample mean.
@author Wikipedia page on ``Standard error'' (retrieved on 2024-08-09)
@end quotation
Let's simulate an observation of the sky, but without any astronomical sources to simplify the logic.
In other words, we only have a background flux level (assume you want to measure the brightness of the twilight sky that is not yet faint enough to let stars be visible).
With the first command below, let's make an image called @file{1.fits} that contains @mymath{200\times200} pixels that are filled with random noise from a Poisson distribution with a mean of 100 counts (the flux from the background sky).
With the second command, we'll have a look at the image.
Recall that the Poisson distribution is equal to a normal distribution for large mean values (as in this case).
@example
$ astarithmetic 200 200 2 makenew 100 mknoise-poisson \
--output=1.fits
$ astscript-fits-view 1.fits
@end example
The standard deviation (@mymath{\sigma}) of the Poisson distribution is the square root of the mean, see @ref{Photon counting noise}.
Note that due to the random nature of the noise, the values reported in the next steps on your computer will be slightly different (which is one of the points of this section!).
To reproduce exactly the same values in different runs, see @ref{Generating random numbers}, and for more on the first command, see @ref{Arithmetic}.
Each pixel shows the result of one sampling from the Poisson distribution.
In other words, assuming the sky emission in our simulation is constant over our field of view, each pixel's value shows one measurement of the same sky emission.
Statistically speaking, a ``measurement'' is a sampling from an underlying distribution of values.
Through our measurements, we aim to identify that underlying distribution (the ``truth'')!
With the command below, let's look at the pixel statistics of @file{1.fits} (output is shown immediately under it).
@c If you change this output, replace the standard deviation (10.0) below
@c in the text.
@example
$ aststatistics 1.fits
Statistics (GNU Astronomy Utilities) @value{VERSION}
-------
Input: 1.fits (hdu: 1)
-------
Number of elements: 40000
Minimum: 61
Maximum: 155
Median: 100
Mean: 100.044925
Standard deviation: 10.00066032
-------
Histogram:
| * *
| * * *
| * * * *
| * * * * *
| * * * * *
| * * ******** * *
| * ************* *
| * ****************** *
| ************************ *
| *********************************
|* ********************************************************** ** *
|----------------------------------------------------------------------
@end example
As expected, you see that the ASCII histogram nicely resembles a normal distribution.
The measured mean and standard deviation are also very similar to the input (mean of 100, standard deviation of 10).
But the measured mean (and standard deviation) aren't exactly equal to the input!
Every time we make a different simulated image from the same distribution, the measured mean and standard deviation will slightly differ.
Run the commands above one more time (this time calling the output @file{2.fits}) and check for yourself (actually open the two FITS images and check visually, don't just rely on the statistics).
Now that you have a good feeling of the change, let's automate this and scale it up for some nice statistics.
With the commands below, you will build 500 images like above, measure their mean and standard deviation, and save each measurement into a file (@file{mean-stds.txt}).
In the first command we delete any existing file with this name, to make sure the measurements are written into an empty file on the first run of the loop.
With the third command, let's view the top 10 rows:
@example
$ rm -f mean-stds.txt
$ for i in $(seq 500); do \
astarithmetic 200 200 2 makenew 100 mknoise-poisson \
--output=$i.fits --quiet; \
aststatistics $i.fits --mean --std >> mean-stds.txt; \
echo "$i: complete"; \
done
$ asttable mean-stds.txt -Y --head=10
99.989381 9.936407
100.036622 10.059997
100.006054 9.985470
99.944535 9.960069
100.050318 9.970116
100.002718 9.905395
100.067555 9.964038
100.027167 10.018562
100.051951 9.995859
100.000212 9.970293
@end example
From this table, you see that each simulation has produced a slightly different measured mean and measured standard deviation.
They are just fluctuating around the input mean (which was 100) and input standard deviation (which was 10).
Let's have a look at the distribution of mean measurements:
@example
$ aststatistics mean-stds.txt -c1
Statistics (GNU Astronomy Utilities) @value{VERSION}
-------
Input: mean-stds.txt
Column: 1
-------
Number of elements: 500
Minimum: 9.98183528700191e+01
Maximum: 1.00146490891332e+02
Mode: 99.99709739
Mode quantile: 0.49498998
Median: 9.99977393190436e+01
Mean: 99.99891826
Standard deviation: 0.04901635275
-------
Histogram:
| *
| * **
| ****** **** * *
| ****** **** * * *
| * * ************* * *
| * ****************** **
| * ********************* *** *
| * ***************************** ***
| *** ********************************** *
| *** ******************************************* **
| * ************************************************* ** *
|----------------------------------------------------------------------
@end example
@cindex Standard error of mean
The standard deviation you see above (approximately @mymath{0.05}) shows the scatter in measuring the mean with an image of this size and is different from the standard deviation of the Poisson distribution that the values were drawn from.
This is therefore defined as the @emph{standard error of the mean}, or ``standard error'' for short (since most measurements are actually the mean of a population).
From the example above, you see that the standard error is smaller than the standard deviation, and that it shrinks as the sample gets larger.
In fact, @url{https://en.wikipedia.org/wiki/Standard_error#Derivation, it can be shown} that this ``error of the mean'' (@mymath{\sigma_{\bar{x}}}; recall that @mymath{\bar{x}} represents the mean) is related to the distribution's standard deviation (@mymath{\sigma}) through the equation below, where @mymath{N} is the number of points used to measure the mean in one sample (@mymath{N=200\times200=40000} in this case).
Note that the @mymath{10.0} below was reported as the ``standard deviation'' in the first run of @code{aststatistics} on @file{1.fits} above:
@dispmath{\sigma_{\bar{x}}\approx\frac{\sigma}{\sqrt{N}} = \frac{10.0}{200} = 0.05}
Therefore the standard error of the mean is directly related to the number of pixels you used to measure the mean.
You can test this by changing the @code{200}s in the commands above to smaller or larger values.
As you make larger and larger images, you will be able to measure the mean much more precisely (the standard error of the mean will go to zero).
But no matter how many pixels you use, the standard deviation will always be the same.
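For example, the commands below repeat the loop above with @mymath{100\times100} pixel images (written with a @file{small-} prefix so the files above are not over-written).
Since @mymath{\sqrt{100\times100}=100}, the standard deviation of the measured means (last command) should now be roughly @mymath{10/100=0.1} (double the standard error above), while the standard deviations in the second column remain close to 10:

@example
$ rm -f mean-stds-small.txt
$ for i in $(seq 500); do \
astarithmetic 100 100 2 makenew 100 mknoise-poisson \
--output=small-$i.fits --quiet; \
aststatistics small-$i.fits --mean --std >> mean-stds-small.txt; \
done
$ aststatistics mean-stds-small.txt -c1 --std
@end example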
Within MakeCatalog, the options related to dispersion/error in the measurements have the following conventions:
@table @asis
@item @option{--*std}
For example @option{--std} or @option{--sigclip-std}.
These return the standard deviation of the values within a label.
If the underlying object (in the noise) is flat, then this will be the @mymath{\sigma} that is mentioned above.
However, no object in astronomy is flat!
So this option should be used with extreme care!
It only makes sense in special contexts like measuring the radial profile where we assume that the values at a certain radius have the same flux (see @ref{Generate radial profile}).
@item @option{--*error}
For example @option{--mag-error}, @option{--mean-error} or @option{--sum-error}.
These options should be used when the pixel values are different.
When the pixels do not have the same value (for example different parts of one galaxy), their standard deviation is meaningless.
To measure the total error in such cases, we need to know the standard deviation of each pixel separately.
Therefore, for these columns, MakeCatalog needs a separate dataset that contains the underlying sky standard deviation for those pixels.
That dataset should have the same size (number and dimension of pixels) as the values dataset.
You can use @ref{NoiseChisel} to generate such an image.
If the underlying profile and the sky standard deviation are flat, then @option{--sum-error} will be the standard deviation that we discussed in this section and @option{--mean-error} will be the standard error.
When the values are different, the combined error is calculated by adding the variances (the square of the standard deviation) of each pixel, plus the pixel's own value.
When the values are smaller than one, a correction is applied (that is defined in Section 3.3 of Akhlaghi and Ichikawa @url{https://arxiv.org/abs/1505.01664, 2015}).
@end table
@node MakeCatalog measurements on each label, Metameasurements on full input, Standard deviation vs Standard error, MakeCatalog
@subsection MakeCatalog measurements on each label
MakeCatalog's output measurements can be classified into two types: columns (independent measurements on each label) or metadata (of the whole image, see @ref{Metameasurements on full input}).
The precise measurements that it will actually do are specified with command-line options that are described in the subsections below (@ref{Options}).
The majority of the first group are those which only produce a single-valued measurement for each label (for example, its magnitude: a single number for every label/object).
There are also multi-valued measurements per label (like spectra on 3D cubes) which keep their values in a single vector column of the same table (see @ref{Vector columns}).
Both types of measurements will be written as one column in a final table/catalog that also contains all the other requested columns/measurements.
Measurements that produce one value for the whole input (not a specific label/object) are stored as metadata in the 0-th (first) HDU of the output with predefined keywords described in @ref{Metameasurements on full input}.
The majority of this section is devoted to MakeCatalog's single-valued measurements.
However, MakeCatalog can also do measurements that produce more than one value for each label.
Currently the only such measurement is the generation of spectra from 3D cubes with the @option{--spectrum} option, which is discussed at the end of this section.
Command-line options are used to identify which measurements you want in the final catalog(s) and in what order.
If any of the options below is called on the command-line or in any of the configuration files, it will be included as a column in the output catalog.
The order of the columns is in the same order as the options were seen by MakeCatalog (see @ref{Configuration file precedence}).
Some of the columns apply to both ``objects'' and ``clumps'' and some are particular to only one of them (for the definition of ``objects'' and ``clumps'', see @ref{Segment}).
Columns/options that are unique to one catalog (only objects, or only clumps), are explicitly marked with [Objects] or [Clumps] to specify the catalog they will be placed in.
@menu
* Identifier columns:: Identifying labels of each row (object/clumps).
* Position measurements in pixels:: Containing image/pixel (X/Y) measurements.
* Position measurements in WCS:: Containing WCS (for example RA/Dec) measurements.
* Brightness measurements:: Using pixel values of each label.
* Surface brightness measurements:: Various ways to measure surface brightness.
* Upper limit measurements:: Upper-limit values and magnitudes per label.
* Morphology measurements nonparametric:: Non-parametric morphology.
* Morphology measurements elliptical:: Elliptical morphology measurements.
* Measurements per slice spectra:: Measurements on each slice (like spectra).
@end menu
@node Identifier columns, Position measurements in pixels, MakeCatalog measurements on each label, MakeCatalog measurements on each label
@subsubsection Identifier columns
The identifier of each row (group of measurements) is usually the first thing you will be requesting from MakeCatalog.
Without the identifier, it is not clear which measurement corresponds to which label for the input.
Since MakeCatalog can also optionally take sub-structure label (clumps; see @ref{Segment}), there are various identifiers in general that are listed below.
The most generic (and shortest and easiest to type!) is the @option{--ids} option which can be used in object-only or object-clump catalogs.
@table @option
@item -i
@itemx --ids
This is a unique option which can add multiple columns to the final catalog(s).
Calling this option will put the object IDs (@option{--obj-id}) in the objects catalog and host-object-ID (@option{--host-obj-id}) and ID-in-host-object (@option{--id-in-host-obj}) into the clumps catalog.
Hence if only object catalogs are required, it has the same effect as @option{--obj-id}.
@item --obj-id
[Objects] ID of this object.
@item -j
@itemx --host-obj-id
[Clumps] The ID of the object which hosts this clump.
@item --id-in-host-obj
[Clumps] The ID of this clump in its host object.
@end table
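As a minimal sketch of requesting these columns (assuming a hypothetical @file{seg.fits} that was produced by Segment, see @ref{Segment}), the command below will put the identifiers above at the start of each catalog; the @option{--clumpscat} option asks for a clumps catalog in addition to the objects catalog:

@example
$ astmkcatalog seg.fits --ids --x --y --clumpscat
@end example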
@node Position measurements in pixels, Position measurements in WCS, Identifier columns, MakeCatalog measurements on each label
@subsubsection Position measurements in pixels
The position of a labeled region within your input dataset (in its own units) can be measured with the options in this section.
By ``in its own units'' we mean pixels in a 2D image or voxels in a 3D cube.
For example, if the flux-weighted center of a label lies at pixel 123 along the horizontal axis and pixel 456 along the vertical axis, the @option{--x} and @option{--y} options will put the values 123 and 456 in their respective columns.
As you see below, there are various ways to define the ``position'' of an object, so read the differences carefully to choose the one that corresponds best to your usage.
@table @option
@item -x
@itemx --x
The flux weighted center of all objects and clumps along the first FITS axis (horizontal when viewed in SAO DS9), see @mymath{\overline{x}} in @ref{Morphology measurements elliptical}.
The weight has to have a positive value (pixel value larger than the Sky value) to be meaningful! Especially when doing matched photometry, this might not happen: no pixel value might be above the Sky value.
For such detections, the geometric center will be reported in this column (see @option{--geo-x}).
You can use @option{--weight-area} to see which was used.
@item -y
@itemx --y
The flux weighted center of all objects and clumps along the second FITS axis (vertical when viewed in SAO DS9).
See @option{--x}.
@item -z
@itemx --z
The flux weighted center of all objects and clumps along the third FITS
axis. See @option{--x}.
@item --geo-x
The geometric center of all objects and clumps along the first FITS axis.
The geometric center is the average pixel positions irrespective of their pixel values.
@item --geo-y
The geometric center of all objects and clumps along the second FITS axis, see @option{--geo-x}.
@item --geo-z
The geometric center of all objects and clumps along the third FITS axis, see @option{--geo-x}.
@item --min-val-x
Position of pixel with minimum value in objects and clumps, along the first FITS axis.
@item --max-val-x
Position of pixel with maximum value in objects and clumps, along the first FITS axis.
@item --min-val-y
Position of pixel with minimum value in objects and clumps, along the second FITS axis.
@item --max-val-y
Position of pixel with maximum value in objects and clumps, along the second FITS axis.
@item --min-val-z
Position of pixel with minimum value in objects and clumps, along the third FITS axis.
@item --max-val-z
Position of pixel with maximum value in objects and clumps, along the third FITS axis.
@item --min-x
The minimum position of all objects and clumps along the first FITS axis.
@item --max-x
The maximum position of all objects and clumps along the first FITS axis.
@item --min-y
The minimum position of all objects and clumps along the second FITS axis.
@item --max-y
The maximum position of all objects and clumps along the second FITS axis.
@item --min-z
The minimum position of all objects and clumps along the third FITS axis.
@item --max-z
The maximum position of all objects and clumps along the third FITS axis.
@item --clumps-x
[Objects] The flux weighted center of all the clumps in this object along the first FITS axis.
See @option{--x}.
@item --clumps-y
[Objects] The flux weighted center of all the clumps in this object along the second FITS axis.
See @option{--x}.
@item --clumps-z
[Objects] The flux weighted center of all the clumps in this object along the third FITS axis.
See @option{--x}.
@item --clumps-geo-x
[Objects] The geometric center of all the clumps in this object along the first FITS axis.
See @option{--geo-x}.
@item --clumps-geo-y
[Objects] The geometric center of all the clumps in this object along the second FITS axis.
See @option{--geo-x}.
@item --clumps-geo-z
[Objects] The geometric center of all the clumps in this object along
the third FITS axis. See @option{--geo-z}.
@end table
@node Position measurements in WCS, Brightness measurements, Position measurements in pixels, MakeCatalog measurements on each label
@subsubsection Position measurements in WCS
The position of a labeled region within your input dataset (in the World Coordinate System, or WCS) can be measured with the options in this section.
As you see below, there are various ways to define the ``position'' of an object, so read the differences carefully to choose the one that corresponds best to your usage.
The most common WCS coordinates are Right Ascension (RA) and Declination in an equatorial system.
Therefore, to simplify their usage, we have special @option{--ra} and @option{--dec} options.
However, the WCS of some datasets may be in other coordinate systems (for example Galactic coordinates), so to be generic, you can use the @option{--w1}, @option{--w2} or @option{--w3} (if you have a 3D cube) options.
In case your dataset's WCS is not in your desired system (for example it is Galactic, but you want equatorial 2000), you can use the @option{--wcscoordsys} option of Gnuastro's Fits program on the labeled image before running MakeCatalog (see @ref{Keyword inspection and manipulation}).
@table @option
@item -r
@itemx --ra
Flux weighted right ascension of all objects or clumps, see @option{--x}.
This is just an alias for one of the lower-level @option{--w1} or @option{--w2} options.
Using the FITS WCS keywords (@code{CTYPE}), MakeCatalog will determine which axis corresponds to the right ascension.
If no @code{CTYPE} keywords start with @code{RA}, an error will be printed when requesting this column and MakeCatalog will abort.
@item -d
@itemx --dec
Flux weighted declination of all objects or clumps, see @option{--x}.
This is just an alias for one of the lower-level @option{--w1} or @option{--w2} options.
Using the FITS WCS keywords (@code{CTYPE}), MakeCatalog will determine which axis corresponds to the declination.
If no @code{CTYPE} keywords start with @code{DEC}, an error will be printed when requesting this column and MakeCatalog will abort.
@item --w1
Flux weighted first WCS axis of all objects or clumps, see @option{--x}.
The first WCS axis is commonly used as right ascension in images.
@item --w2
Flux weighted second WCS axis of all objects or clumps, see @option{--x}.
The second WCS axis is commonly used as declination in images.
@item --w3
Flux weighted third WCS axis of all objects or clumps, see
@option{--x}. The third WCS axis is commonly used as wavelength in integral
field unit data cubes.
@item --geo-w1
Geometric center in first WCS axis of all objects or clumps, see @option{--geo-x}.
The first WCS axis is commonly used as right ascension in images.
@item --geo-w2
Geometric center in second WCS axis of all objects or clumps, see @option{--geo-x}.
The second WCS axis is commonly used as declination in images.
@item --geo-w3
Geometric center in third WCS axis of all objects or clumps, see
@option{--geo-x}. The third WCS axis is commonly used as wavelength in
integral field unit data cubes.
@item --clumps-w1
[Objects] Flux weighted center in first WCS axis of all clumps in this object, see @option{--x}.
The first WCS axis is commonly used as right ascension in images.
@item --clumps-w2
[Objects] Flux weighted center in second WCS axis of all clumps in this object, see @option{--x}.
The second WCS axis is commonly used as declination in images.
@item --clumps-w3
[Objects] Flux weighted center in third WCS axis of all clumps in this object, see @option{--x}.
The third WCS axis is commonly used as wavelength in integral field unit data cubes.
@item --clumps-geo-w1
[Objects] Geometric center in first WCS axis of all clumps in this object, see @option{--geo-x}.
The first WCS axis is commonly used as right ascension in images.
@item --clumps-geo-w2
[Objects] Geometric center in second WCS axis of all clumps in this object, see @option{--geo-x}.
The second WCS axis is commonly used as declination in images.
@item --clumps-geo-w3
[Objects] Geometric center in third WCS axis of all clumps in this object, see @option{--geo-x}.
The third WCS axis is commonly used as wavelength in integral field unit data cubes.
@end table
@node Brightness measurements, Surface brightness measurements, Position measurements in WCS, MakeCatalog measurements on each label
@subsubsection Brightness measurements
Within an image, pixels have both a position and a value.
In the sections above all the measurements involved position (see @ref{Position measurements in pixels} or @ref{Position measurements in WCS}).
The measurements in this section only deal with pixel values and ignore the pixel positions completely.
In other words, for the options of this section each labeled region within the input is just a group of values (and their associated error values if necessary), and they let you do various types of measurements on the resulting distribution of values.
For more on the difference between the @option{--*error} or @option{--*std} columns see @ref{Standard deviation vs Standard error}.
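As a minimal sketch of requesting some of these columns (assuming a hypothetical @file{seg.fits} from Segment, and the Sky standard deviation in the @code{SKY_STD} extension of a hypothetical @file{nc.fits} from NoiseChisel), you might run something like:

@example
$ astmkcatalog seg.fits --sum --mean --sum-error --mean-error \
               --instd=nc.fits --stdhdu=SKY_STD
@end example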
@table @option
@item --sum
The sum of all pixel values associated to this label (object or clump).
Note that if a sky value or image has been given, it will be subtracted before any column measurement.
For clumps, the ambient value (the average of the river pixels around the clump, multiplied by the area of the clump) is subtracted, see @option{--river-mean}.
So the sum of all the clump-sums in the clump catalog of one object will be smaller than the @option{--clumps-sum} column of the objects catalog.
If no usable pixels are present over the clump or object (for example, they are all blank), the returned value will be NaN (note that zero is meaningful).
@item --sum-error
The standard deviation of the sum of values of a label (objects or clumps).
The value is calculated by using the values image (for signal above the sky level) and the sky standard deviation image (extension @option{--stdhdu} of file given to @option{--instd}); which you can derive for any image using @ref{NoiseChisel}.
This column is internally used to measure the signal-to-noise (@option{--sn}).
For objects this is calculated by adding the sky variance (second power of the sky standard deviation image) of each pixel in the label, with the value of the pixel if the value is not negative (error only increases).
This is done to account for brighter pixels which have higher noise in the Poisson distribution (its side effect is that the error will always be slightly over-estimated due to the positive values close to the noise).
When the values are smaller than one, a correction is applied; see Section 3.3 of Akhlaghi & Ichikawa @url{https://arxiv.org/abs/1505.01664, 2015}.
For clumps, the variance of the rivers (which are subtracted from the value of pixels in calculating the sum) are also added to generate the final standard deviation.
The returned value will be NaN when the label covers only NaN pixels in the values image, or a pixel is NaN in the @option{--instd} image, but non-NaN in the values image.
The latter situation usually happens when there is a bug in the previous steps of your analysis.
This is because the sky standard deviation should have a value in all pixels.
In such cases, it is important to find the cause and fix it because those pixels with a NaN in the @option{--instd} image may contribute significantly to the final error.
If you want to ignore those pixels in the error measurement, set them to zero (which is a meaningful number in such scenarios).
@item --clumps-sum
[Objects] The total sum of the pixels covered by clumps (before subtracting the river) within each object.
This is simply the sum of @option{--sum-no-river} in the clumps catalog (see below).
If no usable pixels are present over the clump or object (for example, they are all blank), the stored value will be NaN (note that zero is meaningful).
@item --sum-no-river
[Clumps] The sum of Sky (not river) subtracted clump pixel values.
By definition, for the clumps, the average value of the rivers surrounding it are subtracted from it for a first order accounting for contamination by neighbors.
If no usable pixels are present over the clump or object (for example, they are all blank), the stored value will be NaN (note that zero is meaningful).
@item --mean
The mean sky subtracted value of pixels within the object or clump.
For clumps, the average river flux is subtracted from the sky subtracted mean.
@item --mean-error
The error in measuring the mean; using both the values file and the sky standard deviation image.
In case the given standard deviation or variance image already contains the contributions from the pixel values (in other words, it is not just the sky standard deviation), use @option{--novalinerror}.
@item --std
The standard deviation of the pixels within the object or clump.
For clumps, the river pixels are not subtracted because that is a constant (per pixel) value and should not affect the standard deviation.
@item --median
The median sky subtracted value of pixels within the object or clump.
For clumps, the average river flux is subtracted from the sky subtracted median.
@item --maximum
The maximum value of pixels within the object or clump.
When the label (object or clump) is larger than three pixels, the maximum is actually derived from the mean of the brightest three pixels, not the single largest pixel value of the same label.
This is because noise fluctuations can be very strong in the extreme values of the objects/clumps due to Poisson noise (which gets stronger as the mean gets higher).
Simply using the maximum pixel value will create a strong scatter in results that depend on the maximum (for example, the @option{--fwhm} option also uses this value internally).
@item --sigclip-number
The number of elements/pixels in the dataset after sigma-clipping the object or clump.
The sigma-clipping parameters can be set with the @option{--sigmaclip} option described in @ref{MakeCatalog inputs and basic settings}.
For more on Sigma-clipping, see @ref{Sigma clipping}.
@item --sigclip-median
The sigma-clipped median value of the object or clump's pixel distribution.
For more on sigma-clipping and how to define it, see @option{--sigclip-number}.
@item --sigclip-mean
The sigma-clipped mean value of the object or clump's pixel distribution.
For more on sigma-clipping and how to define it, see @option{--sigclip-number}.
@item --sigclip-std
The sigma-clipped standard deviation of the object or clump's pixel distribution.
For more on sigma-clipping and how to define it, see @option{--sigclip-number}.
@item -m
@itemx --magnitude
The magnitude of clumps or objects.
It is derived from the pixel counts over the label (which you can see in the @option{--sum} column) and the value given to @option{--zeropoint}, through the definition of the magnitude described in @ref{Brightness flux magnitude}.
@item --magnitude-error
The magnitude error of clumps or objects.
The method used is described below.
As we see there, this error assumes uncorrelated pixel values and also does not include the error in estimating the aperture (or error in generating the labeled image).
See the status of implementation of such factors in @url{https://savannah.gnu.org/task/index.php?14124, Task 14124}.
The returned value will be NaN when the label covers only NaN pixels in the values image, or a pixel is NaN in the @option{--instd} image, but non-NaN in the values image.
The latter situation usually happens when there is a bug in the previous steps of your analysis, and is important because those pixels with a NaN in the @option{--instd} image may contribute significantly to the final error.
If you want to ignore those pixels in the error measurement, set them to zero (which is a meaningful number in such scenarios).
The raw error in measuring the magnitude is only meaningful when the object's magnitude is brighter than the upper-limit magnitude (see below).
As discussed in @ref{Brightness flux magnitude}, the magnitude (@mymath{M}) of an object with brightness @mymath{B} and zero point magnitude @mymath{z} can be written as:
@dispmath{M=-2.5\log_{10}(B)+z}
@noindent
Calculating the derivative with respect to @mymath{B}, we get:
@dispmath{{dM\over dB} = {-2.5\over {B\times\ln(10)}}}
@noindent
From the Taylor series (@mymath{\Delta{M}=dM/dB\times\Delta{B}}), we can write:
@dispmath{\Delta{M} = \left|{-2.5\over \ln(10)}\right|\times{\Delta{B}\over{B}}}
@noindent
But, @mymath{\Delta{B}/B} is just the inverse of the Signal-to-noise ratio (@mymath{S/N}), so we can write the error in magnitude in terms of the signal-to-noise ratio:
@dispmath{ \Delta{M} = {2.5\over{S/N\times \ln(10)}} }
MakeCatalog uses this relation to estimate the magnitude errors.
The signal-to-noise ratio is calculated differently for clumps and objects (see Akhlaghi and Ichikawa @url{https://arxiv.org/abs/1505.01664, 2015}), but this single equation can be used to estimate the measured magnitude error afterwards for any type of target.
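For example, a label with a signal-to-noise ratio of 10 has a magnitude error of roughly 0.109 magnitudes, as this quick command-line check shows:

@example
$ awk 'BEGIN@{print 2.5/(10*log(10))@}'
0.108574
@end example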
@item --clumps-magnitude
[Objects] The magnitude of all clumps in this object, see @option{--clumps-sum}.
@item --river-mean
[Clumps] The average of the river pixel values around this clump.
River pixels were defined in Akhlaghi and Ichikawa @url{https://arxiv.org/abs/1505.01664,2015}.
In short they are the pixels immediately outside of the clumps.
This value is used internally to find the sum (or magnitude) and signal to noise ratio of the clumps.
It can generally also be used as a scale to gauge the base (ambient) flux surrounding the clump.
In case there were no river pixels, then this column will have the value of the Sky under the clump.
So note that this value is @emph{not} sky subtracted.
@item --river-num
[Clumps] The number of river pixels around this clump, see @option{--river-mean}.
@item --river-min
[Clumps] Minimum river value around this clump, see @option{--river-mean}.
@item --river-max
[Clumps] Maximum river value around this clump, see @option{--river-mean}.
@item --sn
The Signal to noise ratio (S/N) of all clumps or objects.
See Akhlaghi and Ichikawa (2015) for the exact equations used.
The returned value will be NaN when the label covers only NaN pixels in the values image, or a pixel is NaN in the @option{--instd} image, but non-NaN in the values image.
The latter situation usually happens when there is a bug in the previous steps of your analysis, and is important because those pixels with a NaN in the @option{--instd} image may contribute significantly to the final error.
If you want to ignore those pixels in the error measurement, set them to zero (which is a meaningful number).
@item --sky
The sky flux (per pixel) value under this object or clump.
This is actually the mean value of all the pixels in the sky image that lie on the same position as the object or clump.
@item --sky-std
The sky value standard deviation (per pixel) for this clump or object.
This is the square root of the mean variance under the object, or the root mean square.
@end table
@node Surface brightness measurements, Upper limit measurements, Brightness measurements, MakeCatalog measurements on each label
@subsubsection Surface brightness measurements
In astronomy, surface brightness is most commonly measured in units of magnitudes per arcsec@mymath{^2} (for the formal definition, see @ref{Brightness flux magnitude}).
Therefore it involves both the values of the pixels within each input label (or output row) and their position.
@table @option
@item --sb
The surface brightness (in units of mag/arcsec@mymath{^2}) of the labeled region (objects or clumps).
For more on the definition of the surface brightness, see @ref{Brightness flux magnitude}.
@item --sb-error
Error in measuring the surface brightness (the @option{--sb} column) as derived below.
@cindex Surface brightness error
@cindex Error in surface brightness
We can derive the error in measuring the surface brightness based on the surface brightness (SB) equation of @ref{Brightness flux magnitude} and the generic magnitude error (@mymath{\Delta{M}}) that is described under @option{--magnitude-error} in @ref{Brightness measurements}.
Let's set @mymath{A} to represent the area and @mymath{\Delta{A}} to represent the error in measuring the area.
For more on @mymath{\Delta{A}}, see the description of @option{--spatialresolution} in @ref{MakeCatalog inputs and basic settings}.
@dispmath{\Delta{(SB)} = \Delta{M} + \left|{-2.5\over \ln(10)}\right|\times{\Delta{A}\over{A}}}
In the surface brightness equation mentioned above, @mymath{A} is in units of arcsecond squared and the conversion between arcseconds to pixels is a multiplication factor.
Therefore as long as @mymath{A} and @mymath{\Delta{A}} have the same units, it does not matter if they are in arcseconds or pixels.
Since the measure of spatial resolution (or area error) is the FWHM of the PSF, which is usually defined in terms of pixels, it is more intuitive to use pixels for @mymath{A} and @mymath{\Delta{A}}.
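For example, plugging purely hypothetical values into the equation above (@mymath{\Delta{M}=0.05}, an area of @mymath{A=100} pixels and @mymath{\Delta{A}=2} pixels):

@example
$ awk 'BEGIN@{print 0.05 + (2.5/log(10))*(2/100)@}'
0.0717147
@end example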
@item --upperlimit-sb
The upper-limit surface brightness (in units of mag/arcsec@mymath{^2}) of this labeled region (object or clump).
In other words, this option measures the surface brightness of noise within the footprint of each input label.
This is just a simple wrapper over lower-level columns: setting @mymath{B} and @mymath{A} as the values in the @option{--upperlimit} and @option{--arcsec2-area} columns, we fill this column by simply using the surface brightness equation of @ref{Brightness flux magnitude}.
@item --half-sum-sb
Surface brightness (in units of mag/arcsec@mymath{^2}) within the area that contains half the total sum of the label's pixels (object or clump).
This is useful as a measure of the sharpness of an astronomical object: for example a star will have very few pixels at half the maximum, so its @option{--half-sum-sb} will be much brighter than a galaxy at the same magnitude.
Also consider @option{--half-max-sb} below.
This column just plugs in the values of half the value of the @option{--sum} column and the @option{--half-sum-area} column, into the surface brightness equation.
Therefore please see the description in @option{--half-sum-area} to understand the systematics of this column and potential biases (see @ref{Morphology measurements nonparametric}).
@item --half-max-sb
The surface brightness (in units of mag/arcsec@mymath{^2}) within the region that contains half the maximum value of the labeled region.
Like @option{--half-sum-sb}, this is a good way to identify the ``central'' surface brightness of an object.
To know when this measurement is reasonable, see the description of @option{--fwhm} in @ref{Morphology measurements nonparametric}.
@item --sigclip-mean-sb
Surface brightness (over 1 pixel's area in arcsec@mymath{^2}) of the sigma-clipped mean value of the pixel values distribution associated to each label (object or clump).
This is useful in scenarios where your labels have approximately @emph{constant} surface brightness values @emph{after} removing outliers: for example in a radial profile (see @ref{Invoking astscript-radial-profile}).
In other scenarios it should be used with extreme care.
For example, over the full area of a galaxy or star, the pixel distribution is not constant (or symmetric after adding noise): it is inherently skewed, with few pixels in the center having very large values and many pixels in the outer parts having lower values.
Therefore, sigma-clipping is not meaningful for such objects!
For more on the definition of the surface brightness, see @ref{Brightness flux magnitude}, for more on sigma-clipping, see @ref{Sigma clipping}.
The error in this measurement can be retrieved from the @option{--sigclip-mean-sb-delta} column described below, and you can use the @option{--sigclip-std-sb} column to find when the measurement has become noise-dominated (signal-to-noise ratio is roughly 1).
See the description of these two options for more.
@item --sigclip-mean-sb-delta
Scatter in the @option{--sigclip-mean-sb} without using the standard deviation of each pixel (that is given by @option{--instd} in @ref{MakeCatalog inputs and basic settings}).
The scatter here is measured from the values of the label themselves.
This measurement is therefore most meaningful when you expect the flux across one label to be constant (as in a radial profile for example).
This is calculated using surface brightness error derived under @option{--sb-error} in @ref{Surface brightness measurements}, but with @mymath{\Delta{A}=0} (since sigma-clip is calculated per pixel and there is no error in a single pixel).
Within the equation to derive @mymath{\Delta{M}} (the error in magnitude, derived in the description of @option{--magnitude-error} in @ref{Brightness measurements}), the signal-to-noise ratio is defined by dividing the sigma-clipped mean by the sigma-clipped standard deviation.
@item --sigclip-std-sb
The surface brightness of the sigma-clipped standard deviation of all the pixels with the same label.
For labels that are expected to have the same value in all their pixels (for example each annulus of a radial profile) this can be used to find the reliable (@mymath{1\sigma}) surface brightness for that label.
In other words, if @option{--sigclip-mean-sb} is fainter than the value of this column, you know that noise is becoming significant.
However, as described in @option{--sigclip-mean-sb}, the sigma-clipped measurements of MakeCatalog should only be used in certain situations like radial profiles, see the description there for more.
@end table
@node Upper limit measurements, Morphology measurements nonparametric, Surface brightness measurements, MakeCatalog measurements on each label
@subsubsection Upper limit measurements
Due to the noisy nature of data, it is possible to get arbitrarily faint magnitudes, especially when you use labels from another image (for example see @ref{Working with catalogs estimating colors}).
Given the scatter caused by the dataset's noise, values fainter than a certain level are meaningless: another similar depth observation will give a radically different value.
In such cases, measurements like the image magnitude limit are not useful because it is estimated for a certain morphology and is given for the whole image (it is a crude generalization; see @ref{Metameasurements on full input}).
You want a quality measure that is specific to each object.
For example, assume that you have done your detection and segmentation on one filter and now you do measurements over the same labeled regions, but on other filters to measure colors (as we did in the tutorial @ref{Segmentation and making a catalog}).
Some objects are not going to have any significant signal in the other filters, but, for example, you might measure a magnitude of 36 for one of them!
This is clearly unreliable (no dataset in current astronomy is able to detect such a faint signal).
In another image with the same depth, using the same filter, you might measure a magnitude of 30 for it, and yet another might give you 33.
Furthermore, the total sum of pixel values might actually be negative in some images of the same depth (due to noise).
In these cases, no magnitude can be defined and MakeCatalog will place a NaN there (recall that a magnitude is a base-10 logarithm).
@cindex Upper limit magnitude
@cindex Magnitude, upper limit
Using such unreliable measurements will directly affect our analysis, so we must not use the raw measurements.
When approaching the limits of your detection method, it is therefore important to be able to identify such cases.
But how can we know how reliable a measurement of one object on a given dataset is?
When we confront such unreasonably faint magnitudes, there is one thing we can deduce: that if something actually exists under our labeled pixels (possibly buried deep under the noise), its inherent magnitude is fainter than an @emph{upper limit magnitude}.
To find this upper limit magnitude, we place the object's footprint (segmentation map) over a random part of the image where there are no detections, and measure the sum of pixel values within the footprint.
Doing this a large number of times will give us a distribution of measurements of the sum.
The standard deviation (@mymath{\sigma_m}) of that distribution can be used to quantify the upper limit magnitude for that particular object (given its particular shape and area):
@dispmath{M_{up,n\sigma}=-2.5\times\log_{10}{(n\sigma_m)}+z \quad\quad [mag/target]}
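For example, with purely hypothetical values (@mymath{\sigma_m=0.02} in units of the input image, @mymath{n=3} and a zero point of @mymath{z=22.5}), the upper limit magnitude would be:

@example
$ awk 'BEGIN@{print -2.5*log(3*0.02)/log(10) + 22.5@}'
25.5546
@end example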
@cindex Correlated noise
Traditionally, faint/small object photometry was done using fixed circular apertures (for example, with a diameter of @mymath{N} arc-seconds) and there was not much processing involved (to make a deep coadd).
Hence, the upper limit was synonymous with the surface brightness limit discussed above: one value for the whole image.
The problem with this simplified approach is that the number of pixels in the aperture directly affects the final distribution and thus magnitude.
Also the image correlated noise might actually create certain patterns, so the shape of the object can also affect the final result.
Fortunately, with the much more advanced hardware and software of today, we can make customized segmentation maps (footprint) for each object and have enough computing power to actually place that footprint over many random places.
As a result, the per-target upper-limit magnitude and general surface brightness limit have diverged.
When any of the upper-limit-related columns are requested, MakeCatalog will randomly place each target's footprint over the undetected parts of the dataset as described above, and estimate the required properties.
The procedure is fully configurable with the options in @ref{Upper-limit settings}.
You can get the full list of upper-limit related columns of MakeCatalog with this command (the extra @code{--} before @code{--upperlimit} is necessary@footnote{Without the extra @code{--}, grep will assume that @option{--upperlimit} is one of its own options, and will thus abort, complaining that it has no option with this name.}):
@example
$ astmkcatalog --help | grep -- --upperlimit
@end example
@table @option
@item --upperlimit
The upper limit value (in units of the input image) for this object or clump.
This is the sigma-clipped standard deviation of the random distribution, multiplied by the value of @option{--upnsigma}.
This is very important for the fainter and smaller objects in the image where the measured magnitudes are not reliable.
@item --upperlimit-mag
The upper limit magnitude for this object or clump.
This is very important for the fainter and smaller objects in the image where the measured magnitudes are not reliable.
@item --upperlimit-onesigma
The @mymath{1\sigma} upper limit value (in units of the input image) for this object or clump.
When @option{--upnsigma=1}, this column's values will be the same as @option{--upperlimit}.
@item --upperlimit-sigma
The position of the label's sum measured within the distribution of randomly placed upperlimit measurements in units of the distribution's @mymath{\sigma} or standard deviation.
@item --upperlimit-quantile
The position of the label's sum within the distribution of randomly placed upperlimit measurements, as a quantile (a value between 0 and 1).
If the object is brighter than the brightest randomly placed profile, a value of @code{inf} is returned.
If it is less than the minimum, a value of @code{-inf} is reported.
@item --upperlimit-skew
@cindex Skewness
This column contains the non-parametric skew of the @mymath{\sigma}-clipped random distribution that was used to estimate the upper-limit magnitude.
Taking @mymath{\mu} as the mean, @mymath{\nu} as the median and @mymath{\sigma} as the standard deviation, the non-parametric skew is defined as @mymath{(\mu-\nu)/\sigma}.
This can be a good measure of how much you can trust the random measurements, or in other words, how accurately the regions with signal have been masked/detected.
If the skewness is strong (and positive), you can tell that there is a lot of undetected signal in the dataset, and therefore that the upper-limit measurement (and other measurements) are not reliable.
@end table
@node Morphology measurements nonparametric, Morphology measurements elliptical, Upper limit measurements, MakeCatalog measurements on each label
@subsubsection Morphology measurements (non-parametric)
Morphology is defined as a way to quantify the ``shape'' of an object in your input image.
This includes both the position and value of the pixels within your input labels.
There are many ways to define the morphology of an object.
In this section, we will review the available non-parametric measures of morphology.
By non-parametric, we mean that no functional shape is assumed for the measurement.
In @ref{Morphology measurements elliptical} you can see some parametric elliptical measurements (which are only valid when the object is actually an ellipse).
@table @option
@item --num-clumps
[Objects] The number of clumps in this object.
@item --area
The raw area (number of pixels/voxels) of any clump or object, counting only the pixels it lies over that have a usable (non-blank) value; see @option{--geo-area} below for the equivalent count that ignores pixel values.
@item --arcsec2-area
The used (non-blank in values image) area of the labeled region in units of arc-seconds squared.
This column is just the value of the @option{--area} column, multiplied by the area of each pixel in the input image (in units of arcsec@mymath{^2}).
Similar to the @option{--ra} or @option{--dec} columns, for this option to work, the objects extension has to contain a WCS structure.
@item --area-min-val
The number of pixels that are equal to the minimum value of the labeled region (clump or object).
@item --area-max-val
The number of pixels that are equal to the maximum value of the labeled region (clump or object).
@item --area-xy
@cindex IFU: Integral Field Unit
@cindex Integral Field Unit
Similar to @option{--area}, when the clump or object is projected onto the first two dimensions.
This is only available for 3-dimensional datasets.
When working with Integral Field Unit (IFU) datasets, this projection onto the first two dimensions would be a narrow-band image.
@item --fwhm
@cindex FWHM
The full width at half maximum (in units of pixels, along the semi-major axis) of the labeled region (object or clump).
The maximum value is estimated from the mean of the top-three pixels with the highest values, see the description under @option{--maximum}.
The number of pixels with values larger than half of that maximum is then found (the value in the @option{--half-max-area} column) and a radius is estimated from that area.
See the description under @option{--half-sum-radius} for more on converting an area to a radius along the major axis.
Because of its non-parametric nature, this column is most reliable on clumps and should only be used on objects with great caution.
This is because objects can have more than one clump (peak with true signal) and multiple peaks are not treated separately in objects, so the result of this column will be biased.
Also, because of its non-parametric nature, this FWHM does not account for the PSF, and it will be strongly affected by noise if the object is faint/diffuse.
So when half the maximum value (which can be requested using the @option{--maximum} column) is too close to the local noise level (which can be requested using the @option{--sky-std} column), the value returned in this column is meaningless (it is just noise peaks which are randomly distributed over the area).
You can therefore use the @option{--maximum} and @option{--sky-std} columns to remove, or flag, unreliable FWHMs.
For example, if a labeled region's maximum is less than 2 times the sky standard deviation, the value will certainly be unreliable (half of that is @mymath{1\sigma}!).
For a more reliable value, this fraction should be around 4 (so half the maximum is 2@mymath{\sigma}).
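To make the estimation above concrete, here is a minimal Python sketch of it (only an illustration under assumed inputs, not MakeCatalog's exact implementation: @code{vals} holds one label's pixel values, @code{q} its axis ratio and @code{skystd} the sky standard deviation):
@example
import numpy as np

# Assumed inputs: 'vals' (1D array of one label's pixel values),
# 'q' (its axis ratio) and 'skystd' (the sky standard deviation).
maximum = np.sort(vals)[-3:].mean()       # mean of top-three pixels
halfmaxarea = np.sum(vals > maximum / 2)  # pixels above half maximum
# Area of an ellipse: A = pi*q*a^2, so the radius along the major
# axis is sqrt(A/(pi*q)); the FWHM is twice that radius.
fwhm = 2 * np.sqrt(halfmaxarea / (np.pi * q))
reliable = maximum > 4 * skystd   # half-max is then above 2*sigma
@end example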
@item --half-max-area
The number of pixels with values larger than half the maximum flux within the labeled region.
This option is used to estimate @option{--fwhm}, so please read the notes there for the caveats and necessary precautions.
@item --half-max-radius
The radius of the region containing half the maximum flux within the labeled region.
This is just half the value reported by @option{--fwhm}.
@item --half-max-sum
The sum of the pixel values containing half the maximum flux within the labeled region (those that are counted in @option{--half-max-area}).
This option uses the pixels within @option{--fwhm}, so please read the notes there for the caveats and necessary precautions.
@item --half-sum-area
The number of pixels that contain half the object or clump's total sum of pixels (half the value in the @option{--sum} column).
To count this area, all the non-blank values associated with the given label (object or clump) will be sorted and summed in order (starting from the maximum), until the sum becomes larger than half the total sum of the label's pixels.
This option is thus good for clumps (which are defined to have a single peak in their morphology), but for objects you should be careful: if the object includes multiple peaks/clumps at roughly the same level, then the area reported by this option will be distributed over all the peaks.
@item --half-sum-radius
Radius (in units of pixels) derived from the area that contains half the total sum of the label's pixels (the value reported by @option{--half-sum-area}).
If the area is @mymath{A_h} and the axis ratio is @mymath{q}, then the value returned in this column is @mymath{\sqrt{A_h/({\pi}q)}}.
This option is a good measure of the concentration of the @emph{observed} (after PSF convolution and with noise) object or clump.
But as described below, it underestimates the effective radius.
Also, it should be used with caution on objects that may have multiple clumps.
It is most reliable with clumps, or objects that have one or zero clumps; see the note under @option{--half-sum-area}.
@cindex Ellipse area
@cindex Area, ellipse
Recall that in general, for an ellipse with semi-major axis @mymath{a}, semi-minor axis @mymath{b}, and axis ratio @mymath{q=b/a} the area (@mymath{A}) is @mymath{A={\pi}ab={\pi}qa^2}.
For a circle (where @mymath{q=1}), this simplifies to the familiar @mymath{A={\pi}a^2}.
@cindex S@'ersic profile
@cindex Effective radius
This option should not be confused with the @emph{effective radius} for S@'ersic profiles, commonly written as @mymath{r_e}.
For more on the S@'ersic profile and @mymath{r_e}, please see @ref{Galaxies}.
Therefore, when @mymath{r_e} is meaningful for the target (the target is elliptically symmetric and can be parameterized as a S@'ersic profile), @mymath{r_e} should be derived from fitting the profile with a S@'ersic function which has been convolved with the PSF.
But from the equation above, you see that this radius is derived from the raw image's labeled values (after convolution, with no parametric profile), so this column's value will generally be (much) smaller than @mymath{r_e}, depending on the PSF, depth of the dataset, the morphology, or if a fraction of the profile falls on the edge of the image.
In other words, this option can only be interpreted as an effective radius if there is no noise, no PSF, the profile within the image extends to infinity (or a very large multiple of the effective radius), and it is not near the edge of the image.
@item --frac-max1-area
@itemx --frac-max2-area
Number of pixels brighter than the given fraction(s) of the maximum pixel value.
For the maximum value, see the description of @option{--maximum} column.
The fraction(s) are given through the @option{--frac-max} option (which can take two values), described in @ref{MakeCatalog inputs and basic settings}.
Recall that in @option{--half-max-area}, the fraction is fixed to 0.5.
Hence, together with these two columns, you can sample three parts of the profile's area.
@item --frac-max1-sum
@itemx --frac-max2-sum
Sum of pixels brighter than the given fraction(s) of the maximum pixel value.
For the maximum value, see the description of @option{--maximum} column below.
The fraction(s) are given through the @option{--frac-max} option (which can take two values), described in @ref{MakeCatalog inputs and basic settings}.
Recall that in @option{--half-max-sum}, the fraction is fixed to 0.5.
Hence, together with these two columns, you can sample three parts of the profile's sum of pixels.
@item --frac-max1-radius
@itemx --frac-max2-radius
Radius (in units of pixels) derived from the area containing the given fraction(s) of the maximum pixel value (the value reported by @option{--frac-max1-area} or @option{--frac-max2-area}).
For the maximum value, see the description of @option{--maximum} column below.
The fractions are given through the @option{--frac-max} option (which can take two values), described in @ref{MakeCatalog inputs and basic settings}.
Recall that in @option{--fwhm}, the fraction is fixed to 0.5.
Hence, together with these two columns, you can sample the profile's radius at three levels.
@item --clumps-area
[Objects] The total area of all the clumps in this object.
@item --weight-area
The area (number of pixels) used in the flux weighted position calculations.
@item --geo-area
The area of all the pixels labeled with an object or clump.
Note that unlike @option{--area}, pixel values are completely ignored in this column.
For example, if a pixel value is blank, it will not be counted in @option{--area}, but will be counted here.
@item --geo-area-xy
Similar to @option{--geo-area}, when the clump or object is projected onto the first two dimensions.
This is only available for 3-dimensional datasets.
When working with Integral Field Unit (IFU) datasets, this projection onto the first two dimensions would be a narrow-band image.
@end table
@node Morphology measurements elliptical, Measurements per slice spectra, Morphology measurements nonparametric, MakeCatalog measurements on each label
@subsubsection Morphology measurements (elliptical)
When your target objects are sufficiently ellipse-like, you can use the measurements below to quantify the various parameters of the ellipse.
The derivation of the elliptical parameters is described in detail below, followed by the available MakeCatalog column options.
The shape or morphology of a target is one of the most commonly desired parameters of a target.
Here, we will review the derivation of the most basic/simple morphological parameters: the elliptical parameters for a set of labeled pixels.
The elliptical parameters are: the (semi-)major axis, the (semi-)minor axis and the position angle along with the central position of the profile.
The derivations below follow the SExtractor manual's derivations, with some added explanations for easier reading.
@cindex Moments
Let's begin with one dimension for simplicity: Assume we have a set of @mymath{N} values @mymath{B_i} (for example, showing the spatial distribution of a target's brightness), each at position @mymath{x_i}.
The simplest parameter we can define is the geometric center of the object (@mymath{x_g}) (ignoring the brightness values): @mymath{x_g=(\sum_ix_i)/N}.
@emph{Moments} are defined to incorporate both the value (brightness) and position of the data.
The first moment can be written as:
@dispmath{\overline{x}={\sum_iB_ix_i \over \sum_iB_i}}
@cindex Variance
@cindex Second moment
@noindent
This is essentially the weighted (by @mymath{B_i}) mean position.
The geometric center (@mymath{x_g}, defined above) is a special case of this with all @mymath{B_i=1}.
The second moment is essentially the variance of the distribution:
@dispmath{\overline{x^2}\equiv{\sum_iB_i(x_i-\overline{x})^2 \over
\sum_iB_i} = {\sum_iB_ix_i^2 \over \sum_iB_i} -
2\overline{x}{\sum_iB_ix_i\over\sum_iB_i} + \overline{x}^2
={\sum_iB_ix_i^2 \over \sum_iB_i} - \overline{x}^2}
@cindex Standard deviation
@noindent
The last step was done from the definition of @mymath{\overline{x}}.
Hence, the square root of @mymath{\overline{x^2}} is the spatial standard deviation (along the one-dimension) of this particular brightness distribution (@mymath{B_i}).
Crudely (or qualitatively), you can think of its square root as the distance (from @mymath{\overline{x}}) which contains a specific amount of the flux (depending on the @mymath{B_i} distribution).
Similar to the first moment, the geometric second moment can be found by setting all @mymath{B_i=1}.
So while the first moment quantified the position of the brightness distribution, the second moment quantifies how that brightness is dispersed about the first moment.
In other words, it quantifies how ``sharp'' the object's image is.
@cindex Floating point error
Before continuing to two dimensions and the derivation of the elliptical parameters, let's pause for an important implementation technicality.
You can ignore this paragraph and the next two if you do not want to implement these concepts.
The basic definition (first definition of @mymath{\overline{x^2}} above) can be used without any major problem.
However, using this fraction requires two runs over the data: one run to find @mymath{\overline{x}} and another to find @mymath{\overline{x^2}} from @mymath{\overline{x}}; this can be slow.
The advantage of the last fraction above, is that we can estimate both the first and second moments in one run (since the @mymath{-\overline{x}^2} term can easily be added later).
The logarithmic nature of floating point digitization creates a complication, however: suppose the object is located between pixels 10000 and 10020.
Hence the target's pixels are only distributed over 20 pixels (with a standard deviation @mymath{<20}), while the mean has a value of @mymath{\sim10000}.
The @mymath{\sum_iB_ix_i^2} term will then be very large, while the individual pixel differences from the mean are orders of magnitude smaller.
This will lower the accuracy of our calculation due to the limited accuracy of floating point operations.
The variance only depends on the distance of each point from the mean, so we can shift all positions by a constant/arbitrary @mymath{K} which is much closer to the mean: @mymath{\overline{x-K}=\overline{x}-K}.
Hence we can calculate the second order moment using:
@dispmath{ \overline{x^2}={\sum_iB_i(x_i-K)^2 \over \sum_iB_i} -
(\overline{x}-K)^2 }
@noindent
The closer @mymath{K} is to @mymath{\overline{x}}, the better (the sums of squares will involve smaller numbers); as long as @mymath{K} is within the object's limits (in the example above, @mymath{10000\leq{K}\leq10020}), the floating point error induced in our calculation will be negligible.
For the simplest implementation, MakeCatalog takes @mymath{K} to be the smallest position of the object in each dimension.
Since @mymath{K} is arbitrary and an implementation/technical detail, we will ignore it for the remainder of this discussion.
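For example, here is a minimal single-pass Python sketch of the shifted moments above (only an illustration; @code{x} and @code{B} are assumed to hold one label's pixel positions and values):
@example
import numpy as np

# Assumed inputs: 'x' (pixel positions of one label, one dimension)
# and 'B' (the pixel values at those positions).
K = x.min()                   # MakeCatalog's choice of shift
xs = x - K
sB = B.sum()                  # all three sums accumulated in one pass
sBx = (B * xs).sum()
sBx2 = (B * xs**2).sum()
mean = sBx / sB + K           # first moment (shift restored)
var = sBx2 / sB - (sBx / sB)**2   # second moment about the mean
@end example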
In two dimensions, the mean and variances can be written as:
@dispmath{\overline{x}={\sum_iB_ix_i\over \sum_iB_i}, \quad
\overline{x^2}={\sum_iB_ix_i^2 \over \sum_iB_i} -
\overline{x}^2}
@dispmath{\overline{y}={\sum_iB_iy_i\over \sum_iB_i}, \quad
\overline{y^2}={\sum_iB_iy_i^2 \over \sum_iB_i} -
\overline{y}^2}
@dispmath{\quad\quad\quad\quad\quad\quad\quad\quad\quad
\overline{xy}={\sum_iB_ix_iy_i \over \sum_iB_i} -
\overline{x}\times\overline{y}}
If an elliptical profile's major axis exactly lies along the @mymath{x} axis, then @mymath{\overline{x^2}} will be directly proportional with the profile's major axis, @mymath{\overline{y^2}} with its minor axis and @mymath{\overline{xy}=0}.
However, in reality we are not that lucky and (assuming galaxies can be parameterized as ellipses) the major axis of a galaxy can be in any direction on the image (in fact, this is one of the core principles behind weak-lensing shear estimation).
So the purpose of the remainder of this section is to define a strategy to measure the position angle and axis ratio of some randomly positioned ellipses in an image, using the raw second moments that we have calculated above in our image coordinates.
Let's assume we have rotated the galaxy by @mymath{\theta}; the new second order moments are:
@dispmath{\overline{x_\theta^2} = \overline{x^2}\cos^2\theta +
\overline{y^2}\sin^2\theta -
2\overline{xy}\cos\theta\sin\theta }
@dispmath{\overline{y_\theta^2} = \overline{x^2}\sin^2\theta +
\overline{y^2}\cos^2\theta +
2\overline{xy}\cos\theta\sin\theta}
@dispmath{\overline{xy_\theta} = \overline{x^2}\cos\theta\sin\theta -
\overline{y^2}\cos\theta\sin\theta +
\overline{xy}(\cos^2\theta-\sin^2\theta)}
@noindent
The best @mymath{\theta} (@mymath{\theta_0}, where major axis lies along the @mymath{x_\theta} axis) can be found by:
@dispmath{\left.{\partial \overline{x_\theta^2} \over \partial \theta}\right|_{\theta_0}=0}
Taking the derivative, we get:
@dispmath{2\cos\theta_0\sin\theta_0(\overline{y^2}-\overline{x^2}) +
2(\cos^2\theta_0-\sin^2\theta_0)\overline{xy}=0}
@noindent
When @mymath{\overline{x^2}\neq\overline{y^2}}, we can write:
@dispmath{\tan2\theta_0 =
2{\overline{xy} \over \overline{x^2}-\overline{y^2}}.}
@cindex Position angle
@noindent
MakeCatalog uses the standard C math library's @code{atan2} function to estimate @mymath{\theta_0}, which we define as the position angle of the ellipse.
To recall, this is the angle of the major axis of the ellipse with the @mymath{x} axis.
By definition, when the elliptical profile is rotated by @mymath{\theta_0}, then @mymath{\overline{xy_{\theta_0}}=0}, @mymath{\overline{x_{\theta_0}^2}} will be the extent of the maximum variance and @mymath{\overline{y_{\theta_0}^2}} the extent of the minimum variance (which are perpendicular for an ellipse).
Replacing @mymath{\theta_0} in the equations above for @mymath{\overline{x_\theta^2}} and @mymath{\overline{y_\theta^2}}, we can get the semi-major (@mymath{A}) and semi-minor (@mymath{B}) axis lengths:
@dispmath{A^2\equiv\overline{x_{\theta_0}^2}= {\overline{x^2} +
\overline{y^2} \over 2} + \sqrt{\left({\overline{x^2}-\overline{y^2} \over 2}\right)^2 + \overline{xy}^2}}
@dispmath{B^2\equiv\overline{y_{\theta_0}^2}= {\overline{x^2} +
\overline{y^2} \over 2} - \sqrt{\left({\overline{x^2}-\overline{y^2} \over 2}\right)^2 + \overline{xy}^2}}
As a summary, it is important to remember that the units of @mymath{A} and @mymath{B} are in pixels (the standard deviation of a positional distribution) and that they represent the spatial light distribution of the object in both image dimensions (rotated by @mymath{\theta_0}).
When the object cannot be represented as an ellipse, this interpretation breaks down: @mymath{\overline{xy_{\theta_0}}\neq0} and @mymath{\overline{y_{\theta_0}^2}} will not be the direction of minimum variance.
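As an illustration of these final steps, the short Python sketch below derives the position angle and axis lengths from the centered second moments (here named @code{oxx}, @code{oyy} and @code{oxy}; this is a sketch of the equations above, not MakeCatalog's exact code):
@example
import math

# Assumed inputs: oxx, oyy, oxy (the centered second moments).
theta0 = 0.5 * math.atan2(2 * oxy, oxx - oyy)  # position angle [rad]
mid = (oxx + oyy) / 2
root = math.sqrt(((oxx - oyy) / 2)**2 + oxy**2)
A = math.sqrt(mid + root)     # semi-major RMS extent [pixels]
B = math.sqrt(mid - root)     # semi-minor RMS extent [pixels]
@end example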
The list of options/columns to measure the elliptical properties of an object's light distribution is given below.
Those that start with @option{--geo-*} ignore the pixel values and just do the measurements on the label's ``geometric'' shape.
@table @option
@item --semi-major
The pixel-value weighted root mean square (RMS) along the semi-major axis of the profile (assuming it is an ellipse) in units of pixels.
@item --semi-minor
The pixel-value weighted root mean square (RMS) along the semi-minor axis of the profile (assuming it is an ellipse) in units of pixels.
@item --axis-ratio
The pixel-value weighted axis ratio (semi-minor/semi-major) of the object or clump.
@item --position-angle
The pixel-value weighted angle of the semi-major axis with the first FITS axis in degrees.
@item --geo-semi-major
The geometric (ignoring pixel values) root mean square (RMS) along the semi-major axis of the profile, assuming it is an ellipse, in units of pixels.
@item --geo-semi-minor
The geometric (ignoring pixel values) root mean square (RMS) along the semi-minor axis of the profile, assuming it is an ellipse, in units of pixels.
@item --geo-axis-ratio
The geometric (ignoring pixel values) axis ratio of the profile, assuming it is an ellipse.
@item --geo-position-angle
The geometric (ignoring pixel values) angle of the semi-major axis with the first FITS axis in degrees.
@end table
@node Measurements per slice spectra, , Morphology measurements elliptical, MakeCatalog measurements on each label
@subsubsection Measurements per slice (spectra)
@cindex Spectrum
@cindex 3D data-cubes
@cindex Cubes (3D data)
@cindex IFU: Integral Field Unit
@cindex Integral field unit (IFU)
@cindex Spectrum (of astronomical source)
When the input is a 3D data cube, MakeCatalog also has the following multi-valued measurements per label (as well as the single-valued measurements).
These will be stored as vector columns (@ref{Vector columns}) in the same output table as the single-valued measurements.
For a tutorial on how to use these options and interpret their values, see @ref{Detecting lines and extracting spectra in 3D data}.
These options do measurements on each 2D slice of the input 3D cube; hence the common format @code{--*-in-slice}.
Each slice usually corresponds to a certain wavelength, so you can also think of these measurements as spectra.
@cartouche
@noindent
@strong{No in-slice measurement for clumps:} Unfortunately, due to time constraints, the columns described in this section are not yet implemented for clumps.
For the status of this issue, see @url{https://savannah.gnu.org/bugs/?66286, bug 66286}.
@end cartouche
For each row (input label), each of the columns described here will contain multiple values as a vector column.
The number of measurements in each column is the number of slices in the cube, or the size of the cube along the third dimension.
To learn more about vector columns and how to manipulate them, see @ref{Vector columns}.
For example usage of these columns in the tutorial above, see @ref{3D measurements and spectra} and @ref{Extracting a single spectrum and plotting it}.
@noindent
There are two ways to do each measurement on a slice for each label:
@table @asis
@item Only label
The measurement will only be done on the voxels in the slice that are associated with that label.
These types of per-slice measurements therefore have the following properties:
@itemize
@item
This will only be a measurement of that label and will not be affected by any other label.
@item
The number of voxels used in each slice can be different (usually only one or two voxels at the two extremes of the label along the third dimension, and many in the middle).
@item
Since most labels are localized along the third dimension (maybe only covering 20 slices out of thousands!), many of the measurements (on slices where the label doesn't exist) will be NaN (for the sum measurements for example) or 0 (for the area measurements).
@end itemize
@item Projected label
MakeCatalog will first project the 3D label into a 2D surface (along the third dimension) to get its 2D footprint.
Afterwards, all the voxels in that 2D footprint will be measured in all slices.
All these measurements will have a @option{-proj-} component in their name.
These types of per-slice measurements therefore have the following properties:
@itemize
@item
A measurement will be done on each slice of the cube.
@item
All measurements will be done on the same surface area.
@item
Labels can overlap when they are projected onto the first two FITS dimensions (the spatial coordinates, not spectral).
As a result, other emission lines or objects may contaminate the resulting spectrum for each label.
@end itemize
To help separate other labels, MakeCatalog can do a third type of measurement on each slice: measurements on the voxels that belong to other labels but overlap with the 2D projection.
This can be used to see how much your projected measurement is affected by other emission sources (on the projected spectra) and also if multiple lines (labeled regions) belong to the same physical object.
These measurements contain @code{-other-} in their name.
@end table
@table @option
@item --sum-in-slice
[Only label] Sum of values in each slice.
@item --sum-err-in-slice
[Only label] Error in @option{--sum-in-slice}.
@item --area-in-slice
[Only label] Number of labeled voxels in each slice.
@item --sum-proj-in-slice
[Projected label] Sum of values within the label's 2D projection in each slice.
@item --area-proj-in-slice
[Projected label] Number of voxels that are used in @option{--sum-proj-in-slice}.
@item --sum-proj-err-in-slice
[Projected label] Error of @option{--sum-proj-in-slice}.
@item --area-other-in-slice
[Projected label] Area of other labels that fall within this label's 2D projection, in each slice.
@item --sum-other-in-slice
[Projected label] Sum of other labels' values that fall within this label's 2D projection, in each slice.
@item --sum-other-err-in-slice
[Projected label] Error of @option{--sum-other-in-slice}.
@end table
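To clarify the difference between the ``only label'' and ``projected label'' measurements, here is a toy Python sketch (an illustration on assumed inputs: @code{cube} is the 3D data cube and @code{labels} the 3D labeled image, with the slices along the first Python axis):
@example
import numpy as np

lab = 1                            # one assumed label identifier
inlab = labels == lab

# "Only label": per-slice sum over this label's own voxels.
sum_in_slice = np.where(inlab, cube, 0).sum(axis=(1, 2))

# "Projected label": project the 3D label along the third dimension,
# then measure that same 2D footprint on every slice.
proj = inlab.any(axis=0)
sum_proj_in_slice = cube[:, proj].sum(axis=1)

# "Other": voxels of other labels that overlap the 2D projection.
other = (labels > 0) & (labels != lab) & proj[None, :, :]
sum_other_in_slice = np.where(other, cube, 0).sum(axis=(1, 2))
@end example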
@node Metameasurements on full input, Manual metameasurements, MakeCatalog measurements on each label, MakeCatalog
@subsection Metameasurements on full input
The data (rows and columns) produced by MakeCatalog are independent measurements of each label and were the focus of @ref{MakeCatalog measurements on each label}.
However, to correctly interpret those measurements, especially in light of other studies, we also need a statistical analysis of the full input or catalog.
These metameasurements (measurements on/about independent/individual measurements) are a single value for the whole input and are therefore stored as meta-data (keywords) in the 0th (first) HDU of the MakeCatalog's output.
The most common type of such metameasurements is known as the ``limits'' or ``depth'' of a dataset.
These are also the only type that is currently implemented in MakeCatalog; so let's continue with them.
@cindex Depth of data
@cindex Clump magnitude limit
@cindex Object magnitude limit
@cindex Limit, object/clump magnitude
@cindex Magnitude, object/clump detection limit
No measurement on a real dataset can be perfect: you can only reach a certain level/limit of accuracy and a meaningful (scientific) analysis on the output catalog requires an understanding of these limits.
Different datasets have different noise properties and different detection methods@footnote{A method/algorithm/software that is run with a different set of parameters is considered a different detection method.} will have different abilities to detect or measure certain kinds of signal (astronomical objects) and their properties in the dataset.
Hence, quantifying the detection and measurement limitations with a particular dataset and analysis tool is the most crucial/critical aspect of any high-level analysis.
Due to their importance, we have already touched upon some of these in two tutorials on real data: see @ref{Measuring the dataset limits} and @ref{Image surface brightness limit}.
The sections below describe each of the metameasurements that MakeCatalog will calculate and report when given the @option{--meta-measurements} option.
Here, we just focus on the concept behind each measurement and its derivation.
The actual keywords that are written for each are described in @ref{MakeCatalog output keywords}.
@menu
* Surface brightness limit of image:: A portable measure of the noise level.
* Noise based magnitude limit of image:: Magnitude of objects above a certain threshold of the noise.
* Confusion limit of image:: A measure of density in the image.
@end menu
@node Surface brightness limit of image, Noise based magnitude limit of image, Metameasurements on full input, Metameasurements on full input
@subsubsection Surface brightness limit of image
@cindex Surface brightness limit
@cindex SBL: surface brightness limit
@cindex Limit: surface brightness limit
As we make more observations on one region of the sky and add/combine the observations into one dataset, both the signal and the noise increase.
However, the signal increases much faster than the noise: assuming you add @mymath{N} datasets with equal exposure times, the signal increases as a multiple of @mymath{N}, while the noise increases as @mymath{\sqrt{N}}.
Therefore the signal-to-noise ratio increases by a factor of @mymath{\sqrt{N}}.
Visually, fainter (per pixel) parts of the objects/signal in the image will become more visible/detectable.
This noise level is known as the dataset's surface brightness limit.
You can think of the noise as muddy water in a pond (the ``sky'' will be the bottom of the pond).
The signal (coming from astronomical objects in real data) will be the rocks and their peaks can sometimes reach above the muddy water.
Let's assume that in your first observation the muddy water has just been stirred and except a few small peaks, you cannot see anything through the mud.
As you wait (and make more observations/exposures), the mud settles down and the @emph{depth} of the transparent water increases.
As a result, more and more summits become visible and the lower parts of the rocks (parts with lower surface brightness) can be seen more clearly.
In this analogy@footnote{Note that this muddy water analogy is not perfect, because while the water-level remains the same all over a peak, in data analysis, the Poisson noise increases with the level of data.}, height (from the ground) is the @emph{surface brightness} and the height of the muddy water at the moment you do your measurements, is your @emph{surface brightness limit}.
@cindex ACS camera
@cindex Surface brightness limit
@cindex Limit, surface brightness
On different instruments, pixels cover different spatial angles over the sky.
For example, the width of each pixel on the ACS camera of the Hubble Space Telescope (HST) is roughly 0.05 seconds of arc, while the pixels of SDSS are each 0.396 seconds of arc (almost eight times wider@footnote{Ground-based instruments like the SDSS suffer from strong smoothing due to the atmosphere.
Therefore, increasing the pixel resolution (or decreasing the width of a pixel) will not increase the received information.}).
Nevertheless, irrespective of its sky coverage, a pixel is our unit of data collection.
To start with, we define the low-level surface brightness limit, or @emph{depth}, in units of magnitude/pixel with the equation below (assuming the image has a zero point magnitude of @mymath{z} and we want the @mymath{n}th multiple of @mymath{\sigma_p}), where @mymath{\sigma_p} is the per-pixel standard deviation (for example, the median of the measured sky standard deviation over the image).
@dispmath{SB_{n\sigma,\rm pixel}=-2.5\times\log_{10}{(n\sigma_p)}+z \quad\quad [mag/pixel]}
@cindex XDF survey
@cindex CANDELS survey
@cindex eXtreme Deep Field (XDF) survey
As an example, the XDF survey covers part of the sky that the HST has observed the most (for 85 orbits) and is consequently very small (@mymath{\sim4} minutes of arc, squared).
On the other hand, the CANDELS survey is one of the widest multi-color surveys done by the HST, covering several fields (about 720 arcmin@mymath{^2}), but its deepest fields have only 9 orbits of observation.
The @mymath{1\sigma} depth of the XDF and CANDELS-deep surveys in the near infrared WFC3/F160W filter are respectively 34.40 and 32.45 magnitudes/pixel.
In a single orbit image, this same field has a @mymath{1\sigma} depth of 31.32 magnitudes/pixel.
Recall that a larger magnitude corresponds to fainter objects, see @ref{Brightness flux magnitude}.
@cindex Pixel scale
The low-level magnitude/pixel measurement above is only useful when all the datasets you want to use, or compare, have the same pixel size.
However, you will often find yourself using, or comparing, datasets from various instruments with different pixel scales (projected pixel width, in arc-seconds).
If we know the pixel scale, we can obtain a more easily comparable surface brightness limit in units of: magnitude/arcsec@mymath{^2}.
But another complication is that astronomical objects are usually larger than 1 arcsec@mymath{^2}.
As a result, it is common to measure the surface brightness limit over a larger (but fixed, depending on context) area.
Let's assume that every pixel is @mymath{p} arcsec@mymath{^2} and we want the surface brightness limit for an object covering @mymath{A} arcsec@mymath{^2} (so @mymath{A/p} is the number of pixels that cover an area of @mymath{A} arcsec@mymath{^2}).
On the other hand, noise is added in RMS@footnote{If you add three datasets with noise @mymath{\sigma_1}, @mymath{\sigma_2} and @mymath{\sigma_3}, the resulting noise level is @mymath{\sigma_t=\sqrt{\sigma_1^2+\sigma_2^2+\sigma_3^2}}, so when @mymath{\sigma_1=\sigma_2=\sigma_3\equiv\sigma}, then @mymath{\sigma_t=\sigma\sqrt{3}}.
In this case, the area @mymath{A} is covered by @mymath{A/p} pixels, so the noise level is @mymath{\sigma_t=\sigma\sqrt{A/p}}.}, hence the noise level in @mymath{A} arcsec@mymath{^2} is @mymath{n\sigma_p\sqrt{A/p}}.
But we want the result in units of flux per arcsec@mymath{^2}, so we should divide this by the area @mymath{A} (in arcsec@mymath{^2}):
@mymath{n\sigma_p\sqrt{A/p}/A=n\sigma_p\sqrt{A/(pA^2)}=n\sigma_p/\sqrt{pA}}.
Plugging this into the magnitude equation, we get the @mymath{n\sigma} surface brightness limit, over an area of @mymath{A} arcsec@mymath{^2}, in units of magnitudes/arcsec@mymath{^2} (see the end of this section on how to derive this using Gnuastro):
@dispmath{SB_{{n\sigma,\rm A arcsec}^2}=-2.5\times\log_{10}{\left({n\sigma_p\over \sqrt{pA}}\right)}+z \quad\quad [mag/arcsec^2]}
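For example, taking some arbitrary illustrative numbers (@mymath{n=3}, @mymath{\sigma_p=0.02} in image units, @mymath{p=0.04} arcsec@mymath{^2}, @mymath{A=100} arcsec@mymath{^2} and @mymath{z=22.5}), the equation above can be evaluated with the short Python snippet below:
@example
import math

n, sigp, p, A, z = 3, 0.02, 0.04, 100, 22.5  # assumed example values
sbl = -2.5 * math.log10(n * sigp / math.sqrt(p * A)) + z
print(sbl)          # -> about 26.31 [mag/arcsec^2]
@end example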
@cindex Correlated noise
@cindex Noise, correlated
As you saw in its derivation, the calculation above extrapolates the noise of a single pixel over the whole requested area!
In other words, all pixels are treated independently in the measurement of the standard deviation.
It therefore implicitly assumes that the noise is the same in all of the pixels.
But this only happens in individual exposures: reduced data will have correlated noise because they are a coadd of many individual exposures that have been warped (thus mixing the pixel values).
A more accurate measure, which provides a realistic value for every labeled region, is the @emph{upper-limit surface brightness}, discussed in @ref{Upper limit surface brightness of image}.
Within Gnuastro, measuring the surface brightness limit of the image is very easy: the outputs of NoiseChisel include the Sky standard deviation (@mymath{\sigma}) on every group of pixels (a tile) that were calculated from the undetected pixels in each tile, see @ref{Tessellation} and @ref{NoiseChisel output}.
The @mymath{\sigma_p} used above is recorded in the @code{MEDSTD} keyword of the @code{SKY_STD} extension of NoiseChisel's output.
Therefore, the first thing you need to do is to run NoiseChisel on the image.
@cindex World Coordinate System (WCS)
When MakeCatalog is called with @option{--meta-measurements}, it will calculate the input dataset's @mymath{SB_{{n\sigma,\rm A arcsec}^2}} and write it into the @code{SBL} keyword of the 0th HDU of the output, see @ref{MakeCatalog output keywords}.
You can set your desired @mymath{n}-th multiple of @mymath{\sigma} and the @mymath{A} arcsec@mymath{^2} area using the following two options respectively: @option{--sbl-sigma} and @option{--sbl-area} (see @ref{MakeCatalog output keywords}).
Just note that @mymath{SB_{{n\sigma,\rm A arcsec}^2}} is only calculated if the input has World Coordinate System (WCS).
Without WCS, the pixel scale cannot be derived.
In case you just want the surface brightness limit of an image without separating the objects and clumps within the image or doing any measurements, you can use the series of commands below.
They assume your image is called @file{image.fits} and its zero point magnitude is 25.
For a more detailed hands-on understanding of the surface brightness limit, see @ref{Image surface brightness limit} (which is part of a larger tutorial).
@example
$ zp=25.0
$ astnoisechisel image.fits -o nc.fits
$ astmkcatalog nc.fits -hDETECTIONS --sn --zeropoint=$zp \
               --output=cat.fits --meta-measurements
$ astfits cat.fits --keyvalue=SBL --quiet | asttable -Y
@end example
Because the surface brightness limit is just an extrapolation of the noise level, we can extend the concept to use a reference surface brightness limit to find the expected surface brightness limit of another telescope (in a similar location), or a different exposure time for the same telescope.
This is discussed more fully in @ref{Expected surface brightness limit}.
@node Noise based magnitude limit of image, Confusion limit of image, Surface brightness limit of image, Metameasurements on full input
@subsubsection Noise based magnitude limit of image
@cindex Limit: detection (magnitude)
@cindex Magnitude limit (noise-based)
@cindex Limit: magnitude (noise-based)
@cindex Detection limit (magnitude limit)
Assuming @mymath{\sigma_p} to be the per-pixel noise level, if we apply a threshold of @mymath{n\sigma_p} on the image, what would be the faintest magnitude of an object that covers more than @mymath{A} square arcseconds?
Answering this question is the goal of the noise-based magnitude limit (nML or NML; for more on @mymath{\sigma_p}, see @ref{Surface brightness limit of image}).
The commonly used parameters are: @mymath{n=5} and @mymath{A=7} square arcseconds (approximate area of a circle with radius 1.5 arcseconds).
Let's denote the area of a pixel (in square arcseconds) by @mymath{p}.
The arbitrary area @mymath{A} covers @mymath{A/p} pixels (which is just a counter; without any units).
On the other hand, @mymath{n\sigma_p} (the threshold we want to apply) is the minimum flux that each pixel should have.
Assuming all the pixels above the threshold have the same @mymath{n\sigma_p} flux, the total flux above the threshold would be @mymath{n\sigma_p\times A/p}.
Plugging this into the definition of magnitudes (see @ref{Brightness flux magnitude}), we get the noise based magnitude limit:
@dispmath{nML = -2.5 \log_{10}\left(n\sigma_p\times\frac{A}{p}\right) + z}
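For example, taking some arbitrary illustrative numbers (@mymath{\sigma_p=0.02} in image units, @mymath{p=0.04} arcsec@mymath{^2}, @mymath{z=22.5}) with the common @mymath{n=5} and @mymath{A=7} arcsec@mymath{^2}, the equation can be evaluated as below:
@example
import math

n, sigp, p, A, z = 5, 0.02, 0.04, 7, 22.5    # assumed example values
nml = -2.5 * math.log10(n * sigp * (A / p)) + z
print(nml)          # -> about 19.39 [mag]
@end example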
@noindent
There are several caveats to this measure:
@itemize
@item
Astronomical objects are not flat! Their flux gradually sinks into the noise; see Figure 1 of Akhlaghi & Ichikawa @url{https://arxiv.org/abs/1505.01664,2015}.
Therefore, the centers of stars and galaxies will be (much) brighter than the threshold, and a major fraction of the object's flux will not be counted in this measure (more so as @mymath{n} increases).
@item
The area used in the equation is just a number (without any shape!).
In other words, it is an extrapolated/abstract area that is shapeless.
@item
Since this depends on a per-pixel measurement of @mymath{\sigma_p}, it does not account for correlated noise.
For more on this, see @ref{Surface brightness limit of image}.
@end itemize
Given the caveats above, it may be strange to see that the noise-based magnitude limit is still reported in many papers (it is sometimes simply called the ``magnitude limit'' or even the ``detection limit'').
So let's review the history of this metameasure for a better understanding of its usage.
When computers started to enter astronomical data analysis (in the 1970s), their processing power was very low (compared to today!).
Also, images were taken on photographic plates and digitized/scanned later (producing noise that was far from clean).
They therefore needed to apply high thresholds (for example, larger than @mymath{2\sigma_p}), find groups of pixels that were contiguously above that threshold, and define the ``real'' objects based on their contiguous area above that threshold.
Therefore, this limit was also known as ``detection limit''.
Because this lost flux was a problem, authors like Petrosian @url{https://ui.adsabs.harvard.edu/abs/1976ApJ...209L...1P,1976} or Kron @url{https://ui.adsabs.harvard.edu/abs/1980ApJS...43..305K,1980} suggested ways of estimating the object's ``total'' flux (also accounting for the flux below the threshold).
But with the new detection paradigms of the current century (for example, Gnuastro's @ref{NoiseChisel}), the name ``detection limit'' causes confusion and misinterpretation.
Therefore, in Gnuastro we call this limit the ``noise-based magnitude limit'' and include it only for legacy/historical reasons (it is just an extrapolation of the noise).
The reason for the ``noise-based'' prefix is that the equation above is purely based on the noise (does not depend on the actual signal!).
However, different objects have different profiles: some are sharp/concentrated (like stars or elliptical galaxies), others are diffuse (like ultra-diffuse galaxies) and others are clumpy (like a globular cluster).
Therefore, we classify the more robust ``magnitude limit'' (which actually takes morphology into account) as a @emph{manual} metameasurement (to be done after calling MakeCatalog, based on your desired targets/goals); it is described in @ref{Magnitude limit for certain objects}.
When MakeCatalog is called with @option{--meta-measurements} and @option{--clumpscat}, it will calculate the input dataset's noise-based magnitude limit and write it into the @code{NML} keyword of the 0th HDU of the output.
You can set your desired @mymath{n}-th multiple of @mymath{\sigma} and the @mymath{A} arcsec@mymath{^2} area using the following two options respectively: @option{--nml-sigma} and @option{--nml-area}.
See @ref{MakeCatalog output keywords} for more.
@node Confusion limit of image, , Noise based magnitude limit of image, Metameasurements on full input
@subsubsection Confusion limit of image
@cindex Confusion limit
@cindex Limit, confusion
The confusion limit is expressed in units of distance (pixels or arcseconds) and is derived from the distribution of the distances of all resolved sources to their nearest neighbor.
The confusion limit therefore depends on several factors, including the pointing on the sky.
For example, a shallow image taken on the disk of M31, the center of a large globular cluster, or close to the disk of the Milky Way will be heavily affected by confusion (resolved sources will be very close to each other), even if it has a low exposure time on a small telescope.
But under exactly the same imaging conditions (exposure time, telescope aperture, PSF, etc.), a random pointing away from the disk of the Milky Way will have a good confusion limit (resolved sources are more distant from each other).
But as we go very deep (more exposure time or larger telescope) without a similar improvement in the pixel scale, the confusion limit will become mostly uniform (assuming a relatively uniform background galaxy population).
In wide field imaging, it is also important to account for the projection effects when calculating the projected distances (so centers have to be in RA and Dec; and distances have to be measured on a sphere, not on a plane).
Therefore, unlike the previously discussed automatic metameasurements, such as @ref{Surface brightness limit of image} and @ref{Noise based magnitude limit of image}, the confusion limit depends on the pointing on the sky.
This is because those are defined purely based on the noise of the image, while the confusion limit comes from the distribution of the resolved signal in the image.
However, as mentioned above, in very deep extra-galactic fields (at high galactic latitudes, away from the disk of the Milky Way), the density of sources should become mostly uniform (assuming a relatively uniform background galaxy population).
The confusion limit is found by finding the nearest resolved source to each source (Segment's clumps by default, see @ref{Segment}).
Technically (when calculating the distance between the clump and its nearest neighbor), it is also important to account for the spherical nature of RA and Dec and to use spherical distance measures (see description of @code{gal_wcs_angular_distance_deg} in @ref{World Coordinate System}).
Otherwise (if using pixel coordinates/distances) the confusion limit will be affected by projection/distortion issues.
@cindex Skewness
The distribution of distances to the nearest neighbor (in arcsec) will naturally be heavily skewed (since distance is always positive).
The distribution relies heavily on the depth, pixel scale and PSF of the image (which can be very different from one image to another).
Therefore, the best measure we have found so far is to use the ``width'' of the distribution (difference between the 0.75 and 0.25 quantiles).
However, this distance is most relevant in pixel units, not arcseconds.
This is because the confusion limit depends on detection and segmentation, which are done on the pixel grid, not in the abstract, continuous, instrument-agnostic units of arcseconds.
When MakeCatalog is called with @option{--meta-measurements} and @option{--clumpscat}, it will measure the confusion limit and store it in the @code{CNL} output keyword (see @ref{MakeCatalog output keywords}).
Further information (five percentiles) on the distribution of distances to the nearest neighbor is provided in the @code{CNLP*} keywords.
The percentile information is necessary because the distribution of distances to the nearest label is by nature very non-symmetric (skewed to the positive).
As a result, it cannot be simply parameterized like the other metameasures here.
Therefore, MakeCatalog does not just return the median (quantile 0.5, or the 50th percentile), but several other quantiles as well.
Ultimately, you can see each clump's nearest neighbor as well as the spherical distance to its nearest clump by activating the @option{--cnl-check} option (see @ref{MakeCatalog output HDUs}).
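As a rough Python sketch of this measurement (an illustration only, not MakeCatalog's implementation, which uses @code{gal_wcs_angular_distance_deg}; the @code{ra} and @code{dec} arrays of clump centers in degrees are assumed inputs):
@example
import numpy as np
from scipy.spatial import cKDTree

# Unit vectors on the sphere, so KD-tree distances respect curvature.
r, d = np.radians(ra), np.radians(dec)
xyz = np.column_stack((np.cos(d)*np.cos(r),
                       np.cos(d)*np.sin(r), np.sin(d)))
chord, _ = cKDTree(xyz).query(xyz, k=2)  # k=2: first match is self
ang = 2 * np.arcsin(chord[:, 1] / 2)     # chord length -> angle [rad]
nn = np.degrees(ang) * 3600              # nearest neighbor [arcsec]
q25, q75 = np.quantile(nn, [0.25, 0.75])
width = q75 - q25                        # the "width" described above
@end example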
@node Manual metameasurements, Adding new columns to MakeCatalog, Metameasurements on full input, MakeCatalog
@subsection Manual metameasurements
The concept of metameasurements, and the automatically calculated metameasurements of MakeCatalog were introduced in @ref{Metameasurements on full input}.
There are other metameasures that cannot (as of this writing) be done automatically in the same run of MakeCatalog and need a customized execution of MakeCatalog.
In this section these types of metameasures are introduced.
@menu
* Expected surface brightness limit::
* Upper limit surface brightness of image:: Necessary for diffuse regions of image.
* Magnitude limit for certain objects::
* Completeness limit for certain objects::
@end menu
@node Expected surface brightness limit, Upper limit surface brightness of image, Manual metameasurements, Manual metameasurements
@subsubsection Expected surface brightness limit
The Surface brightness limit was defined in @ref{Surface brightness limit of image}.
As we saw there, it is ultimately a theoretical extrapolation of one pixel's noise level.
Therefore, we can use a reference image's surface brightness limit (@mymath{SB_r}) to derive an expected surface brightness limit of another image (on another telescope with another exposure time, but with @emph{the same} filter; let's call it @mymath{SB_i}).
Assuming no correlated noise (which is only valid for a single exposure!), the @mymath{\sigma_p} described above is purely due to the Poisson noise of the background signal (for example, Zodiacal light or light pollution).
Therefore, taking @mymath{S} to be the exposed surface of the telescope's primary mirror (or lens) and @mymath{t} to be the exposure time, the signal of the background will increase with @mymath{S\times t} and therefore @mymath{\sigma_p\propto \sqrt{St}}.
Plugging this into two instances of the equation above allows us to derive @mymath{SB_i} from @mymath{SB_r}:
@dispmath{SB_i = SB_r + 2.5\log_{10}\left( {n_r \over n_i} \sqrt{{S_i \over S_r}{t_i \over t_r}{A_i \over A_r}} \right)}
@cindex LSST
@cindex Vera C. Rubin Observatory
@cindex Observatory: Vera C. Rubin
@cindex Rubin (Vera C.) Observatory
Since almost all mirrors are circular, we can simplify the relation above by replacing the exposed surface with the exposed radius (accounting for the non-exposed area in most reflective telescopes due to the secondary mirror) as shown in the equation below.
Note that we didn't simply say ``radius'' (and the equation does not have @mymath{r}), but ``exposed radius'' (@mymath{r_e} in the equation).
This is a very important factor to have in mind.
For example, in the Vera C. Rubin Observatory (which conducts the LSST survey), the primary mirror has a diameter of 8.4 meters; however, the inner circular region of diameter 5 meters is not used (due to the secondary and tertiary mirrors)@footnote{@url{https://commons.wikimedia.org/wiki/File:LSSToptics.jpg}}.
Therefore, the useful surface of the Vera C. Rubin telescope is @mymath{\pi(8.4^2-5.0^2)/4=\pi(6.75/2)^2} square meters, corresponding to an exposed diameter of 6.75 meters (since only the ratio @mymath{r_{ei}/r_{er}} enters the equation below, radii and diameters can be used interchangeably, as long as you are consistent; for an easy implementation of this equation in Gnuastro, see the @code{sblim-diff} operator of @ref{Unit conversion operators}).
@dispmath{SB_i = SB_r + 2.5\log_{10}\left( {r_{ei} \over r_{er}}{n_r \over n_i} \sqrt{{t_i \over t_r}{A_i \over A_r}} \right)}
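As a small Python sketch of this relation (an illustration; all the argument values in the usage line are arbitrary assumptions):
@example
import math

def expected_sbl(sb_r, re_i, re_r, n_i, n_r, t_i, t_r, a_i, a_r):
    # Mirrors the exposed-radius relation above.
    return sb_r + 2.5 * math.log10((re_i / re_r) * (n_r / n_i)
                                   * math.sqrt((t_i / t_r)
                                               * (a_i / a_r)))

# For example: with the same n and area, 4x the exposure time on the
# same telescope makes the expected limit 0.75 mag fainter.
print(expected_sbl(26.0, 1, 1, 3, 3, 4, 1, 1, 1))   # -> about 26.75
@end example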
@node Upper limit surface brightness of image, Magnitude limit for certain objects, Expected surface brightness limit, Manual metameasurements
@subsubsection Upper limit surface brightness of image
@cindex Correlated noise
@cindex Noise, correlated
As mentioned in @ref{Surface brightness limit of image}, the surface brightness limit assumes independent pixels when deriving the standard deviation (the main input in the equation).
It just extrapolates the standard deviation derived from one pixel to the requested area.
But as mentioned at the end of that section, we have correlated noise in our science-ready (deep) images, so the noise of neighboring pixels is not independent.
Because of this, the surface brightness limit will always be too optimistic: it gives fainter values than what is statistically possible in the data for the requested area.
To account for the correlated noise in the images, we need to derive the standard deviation over a group of pixels that fall within a certain footprint/shape.
For example over a circular aperture of radius 5.6419 arcsec, or a square with a side length of @mymath{10} arcsec.
Depending on the correlated noise systematics, the limit can be (very) different for different shapes, even if they have the same area (as in the circle and square mentioned in the previous sentence: both have an area of 100 arcsec@mymath{^2}).
Therefore we need to derive the standard deviation that goes into the surface brightness limit equation over a certain footprint/shape.
To do that, we should:
@enumerate
@item
Place the desired footprint many times randomly over all the undetected pixels in an image.
In MakeCatalog, the number of these random positions can be configured with @option{--upnum} and you can check their positions with @option{--checkuplim}.
@item
Calculate the sum of pixel values in each randomly placed footprint.
@item
Calculate the sigma-clipped standard deviation of the resulting distribution (of the sums of pixel values in the randomly placed apertures).
Note that each footprint's measurement is independent of the others.
@item
Calculate the surface brightness of that standard deviation (after multiplying it with your desired multiple of sigma).
For the definition of surface brightness, see @ref{Brightness flux magnitude}.
@end enumerate
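As a rough Python sketch of these four steps (an illustration only: @code{img} is assumed to be the image with all detections set to NaN, @code{ps} its pixel scale in arcsec/pixel, @code{zp} its zero point and @code{nsigma} the desired multiple of sigma):
@example
import numpy as np
from astropy.stats import sigma_clipped_stats

rad = 5.6419 / ps                # circular footprint: 100 arcsec^2
yy, xx = np.mgrid[-int(rad):int(rad)+1, -int(rad):int(rad)+1]
fp = yy**2 + xx**2 <= rad**2

rng, sums = np.random.default_rng(1), []
h, w = fp.shape
while len(sums) < 1000:          # step 1: many random placements
    y0 = rng.integers(0, img.shape[0] - h)
    x0 = rng.integers(0, img.shape[1] - w)
    vals = img[y0:y0+h, x0:x0+w][fp]
    if not np.isnan(vals).any():
        sums.append(vals.sum())  # step 2: sum over the footprint

_, _, std = sigma_clipped_stats(sums)  # step 3: sigma-clipped STD
area = fp.sum() * ps**2                # footprint area in arcsec^2
ulsb = -2.5 * np.log10(nsigma * std / area) + zp  # step 4
@end example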
The measurements over randomly placed apertures may remind you of @ref{Upper limit measurements}.
Generally, the ``upper limit'' prefix is given to all measurements that are done in this way.
Therefore, this limit is called the ``upper limit surface brightness'' of an image (for a given multiple of sigma, over a certain shape).
Traditionally a circular aperture of a fixed radius (in arcseconds) has been used.
In Gnuastro, a labeled image containing the desired shape/aperture can be generated with MakeProfiles.
The position of the label is irrelevant because the upper limit measurements are done on the many randomly placed footprints in undetected regions (independent of where the label is positioned).
That labeled image should then be given to MakeCatalog, while requesting @option{--upperlimit-sb}.
Of course, all detected signal in the image needs to be masked (set to blank/NaN) so that MakeCatalog does not use randomly placed apertures that overlap with detected signal.
Going into the implementation details can get pretty hard to follow in English, so a full hands-on tutorial is available in the second half of @ref{Image surface brightness limit}.
Read that tutorial with the same input images and run the commands, and see each output image to get a good understanding of how to properly measure the upper limit surface brightness of your images.
@node Magnitude limit for certain objects, Completeness limit for certain objects, Upper limit surface brightness of image, Manual metameasurements
@subsubsection Magnitude limit for certain objects
@cindex Magnitude limit
Suppose we have taken two images of the same field of view with the same CCD, once with a smaller telescope, and once with a larger one.
Because we used the same CCD, the noise will be very similar.
However, the larger telescope has gathered more light, therefore the same star or galaxy will have a higher signal-to-noise ratio (S/N) in the image taken with the larger one.
The same applies for a coadded image of the field compared to a single-exposure image of the same telescope.
This concept is used by some researchers to define the ``magnitude limit'' or ``detection limit'' at a certain S/N (sometimes 10, 5 or 3 for example, also written as @mymath{10\sigma}, @mymath{5\sigma} or @mymath{3\sigma}).
To do this, they measure the magnitude and signal-to-noise ratio of all the objects within an image and take the mean (or median) magnitude of the objects at the desired S/N.
A fully working example of deriving the magnitude limit is available in the tutorials section: @ref{Measuring the dataset limits}.
However, this method should be used with extreme care!
This is because the shape of the object becomes important in this method: a sharper object will have a higher @emph{measured} S/N compared to a more diffuse object at the same original magnitude.
Besides the inherent shape/sharpness of the object, issues like the PSF also become important in this method (because the finally observed shapes of objects are important here): two surveys with the same surface brightness limit (see @ref{Surface brightness limit of image}) will have different magnitude limits if one is taken from space and the other from the ground.
@node Completeness limit for certain objects, , Magnitude limit for certain objects, Manual metameasurements
@subsubsection Completeness limit for certain objects
@cindex Completeness
As the surface brightness of the objects decreases, the ability to detect them will also decrease.
An important statistic is thus the fraction of objects of similar morphology and magnitude that will be detected with our detection algorithm/parameters in a given image.
This fraction is known as @emph{completeness}.
For brighter objects, completeness is 1: all bright objects that might exist over the image will be detected.
However, as we go to objects of lower overall surface brightness, we will fail to detect a fraction of them, and fainter than a certain surface brightness level (for each morphology), nothing will be detectable in the image: you will need more data to construct a ``deeper'' image.
For a given profile and dataset, the magnitude where the completeness drops below a certain level (usually @mymath{90\%}) is known as the completeness limit.
@cindex Purity
@cindex False detections
@cindex Detections false
Another important parameter in measuring completeness is purity: the fraction of true detections among all detections.
In effect, purity is a measure of contamination by false detections: the higher the purity, the lower the contamination.
Completeness and purity are anti-correlated: if we can allow a large number of false detections (that we might be able to remove by other means), we can significantly increase the completeness limit.
One traditional way to measure the completeness and purity of a given sample is to embed mock profiles in regions of the image with no detection.
However, in such a study we must be very careful to choose model profiles that are as similar to the target of interest as possible.
@node Adding new columns to MakeCatalog, Invoking astmkcatalog, Manual metameasurements, MakeCatalog
@subsection Adding new columns to MakeCatalog
MakeCatalog is designed to allow easy addition of different measurements over a labeled image; see Akhlaghi @url{https://arxiv.org/abs/1611.06387v1,2016}.
A checklist-style description of the necessary steps to do that is given in this section.
The common development characteristics of MakeCatalog and other Gnuastro programs are explained in @ref{Developing}.
We strongly encourage you to have a look at that chapter to greatly simplify your navigation in the code.
After adding and testing your column, you are most welcome (and encouraged) to share it with us so we can add it to the next release of Gnuastro for everyone else to also benefit from your efforts.
MakeCatalog will first pass over each label's pixels twice to do the necessary raw/internal calculations.
Once the passes are done, it will use the raw information for filling the final catalog's columns.
In the first pass it will gather mainly object information and in the second run, it will mainly focus on the clumps, or any other measurement that needs an output from the first pass.
These two passes are designed to be raw summations: no extra processing.
This will allow parallel processing and simplicity/clarity.
So if your new calculation needs new raw information from the pixels, then you will need to also modify the respective @code{mkcatalog_first_pass} and @code{mkcatalog_second_pass} functions (both in @file{bin/mkcatalog/mkcatalog.c}) and define new raw table columns in @file{main.h} (hopefully the comments in the code are clear enough).
In all these different places, the final columns are sorted in the same order (same order as @ref{Invoking astmkcatalog}).
This allows a particular column/option to be easily found in all steps.
Therefore, when adding your new option, be sure to keep it in the same relative place in the list in all the separate places (it does not necessarily have to be at the end), and near conceptually similar options.
@table @file
@item main.h
The @code{objectcols} and @code{clumpcols} enumerated variables (@code{enum}) define the raw/internal calculation columns.
If your new column requires new raw calculations, add a row to the respective list.
If your calculation requires any other configuration parameters, you should add a variable to the @code{mkcatalogparams} structure.
@item ui.c
If the new column needs raw calculations (an entry was added in @code{objectcols} and @code{clumpcols}), specify which inputs it needs in @code{ui_necessary_inputs}, similar to the other options.
Afterwards, if your column includes any particular settings (you needed to add a variable to the @code{mkcatalogparams} structure in @file{main.h}), you should do the sanity checks and preparations for it here.
@item ui.h
The @code{option_keys_enum} associates a unique value with each option of MakeCatalog.
For the options that have a short version, the single-character short option is used as the value.
Those that do not have a short option version automatically get a large integer.
You should add a variable here to identify your desired column.
@cindex GNU C library
@item args.h
This file specifies all the parameters for the GNU C library's Argp structure that is in charge of reading the user's options.
To define your new column, just copy an existing set of parameters and change the first, second and fifth values (the only ones that differ between the columns); use the macro you defined in @file{ui.h} here.
@item columns.c
This file contains the main definition and high-level calculation of your new column through the @code{columns_define_alloc} and @code{columns_fill} functions.
In the first, you specify the basic information about the column: its name, units, comments, type (see @ref{Numeric data types}) and how it should be printed if the output is a text file.
You should also specify the raw/internal columns that are necessary for this column here as the many existing examples show.
Through the separate types for objects and clumps, you can specify whether this column is only for clumps, only for objects, or both.
The second main function (@code{columns_fill}) writes the final value into the appropriate column for each object and clump.
As you can see in the many existing examples, you can define your processing on the raw/internal calculations here and save them in the output.
@item mkcatalog.c
This file contains the low-level parsing functions.
For optimal performance, the parsing is done in parallel through the @code{mkcatalog_single_object} function.
This function initializes the necessary arrays and calls the lower-level @code{parse_objects} and @code{parse_clumps} for actually going over the pixels.
They are all heavily commented, so you should be able to follow where to add your necessary low-level calculations.
@item doc/gnuastro.texi
Update this manual and add a description for the new column.
@end table
@node Invoking astmkcatalog, , Adding new columns to MakeCatalog, MakeCatalog
@subsection Invoking MakeCatalog
MakeCatalog will do measurements and produce a catalog from a labeled dataset and optional values dataset(s).
The executable name is @file{astmkcatalog} with the following general template
@example
$ astmkcatalog [OPTION ...] InputImage.fits
@end example
@noindent
One line examples:
@example
## Create catalog with RA, Dec, Magnitude and Magnitude error,
## from Segment's output:
$ astmkcatalog --ra --dec --magnitude seg-out.fits
## Same catalog as above (using short options):
$ astmkcatalog -rdm seg-out.fits
## Write the catalog to a text table:
$ astmkcatalog -rdm seg-out.fits --output=cat.txt
## Output columns specified in `columns.conf':
$ astmkcatalog --config=columns.conf seg-out.fits
## Use object and clump labels from a K-band image, but pixel values
## from an i-band image.
$ astmkcatalog K_segmented.fits --hdu=DETECTIONS --clumpscat \
--clumpsfile=K_segmented.fits --clumpshdu=CLUMPS \
--valuesfile=i_band.fits
@end example
@cindex Gaussian
@noindent
If MakeCatalog is to do processing (not printing help or option values), an input labeled image should be provided.
The options described in this section are those that are particular to MakeCatalog.
For operations that MakeCatalog shares with other programs (mainly involving input/output or general processing steps), see @ref{Common options}.
Also see @ref{Common program behavior} for some general characteristics of all Gnuastro programs including MakeCatalog.
The various measurements/columns of MakeCatalog are requested as options, either on the command-line or in configuration files, see @ref{Configuration files}.
The full list of available columns is available in @ref{MakeCatalog measurements on each label}.
Depending on the requested columns, MakeCatalog needs more than one input dataset; for more details, please see @ref{MakeCatalog inputs and basic settings}.
The upper-limit measurements in particular need several configuration options which are thoroughly discussed in @ref{Upper-limit settings}.
Finally, in @ref{MakeCatalog output HDUs} and @ref{MakeCatalog output keywords} the output HDUs and keywords that are written by MakeCatalog are discussed.
@menu
* MakeCatalog inputs and basic settings:: Input files and basic settings.
* Upper-limit settings:: Settings for upper-limit measurements.
* MakeCatalog output HDUs:: Options that specify the output HDUs.
* MakeCatalog output keywords:: Metadata, metameasurements and related options.
@end menu
@node MakeCatalog inputs and basic settings, Upper-limit settings, Invoking astmkcatalog, Invoking astmkcatalog
@subsubsection MakeCatalog inputs and basic settings
MakeCatalog works by using a localized/labeled dataset (see @ref{MakeCatalog}).
This dataset maps/labels pixels to a specific target (row number in the final catalog) and is thus the only necessary input dataset to produce a minimal catalog in any situation.
Because it only has labels/counters, it must have an integer type (see @ref{Numeric data types}); see below if your labels are stored in a floating point container.
When the requested measurements only need this dataset (for example, @option{--geo-x}, @option{--geo-y}, or @option{--geo-area}), MakeCatalog will not read any more datasets.
Low-level measurements that only use the labeled image are rarely sufficient for any high-level science case.
Therefore, the necessary input datasets depend on the requested columns in each run.
For example, let's assume you want the brightness/magnitude and signal-to-noise ratio of your labeled regions.
For these columns, you will also need to provide an extra dataset containing values for every pixel of the labeled input (to measure magnitude) and another for the Sky standard deviation (to measure error).
All such auxiliary input files have to have the same size (number of pixels in each dimension) as the input labeled image.
Their numeric data type is irrelevant (they will be converted to 32-bit floating point internally).
For the full list of available measurements, see @ref{MakeCatalog measurements on each label}.
The ``values'' dataset is used for measurements like brightness/magnitude, or flux-weighted positions.
If it is a real image, by default it is assumed to be already Sky-subtracted prior to running MakeCatalog.
If it is not, you should use the @option{--subtractsky} option so MakeCatalog reads and subtracts the Sky dataset before any processing.
To obtain the Sky value, you can use the @option{--sky} option of @ref{Statistics}, but the recommended method is @ref{NoiseChisel}; see @ref{Sky value}.
MakeCatalog can also do measurements on sub-structures of detections.
In other words, it can produce two catalogs.
Following the nomenclature of Segment (see @ref{Segment}), the main labeled input dataset is known as ``object'' labels and the (optional) sub-structure input dataset is known as ``clumps''.
If MakeCatalog is run with the @option{--clumpscat} option, it will also need a labeled image containing clumps, similar to what Segment produces (see @ref{Segment output}).
Since clumps are defined within detected regions (they exist over signal, not noise), MakeCatalog uses their boundaries to subtract the level of signal under them.
There are separate options to explicitly request a file name and HDU/extension for each of the required input datasets as fully described below (with the @option{--*file} format).
When each dataset is in a separate file, these options are necessary.
However, one great advantage of the FITS file format (that is heavily used in astronomy) is that it allows the storage of multiple datasets in one file.
So in most situations (for example, if you are using the outputs of @ref{NoiseChisel} or @ref{Segment}), all the necessary input datasets can be in one file.
When none of the @option{--*file} options are given (for example @option{--clumpsfile} or @option{--valuesfile}), MakeCatalog will assume the necessary input datasets are available as HDUs in the file given as its argument (without any option).
When the Sky or Sky standard deviation datasets are necessary and the only @option{--*file} option called is @option{--valuesfile}, MakeCatalog will search for these datasets (with the default/given HDUs) in the file given to @option{--valuesfile} (before looking into the main argument file).
It may happen that your labeled objects image was created with a program that only outputs floating point files.
However, you know it only has integer valued pixels that are stored in a floating point container.
In such cases, you can use Gnuastro's Arithmetic program (see @ref{Arithmetic}) to change the numerical data type of the image (@file{float.fits}) to an integer type image (@file{int.fits}) with a command like below:
@example
$ astarithmetic float.fits int32 --output=int.fits
@end example
To summarize: if the input file to MakeCatalog is the default/full output of Segment (see @ref{Segment output}) you do not have to worry about any of the @option{--*file} options below.
You can just give Segment's output file to MakeCatalog as described in @ref{Invoking astmkcatalog}.
To feed NoiseChisel's output into MakeCatalog, just change the labeled dataset's HDU (with @option{--hdu=DETECTIONS}).
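For example, a minimal sketch of such a run (assuming @file{nc-out.fits} is NoiseChisel's output; only columns that need the labels are requested here, so no other dataset is read):
@example
$ astmkcatalog nc-out.fits --hdu=DETECTIONS \
               --geo-x --geo-y --geo-area
@end example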
The full list of input dataset options and general setting options are described below.
@table @option
@item -l FITS
@itemx --clumpsfile=FITS
The FITS file containing the labeled clumps dataset when @option{--clumpscat} is called (see @ref{MakeCatalog output HDUs}).
When @option{--clumpscat} is called, but this option is not, MakeCatalog will look into the main input file (given as an argument) for the required extension/HDU (value to @option{--clumpshdu}).
@item --clumpshdu=STR
The HDU/extension of the clump labels dataset.
The clump labels dataset has to be an integer data type (see @ref{Numeric data types}) and only pixels with a value larger than zero will be used.
See @ref{Segment output} for a description of the expected format.
@item -v FITS
@itemx --valuesfile=FITS
The file name of the (sky-subtracted) values dataset.
When any of the columns need values to associate with the input labels (for example, to measure the sum of pixel values or magnitude of a galaxy, see @ref{Brightness flux magnitude}), MakeCatalog will look into a ``values'' dataset for the respective pixel values.
In most common processing, this is the actual astronomical image that the labels were defined, or detected, over.
The HDU/extension of this dataset in the given file can be specified with @option{--valueshdu}.
If this option is not called, MakeCatalog will look for the given extension in the main input file.
@item --valueshdu=STR/INT
The name or number (counting from zero) of the extension containing the ``values'' dataset, see the descriptions above and those in @option{--valuesfile} for more.
@item -s FITS/FLT
@itemx --insky=FITS/FLT
Sky value as a single number, or the file name containing a dataset (different values per pixel or tile).
The Sky dataset is only necessary when @option{--subtractsky} is called or when a column directly related to the Sky value is requested (currently @option{--sky}).
This dataset may be a tessellation, with one element per tile (see @option{--oneelempertile} of NoiseChisel's @ref{Processing options}).
When the Sky dataset is necessary but this option is not called, MakeCatalog will assume it is an HDU/extension (specified by @option{--skyhdu}) in one of the already given files.
First it will look for it in the @option{--valuesfile} (if it is given) and then the main input file (given as an argument).
By default the values dataset is assumed to be already Sky subtracted, so
this dataset is not necessary for many of the columns.
@item --skyhdu=STR
HDU/extension of the Sky dataset, see @option{--insky}.
@item --subtractsky
Subtract the sky value or dataset from the values file prior to any
processing.
@item -t STR/FLT
@itemx --instd=STR/FLT
Sky standard deviation value as a single number, or the file name containing a dataset (different values per pixel or tile).
With the @option{--variance} option you can tell MakeCatalog to interpret this value/dataset as a variance image, not standard deviation.
@strong{Important note:} This must only be the SKY standard deviation or variance (not including the signal's contribution to the error).
In other words, the final standard deviation of a pixel depends on how much signal there is in it.
MakeCatalog will find the amount of signal within each pixel (while subtracting the Sky, if @option{--subtractsky} is called) and account for the extra error due to its value (signal).
Therefore if the input standard deviation (or variance) image also contains the contribution of signal to the error, then the final error measurements will be over-estimated.
@item --stdhdu=STR
The HDU of the Sky value standard deviation image.
@item --variance
The value/file given to @option{--instd} (and @option{--stdhdu}) has the Sky variance of every pixel, not the Sky standard deviation.
@item --novalinerror
The value/file given to @option{--instd} is not just due to the sky (noise), but also contains the contribution of the signal to each pixel's standard deviation or variance.
If this option is given, the pixel values will be ignored when measuring the @option{--*-error} columns.
@item -z FLT
@itemx --zeropoint=FLT
The zero point magnitude for the input image, see @ref{Brightness flux magnitude}.
@item --sigmaclip=FLT,FLT
The sigma-clipping parameters when any of the sigma-clipping related columns are requested (for example, @option{--sigclip-median} or @option{--sigclip-number}).
This option takes two values: the first is the multiple of @mymath{\sigma}, and the second is the termination criteria.
If the latter is larger than 1, it is read as an integer number and will be the number of times to clip.
If it is smaller than 1, it is interpreted as the tolerance level to stop clipping.
See @ref{Sigma clipping} for a complete explanation.
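For example, a sketch of requesting the sigma-clipped median and number of pixels with a @mymath{3\sigma} clip that terminates at a tolerance of 0.2:
@example
$ astmkcatalog seg-out.fits --sigclip-median \
               --sigclip-number --sigmaclip=3,0.2
@end example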
@item --frac-max=FLT[,FLT]
The fractions (one or two) of maximum value in objects or clumps to be used in the related columns, for example, @option{--frac-max1-area}, @option{--frac-max1-sum} or @option{--frac-max1-radius}.
For the maximum value, see the description of @option{--maximum} column below.
The value(s) of this option must be larger than 0 and smaller than 1 (they are a fraction).
When only @option{--frac-max1-area} or @option{--frac-max1-sum} is requested, one value must be given to this option, but if @option{--frac-max2-area} or @option{--frac-max2-sum} are also requested, two values must be given to this option.
The values can be written as simple floating point numbers, or as fractions, for example, @code{0.25,0.75} and @code{0.25,3/4} are the same.
@item --spatialresolution=FLT
The error in measuring spatial properties (for example, the area) in units of pixels.
You can think of this as the FWHM of the dataset's PSF and is used in measurements like the error in surface brightness (@option{--sb-error} described in @ref{Surface brightness measurements}).
@cindex Cirrus (Galactic)
@cindex Interstellar dust filaments
But because most objects in astronomy are either concentrated (like stars or galaxies) or very large (like interstellar dust filaments, also known as Galactic cirrus), our @url{https://savannah.gnu.org/task/?16586,tests} showed that this factor should be set to zero to give a reasonable error measurement.
In summary, this is because in both situations the error in the number of pixels used in the measurement only affects the few pixels on the circumference of the label, and they usually have the lowest values (contributing least to the total sum or average).
Therefore, the default value to this option is 0.
But in case your targets are very small (a handful of pixels wide) and their light profiles are flat (for example a cosmic ray!), it may be necessary to increase the value to this option.
@item --inbetweenints
Output will contain one row for all integers between 1 and the largest label in the input (irrespective of their existence in the input image).
By default, MakeCatalog's output will only contain rows with integers that actually corresponded to at least one pixel in the input dataset.
For example, if the input's only labeled pixel values are 11 and 13, MakeCatalog's default output will only have two rows.
If you use this option, it will have 13 rows; in the rows for integer identifiers that did not correspond to any pixel, all columns will be 0 or NaN (depending on context).
@end table
@node Upper-limit settings, MakeCatalog output HDUs, MakeCatalog inputs and basic settings, Invoking astmkcatalog
@subsubsection Upper-limit settings
The upper-limit magnitude was discussed in @ref{Metameasurements on full input}.
Unlike other measured values/columns in MakeCatalog, the upper limit magnitude needs several extra parameters which are discussed here.
All the options specific to the upper-limit measurements start with @option{up} for ``upper-limit''.
The only exception is @option{--envseed} that is also present in other programs and is general for any job requiring random number generation in Gnuastro (see @ref{Generating random numbers}).
@cindex Reproducibility
One very important consideration in Gnuastro is reproducibility.
Therefore, the values to all of these parameters along with others (like the random number generator type and seed) are also reported in the comments of the final catalog when the upper limit magnitude column is desired.
The random seed that is used to define the random positions for each object or clump is unique and set based on the (optionally) given seed, the total number of objects and clumps and also the labels of the clumps and objects.
So with identical inputs, an identical upper-limit magnitude will be found.
However, even if the seed is identical, when the ordering of the object/clump labels differs between different runs, the result of upper-limit measurements will not be identical.
MakeCatalog will randomly place the object/clump footprint over the dataset.
When the randomly placed footprint does not fall on any object or masked region (see @option{--upmaskfile}) it will be used in the final distribution.
Otherwise that particular random position will be ignored and another random position will be generated.
Finally, when the distribution has the desired number of successfully measured random samples (@option{--upnum}) the distribution's properties will be measured and placed in the catalog.
When the profile is very large or the image is significantly covered by detections, it might not be possible to find the desired number of samplings in a reasonable time.
MakeCatalog will continue searching until it is unable to find a successful position (since the last successful measurement@footnote{The counting of failed positions restarts on every successful measurement.}), for a large multiple of @option{--upnum} (currently@footnote{In Gnuastro's source, this constant number is defined as the @code{MKCATALOG_UPPERLIMIT_MAXFAILS_MULTIP} macro in @file{bin/mkcatalog/main.h}, see @ref{Downloading the source}.} this is 10).
If @option{--upnum} successful samples cannot be found until this limit is reached, MakeCatalog will set the upper-limit magnitude for that object to NaN (blank).
MakeCatalog will also print a warning if the range of positions available for the labeled region is smaller than double the size of the region.
In such cases, the limited range of random positions can artificially decrease the standard deviation of the final distribution.
If your dataset can allow it (it is large enough), it is recommended to use a larger range if you see such warnings.
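As a minimal sketch of a complete upper-limit run (assuming @file{seg-out.fits} is Segment's output, a zero point of 22.5, and that the upper-limit magnitude column is requested with @option{--upperlimit-mag}; see @ref{MakeCatalog measurements on each label} for the exact column option names):
@example
$ astmkcatalog seg-out.fits --zeropoint=22.5 \
               --upperlimit-mag --upnum=1000 --upnsigma=3
@end example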
@table @option
@item --upmaskfile=FITS
File name of mask image to use for upper-limit calculation.
In some cases (especially when doing matched photometry), the object labels specified in the main input and mask image might not be adequate.
In other words they do not necessarily have to cover @emph{all} detected objects: the user might have selected only a few of the objects in their labeled image.
This option can be used to ignore regions in the image in these situations when estimating the upper-limit magnitude.
All the non-zero pixels of the image specified by this option (in the @option{--upmaskhdu} extension) will be ignored in the upper-limit magnitude measurements.
For example, when you are using labels from another image, you can give NoiseChisel's objects image output for this image as the value to this option.
In this way, you can be sure that regions with data do not harm your distribution.
See @ref{Metameasurements on full input} for more on the upper limit magnitude.
@item --upmaskhdu=STR
The extension in the file specified by @option{--upmaskfile}.
@item --upnum=INT
The number of random samples to take for all the objects.
A larger value to this option will give a more accurate result (asymptotically), but it will also slow down the process.
When a randomly positioned sample overlaps with a detected/masked pixel it is not counted and another random position is found until the object completely lies over an undetected region.
So you can be sure that for each object, this many samples over undetected regions are made.
See the upper limit magnitude discussion in @ref{Metameasurements on full input} for more.
@item --uprange=INT,INT
The range/width of the region (in pixels) to do random sampling along each dimension of the input image around each object's position.
This is not a mandatory option and if not given (or given a value of zero in a dimension), the full possible range of the dataset along that dimension will be used.
This is useful when the noise properties of the dataset vary gradually.
In such cases, using the full range of the input dataset is going to bias the result.
However, note that decreasing the range of available positions too much will also artificially decrease the standard deviation of the final distribution (and thus bias the upper-limit measurement).
@item --envseed
@cindex Seed, Random number generator
@cindex Random number generator, Seed
Read the random number generator type and seed value from the environment (see @ref{Generating random numbers}).
Random numbers are used in calculating the random positions of different samples of each object.
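For example, a sketch of fixing the generator type and seed through the standard GNU Scientific Library environment variables (see @ref{Generating random numbers}; the trailing @code{...} is a place-holder for the rest of your own options):
@example
$ export GSL_RNG_TYPE=ranlxs1
$ export GSL_RNG_SEED=1593
$ astmkcatalog seg-out.fits --envseed ...
@end example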
@item --upsigmaclip=FLT,FLT
The raw distribution of random values will not be used to find the upper-limit magnitude, it will first be @mymath{\sigma}-clipped (see @ref{Sigma clipping}) to avoid outliers in the distribution (mainly the faint undetected wings of bright/large objects in the image).
This option takes two values: the first is the multiple of @mymath{\sigma}, and the second is the termination criteria.
If the latter is larger than 1, it is read as an integer number and will be the number of times to clip.
If it is smaller than 1, it is interpreted as the tolerance level to stop clipping. See @ref{Sigma clipping} for a complete explanation.
@item --upnsigma=FLT
The multiple of the final (@mymath{\sigma}-clipped) standard deviation (or @mymath{\sigma}) used to measure the upper-limit sum or magnitude.
@item --checkuplim=INT[,INT]
Write a table of positions and measured values for the full random distribution used in the upper-limit positions for one particular object or clump.
The table will be placed as an HDU in the output file with the name @code{CHECK-UPPERLIMIT}.
If only one integer is given to this option, it is interpreted to be an object's label.
If two values are given, the first is the object label and the second is the ID of requested clump within it.
The output is a table with three columns on a 2D image input (and four columns on a 3D cube input).
The first two columns are the pixel X,Y positions of the center of each label's tile (see next paragraph), in each random sampling of this particular object/clump (on a 3D cube, the third column will be the position along the third dimension).
The final column is the measured flux over that region.
If a randomly placed region overlapped with a detection or masked pixel, its measured value will be a NaN (not-a-number).
The total number of rows is thus unknown before running.
However, if an upper-limit measurement was made in the main output of MakeCatalog, you can be sure that the number of rows with non-NaN measurements is the number given to the @option{--upnum} option.
The ``tile'' of each label is defined by the minimum and maximum positions of each label: values of the @option{--min-x}, @option{--max-x}, @option{--min-y} and @option{--max-y} columns in the main output table for each label.
Therefore, the tile center position that is recorded in the output of this column ignores the distribution of labeled pixels within the tile.
Note that this is only about the center position, not the measurement.
Precise interpretation of the position is only relevant when the footprint of your label is highly asymmetrical and you want to use this catalog to insert your object into the image.
In such a case, you can also ask for @option{--min-x} and @option{--min-y} and manually calculate their difference with the following two positional measurements of your desired label: @option{--geo-x} and @option{--geo-y} (which report the label's ``geometric'' center; only using the label positions ignoring any ``values'') or @option{--x} and @option{--y} (which report the value-weighted center of the label).
Adding the difference to the position reported by this column will let you define alternative ``centers'' for your label in particular situations (this will usually not be necessary!).
For more on these positional columns, see @ref{Position measurements in pixels}.
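For example, a sketch of checking the random-sampling distribution of object 3 and then viewing the resulting table (assuming the default output name; the trailing @code{...} is a place-holder for the rest of your own options):
@example
$ astmkcatalog seg-out.fits --checkuplim=3 ...
$ asttable seg-out-cat.fits -hCHECK-UPPERLIMIT
@end example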
@end table
@node MakeCatalog output HDUs, MakeCatalog output keywords, Upper-limit settings, Invoking astmkcatalog
@subsubsection MakeCatalog output HDUs
After it has completed all the requested measurements (see @ref{MakeCatalog measurements on each label}), MakeCatalog writes them in table(s) that are stored into a FITS file.
This is necessary because the number of output tables and the amount of metadata that MakeCatalog provides can be large.
If any of the output tables are necessary in another format (for example plain-text), you can use Gnuastro's Table program (with executable name @code{asttable}, see @ref{Table}).
This section focuses on the HDUs of the output, for the keywords, see @ref{MakeCatalog output keywords}.
The name of the output FITS table can be given to the @option{--output} option, with a recognized FITS suffix (as defined in @ref{Arguments}).
When it is not given, the input name will be appended with a @file{-cat.fits} suffix (see @ref{Automatic output}) and its format (ASCII or Binary FITS table) will be determined from the @option{--tableformat} option, which is also discussed in @ref{Input output options}.
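For example, a sketch of explicitly requesting a binary FITS table with a custom name (the column options were introduced in @ref{Invoking astmkcatalog}):
@example
$ astmkcatalog seg-out.fits -rdm --output=cat.fits \
               --tableformat=fits-binary
@end example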
By default (when @option{--spectrum}, @option{--checkuplim} or @option{--clumpscat} are not called) only a single catalog/table will be created for the labeled objects.
@itemize
@item
When @option{--clumpscat} is called, a secondary catalog/table HDU will also be created for ``clumps'' (one of the outputs of the Segment program, for more on ``objects'' and ``clumps'', see @ref{Segment}).
In short, if you only have one labeled image, you do not have to worry about clumps and just ignore this.
@item
When @option{--checkuplim} is called, an HDU is added to the output FITS file; it is fully described under this option's entry in @ref{Upper-limit settings}.
@end itemize
The full list of MakeCatalog's options relating to the output file format and keywords are listed below.
See @ref{MakeCatalog measurements on each label} for specifying which columns you want in the final catalog.
@table @option
@item -C
@itemx --clumpscat
Do measurements on clumps and produce a second catalog (only devoted to clumps).
When this option is given, MakeCatalog will also look for a secondary labeled dataset (identifying substructure) and produce a catalog from that.
For more on the definition on ``clumps'', see @ref{Segment}.
When the output is a FITS file, the objects and clumps catalogs/tables will be stored as multiple extensions of one FITS file.
You can use @ref{Table} to inspect the column meta-data and contents in this case.
However, in plain text format (see @ref{Gnuastro text table format}), it is only possible to keep one table per file.
Therefore, if the output is a text file, two output files will be created, ending in @file{_o.txt} (for objects) and @file{_c.txt} (for clumps).
@item --noclumpsort
Do not sort the clumps catalog based on object ID (only relevant with @option{--clumpscat}).
This option will benefit the performance@footnote{The performance boost due to @option{--noclumpsort} can only be felt when there are a huge number of objects.
Therefore, by default the output is sorted to avoid misunderstandings or bugs in the user's scripts when the user forgets to sort the outputs.} of MakeCatalog when it is run on multiple threads @emph{and} the position of the rows in the clumps catalog is irrelevant (for example, you just want the number-counts).
MakeCatalog does all its measurements on each @emph{object} independently and in parallel.
As a result, while it is writing the measurements on each object's clumps, it does not know how many clumps there were in previous objects.
Each thread will just fetch the first available row and write the information of clumps (in order) starting from that row.
After all the measurements are done, by default (when this option is not called), MakeCatalog will reorder/permute the clumps catalog to have both the object and clump ID in an ascending order.
If you would like to order the catalog later (when it is a plain text file), you can run the following command to sort the rows by object ID (and clump ID within each object), assuming they are respectively the first and second columns:
@example
$ awk '!/^#/' out_c.txt | sort -g -k1,1 -k2,2
@end example
@item --cnl-check
Add an extra HDU to the output of MakeCatalog that contains the positions of every clump as well as the positions of that clump's nearest clump and the distance between the two (in both units of arcseconds and pixels).
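For example, a sketch of producing this HDU and then listing the output's extensions to find it (the trailing @code{...} is a place-holder for the rest of your own options):
@example
$ astmkcatalog seg-out.fits --cnl-check ...
$ astfits seg-out-cat.fits
@end example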
@end table
@node MakeCatalog output keywords, , MakeCatalog output HDUs, Invoking astmkcatalog
@subsubsection MakeCatalog output keywords
The columns and rows that include the various measurements for the various labels of the input in its HDUs were described separately in @ref{MakeCatalog output HDUs}.
But those raw numbers are not the only thing that MakeCatalog writes in its output!
MakeCatalog will also write metadata (header keywords) in the 0th (first) HDU of the output FITS file which add a lot of value and help to interpret the raw numbers.
The only keywords in the other HDUs are the column names, units and comments; generic metadata are written in the 0th HDU.
You can see the full list of keywords written by MakeCatalog in its output with Gnuastro's Fits program, for example @command{astfits out.fits -h0}.
If you only want the value of certain keywords (in a script/pipeline for example), its @option{--keyvalue} option is pretty convenient; see @ref{Keyword inspection and manipulation}.
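For example, a minimal sketch of reading one keyword (here the surface brightness limit keyword @code{SBL}, described below) into a shell variable:
@example
$ sbl=$(astfits out.fits -h0 --keyvalue=SBL --quiet)
@end example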
The keywords are grouped in the output based on context with a title above each group.
We'll follow the same structure here, skipping the first three groups (that are generic to all Gnuastro's programs, see @ref{Output FITS files}).
Some of the values reported below will be repeated (are the same as the respective option keyword in the ``Option values'' group of keywords).
The ``Option value'' group contains the raw option names and values: the option (keyword) names are just written in full-caps according to the FITS standard.
This is done for the following reason:
Human readability is important for option names, so they tend to be long and force a @code{HIERARCH} string at the start of the line.
On the contrary, for automatic extraction and human readability, it helps a lot to have all keywords related to a certain metameasure start with the same characters and because the keyword description is written just after it, there is no problem if the name is cryptic.
@table @asis
@item Input file(s) and HDUs
@table @asis
@item @code{INLAB} and @code{INLABHDU}: main (object) label dataset.
@itemx @code{INCLU} and @code{INCLUHDU}: clump label dataset.
@itemx @code{INVAL} and @code{INVALHDU}: value dataset.
@itemx @code{INSTD} and @code{INSTDHDU}: standard deviation dataset.
@itemx @code{INVAR} and @code{INVARHDU}: variance dataset.
@itemx @code{INUPM} and @code{INUPMHDU}: upper-limit mask dataset.
The file name and HDU of all the possible inputs to MakeCatalog.
Only the keywords that correspond to inputs which are actually used in a given run of MakeCatalog will be written in the output.
This is based on the columns or metameasurements that you request, not if you gave them on the command-line (described in @ref{MakeCatalog inputs and basic settings}), or the HDUs that exist in the main argument.
For the standard deviation or variance, in case a single number was given (instead of a dataset), that number will be written for @code{INSTD} or @code{INVAR} (without any @code{INSTDHDU} or @code{INVARHDU}).
@end table
@item Input pixel grid and value properties
Basic information about the pixel values and grid properties of the input:
@table @code
@item PIXWIDTH
The width of one pixel on the sky (in units of arcseconds).
@item PIXAREA
The area of one pixel on the sky (in units of arcseconds squared) at the reference point.
The difference in pixel area across the image will be negligible in most science images.
In case you would like to check this for your input images, use the @option{--pixelareaonwcs} option of the Fits program (see @ref{Pixel information images}).
@item ZEROPNT
The zero point of the values image (used to convert pixel values into magnitudes).
This is the same value you gave to the @option{--zeropoint} option of @ref{MakeCatalog inputs and basic settings}.
@item STDUSED
Per-pixel standard deviation (used in noise-based metameasurements like @ref{Surface brightness limit of image} and @ref{Noise based magnitude limit of image}).
This keyword will only be present when a standard deviation image has been loaded (done automatically for any column measurement that involves noise, for example, @option{--sn}, @option{--magnitude-error} or @option{--sky-std}).
In case your catalog does not include any such columns and you want this keyword, you can use the @option{--meta-measures} option (see @ref{Metameasurements on full input}).
If the @code{MEDSTD} keyword is present in the standard deviation dataset (see @ref{NoiseChisel output}), it will be used.
Otherwise, the median of the standard deviation input is calculated, used for the metameasures and written in this keyword.
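For example, if your standard deviation image comes from NoiseChisel, a sketch of checking its @code{MEDSTD} keyword could be (the @code{SKY_STD} HDU name is an assumption based on NoiseChisel's default output):
@example
$ astfits nc-out.fits -hSKY_STD --keyvalue=MEDSTD --quiet
@end example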
@end table
@item Upper-limit parameters
When any of the upper-limit measurements are requested, the input parameters for the upper-limit measurement are stored in the following keywords (see @ref{Metameasurements on full input}).
@table @code
@item UPSIGMA
The multiple of sigma to measure the upper-limit.
This is the same value given to the @option{--upnsigma} option of @ref{Upper-limit settings}.
@item UPNUMBER
The number of random positions with a successful reading.
This is the same value given to the @option{--upnum} option of @ref{Upper-limit settings}.
@item UPRNGNAM
Name of the random number generator used for finding the random positions; see @ref{Generating random numbers}.
@item UPRNGSEE
Seed used for the random number generator.
This will be different on every run, unless @option{--envseed} is called.
For more details, see @ref{Generating random numbers}
@item UPSCMLTP
@mymath{\sigma}-clipping parameter: multiple of sigma.
Clipping is necessary to reject strong outliers that can affect the statistics.
This is the first value given to the @option{--upsigmaclip} option of @ref{Upper-limit settings}.
@item UPSCTOL
@mymath{\sigma}-clipping parameter: tolerance level.
Clipping is necessary to reject strong outliers that can affect the statistics.
This is the second value given to the @option{--upsigmaclip} option of @ref{Upper-limit settings}.
@end table
@item Noise-based metameasures
The following metameasurements are calculated purely based on the measured noise level.
They are not written by default; if you want them, run MakeCatalog with @option{--meta-measures}.
@table @code
@item SBL
Measured surface brightness limit (in units of mag/arcsec@mymath{^2}); as described in @ref{Surface brightness limit of image}.
@item NML
Measured magnitude limit (in units of mag); as described in @ref{Noise based magnitude limit of image}.
@end table
@item Confusion limit
The confusion limit (CNL) is a measure of density of resolved sources in the input image.
For a complete review on its goals and how to interpret the values to these keywords, see @ref{Confusion limit of image}.
In particular, if you would like to see the full distribution of nearest neighbors and their distances, you can use @option{--cnl-check} as described in @ref{MakeCatalog output HDUs}.
By default, the clumps catalog is used for the distribution, however if you would like to use the objects for any reason, you can use @option{--cnl-with-objects} as described below in this section (not recommended unless you understand the risks).
@table @code
@item CNL
The confusion limit: difference between @code{CNLP75} and @code{CNLP25}.
@item CNLP05
@itemx CNLP25
@itemx CNLP50
@itemx CNLP75
@itemx CNLP95
Various percentiles of the distribution of distances to the nearest neighbor in units of pixels.
The @code{P} before each number is for percentile, so @code{CNLP50} shows the median value of the distribution to nearest neighbors.
Note that a quantile is just the percentile after division by 100.
Percentiles are used in the keyword names because they are simple integers and do not need a floating point.
@end table
@end table
@noindent
The following MakeCatalog options are specifically related to the various keywords above.
@table @option
@item --sbl-sigma=FLT
Value to multiply with the median standard deviation (from a @code{MEDSTD} keyword, if it is present in the Sky standard deviation image) for the measured surface brightness limit.
Note that the surface brightness limit is only reported when metameasurements are requested with @option{--meta-measures}.
See @ref{Metameasurements on full input} for more on the basics of various metameasurements and @ref{Image surface brightness limit} on a practical usage example.
@item --sbl-area=FLT
Shape-agnostic area (in arcseconds squared) to use for the measured surface brightness limit.
Note that the surface brightness limit is only reported when meta-measurements are requested with @option{--meta-measures}.
See @ref{Metameasurements on full input} for more on the basics of various metameasurements and @ref{Image surface brightness limit} on a practical usage example.
@item --cnl-with-objects
Use the object positions instead of clumps for measuring the distance to the nearest label.
This is only useful if you have generated your labels image with something other than Gnuastro's @ref{Segment}, and that program does not have the capacity to identify individual peaks and extended signal at the same time.
Therefore, in case you have generated the input labels with Segment, we do not recommend using this option.
@end table
@node Match, , MakeCatalog, Data analysis
@section Match
High-level data (catalogs) of a single astronomical source can come from different telescopes, filters, software (detection, segmentation and cataloging tools on the same image) and even different configurations of a single software package.
As a result, one of the first things we usually do after generating or querying catalog data (for example, with @ref{MakeCatalog} or @ref{Query}), is to find which sources in one catalog correspond to which in the other(s).
In other words, to `match' the two catalogs with each other.
Within Gnuastro, the Match program is in charge of such operations.
The matching rows are defined within an aperture, which can be a circle or an ellipse with any orientation.
Before digging into the usage and command-line execution details (in @ref{Invoking astmatch}), it is important to discuss several concepts in detail.
If this is the first time you are using Match, please take the time to read these three sections (before @ref{Invoking astmatch}) to optimize your usage of Match and understand its outputs.
The first is @ref{Arranging match output} which introduces how the matched rows should be arranged in the outputs and is determined by the purpose/context of your analysis (there are examples in that section).
When you want an ``inner'' or ``full'' match, some matches can become ambiguous; in @ref{Unambiguous matching} Gnuastro's Match strategy for those is discussed.
We will then review the @ref{Matching algorithms}.
@menu
* Arranging match output:: Various meanings of matching.
* Unambiguous matching:: How we avoid multiple matches.
* Matching algorithms:: Different ways to find the match.
* Invoking astmatch:: Inputs, outputs and options of Match.
@end menu
@node Arranging match output, Unambiguous matching, Match, Match
@subsection Arranging match output
Once the unambiguous match (see @ref{Unambiguous matching}) between two catalogs is found, what you want to do with the matched rows can differ based on context.
For example, sometimes you exclusively want the matched rows of the two input catalogs.
At other times you want to find the nearest row in a reference catalog for every row of a query catalog.
Yet other times, you want to merge two catalogs (having all the matched and non-matched rows in one table).
Within Match, these are defined as different ``arrangements'' of the output.
The core matching algorithm (finding the nearest points between two catalogs, see @ref{Matching algorithms}) is the same in all the arrangements.
The thing that is different is how you want to use that information in constructing/arranging the output.
The various arrangements (that should be given to the @option{--arrange} option) are formally defined below.
@table @code
@item inner
The output of an inner match will contain unambiguously/exclusively matched rows in both catalogs within a given aperture.
In other words, the number of rows in the output will be less than or equal to the number of rows in the input with the fewest rows, @emph{and} no row of either input will be repeated.
Another way to say this is that the order of the inputs does not matter for the final number of matched rows in an inner match.
An important aspect of an inner match is the necessity of an aperture to define the maximum/largest acceptable distance to have a match.
Without an aperture to define a reasonable match, there is always bound to be a ``nearest'' item in both catalogs!
A real-world example (in astrophysics) of inner matching is when you have two catalogs of galaxies from different filters/telescopes and want to find the same galaxy in both catalogs.
The simplified example below shows inner matching in action.
The values in each row are intentionally abstract/ideal to help understanding the concept (for example, the first two columns can be the RA and Dec in a real-world catalog).
First, let's look at the contents of the two tables we want to match:
@example
$ cat a.txt
# Column 1: X [pix, f32] X axis position
# Column 2: Y [pix, f32] Y axis position
# Column 3: MAGa [mag, f32] Magnitude in filter a
1.2 1.2 18.2
3.2 3.2 20.8
8.2 8.2 15.2
$ cat b.txt
# Column 1: X [pix, f32] X axis position
# Column 2: Y [pix, f32] Y axis position
# Column 3: MAGb [mag, f32] Magnitude in filter b
1 1 18.5
2 2 23.1
3 3 16.2
4 4 22.7
5 5 23.4
@end example
In this hypothetical example, the first catalog has the magnitude of three objects in filter a, and the second catalog has the magnitude of five objects in filter b.
But the coordinates are not identical (which is natural when the catalogs come from different sources) and not all of the galaxies in one catalog match the other!
With the first command below, we will find the matching rows with an inner arrangement, and put the positions and magnitudes of the matched rows in the same row (using @option{--outcols}, see @ref{Invoking astmatch}).
The second command simply prints the contents of the table on the command-line in an easy-to-read format (simplified here by removing trailing zeros to help visual comparison with the above).
@example
$ astmatch a.txt --ccol1=X,Y b.txt --ccol2=X,Y \
--aperture=0.5 --outcols=a1,b1,a2,b2,a3,b3 \
--arrange=inner --output=inner.fits
$ asttable inner.fits -Y
1.2 1 1.2 1 18.2 18.5
3.2 3 3.2 3 20.8 16.2
@end example
@cindex PSF
As you see, while both inputs had more than two rows, the output of an inner arrangement only contains the rows that matched in both catalogs (two in this case).
For inner matching, we rarely need both coordinate columns; instead, we can simply select the coordinates of the catalog that had the better precision (in other words, a better point spread function, or PSF).
For example, assuming that @file{a.txt} had better precision, we could simplify the output above with @option{--outcols=a1,a2,a3,b3} (which removes @code{b1} and @code{b2} compared to the example above).
Try this out for yourself as an exercise to visually understand this important feature of inner arrangements.
@item full
The output of a full match will contain matching and non-matching rows in both catalogs.
In other words, when there is a match between the two catalogs, it is unambiguous and exclusive (as in the ``Inner'' mode above).
However, if a row from either of the two inputs does not match, it is still present in the output, and the columns that come from the other input are given blank values in that row.
When there are no matching rows between the two tables, the output will effectively be a concatenation of the two inputs.
In @url{https://www.w3schools.com/sql/sql_join.asp, SQL jargon} this is known as ``full-outer joining''.
A ``full'' output arrangement is therefore effectively an inner arrangement, but with the non-matching rows from both catalogs appended to it.
Therefore, similar to an inner match, an aperture is mandatory.
Let's use the same two demo inputs of the inner arrangement above, but call the Match program with @option{--arrange=full} this time:
@example
$ astmatch a.txt --ccol1=X,Y b.txt --ccol2=X,Y \
--aperture=0.5 --outcols=a1,b1,a2,b2,a3,b3 \
--arrange=full --output=full-raw.fits
$ asttable full-raw.fits -Y
1.2 1 1.2 1 18.2 18.5
3.2 3 3.2 3 20.8 16.2
8.2 nan 8.2 nan 15.2 nan
nan 2 nan 2 nan 23.1
nan 4 nan 4 nan 22.7
nan 5 nan 5 nan 23.4
@end example
You see that the first two rows are the same as the inner example above.
But now we also have all the non-matching rows of both catalogs.
In a real world example, the problem with these non-matching rows is that the coordinates are NaN/blank!
To fix that, you can use the @code{where} operator and @ref{Column arithmetic} like the command below (where the blank coordinates from @file{a.txt}'s rows are filled with the coordinates from @file{b.txt}, and the now-redundant second set of coordinate columns is removed).
With the last two @option{--colmetadata} options, we are giving names, units and comments to the newly synthesized columns.
This is because after column arithmetic, the metadata is lost by default and it is up to you to set it: always do so (data without metadata is not too useful!):
@example
$ asttable full-raw.fits --output=full.fits \
-c'arith $1 set-i i i isblank $2 where' \
-c'arith $3 set-i i i isblank $4 where' \
-cMAGa,MAGb \
--colmetadata=1,X,pix,"X axis position (merged)" \
--colmetadata=2,Y,pix,"Y axis position (merged)"
$ asttable full.fits -Y
1.2 1.2 18.2 18.5
3.2 3.2 20.8 16.2
8.2 8.2 15.2 nan
2.0 2.0 nan 23.1
4.0 4.0 nan 22.7
5.0 5.0 nan 23.4
@end example
The @file{full.fits} table above now has the matching rows; but it also has entries for the non-matching objects of both catalogs and only one X and Y position.
To find objects that were only detected in one or the other filter, the user can simply use the @option{--noblank} option like the first example below.
To get the inner match output, the user of the catalog can simply use the same option with @code{_all}, like the second example below.
@example
$ asttable full.fits --noblank=MAGa
$ asttable full.fits --noblank=_all
@end example
@item outer
An outer row arrangement will produce a table with the same number of rows as the second input to the Match program: every row of the second input is matched with the nearest row of the first.
An outer match is useful when you want to find the nearest row of a @emph{reference} catalog (first input) for each row of a @emph{query} catalog's entries (second input).
In this scenario, if there are multiple entries of the reference catalog in the output, there is no problem (in other words, the match is not exclusive).
Also, since you simply want the nearest entry, no aperture is necessary.
@cindex SQL
@cindex Join (SQL)
In @url{https://www.w3schools.com/sql/sql_join.asp, SQL jargon}, there are two types of outer arrangements (or ``joins'' as SQL calls them): left-outer and right-outer.
But in Gnuastro's Match program, we currently only have the right-outer join.
This is because unlike SQL (that does many more things than matching/joining) Gnuastro's Match program is not a programming language!
So the user can always change the order of their inputs to achieve left-outer matching!
In other words, within the scope of Gnuastro's Match, there is no need for the extra complexity.
@cindex Galactic extinction
@cindex Extinction (Galactic)
For example, let's assume you want to find the Galactic extinction for the galaxies in your catalog.
In this scenario, the reference catalog is a table of Galactic extinction values for a grid of positions in the sky and the query catalog contains one row for each of your target galaxies.
If two or more galaxies are near the same entry in the reference catalog, you expect them to have the same extinction column value.
To see it in practice, let's use the @file{b.fits} of the inner arrangement example above as our query catalog.
For the reference catalog, let's make a new @file{c.txt}; assume that it has the Galactic extinction for some points you have already measured:
@example
$ cat c.txt
# Column 1: X [pix, f32] X axis position
# Column 2: Y [pix, f32] Y axis position
# Column 3: EXT [mag, f32] Galactic extinction at (X,Y)
1 1 0.04
4 4 0.05
$ astmatch c.txt --ccol1=X,Y b.txt --ccol2=X,Y \
--outcols=bX,bY,bMAGb,aEXT --arrange=outer \
--kdtree=internal --output=outer.fits
$ asttable outer.fits -Y
1 1 18.5 0.04
2 2 23.1 0.04
3 3 16.2 0.05
4 4 22.7 0.05
5 5 23.4 0.05
@end example
The output has the same number of rows as the query catalog, and rows from the reference catalog are repeated when they are the nearest to each query item.
@item outer-within-aperture
Similar to the outer match described above, with the difference that if the nearest point is farther than @option{--aperture}, all reference table (first input) columns will be NaN in the output.
For more on @option{--aperture}, see @ref{Invoking astmatch}.
This is useful in scenarios where the distance matters and you do not want reference points that are too distant from the query catalog.
As an example, let's repeat the command from the example for the outer arrangement, but use this arrangement instead:
@example
$ astmatch c.txt --ccol1=X,Y b.txt --ccol2=X,Y \
--aperture=0.5 --outcols=bX,bY,bMAGb,aEXT \
--arrange=outer-within-aperture --kdtree=internal \
--output=outer-wa.fits
$ asttable outer-wa.fits -Y
1 1 18.5 0.040
2 2 23.1 nan
3 3 16.2 nan
4 4 22.7 0.050
5 5 23.4 nan
@end example
You can confirm that only the reference rows that were within the given aperture are not NaN/blank.
@end table
@node Unambiguous matching, Matching algorithms, Arranging match output, Match
@subsection Unambiguous matching
When you want an ``inner'' or ``full'' match (discussed previously in @ref{Arranging match output}), it is important that the matches are unambiguous.
Within Gnuastro's Match program, a match is considered ambiguous when there is more than one ``match'' within the given aperture, or when the output depends on the order of the inputs.
In other words, to be unambiguous, there should only be a single row/entry in both catalogs within the given aperture, and the match should not depend on the order of the inputs (which one is given first on the command-line).
If there is more than one match in either catalog to the other, Gnuastro's Match program will remove the match and put all the entries of the second catalog that have multiple matches in an HDU of the output FITS file called @code{FLAGGED-IN-2ND}.
These are necessary for the match output to be reliable: simply reporting the nearest match when there is more than one match within the given aperture will lead to an ambiguous result.
The issue is best displayed in practice.
Let's assume you have two catalogs called @file{x.fits} and @file{o.fits} and the points are distributed as follows (in four ``clusters''):
@example
$ asttable x.fits
-2 -1
1 4
7 0
8 4
8 -1
2 1
9 0
$ asttable o.fits
-2 -2
2 3
4 1
7 4
8 0
9 4
## ASCII visualization of the points in the two tables
|
| x o x o
| o
|
| x o
| x o x
| x x
| o
--------------------------------------->
@end example
When you run Gnuastro's Match program on these two tables (and your aperture is large enough to fit the various ``clusters'' of points) only the first two points of the two catalogs (on the bottom-left side of the chart) will match.
The rest will be flagged and the row-counter of the second input for those matches will be saved in the @code{FLAGGED-IN-2ND} HDU of the output.
This is very important, especially when you consider the matching aperture to be analogous to your error in the two positions.
In other words, while the other points do indeed have a closest match, they have other points that are also within the error/aperture of the match and therefore they will contaminate/pollute any higher-level analysis you want to do with the match.
You can check if your match command produced flagged/ambiguous matches:
@itemize
@item
On the lines that Match prints on the terminal, the number of flagged sources will be printed just under the number of unambiguous matches.
But this will require human checking (which is not easy in large pipelines!) and is primarily good when you are developing/debugging your higher-level scripts or just understanding your data.
@item
Read the number of rows of the @code{FLAGGED-IN-2ND} HDU of the output and put that in a conditional, like the minimal working example below (you can replace the @code{echo GOOD} and @code{echo BAD} parts with any series of complex operations):
@example
$ nflag=$(asttable red-first.fits -hFLAGGED-IN-2ND \
--info-num-rows)
$ if [ $nflag = 0 ]; then echo GOOD; else echo BAD; fi
@end example
@end itemize
As mentioned above, the @code{FLAGGED-IN-2ND} HDU will only contain the row counter (starting from 1) of the ambiguous matches of the second input.
To get the full row of the second input from that, you can use the commands below.
It uses Table's @ref{Column arithmetic} and a second call on Match with the column counters (a one-dimensional match):
@example
$ asttable second.fits --output=2nd-with-counter.fits \
-c'arith $1 counter swap free',_all
$ astmatch original-output.fits --hdu=FLAGGED-IN-2ND --ccol1=1 \
2nd-with-counter.fits --hdu2=1 --ccol2=1 \
--aperture=0.1 --outcols=b_all \
--output=flagged-in-2nd.fits
@end example
How best to deal with flagged matches depends on the context, so unfortunately there is no single easy solution.
For example one situation where this can occur is the following:
You have two catalogs of galaxies and one is from a much deeper image (with a higher spatial resolution) than the other.
Due to its depth and higher resolution, the deeper catalog's points will be much denser in the same region of the sky, and the probability of such ambiguous matches will increase.
@cindex Completeness limit
@cindex Limit (Completeness)
To fix the problem above, before doing the match, you can remove all the points in the deep catalog that would not be detectable in the shallow one.
To find the points of the deep catalog that should be removed, you can generate the number-counts of your two catalogs (histogram of their magnitudes) and plot them together.
This will show your shallower catalog's completeness limit (the magnitude where the shallower catalog's counts start to drop compared to the deeper catalog).
You can then remove the sources from the deeper catalog that are fainter than the completeness limit of the shallower catalog (for example, when the limit is at the 24th magnitude, you can use @code{asttable deep.fits --range=MAG,-inf,24}).
Finally, you can do the match on the resulting sub-set of the deeper catalog.
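For example, here is a minimal sketch of this workflow (the file names, the @code{MAG} column and the limit of 24 are hypothetical assumptions; in practice, compare the two histograms to find the real completeness limit):
@example
$ aststatistics deep.fits --column=MAG --histogram \
                --output=deep-hist.txt
$ aststatistics shallow.fits --column=MAG --histogram \
                --output=shallow-hist.txt
$ asttable deep.fits --range=MAG,-inf,24 --output=deep-cut.fits
$ astmatch deep-cut.fits --ccol1=RA,DEC shallow.fits \
           --ccol2=RA,DEC --aperture=1/3600
@end example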
@node Matching algorithms, Invoking astmatch, Unambiguous matching, Match
@subsection Matching algorithms
Matching involves two catalogs, let's call them catalog A (with N rows) and catalog B (with M rows).
The most basic matching algorithm that immediately comes to mind is this:
for each row in A (let's call it @mymath{A_i}), go over all the rows in B (@mymath{B_j}, where @mymath{1\leq j\leq M}) and calculate the distance @mymath{d(A_i,B_j)}.
Find the @mymath{B_j} that has the smallest distance to @mymath{A_i}, and if that distance is smaller than the acceptable distance threshold (or radius, or aperture), consider @mymath{A_i} and @mymath{B_j} as a match.
However, this basic parsing algorithm is very computationally expensive:
@mymath{N\times M} distances have to be measured, and calculating each distance requires a square root and two squarings: in 2 dimensions it would be @mymath{d(A_i,B_j)=\sqrt{(B_{jx}-A_{ix})^2+(B_{jy}-A_{iy})^2}}.
If an elliptical aperture is necessary, it can get even more complicated (see @ref{Defining an ellipse and ellipsoid}).
Such operations are not simple, and will consume many cycles of your CPU!
As a result, this basic algorithm becomes terribly slow as your datasets grow in size, for example, when N or M exceed hundreds of thousands (which is common nowadays with datasets like those of the European Space Agency's Gaia mission).
Therefore, more efficient ways to @emph{find the nearest neighbor} are necessary.
Gnuastro's Match currently has the following algorithms for finding the nearest neighbor:
@table @asis
@item Sort-based
In this algorithm, we will use a moving window over the sorted datasets:
@enumerate
@item
Sort the two datasets by their first coordinate.
Therefore @mymath{A_i<A_j} when @mymath{i<j} (only in the first coordinate), and similarly for the elements of B.
@item
Use the radial distance threshold to define the width of a moving interval over both A and B.
Therefore, with a single parsing of both simultaneously, for each A-point, we can find all the elements in B that are sufficiently near to it (within the requested aperture).
@end enumerate
You can use this method with @option{--kdtree=disable} (disabling the default k-d tree algorithm that is described next); an example command is given after the caveat list below.
The reason the sort-based algorithm is not the default is that it has some caveats:
@itemize
@item
It requires sorting, which can again be slow on large tables.
@item
It can only be done on a single thread, so it cannot benefit from modern CPUs with many threads, or GPUs that have hundreds/thousands of computing units.
@item
There is no way to preserve intermediate information for future matches (and not have to repeat them).
@end itemize
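As a minimal sketch (assuming two hypothetical catalogs @file{A.fits} and @file{B.fits} with the coordinate column names used in the k-d tree examples below), the sort-based algorithm can be selected like this:
@example
$ astmatch A.fits --ccol1=ra,dec B.fits --ccol2=RA,DEC \
           --aperture=1/3600 --kdtree=disable
@end example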
@item k-d tree
The k-d tree concept is much more abstract, but powerful (addressing all the caveats of the sort-based method described above).
In short a k-d tree is a partitioning of a k-dimensional space (``k'' is just a place-holder, so together with ``d'' for dimension, ``k-d'' means ``any number of dimensions''!).
The k-d tree of table A is another table with the same number of rows, but only two integer columns: for each row, the integers contain the row indexes (counting from zero) of the left and right ``branches'' (in the ``tree'') of that row.
With a k-d tree we can find the nearest point with much fewer checks (statistically: compared to always parsing everything from the top-down).
We will not go deeper into the concept of k-d trees here and will focus on their high-level usage in Match.
In case you are interested in learning more about the k-d tree concept and Gnuastro's implementation, please see its @url{https://en.wikipedia.org/wiki/K-d_tree, Wikipedia page} and @ref{K-d tree}.
When given two catalogs (like the command below), Gnuastro's Match will internally construct a k-d tree for catalog A (the first catalog given to it) and use that k-d tree to find the nearest row of A to each row of B.
This is done in parallel on all available threads (unless you specify a certain number of threads to use with @option{--numthreads}, see @ref{Multi-threaded operations}).
@example
$ astmatch A.fits --ccol1=ra,dec B.fits --ccol2=RA,DEC \
--aperture=1/3600
@end example
In scenarios where your reference catalog (A) is always the same (and it is large!), you can save time by building the k-d tree of A and saving it into a file once, then simply using that k-d tree in all future matches.
The command below shows how to do the first step (to build the k-d tree and keep it in a file).
@example
$ astmatch A.fits --ccol1=ra,dec --kdtree=build \
--output=A-kdtree.fits
@end example
This external k-d tree (@file{A-kdtree.fits}) can be fed to Match later (to avoid having to reconstruct it every time you want to match a new catalog with A).
The commands below show how to do this by matching both @file{B.fits} and @file{C.fits} with @file{A.fits} using its pre-built k-d tree.
Note that the same @option{--kdtree} option (which had the value @code{build} above) is now given the file name of the already-built k-d tree.
@example
$ astmatch A.fits --ccol1=ra,dec --kdtree=A-kdtree.fits \
B.fits --ccol2=RA,DEC --aperture=1/3600 \
--output=A-B.fits
$ astmatch A.fits --ccol1=ra,dec --kdtree=A-kdtree.fits \
C.fits --ccol2=RA,DEC --aperture=1/3600 \
--output=A-C.fits
@end example
There is just one technical issue however: when there is no neighbor within the acceptable distance, the k-d tree search is forced to parse all elements to confirm that there is no match!
Therefore if one catalog only covers a small portion (in the coordinate space) of the other catalog, the k-d tree algorithm will be forced to parse the full k-d tree for the majority of points, dramatically decreasing the running speed of Match.
To mitigate this, Match first divides the range of the first input in all its dimensions into bins that have a width of the requested aperture (similar to a histogram), and will only do the k-d tree based search when a point of catalog B actually falls within a bin that has at least one element of A.
In summary, here are the points to consider when selecting an algorithm or the order of your inputs (only the speed is affected; the match itself will be the same):
@itemize
@item
For larger datasets, the k-d tree based method (when running on all available threads) is much faster than the classical sort-based method.
@item
If you always need to match against one (large!) catalog, note that the k-d tree construction itself can take a significant fraction of the running time.
In such cases, save the k-d tree into a file and simply give it to later calls (as shown above).
@item
For the @emph{inner} or @emph{full} arrangement of the output (described in @ref{Arranging match output}), the order of inputs does not matter.
But if you put the table with @emph{fewer rows} as the first input, you will gain a lot in processing time (depending on the size of the other table and the number of threads); an example is given after this list.
This is because of the following facts:
@itemize
@item
The k-d tree is constructed for the first input table and the construction of a larger dataset's k-d tree will take longer (as a non-linear function of the number of rows).
@item
Multi-threading is done on the rows of the second input table and it scales in a linear way with more rows.
@end itemize
@end itemize
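For example, assuming a hypothetical @file{small.fits} with far fewer rows than @file{big.fits} (both having @code{RA} and @code{DEC} columns), giving the smaller table first builds the (costly) k-d tree on the smaller table, while the threads parse the many rows of the larger one:
@example
$ astmatch small.fits --ccol1=RA,DEC big.fits --ccol2=RA,DEC \
           --aperture=1/3600
@end example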
@end table
@node Invoking astmatch, , Matching algorithms, Match
@subsection Invoking Match
Match will find the rows that are nearest to each other in two catalogs (given some coordinate columns) and can arrange the outputs in different ways (see @ref{Arranging match output}).
When an ``inner'' or ``full'' match is requested, Match will flag ambiguous matches, see @ref{Unambiguous matching} for more on those.
To understand the inner working of Match and its algorithms, see @ref{Matching algorithms}.
When you need a k-d tree based match, Match can also construct the k-d tree of one catalog and save it into a FITS file for future matching of the same catalog with many others.
@noindent
The executable name is @file{astmatch} with the following general template
@example
$ astmatch [OPTION ...] input-1 input-2
@end example
@noindent
One line examples:
@example
## 1D wavelength match (within 5 angstroms) of the two inputs.
## The wavelengths are in the 5th and 10th columns respectively.
$ astmatch --aperture=5e-10 --ccol1=5 --ccol2=10 in1.fits in2.txt
## Find the row that is closest to (RA,DEC) of (12.3456,6.7890)
## with a maximum distance of 1 arcsecond (1/3600 degrees).
## The coordinates can also be given in sexagesimal.
$ astmatch input1.txt --ccol1=ra,dec --coord=12.3456,6.7890 \
--aperture=1/3600
## Find matching rows of two catalogs with a circular aperture
## of width 2 (same unit as position columns: pixels in this case).
$ astmatch input1.txt input2.fits --aperture=2 \
--ccol1=X,Y --ccol2=IMG_X,IMG_Y
## Similar to before, but the output is created by merging various
## columns from the two inputs: columns 1, RA, DEC from the first
## input, followed by all columns starting with `MAG' and the `BRG'
## column from second input and the 10th column from first input.
$ astmatch input1.txt input2.fits --aperture=1/3600 \
--ccol1=ra,dec --ccol2=RAJ2000,DEJ2000 \
--outcols=a1,aRA,aDEC,b/^MAG/,bBRG,a10
## Assuming both inputs have the same column metadata (same name
## and numeric type), the output will contain all the rows of the
## first input, appended with the non-matching rows of the second
## input (good when you need to merge multiple catalogs that
## may have matching items, which you do not want to repeat).
$ astmatch input1.fits input2.fits --ccol1=RA,DEC --ccol2=RA,DEC \
--aperture=1/3600 --notmatched --outcols=_all
## Match the two catalogs within an elliptical aperture of 1 and 2
## arc-seconds along RA and Dec respectively.
$ astmatch --aperture=1/3600,2/3600 in1.fits in2.txt
## Match the RA and DEC columns of the first input with the RA_D
## and DEC_D columns of the second within a 0.5 arcseconds aperture.
$ astmatch --ccol1=RA,DEC --ccol2=RA_D,DEC_D --aperture=0.5/3600 \
in1.fits in2.fits
## Match in 3D (RA, Dec and Wavelength).
$ astmatch --ccol1=2,3,4 --ccol2=2,3,4 -a0.5/3600,0.5/3600,5e-10 \
in1.fits in2.txt
@end example
When doing a match, two input catalogs are necessary as command-line @ref{Arguments}.
But for constructing a k-d tree, only a single catalog should be given.
The input tables can be plain text tables or FITS tables, for more see @ref{Tables}.
But other ways of feeding inputs are also supported:
@itemize
@item
The @emph{first} catalog can also come from the standard input (for example, a pipe that feeds the output of a previous command to Match, see @ref{Standard input});
@item
When you only want to match one point with another catalog, you can use the @option{--coord} option to avoid creating a file for the @emph{second} input catalog.
@end itemize
Match follows the same basic behavior of all Gnuastro programs as fully described in @ref{Common program behavior}.
If the first input is a FITS file, the common @option{--hdu} option (see @ref{Input output options}) should be used to identify the extension.
When the second input is FITS, the extension must be specified with @option{--hdu2}.
When @option{--quiet} is not called, Match will print its various processing phases (including the number of matches found) in standard output (on the command-line).
The output of Match is a single FITS file (with possibly multiple HDUs; depending on the run-time options).
If no output file name is given with the @option{--output} option, then automatic output @ref{Automatic output} will be used to determine the output name(s).
Generally, giving a filename to @option{--output} is recommended.
By default, the output will contain two tables/HDUs; each table will contain the re-arranged rows of the respective input table.
In other words, both tables will have the same number of rows, and row 10 in both corresponds to the 10th match between the two.
If no matches are found, the columns of the output table(s) will have zero rows (with proper meta-data).
For the ``inner'' or ``full'' arrangements, if there are any flagged/ambiguous rows in the second input (depending on the given aperture/error), a dedicated HDU for those rows will also be created, see @ref{Unambiguous matching}.
The output format can be changed with the following options:
@itemize
@item
@option{--outcols}: The output will be a single table with columns chosen from either of the two inputs in any order.
In other words, with this option, the main output table will be a single table, not two: it will merge columns from the two catalogs into one for the matching rows.
@item
@option{--notmatched}: The output tables will contain the rows that did not match between the two tables.
If called with @option{--outcols}, the output will be a single table with all non-matched rows of both tables.
@item
@option{--logasoutput}: The input tables will not be arranged, but only the @code{LOG} table (described below) will be present (along with possible ambiguous/flagged matches) in the output.
@end itemize
When the @option{--log} option is called (see @ref{Operating mode options}), and there was a match, Match will also add a new HDU (called @code{LOG}) to the output.
This log-table will have three columns.
The first and second columns show the matching row/record number (counting from 1) of the first and second input catalogs respectively.
The third column is the distance between the two matched positions.
The units of the distance are the same as the given coordinates (given the possible ellipticity, see description of @option{--aperture} below).
@noindent
The various run-time options of Match are described below:
@table @option
@item -H STR
@itemx --hdu2=STR
The extension/HDU of the second input if it is a FITS file.
When it is not a FITS file, this option's value is ignored.
For the first input, the common option @option{--hdu} must be used.
@item -A STR
@itemx --arrange=STR
The arrangement of rows in the output; for more details, see @ref{Arranging match output}.
@item -k STR
@itemx --kdtree=STR
Select the algorithm and/or the way to construct or import the k-d tree.
A summary of the four acceptable strings for this option is given here for completeness.
However, for a much more detailed discussion on Match's algorithms with examples, see @ref{Matching algorithms}.
@table @code
@item internal
Construct a k-d tree for the first input internally (within the same run of Match), and parallelize over the rows of the second to find the nearest points.
This is the default algorithm/method used by Match (when this option is not called).
@item build
Only construct a k-d tree of the single given input and abort.
The k-d tree will be written to the file given to @option{--output}.
@item CUSTOM-FITS-FILE
Use the given FITS file as a k-d tree (that was previously constructed with Match itself) of the first input, and do not construct any k-d tree internally.
The FITS file should have two columns with an unsigned 32-bit integer data type and a @code{KDTROOT} keyword that contains the index of the root of the k-d tree.
For more on Gnuastro's k-d tree format, see @ref{K-d tree}.
@item disable
Do not use the k-d tree algorithm for finding the nearest neighbor, instead, use the sort-based method.
@end table
@item --kdtreehdu=STR
The HDU of the FITS file, when a FITS file is given to the @option{--kdtree} option that was described above.
@item --outcols=STR[,STR,[...]]
Columns (from both inputs) to write into a single matched table output.
The value to @option{--outcols} must be a comma-separated list of column identifiers (number or name, see @ref{Selecting table columns}), and it can be called multiple times (the values of multiple invocations will be merged into one table).
The expected format depends on @option{--notmatched} and is explained below.
By default (when @option{--notmatched} is not called), the number of rows in the output will be equal to the number of matches.
However, when @option{--notmatched} is called, all the rows (from the requested columns) of the first input are placed in the output, and the not-matched rows of the second input are inserted afterwards (useful when you want to merge unique entries of multiple catalogs into one).
@table @asis
@item Default (only matching rows)
The first character of each string specifies the input catalog: @option{a} for the first and @option{b} for the second.
The rest of the characters of the string will be directly used to identify the proper column(s) in the respective table.
See @ref{Selecting table columns} for how columns can be specified in Gnuastro.
For example, the output of @option{--outcols=a1,bRA,bDEC} will have three columns: the first column of the first input, along with the @option{RA} and @option{DEC} columns of the second input.
If the string after @option{a} or @option{b} is @option{_all}, then all the columns of the respective input file will be written in the output.
For example, the command below will print all the input columns from the first catalog along with the 5th column from the second:
@example
$ astmatch a.fits b.fits --outcols=a_all,b5
@end example
@code{_all} can be used multiple times, possibly on both inputs.
Tip: if an input's column is called @code{_all} (an unlikely name!) and you do not want all the columns from that table in the output, use its column number to avoid confusion.
Another example is given in the one-line examples above.
Compared to the default case (where two tables with all their columns are saved separately), using this option is much faster: it will only read and re-arrange the necessary columns and it will write a single output table.
Combined with regular expressions in large tables, this can be a very powerful and convenient way to merge various tables into one.
When @option{--coord} is given, no second catalog will be read.
The second catalog will be created internally based on the values given to @option{--coord}.
So column names are not defined and you can only request integer column numbers that are less than the number of coordinates given to @option{--coord}.
For example, if you want to find the row matching RA of 1.2345 and Dec of 6.7890, then you should use @option{--coord=1.2345,6.7890}.
But when using @option{--outcols}, you cannot give @code{bRA}, or @code{b25}.
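For example, a minimal sketch of this mode (the file and column names are hypothetical assumptions):
@example
$ astmatch catalog.fits --ccol1=RA,DEC --coord=1.2345,6.7890 \
           --aperture=1/3600 --outcols=a_all,b1
@end example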
@item With @option{--notmatched}
Only the column names/numbers should be given (for example, @option{--outcols=RA,DEC,MAGNITUDE}).
It is assumed that both input tables have the requested column(s) and that the numerical data type of each column in one input is the same as that of the corresponding column (with the same name) in the other.
Therefore if one input has a @code{MAGNITUDE} column with a 32-bit floating point type, but the @code{MAGNITUDE} column of the other is 64-bit floating point, Match will abort with an error.
The metadata of the columns will come from the first input.
As an example, let's assume @file{input1.txt} and @file{input2.fits} each have a different number of columns and rows.
However, they both have the @code{RA} (64-bit floating point), @code{DEC} (64-bit floating point) and @code{MAGNITUDE} (32-bit floating point) columns.
If @file{input1.txt} has 100 rows and @file{input2.fits} has 300 rows (such that 50 of them match within 1 arcsec of the first), then the output of the command below will have @mymath{100+(300-50)=350} rows and only three columns.
Other columns in each catalog, which may be different, are ignored.
@example
$ astmatch input1.txt --ccol1=RA,DEC \
input2.fits --ccol2=RA,DEC \
--aperture=1/3600 \
--notmatched --outcols=RA,DEC,MAGNITUDE
@end example
@end table
@item -l
@itemx --logasoutput
The output FITS file will not contain the re-arranged input rows; it will only contain the log table: the indexes in the two catalogs that match with each other, along with their distance (see the description of the log table above, in this section, before the list of options).
@item --notmatched
Write the non-matching rows into the outputs, not the matched ones.
By default, this will produce two output tables that will not necessarily have the same number of rows.
However, when called with @option{--outcols}, it is possible to import non-matching rows of the second into the first.
See the description of @option{--outcols} for more.
@item -c INT/STR[,INT/STR]
@itemx --ccol1=INT/STR[,INT/STR]
The coordinate columns of the first input.
The number of dimensions for the match is determined by the number of comma-separated values given to this option.
The values can be the column number (counting from 1), exact column name or a regular expression.
For more, see @ref{Selecting table columns}.
See the one-line examples above for some usages of this option.
@item -C INT/STR[,INT/STR]
@itemx --ccol2=INT/STR[,INT/STR]
The coordinate columns of the second input.
See the example in @option{--ccol1} for more.
@item -d FLT[,FLT]
@itemx --coord=FLT[,FLT]
Manually specify the coordinates to match against the given catalog.
With this option, Match will not look for a second input file/table and will directly use the coordinates given to this option.
When the coordinates are RA and Dec, the comma-separated values can either be in degrees (a single number), or sexagesimal (@code{_h_m_} for RA, @code{_d_m_} for Dec, or @code{_:_:_} for both).
When this option is called, the output changes in the following ways:
1) When @option{--outcols} is specified, for the second input it can only accept integer numbers that are less than the number of values given to this option; see the description of that option for more.
2) By default (when @option{--outcols} is not used), only the matching row of the first table will be output (a single file), not two separate files (one for each table).
This option is good when you have a (large) catalog and only want to match a single coordinate to it (for example, to find the nearest catalog entry to your desired point).
With this option, you can write the coordinates on the command-line and thus avoid the need to make a single-row file.
@item -a FLT[,FLT[,FLT]]
@itemx --aperture=FLT[,FLT[,FLT]]
Parameters of the aperture for matching.
The values given to this option can be fractions, for example, when the position columns are in units of degrees, @option{1/3600} can be used to ask for one arc-second.
The interpretation of the values depends on the requested dimensions (determined from @option{--ccol1} and @option{--ccol2}) and how many values are given to this option.
When multiple objects are found within the aperture, the match is defined as the nearest one.
In a multi-dimensional dataset, when the aperture is a general ellipse or ellipsoid (and not a circle or sphere), the distance is calculated in the elliptical space along the major axis.
For the definition of this distance, see @mymath{r_{el}} in @ref{Defining an ellipse and ellipsoid}.
@table @asis
@item 1D match
The aperture/interval can only take one value: half of the interval around each point (maximum distance from each point).
@item 2D match
In a 2D match, the aperture can be a circle, an ellipse aligned with the axes, or an ellipse with a rotated major axis.
To simplify the usage, the shape is determined by the number of values given to this option.
@table @asis
@item 1 number
For example, @option{--aperture=2}.
The aperture will be a circle of the given radius.
The value will be in the same units as the columns given to @option{--ccol1} and @option{--ccol2}.
@item 2 numbers
For example, @option{--aperture=3,4e-10}.
The aperture will be an ellipse (if the two numbers are different) with the respective value along each dimension.
The numbers are in units of the first and second axis.
In the example above, the semi-axis value along the first axis will be 3 (in units of the first coordinate) and along the second axis will be @mymath{4\times10^{-10}} (in units of the second coordinate).
Such values can happen if you are comparing catalogs of spectra, for example.
If more than one object exists in the aperture, the nearest will be found along the major axis as described in @ref{Defining an ellipse and ellipsoid}.
@item 3 numbers
For example, @option{--aperture=2,0.6,30}.
The aperture will be an ellipse (if the second value is not 1).
The first number is the semi-major axis, the second is the axis ratio and the third is the position angle (in degrees).
If multiple matches are found within the ellipse, the distance (to find the nearest) is calculated along the major axis in the elliptical space, see @ref{Defining an ellipse and ellipsoid}.
@end table
@item 3D match
The aperture (matching volume) can be a sphere, an ellipsoid aligned on the three axes, or a general ellipsoid rotated in any direction.
To simplify the usage, the shape is determined by the number of values given to this option.
@table @asis
@item 1 number
For example, @option{--aperture=3}.
The matching volume will be a sphere of the given radius.
The value is in the same units as the input coordinates.
@item 3 numbers
For example, @option{--aperture=4,5,6e-10}.
The aperture will be an ellipsoid aligned on the three axes, with the respective extent along each dimension.
The numbers must be in the same units as each axis.
This is very similar to the two number case of 2D inputs.
See there for more.
@item 6 numbers
For example, @option{--aperture=4,0.5,0.6,10,20,30}.
The numbers represent the full general ellipsoid definition (in any orientation).
For the definition of a general ellipsoid, see @ref{Defining an ellipse and ellipsoid}.
The first number is the semi-major axis.
The second and third are the two axis ratios.
The last three are the three Euler angles in units of degrees in the ZXZ order as fully described in @ref{Defining an ellipse and ellipsoid}.
@end table
@end table
@end table
@node Data modeling, High-level calculations, Data analysis, Top
@chapter Data modeling
@cindex Fitting
@cindex Modeling
In order to fully understand observations after the initial analysis of an image, it is very important to compare them with existing models, to be able to further understand both the models and the data.
The tools in this chapter create model galaxies and provide 2D fittings to help understand the detections.
@menu
* MakeProfiles:: Making mock galaxies and stars.
@end menu
@node MakeProfiles, , Data modeling, Data modeling
@section MakeProfiles
@cindex Checking detection algorithms
@pindex @r{MakeProfiles (}astmkprof@r{)}
MakeProfiles will create mock astronomical profiles from a catalog, either individually or together in one output image.
In data analysis, making a mock image can act like a calibration tool, through which you can test how successfully your detection technique is able to detect a known set of objects.
There are commonly two aspects to detection: the detection of the fainter parts of bright objects (which, in the case of galaxies, fade into the noise very slowly) and the complete detection of an overall faint object.
Making mock galaxies is the most accurate (and idealistic) way these two aspects of a detection algorithm can be tested.
You also need mock profiles in fitting known functional profiles with observations.
MakeProfiles was initially built for extragalactic studies, so currently the only astronomical objects it can produce are stars and galaxies.
We welcome the simulation of any other astronomical object.
The general outline of the steps that MakeProfiles takes are the following:
@enumerate
@item
Build the full profile out to its truncation radius in a possibly over-sampled array.
@item
Multiply all the elements by a fixed constant so its total magnitude equals the desired total magnitude.
@item
If @option{--individual} is called, save the array for each profile to a FITS file.
@item
If @option{--nomerged} is not called, add the overlapping pixels of all the created profiles to the output image and abort.
@end enumerate
Using input values, MakeProfiles adds the World Coordinate System (WCS) headers of the FITS standard to all its outputs (except PSF images!).
For a simple test on a set of mock galaxies in one image, there is no need for the third step or the WCS information.
@cindex Transform image
@cindex Lensing simulations
@cindex Image transformations
However, in complicated simulations like weak lensing simulations, where each galaxy undergoes various types of individual transformations based on its position, those transformations can be applied to the different individual images with other programs.
After all the transformations are applied, using the WCS information in each individual profile image, they can be merged into one output image for convolution and adding noise.
@menu
* Modeling basics:: Astronomical modeling basics.
* If convolving afterwards:: Considerations for convolving later.
* Profile magnitude:: Definition of total profile magnitude.
* Invoking astmkprof:: Inputs and Options for MakeProfiles.
@end menu
@node Modeling basics, If convolving afterwards, MakeProfiles, MakeProfiles
@subsection Modeling basics
In the subsections below, first a review of some very basic information and concepts behind modeling a real astronomical image is given.
You can skip this subsection if you are already sufficiently familiar with these concepts.
@menu
* Defining an ellipse and ellipsoid:: Definition of these important shapes.
* PSF:: Radial profiles for the PSF.
* Stars:: Making mock star profiles.
* Galaxies:: Radial profiles for galaxies.
* Sampling from a function:: Sample a function on a pixelated canvas.
* Oversampling:: Oversampling the model.
@end menu
@node Defining an ellipse and ellipsoid, PSF, Modeling basics, Modeling basics
@subsubsection Defining an ellipse and ellipsoid
@cindex Ellipse
@cindex Axis ratio
@cindex Position angle
The PSF, see @ref{PSF}, and galaxy radial profiles are generally defined on an ellipse.
Therefore, in this section we will start defining an ellipse on a pixelated 2D surface.
Labeling the major axis of an ellipse @mymath{a}, and its minor axis with @mymath{b}, the @emph{axis ratio} is defined as: @mymath{q\equiv b/a}.
The major axis of an ellipse can be aligned in any direction, therefore the angle of the major axis with respect to the horizontal axis of the image is defined to be the @emph{position angle} of the ellipse and in this book, we show it with @mymath{\theta}.
@cindex Radial profile on ellipse
Our aim is to put a radial profile of any functional form @mymath{f(r)} over an ellipse.
Hence we need to associate a radius/distance to every point in space.
Let's define the radial distance @mymath{r_{el}} as the distance on the major axis to the center of an ellipse which is located at @mymath{i_c} and @mymath{j_c} (in other words @mymath{r_{el}\equiv{a}}).
We want to find @mymath{r_{el}} of a point located at @mymath{(i,j)} (in the image coordinate system) from the center of the ellipse with axis ratio @mymath{q} and position angle @mymath{\theta}.
First the coordinate system is rotated@footnote{Do not confuse the signs of @mymath{\sin} with the rotation matrix defined in @ref{Linear warping basics}.
In that equation, the point is rotated, here the coordinates are rotated and the point is fixed.} by @mymath{\theta} to get the new rotated coordinates of that point @mymath{(i_r,j_r)}:
@dispmath{i_r(i,j)=+(i_c-i)\cos\theta+(j_c-j)\sin\theta}
@dispmath{j_r(i,j)=-(i_c-i)\sin\theta+(j_c-j)\cos\theta}
@cindex Elliptical distance
@noindent Recall that an ellipse is defined by @mymath{(i_r/a)^2+(j_r/b)^2=1} and that we defined @mymath{r_{el}\equiv{a}}.
Hence, multiplying all elements of the ellipse definition with @mymath{r_{el}^2} we get the elliptical distance of this point: @mymath{r_{el}=\sqrt{i_r^2+(j_r/q)^2}}.
To place the radial profiles explained below over an ellipse, @mymath{f(r_{el})} is calculated based on the functional radial profile desired.
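To make this more concrete, take a point offset by one pixel along the second axis, @mymath{(i,j)=(i_c,j_c+1)}, on an ellipse with @mymath{\theta=0} and @mymath{q=0.5}: the rotated coordinates are @mymath{(i_r,j_r)=(0,-1)}, so @mymath{r_{el}=\sqrt{0+(-1/0.5)^2}=2}.
In other words, a point one pixel away along the minor axis lies on the ellipse whose semi-major axis is 2.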
@cindex Ellipsoid
@cindex Euler angles
An ellipse in 3D, or an @url{https://en.wikipedia.org/wiki/Ellipsoid, ellipsoid}, can be defined following similar principles as before.
Labeling the major (largest) axis length as @mymath{a}, the second and third (in a right-handed coordinate system) axis lengths can be labeled as @mymath{b} and @mymath{c}.
Hence we have two axis ratios: @mymath{q_1\equiv{b/a}} and @mymath{q_2\equiv{c/a}}.
The orientation of the ellipsoid can be defined from the orientation of its major axis.
There are many ways to define 3D orientation and order matters.
So to be clear, here we use the ZXZ (or @mymath{Z_1X_2Z_3}) proper @url{https://en.wikipedia.org/wiki/Euler_angles, Euler angles} to define the 3D orientation.
In short, when a point is rotated in this order, we first rotate it around the Z axis (third axis) by @mymath{\alpha}, then about the (rotated) X axis by @mymath{\beta} and finally about the (rotated) Z axis by @mymath{\gamma}.
Following the discussion in @ref{Merging multiple warpings}, we can define the full rotation with the following matrix multiplication.
However, here we are rotating the coordinates, not the point.
Therefore, both the rotation angles and rotation order are reversed.
We are also not using homogeneous coordinates (see @ref{Linear warping basics}) since we are not concerned with translation in this context:
@dispmath{\left[\matrix{i_r\cr j_r\cr k_r}\right] =
\left[\matrix{\cos\gamma&\sin\gamma&0\cr -\sin\gamma&\cos\gamma&0\cr 0&0&1}\right]
\left[\matrix{1&0&0\cr 0&\cos\beta&\sin\beta\cr 0&-\sin\beta&\cos\beta }\right]
\left[\matrix{\cos\alpha&\sin\alpha&0\cr -\sin\alpha&\cos\alpha&0\cr 0&0&1}\right]
\left[\matrix{i_c-i\cr j_c-j\cr k_c-k}\right] }
@noindent
Recall that an ellipsoid can be characterized with @mymath{(i_r/a)^2+(j_r/b)^2+(k_r/c)^2=1}, so similar to before (@mymath{r_{el}\equiv{a}}), we can find the ellipsoidal radius at pixel @mymath{(i,j,k)} as: @mymath{r_{el}=\sqrt{i_r^2+(j_r/q_1)^2+(k_r/q_2)^2}}.
@cindex Breadth first search
@cindex Inside-out construction
@cindex Making profiles pixel by pixel
@cindex Pixel by pixel making of profiles
MakeProfiles builds the profile starting from the nearest element (pixel in an image) in the dataset to the profile center.
The profile value is calculated for that central pixel using Monte Carlo integration, see @ref{Sampling from a function}.
The next pixel is the next nearest neighbor to the central pixel as defined by @mymath{r_{el}}.
This process goes on until the profile is fully built up to the truncation radius.
This is done fairly efficiently using a breadth first parsing strategy@footnote{@url{http://en.wikipedia.org/wiki/Breadth-first_search}} which is implemented through an ordered linked list.
Using this approach, we build the profile by expanding its circumference: not a single extra pixel has to be checked (the calculation of @mymath{r_{el}} from above is not cheap in CPU terms).
Another consequence of this strategy is that extending MakeProfiles to three dimensions becomes very simple: only the neighbors of each pixel have to be changed.
Everything else after that (when the pixel index and its radial profile have entered the linked list) is the same, no matter the number of dimensions we are dealing with.
@node PSF, Stars, Defining an ellipse and ellipsoid, Modeling basics
@subsubsection Point spread function
@cindex PSF
@cindex Point source
@cindex Diffraction limited
@cindex Point spread function
@cindex Spread of a point source
Assume we have a `point' source, or a source that is far smaller than the maximum resolution (a pixel).
When we take an image of it, it will `spread' over an area.
To quantify that spread, we can define a `function'.
This is how the ``point spread function'' or the PSF of an image is defined.
This `spread' can have various causes, for example, in ground-based astronomy, due to the atmosphere.
In practice we can never surpass the `spread' due to the diffraction of the telescope aperture (even in Space!).
Various other effects can also be quantified through a PSF.
For example, the simple fact that we are sampling in a discrete space, namely the pixels, also produces a very small `spread' in the image.
@cindex Blur image
@cindex Convolution
@cindex Image blurring
@cindex PSF image size
Convolution is the mathematical process by which we can apply a `spread' to an image, or in other words blur the image, see @ref{Convolution process}.
The sum of pixels of an image should remain unchanged after convolution.
Therefore, it is important that the sum of all the pixels of the PSF be unity.
The PSF image also has to have an odd number of pixels on its sides so one pixel can be defined as the center.
In MakeProfiles, the PSF can be set by the two methods explained below:
@table @asis
@item Parametric functions
@cindex FWHM
@cindex PSF width
@cindex Parametric PSFs
@cindex Full Width at Half Maximum
A known mathematical function is used to make the PSF.
In this case, only the parameters to define the functions are necessary and MakeProfiles will make a PSF based on the given parameters for each function.
In both cases, the center of the profile has to be exactly in the middle of the central pixel of the PSF (which is automatically done by MakeProfiles).
When talking about the PSF, usually, the full width at half maximum or FWHM is used as a scale of the width of the PSF.
@table @cite
@item Gaussian
@cindex Gaussian distribution
In older papers, and to a lesser extent even today, some researchers use the 2D Gaussian function to approximate the PSF of ground-based images.
In its most general form, a Gaussian function can be written as:
@dispmath{f(r)=a \exp \left( -(r-\mu)^2 \over 2\sigma^2 \right)+d}
Since the center of the profile is pre-defined, @mymath{\mu} and @mymath{d} are constrained.
@mymath{a} can also be found because the function has to be normalized.
So the only important parameter for MakeProfiles is the @mymath{\sigma}.
In the Gaussian function we have this relation between the FWHM and @mymath{\sigma}:
@cindex Gaussian FWHM
@dispmath{\rm{FWHM}_g=2\sqrt{2\ln{2}}\sigma \approx 2.35482\sigma}
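For example, inverting this relation, a Gaussian PSF with a FWHM of 3 pixels has @mymath{\sigma=3/2.35482\approx1.274} pixels.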
@item Moffat
@cindex Moffat function
The Gaussian profile is much sharper than the images taken from stars on photographic plates or CCDs.
Therefore in 1969, Moffat proposed this functional form for the image of stars:
@dispmath{f(r)=a \left[ 1+\left( r\over \alpha \right)^2 \right]^{-\beta}}
@cindex Moffat beta
Again, @mymath{a} is constrained by the normalization, therefore two parameters define the shape of the Moffat function: @mymath{\alpha} and @mymath{\beta}.
The radial parameter is @mymath{\alpha} which is related to the FWHM by
@cindex Moffat FWHM
@dispmath{\rm{FWHM}_m=2\alpha\sqrt{2^{1/\beta}-1}}
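For example, inverting this relation, a Moffat PSF with a FWHM of 3 pixels and @mymath{\beta=2.8} has @mymath{\alpha=3/(2\sqrt{2^{1/2.8}-1})\approx2.83} pixels.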
@cindex Compare Moffat and Gaussian
@cindex PSF, Moffat compared Gaussian
@noindent
Comparing with the PSF predicted from atmospheric turbulence theory with a Moffat function, Trujillo et al.@footnote{
Trujillo, I., J. A. L. Aguerri, J. Cepa, and C. M. Gutierrez (2001). ``The effects of seeing on S@'ersic profiles - II. The Moffat PSF''. In: MNRAS 328, pp. 977--985.}
claim that @mymath{\beta} should be 4.765.
They also show how the Moffat PSF contains the Gaussian PSF as a limiting case when @mymath{\beta\to\infty}.
@end table
@item An input FITS image
An input image file can also be specified to be used as a PSF.
If the sum of its pixels is not equal to 1, the pixels will be multiplied by a constant so the sum becomes 1.
Gnuastro has tools to extract the non-parametric (extended) PSF of any image as a FITS file (assuming there are a sufficient number of stars in it), see @ref{Building the extended PSF}.
This method is not perfect (it will be noisy if you do not have many stars), but it is the actual PSF of the data, not forced into any parametric form.
@end table
While the Gaussian is only dependent on the FWHM, the Moffat function is also dependent on @mymath{\beta}.
Comparing these two functions with a fixed FWHM gives the following results:
@itemize
@item
Within the FWHM, the functions do not have significant differences.
@item
For a fixed FWHM, as @mymath{\beta} increases, the Moffat function becomes sharper.
@item
The Gaussian function is much sharper than the Moffat functions, even when @mymath{\beta} is large.
@end itemize
@node Stars, Galaxies, PSF, Modeling basics
@subsubsection Stars
@cindex Modeling stars
@cindex Stars, modeling
In MakeProfiles, stars are generally considered to be a point source.
This is usually the case for extragalactic studies, where nearby stars are also in the field.
Since a star is only a point source, we assume that it only fills one pixel prior to convolution.
In fact, exactly for this reason, in astronomical images the light profiles of stars are one of the best methods to understand the shape of the PSF, and a very large fraction of scientific research is performed by assuming the shapes of stars to be the PSF of the image.
@node Galaxies, Sampling from a function, Stars, Modeling basics
@subsubsection Galaxies
@cindex Galaxy profiles
@cindex S@'ersic profile
@cindex Profiles, galaxies
@cindex Generalized de Vaucouleur profile
Today, most practitioners agree that the flux of galaxies can be modeled with one or a few generalized de Vaucouleur's (or S@'ersic) profiles.
@dispmath{I(r) = I_e \exp \left ( -b_n \left[ \left( r \over r_e \right)^{1/n} -1 \right] \right )}
@cindex Brightness
@cindex S@'ersic, J. L.
@cindex S@'ersic index
@cindex Effective radius
@cindex Radius, effective
@cindex de Vaucouleur profile
@cindex G@'erard de Vaucouleurs
G@'erard de Vaucouleurs (1918-1995) was the first to show, in 1948, that this function resembles galaxy light profiles, with the only difference that he held @mymath{n} fixed to a value of 4.
Twenty years later in 1968, J. L. S@'ersic showed that @mymath{n} can have a variety of values and does not necessarily need to be 4.
This profile depends on the effective radius (@mymath{r_e}) which is defined as the radius which contains half of the profile's 2-dimensional integral to infinity (see @ref{Profile magnitude}).
@mymath{I_e} is the flux at the effective radius.
The S@'ersic index @mymath{n} is used to define the concentration of the profile within @mymath{r_e} and @mymath{b_n} is a constant dependent on @mymath{n}.
MacArthur et al.@footnote{MacArthur, L. A., S. Courteau, and J. A. Holtzman (2003). ``Structure of Disk-dominated Galaxies. I. Bulge/Disk Parameters, Simulations, and Secular Evolution''. In: ApJ 582, pp. 689--722.} show that for @mymath{n>0.35}, @mymath{b_n} can be accurately approximated using this equation:
@dispmath{b_n=2n - {1\over 3} + {4\over 405n} + {46\over 25515n^2} + {131\over 1148175n^3}-{2194697\over 30690717750n^4}}
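For example, for the de Vaucouleurs profile (@mymath{n=4}), this approximation gives @mymath{b_4\approx7.669}.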
@node Sampling from a function, Oversampling, Galaxies, Modeling basics
@subsubsection Sampling from a function
@cindex Sampling
A pixel is the ultimate level of accuracy to gather data: we cannot get any more accurate in one image.
This is known as sampling in signal processing.
However, the mathematical profiles which describe our models have infinite accuracy.
Over a large fraction of the area of astrophysically interesting profiles (for example, galaxies or PSFs), the variation of the profile over the area of one pixel is not too significant.
In such cases, the elliptical radius (@mymath{r_{el}}) of the center of the pixel can be assigned as the final value of the pixel (see @ref{Defining an ellipse and ellipsoid}).
@cindex Integration over pixel
@cindex Gradient over pixel area
@cindex Function gradient over pixel area
As you approach their center, some galaxies become very sharp (their value significantly changes over one pixel's area).
This sharpness increases with smaller effective radius and larger S@'ersic values, thus rendering the central value extremely inaccurate.
The first method that comes to mind for solving this problem is integration.
The functional form of the profile can be integrated over the pixel area in a 2D integration process.
However, unfortunately numerical integration techniques also have their limitations and when such sharp profiles are needed they can become extremely inaccurate.
@cindex Monte carlo integration
The most accurate method of sampling a continuous profile on a discrete space is by choosing a large number of random points within the boundaries of the pixel and taking their average value (or Monte Carlo integration).
This is also, generally speaking, what happens in practice with the photons on the pixel.
The number of random points can be set with @option{--numrandom}.
Unfortunately, repeating this Monte Carlo process would be extremely time and CPU consuming if it is to be applied to every pixel.
In order to not lose too much accuracy, in MakeProfiles the profile is built using a combination of the two methods above, as explained below.
The building of the profile begins from its central pixel and continues (radially) outwards.
Monte Carlo integration is first applied (which yields @mymath{F_r}), then the central pixel value (@mymath{F_c}) is calculated on the same pixel.
If the fractional difference (@mymath{|F_r-F_c|/F_r}) is lower than a given tolerance level (specified with @option{--tolerance}) MakeProfiles will stop using Monte Carlo integration and only use the central pixel value.
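For example, a minimal sketch of setting these two options on the command-line (the catalog name and the values are only for illustration):
@example
$ astmkprof catalog.txt --numrandom=1000 --tolerance=0.01
@end example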
@cindex Inside-out construction
The ordering of the pixels in this inside-out construction is based on @mymath{r=\sqrt{(i_c-i)^2+(j_c-j)^2}}, not @mymath{r_{el}}, see @ref{Defining an ellipse and ellipsoid}.
When the axis ratios are large (near one) this is fine.
But when they are small and the object is highly elliptical, it might seem more reasonable to follow @mymath{r_{el}} not @mymath{r}.
The problem is that the gradient is stronger in pixels with smaller @mymath{r} (and larger @mymath{r_{el}}) than those with smaller @mymath{r_{el}}.
In other words, the gradient is strongest along the minor axis.
So if the next pixel is chosen based on @mymath{r_{el}}, the tolerance level will be reached sooner and lots of pixels with large fractional differences will be missed.
Monte Carlo integration uses a random distribution of points.
Thus, every time you run it, by default you will get a different set of points to sample within the pixel.
In the case of large profiles, this will result in a slight difference in the pixels which use Monte Carlo integration each time MakeProfiles is run.
To have a deterministic result, you have to fix the properties of the random number generator that is used to build the random distribution.
This can be done by setting the @code{GSL_RNG_TYPE} and @code{GSL_RNG_SEED} environment variables and calling MakeProfiles with the @option{--envseed} option.
To learn more about the process of generating random numbers, see @ref{Generating random numbers}.
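For example, here is a minimal sketch of a reproducible run (the generator type and seed are arbitrary illustrative choices):
@example
$ export GSL_RNG_TYPE=ranlxs1
$ export GSL_RNG_SEED=1
$ astmkprof catalog.txt --envseed
@end example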
@cindex Seed, Random number generator
@cindex Random number generator, Seed
The seed values are fixed for every profile: with @option{--envseed}, all the profiles have the same seed and without it, each will get a different seed using the system clock (which is accurate to within one microsecond).
The same seed will be used to generate a random number for all the sub-pixel positions of all the profiles.
So in the former, the sub-pixel points checked for all the pixels undergoing Monte Carlo integration in all profiles will be identical.
In other words, the sub-pixel points in the first (closest to the center) pixel of all the profiles will be identical with each other.
All the second pixels studied for all the profiles will also receive an identical (different from the first pixel) set of sub-pixel points and so on.
As long as the number of random points used is large enough or the profiles are not identical, this should not cause any systematic bias.
@node Oversampling, , Sampling from a function, Modeling basics
@subsubsection Oversampling
@cindex Oversampling
The steps explained in @ref{Sampling from a function} do give an accurate representation of a profile prior to convolution.
However, in an actual observation, the image is first convolved with or blurred by the atmospheric and instrument PSF in a continuous space and then it is sampled on the discrete pixels of the camera.
@cindex PSF over-sample
In order to more accurately simulate this process, the unconvolved image and the PSF are created on a finer pixel grid.
In other words, the output image is a certain odd-integer multiple of the desired size; we can call this `oversampling'.
The user can specify this multiple as a command-line option.
The reason this has to be an odd number is that the PSF has to be centered on the center of its image.
An image with an even number of pixels on each side does not have a central pixel.
The image can then be convolved with the PSF (which should also be oversampled on the same scale).
Finally, the image can be sub-sampled to get to the initially desired pixel size of the output image.
After this, mock noise can be added, as explained in the next section.
This is because, unlike the PSF, the noise occurs in each output pixel, not on a continuous space like all the prior steps.
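For example, here is a minimal sketch of this process with Gnuastro's programs (the file names and the oversampling factor of 5 are hypothetical assumptions; the PSF must be over-sampled on the same scale):
@example
$ astmkprof catalog.txt --oversample=5 --output=oversampled.fits
$ astconvolve oversampled.fits --kernel=psf-oversampled.fits \
              --output=convolved.fits
$ astwarp convolved.fits --scale=1/5 --centeroncorner \
          --output=final.fits
@end example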
@node If convolving afterwards, Profile magnitude, Modeling basics, MakeProfiles
@subsection If convolving afterwards
In case you want to convolve the image later with a given point spread function, make sure to use a larger image size.
After convolution, the profiles become larger and a profile that is normally completely outside of the image might fall within it.
On one axis, if you want your final (convolved) image to be @mymath{m} pixels and your PSF is @mymath{2n+1} pixels wide, then when calling MakeProfiles, set the axis size to @mymath{m+2n}, not @mymath{m}.
You also have to shift all the pixel positions of the profile centers on that axis by @mymath{n} pixels in the positive direction.
After convolution, you can crop the outer @mymath{n} pixels with the section crop box specification of Crop: @option{--section=n+1:*-n,n+1:*-n} (according to the FITS standard, counting is from 1 so we use @code{n+1}) assuming your PSF is a square, see @ref{Crop section syntax}.
This will also remove all discrete Fourier transform artifacts (blurred sides) from the final image.
To facilitate this shift, MakeProfiles has the options @option{--xshift}, @option{--yshift} and @option{--prepforconv}, see @ref{Invoking astmkprof}.
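For example, here is a minimal sketch with hypothetical numbers and file names: assume the final image should be 1000 pixels wide on each axis (@mymath{m=1000}) and the PSF is 41 pixels wide (@mymath{n=20}):
@example
$ astmkprof catalog.txt --mergedsize=1040,1040 \
            --xshift=20 --yshift=20 --output=unconvolved.fits
$ astconvolve unconvolved.fits --kernel=psf.fits \
              --output=convolved.fits
$ astcrop convolved.fits --mode=img \
          --section=21:*-20,21:*-20 --output=final.fits
@end example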
@node Profile magnitude, Invoking astmkprof, If convolving afterwards, MakeProfiles
@subsection Profile magnitude
@cindex Truncation radius
@cindex Sum for total flux
To find the profile's total magnitude (see @ref{Brightness flux magnitude}), it is customary to use the 2D integration of the flux to infinity.
However, in MakeProfiles we do not follow this idealistic approach and apply a more realistic method to find the total magnitude: the sum of all the pixels belonging to a profile within its predefined truncation radius.
Note that if the truncation radius is not large enough, this can be significantly different from the total integrated light to infinity.
@cindex Integration to infinity
An integration to infinity is not a realistic condition because no galaxy extends indefinitely (this is important for high S@'ersic index profiles).
Pixelation can also cause a significant difference between the actual total pixel sum of the profile and that of integration to infinity, especially in small and high S@'ersic index profiles.
To be safe, you can specify a large enough truncation radius for such compact high S@'ersic index profiles.
If oversampling is used, then the pixel value is calculated using the over-sampled image (see @ref{Oversampling}), which is much more accurate.
The profile is first built in an array completely bounding it with a normalization constant of unity (see @ref{Galaxies}).
Taking @mymath{V} to be the desired pixel value and @mymath{S} to be the sum of the pixels in the created profile, every pixel is then multiplied by @mymath{V/S} so the sum is exactly @mymath{V}.
If the @option{--individual} option is called, this same array is written to a FITS file.
If not, only the overlapping pixels of this array and the output image are kept and added to the output array.
@node Invoking astmkprof, , Profile magnitude, MakeProfiles
@subsection Invoking MakeProfiles
MakeProfiles will make any number of profiles specified in a catalog either individually or in one image.
The executable name is @file{astmkprof} with the following general template
@example
$ astmkprof [OPTION ...] [Catalog]
@end example
@noindent
One line examples:
@example
## Make an image with profiles in catalog.txt (with default size):
$ astmkprof catalog.txt
## Make the profiles in catalog.txt over image.fits:
$ astmkprof --background=image.fits catalog.txt
## Make a Moffat PSF with FWHM 3pix, beta=2.8, truncation=5
$ astmkprof --kernel=moffat,3,2.8,5 --oversample=1
## Make profiles in catalog, using RA and Dec in the given column:
$ astmkprof --ccol=RA_CENTER --ccol=DEC_CENTER --mode=wcs catalog.txt
## Make a 1500x1500 merged image (a 500x500 canvas, oversampled
## by 3) along with an individual image for each profile in catalog:
$ astmkprof --individual --oversample 3 --mergedsize=500,500 cat.txt
@end example
@noindent
The parameters of the mock profiles can either be given through a catalog (which stores the parameters of many mock profiles, see @ref{MakeProfiles catalog}), or the @option{--kernel} option (see @ref{MakeProfiles output dataset}).
The catalog can be in the FITS ASCII, FITS binary format, or plain text formats (see @ref{Tables}).
A plain text catalog can also be provided using the Standard input (see @ref{Standard input}).
The columns related to each parameter can be determined both by number, or by match/search criteria using the column names, units, or comments, with the options ending in @option{col}, see below.
Without any file given to the @option{--background} option, MakeProfiles will make a zero-valued image and build the profiles on that (its size and main WCS parameters can also be defined through the options described in @ref{MakeProfiles output dataset}).
Besides the main/merged image containing all the profiles in the catalog, it is also possible to build individual images for each profile (only enclosing one full profile to its truncation radius) with the @option{--individual} option.
If an image is given to the @option{--background} option, the pixels of that image are used as the background: the value of each profile pixel will be added to the corresponding pixel of that background image.
You can disable this with the @option{--clearcanvas} option (which will initialize the background to zero-valued pixels and build the profiles over that).
With the @option{--background} option, the values to all options relating to the ``canvas'' (output size and WCS) will be ignored if specified: @option{--oversample}, @option{--mergedsize}, @option{--prepforconv}, @option{--crpix}, @option{--crval}, @option{--cdelt}, @option{--pc}, @option{--cunit} and @option{--ctype}.
The sections below discuss the options specific to MakeProfiles based on context: the input catalog settings (which can have many rows for different profiles) are discussed in @ref{MakeProfiles catalog}; in @ref{MakeProfiles profile settings}, we discuss how you can set general profile settings (that are the same for all the profiles in the catalog).
Finally, @ref{MakeProfiles output dataset} and @ref{MakeProfiles log file} discuss the outputs of MakeProfiles and how you can configure them.
Besides these, MakeProfiles also supports all the common Gnuastro program options that are discussed in @ref{Common options}, so please flip through them as well for a more comfortable usage.
When building 3D profiles, there are more degrees of freedom.
Hence, more columns are necessary and all the values related to dimensions (for example, size of dataset in each dimension and the WCS properties) must also have 3 values.
To allow having an independent set of default values for creating 3D profiles, MakeProfiles also installs a @file{astmkprof-3d.conf} configuration file (see @ref{Configuration files}).
You can use this for default 3D profile values.
For example, if you installed Gnuastro with the prefix @file{/usr/local} (the default location, see @ref{Installation directory}), you can benefit from this configuration file by running MakeProfiles like the example below.
As with all configuration files, if you want to customize a given option, call it before the configuration file.
@example
$ astmkprof --config=/usr/local/etc/gnuastro/astmkprof-3d.conf \
            catalog.txt
@end example
@cindex Shell alias
@cindex Alias, shell
@cindex Shell startup
@cindex Startup, shell
To further simplify the process, you can define a shell alias in any startup file (for example, @file{~/.bashrc}, see @ref{Installation directory}).
Assuming that you installed Gnuastro in @file{/usr/local}, you can add this line to the startup file (you may put it all in one line, it is broken into two lines here for fitting within page limits).
@example
alias astmkprof-3d="astmkprof \
        --config=/usr/local/etc/gnuastro/astmkprof-3d.conf"
@end example
@noindent
Using this alias, you can call MakeProfiles with the name @command{astmkprof-3d} (instead of @command{astmkprof}).
It will automatically load the 3D specific configuration file first, and then parse any other arguments, options or configuration files.
You can change the default values in this 3D configuration file by calling them on the command-line as you do with @command{astmkprof}@footnote{Recall that for single-invocation options, the last command-line invocation takes precedence over all previous invocations (including those in the 3D configuration file).
See the description of @option{--config} in @ref{Operating mode options}.}.
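For example, the hypothetical call below (the catalog name is arbitrary) uses the alias defined above, while setting @option{--tolerance} on the command-line so it takes precedence over the value in the 3D configuration file:

@example
$ astmkprof-3d --tolerance=0.01 cat3d.txt
@end example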
Please see @ref{Sufi simulates a detection} for a very complete tutorial explaining how one could use MakeProfiles in conjunction with Gnuastro's other programs to make a complete simulated image of a mock galaxy.
@menu
* MakeProfiles catalog:: Required catalog properties.
* MakeProfiles profile settings:: Configuration parameters for all profiles.
* MakeProfiles output dataset:: The canvas/dataset to build profiles over.
* MakeProfiles log file:: A description of the optional log file.
@end menu
@node MakeProfiles catalog, MakeProfiles profile settings, Invoking astmkprof, Invoking astmkprof
@subsubsection MakeProfiles catalog
The catalog containing information about each profile can be in the FITS ASCII, FITS binary, or plain text formats (see @ref{Tables}).
The latter can also be provided using standard input (see @ref{Standard input}).
Its columns can be ordered in any desired manner.
You can specify which columns belong to which parameters using the set of options discussed below.
For example, through the @option{--rcol} and @option{--tcol} options, you can specify the column that contains the radial parameter for each profile and its truncation respectively.
See @ref{Selecting table columns} for a thorough discussion on the values to these options.
The value for the profile center in the catalog (the @option{--ccol} option) can be a floating point number so the profile center can be on any sub-pixel position.
Note that pixel positions in the FITS standard start from 1 and an integer is the pixel center.
So a 2D image actually starts from the position (0.5, 0.5), which is the bottom-left corner of the first pixel.
When a @option{--background} image with WCS information is provided, or you specify the WCS parameters with the respective options@footnote{The options to set the WCS are the following: @option{--crpix}, @option{--crval}, @option{--cdelt}, @option{--pc}, @option{--cunit} and @option{--ctype}.
Just recall that these options are only used if @option{--background} is not given: if the image you give to @option{--background} does not have WCS, these options will not be used and you cannot use WCS-mode coordinates like RA or Dec.}, you may also use RA and Dec to identify the center of each profile (see the @option{--mode} option below).
In MakeProfiles, profile centers do not have to be in (overlap with) the final image.
Even if only one pixel of the profile within the truncation radius overlaps with the final image size, the profile is built and included in the final image.
Profiles that are completely out of the image will not be created (unless you explicitly ask for it with the @option{--individual} option).
You can use the output log file (created with @option{--log}) to see which profiles were within the image, see @ref{Common options}.
If PSF profiles (Moffat or Gaussian, see @ref{PSF}) are in the catalog and the profiles are to be built in one image (when @option{--individual} is not used), it is assumed they are the PSF(s) you want to convolve your created image with.
So by default, they will not be built in the output image but as separate files.
The sum of pixels of these separate files will also be set to unity (1) so you are ready to convolve, see @ref{Convolution process}.
As a summary, the position and magnitude of the PSF profile(s) will be ignored.
This behavior can be disabled with the @option{--psfinimg} option.
If you want to create all the profiles separately (with @option{--individual}) and you want the sum of the PSF profile pixels to be unity, you have to set their magnitudes in the catalog to the zero point magnitude and be sure that the central positions of the profiles do not have any fractional part (the PSF center has to be in the center of the pixel).
The list of options directly related to the input catalog columns is shown below.
@table @option
@item --ccol=STR/INT
Center coordinate column for each dimension.
This option must be called two times to define the center coordinates in an image.
For example, @option{--ccol=RA} and @option{--ccol=DEC} (along with @option{--mode=wcs}) will inform MakeProfiles to look into the catalog columns named @option{RA} and @option{DEC} for the Right Ascension and Declination of the profile centers.
@item --fcol=INT/STR
The functional form of the profile with one of the values below depending on the desired profile.
The column can contain either the numeric codes (for example, `@code{1}') or string characters (for example, `@code{sersic}').
The numeric codes are easier to use in scripts which generate catalogs with hundreds or thousands of profiles.
The string format can be easier when the catalog is to be written/checked by hand/eye before running MakeProfiles.
It is much more readable and provides a level of documentation.
All Gnuastro's recognized table formats (see @ref{Recognized table formats}) accept string type columns.
To have string columns in a plain text table/catalog, see @ref{Gnuastro text table format}.
@itemize
@item
S@'ersic profile with `@code{sersic}' or `@code{1}'.
@item
Moffat profile with `@code{moffat}' or `@code{2}'.
@item
Gaussian profile with `@code{gaussian}' or `@code{3}'.
@item
Point source with `@code{point}' or `@code{4}'.
@item
Flat profile with `@code{flat}' or `@code{5}'.
@item
Circumference profile with `@code{circum}' or `@code{6}'.
A fixed value will be used for all pixels less than or equal to the truncation radius (@mymath{r_t}) and greater than @mymath{r_t-w} (@mymath{w} is the value to the @option{--circumwidth}).
@item
Radial distance profile with `@code{distance}' or `@code{7}'.
At the lowest level, each pixel only has an elliptical radial distance given the profile's shape and orientation (see @ref{Defining an ellipse and ellipsoid}).
When this profile is chosen, the pixel's elliptical radial distance from the profile center is written as its value.
For this profile, the value in the magnitude column (@option{--mcol}) will be ignored.
You can use this for checks or as a first approximation to define your own higher-level radial function.
In the latter case, just note that the central values are going to be incorrect (see @ref{Sampling from a function}).
@item
Custom radial profile with `@code{custom-prof}' or `@code{8}'.
The values to use for each radial interval should be in the table given to @option{--customtable}.
By default, once the profile is built with the given values, it will be scaled to have a total magnitude that you have requested in the magnitude column of the profile (in @option{--mcol}).
If you want the raw values in the 2D profile (to ignore the magnitude column), use @option{--mcolnocustprof}.
For more, see the description of @option{--customtable} in @ref{MakeProfiles profile settings}.
@item
Azimuthal angle profile with `@code{azimuth}' or `@code{9}'.
Every pixel within the truncation radius will be given its azimuthal angle (in degrees, from 0 to 360) from the major axis.
In combination with the radial distance profile, you can create complex features in polar coordinates, such as tidal tails or tidal shocks (using the Arithmetic program to mix the radius and azimuthal angle through a function to create your desired features); see the sketch after this list.
@item
Custom image with `@code{custom-img}' or `@code{10}'.
The image(s) to use should be given to the @option{--customimg} option (which can be called multiple times for multiple images).
To identify which one of the images (given to @option{--customimg}) should be used, you should specify their counter in the ``radius'' column below.
For more, see the description of @code{custom-img} in @ref{MakeProfiles profile settings}.
@end itemize
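As a rough sketch of the last two profiles above (all file names and the center/shape values below are hypothetical), the commands below build the elliptical radial distance and azimuthal angle images separately, then use Arithmetic's @code{where} operator to blank the half of the radial image with azimuthal angles above 180 degrees:

@example
## Elliptical radial distance image (profile code 7):
$ echo "1 50 50 7 10 1 45 0.7 1 5" \
      | astmkprof --mode=img --mergedsize=100,100 \
                  --oversample=1 --output=rad.fits

## Azimuthal angle image (profile code 9) with the same shape:
$ echo "1 50 50 9 10 1 45 0.7 1 5" \
      | astmkprof --mode=img --mergedsize=100,100 \
                  --oversample=1 --output=azim.fits

## Blank the pixels with azimuthal angles above 180 degrees:
$ astarithmetic rad.fits azim.fits 180 gt nan where -o half.fits
@end example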
@item --rcol=STR/INT
The radius parameter of the profiles.
Effective radius (@mymath{r_e}) if S@'ersic, FWHM if Moffat or Gaussian.
For a custom image profile, this option is not interpreted as a radius, but as a counter (identifying which one of the images given to @option{--customimg} should be used for each row).
@item --ncol=STR/INT
The S@'ersic index (@mymath{n}) or Moffat @mymath{\beta}.
@item --pcol=STR/INT
The position angle (in degrees) of the profiles relative to the first FITS axis (horizontal when viewed in SAO DS9).
When building a 3D profile, this is the first Euler angle: first rotation of the ellipsoid major axis from the first FITS axis (rotating about the third axis).
See @ref{Defining an ellipse and ellipsoid}.
@item --p2col=STR/INT
Second Euler angle (in degrees) when building a 3D ellipsoid.
This is the second rotation of the ellipsoid major axis (following @option{--pcol}) about the (rotated) X axis.
See @ref{Defining an ellipse and ellipsoid}.
This column is ignored when building a 2D profile.
@item --p3col=STR/INT
Third Euler angle (in degrees) when building a 3D ellipsoid.
This is the third rotation of the ellipsoid major axis (following @option{--pcol} and @option{--p2col}) about the (rotated) Z axis.
See @ref{Defining an ellipse and ellipsoid}.
This column is ignored when building a 2D profile.
@item --qcol=STR/INT
The axis ratio of the profiles (minor axis divided by the major axis in a 2D ellipse).
When building a 3D ellipse, this is the ratio of the major axis to the semi-axis length of the second dimension (in a right-handed coordinate system).
See @mymath{q1} in @ref{Defining an ellipse and ellipsoid}.
@item --q2col=STR/INT
The ratio of the ellipsoid major axis to the third semi-axis length (in a right-handed coordinate system) of a 3D ellipsoid.
See @mymath{q2} in @ref{Defining an ellipse and ellipsoid}.
This column is ignored when building a 2D profile.
@item --mcol=STR/INT
The total pixelated magnitude of the profile within the truncation radius, see @ref{Profile magnitude}.
@item --tcol=STR/INT
The truncation radius of this profile.
By default it is in units of the radial parameter of the profile (the value in the @option{--rcol} of the catalog).
If @option{--tunitinp} is given, this value is interpreted in units of pixels (prior to oversampling) irrespective of the profile.
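For example, if the radial parameter of a row is 5 pixels and its truncation column has a value of 3, the profile will be built out to a radius of 15 pixels (@mymath{3\times5}); with @option{--tunitinp}, the same profile would instead be truncated at a radius of 3 pixels.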
@end table
@node MakeProfiles profile settings, MakeProfiles output dataset, MakeProfiles catalog, Invoking astmkprof
@subsubsection MakeProfiles profile settings
The profile parameters that differ between each created profile are specified through the columns in the input catalog and described in @ref{MakeProfiles catalog}.
Besides those, there are general settings for some profiles that do not differ between one profile and another: they are a property of the overall process.
For example, the number of random points to use in the Monte Carlo integration is fixed for all the profiles.
The options described in this section are for configuring such properties.
@table @option
@item --mode=STR
Interpret the center position columns (@option{--ccol} in @ref{MakeProfiles catalog}) in image or WCS coordinates.
This option thus accepts only two values: @option{img} and @option{wcs}.
It is mandatory when a catalog is being used as input.
@item -r INT
@itemx --numrandom=INT
The number of random points used in the central regions of the profile, see @ref{Sampling from a function}.
@item -e
@itemx --envseed
@cindex Seed, Random number generator
@cindex Random number generator, Seed
Use the value to the @code{GSL_RNG_SEED} environment variable to generate the random Monte Carlo sampling distribution, see @ref{Sampling from a function} and @ref{Generating random numbers}.
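For example, the sketch below (the catalog name is hypothetical) fixes the seed through the environment, so repeated runs will produce identical Monte Carlo sampling:

@example
$ env GSL_RNG_SEED=1623 astmkprof --envseed catalog.txt
@end example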
@item -t FLT
@itemx --tolerance=FLT
The tolerance to switch from Monte Carlo integration to the central pixel value, see @ref{Sampling from a function}.
@item -p
@itemx --tunitinp
The truncation column of the catalog is in units of pixels.
By default, the truncation column is considered to be in units of the radial parameters of the profile (@option{--rcol}).
Read it as `t-unit-in-p' for `truncation unit in pixels'.
@item -f
@itemx --mforflatpix
When making fixed value profiles (``flat'', ``circumference'' or ``point'' profiles, see `@option{--fcol}'), do not use the value in the column specified by `@option{--mcol}' as the magnitude.
Instead use it as the exact value that all the pixels of these profiles should have.
This option is irrelevant for other types of profiles.
This option is very useful for creating masks, or labeled regions in an image.
Any integer or floating point value can be used in this column with this option, including @code{NaN} (or `@code{nan}', or `@code{NAN}', case is irrelevant), and infinities (@code{inf}, @code{-inf}, or @code{+inf}).
For example, with this option if you set the value in the magnitude column (@option{--mcol}) to @code{NaN}, you can create an elliptical or circular mask over an image (which can be given as the argument), see @ref{Blank pixels}.
Another useful application of this option is to create labeled elliptical or circular apertures in an image.
To do this, set the value in the magnitude column to the label you want for this profile.
This labeled image can then be used in combination with NoiseChisel's output (see @ref{NoiseChisel output}) to do aperture photometry with MakeCatalog (see @ref{MakeCatalog}).
Alternatively, if you want to mark regions of the image (for example, with an elliptical circumference) and you do not want to use NaN values (as explained above) for some technical reason, you can get the minimum or maximum value in the image@footnote{
The minimum will give a better result, because the maximum can be too high compared to most pixels in the image, making it harder to display.}
using Arithmetic (see @ref{Arithmetic}), then use that value in the magnitude column along with this option for all the profiles.
Please note that when using MakeProfiles on an already existing image, you have to set `@option{--oversample=1}'.
Otherwise all the profiles will be scaled up based on the oversampling scale in your configuration files (see @ref{Configuration files}) unless you have accounted for oversampling in your catalog.
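For example, the hypothetical sketch below (positions, sizes and labels are arbitrary) builds two labeled elliptical apertures over a blank canvas.
The ``magnitude'' column (9th) holds the label of each aperture and @option{--replace} (described below) avoids summing the labels where the apertures might overlap:

@example
$ printf "1 20 20 5 10 0 45 0.7 1 1\n2 60 60 5 7 0 0 1 2 1\n" \
         | astmkprof --mode=img --mforflatpix --replace \
                     --oversample=1 --mergedsize=100,100 \
                     -o apertures.fits
@end example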
@item --mcolissum
The value given in the ``magnitude'' column (specified by @option{--mcol}, see @ref{MakeProfiles catalog}) must be interpreted as the total sum of pixel values, not magnitude (which is measured from the total sum and zero point, see @ref{Brightness flux magnitude}).
When this option is called, the zero point magnitude (value to the @option{--zeropoint} option) is ignored and the given value must have the same units as the input dataset's pixels.
Recall that the total profile magnitude that is specified in the @option{--mcol} column of the input catalog is not an integration to infinity, but the actual sum of pixels in the profile (until the desired truncation radius).
See @ref{Profile magnitude} for more on this point.
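For example, with this option, the hypothetical row below would create a S@'ersic profile whose pixels sum to 1000 (in the units of the dataset's pixels), instead of interpreting 1000 as a magnitude:

@example
$ echo "1 50 50 1 5 2.5 45 0.7 1000 5" \
      | astmkprof --mode=img --mcolissum --oversample=1 \
                  --mergedsize=100,100 -o sum1000.fits
@end example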
@item --mcolnocustprof
Do not touch (re-scale) the custom profile that should be inserted in @code{custom-prof} profile (see the description of @option{--fcol} in @ref{MakeProfiles catalog} or the description of @option{--customtable} below).
By default, MakeProfiles will scale (multiply) the custom image's pixels to have the desired magnitude (or sum of pixels if @option{--mcolissum} is called) in that row.
@item --mcolnocustimg
Do not touch (re-scale) the custom image that should be inserted in @code{custom-img} profile (see the description of @option{--fcol} in @ref{MakeProfiles catalog}).
By default, MakeProfiles will scale (multiply) the custom image's pixels to have the desired magnitude (or sum of pixels if @option{--mcolissum} is called) in that row.
@item --magatpeak
The magnitude column in the catalog (see @ref{MakeProfiles catalog}) will be used to set the value only for the profile's peak (maximum) pixel, not the full profile.
Note that this is the flux of the profile's peak (maximum) pixel in the final output of MakeProfiles.
So beware of the oversampling, see @ref{Oversampling}.
This option can be useful if you want to check a mock profile's total magnitude at various truncation radii.
Without this option, no matter what the truncation radius is, the total magnitude will be the same as that given in the catalog.
But with this option, the total magnitude will become brighter as you increase the truncation radius.
In sharper profiles, the flux of the peak pixel can sometimes be measured more accurately than the overall object sum or magnitude.
In such cases, with this option, the final profile will be built such that its peak has the given magnitude, not the total profile.
@cartouche
@strong{CAUTION:} If you want to use this option for comparing with observations, please note that MakeProfiles does not do convolution.
Unless you have deconvolved your data, your images are convolved with the instrument and atmospheric PSF, see @ref{PSF}.
Particularly in sharper profiles, the flux in the peak pixel is strongly decreased after convolution.
Also note that in such cases, besides deconvolution, you will have to set @option{--oversample=1}; otherwise, after resampling your profile with Warp (see @ref{Warp}), the peak flux will be different.
@end cartouche
@item --customtable FITS/TXT
The filename of the table to use in the custom radial profiles (see description of @option{--fcol} in @ref{MakeProfiles catalog}).
This can be a plain-text or a FITS table (see @ref{Recognized table formats}); if it is a FITS table, you can use @option{--customtablehdu} to specify which HDU should be used (described below).
A custom radial profile can have any value you want for a given radial interval (including NaN/blank values).
Each interval is defined by its minimum (inclusive) and maximum (exclusive) radius: when a pixel center falls within a radius interval, the value specified for that interval will be used.
If a pixel is not in the given intervals, a value of 0.0 will be used for that pixel.
The table should have 3 columns as shown below.
If the intervals are contiguous (the maximum value of the previous interval is equal to the minimum value of an interval) and the intervals all have the same size (difference between minimum and maximum values) the creation of these profiles will be fast.
However, if the intervals are not sorted and contiguous, MakeProfiles will parse the intervals from the top of the table and use the first interval that contains the pixel center (this may slow it down).
@table @asis
@item Column 1:
The interval's minimum radius.
@item Column 2:
The interval's maximum radius.
@item Column 3:
The value to be used for pixels within the given interval (including NaN/blank).
@end table
Gnuastro's column arithmetic in the Table program has the @code{sorted-to-interval} operator that will generate the first two columns from a single column (your radial profile).
See the description of that operator in @ref{Column arithmetic} and the example below.
By default, once a 2D image is constructed for the radial profile, it will be scaled such that its total magnitude corresponds to the value in the magnitude column (@option{--mcol}) of the main input catalog.
If you want to disable the scaling and use the raw values in your custom profile (in other words: you want to ignore the magnitude column) you need to call @option{--mcolnocustprof} (see above).
In the example below, we'll start with a certain radial profile, and use this option to build its 2D representation in an image (recall that you can build radial profiles with @ref{Generate radial profile}).
But first, we will need to use the @code{sorted-to-interval} to build the necessary input format (see @ref{Column arithmetic}).
@example
$ cat radial.txt
# Column 1: RADIUS [pix ,f32,] Radial distance
# Column 2: MEAN [input-units,f32,] Mean of values.
0.0 1.00000
1.0 0.50184
1.4 0.37121
2.0 0.26414
2.2 0.23427
2.8 0.17868
3.0 0.16627
3.1 0.15567
3.6 0.13132
4.0 0.11404
## Convert the radius in each row to an interval
$ asttable radial.txt --output=interval.fits \
           -c'arith RADIUS sorted-to-interval',MEAN
## Inspect the table containing intervals
$ asttable interval.fits -ffixed
-0.500000 0.500000 1.000000
0.500000 1.200000 0.501840
1.200000 1.700000 0.371210
1.700000 2.100000 0.264140
2.100000 2.500000 0.234270
2.500000 2.900000 0.178680
2.900000 3.050000 0.166270
3.050000 3.350000 0.155670
3.350000 3.800000 0.131320
3.800000 4.200000 0.114040
## Build the 2D image of the profile from the interval.
$ echo "1 7 7 8 10 2.5 0 1 1 2" \
| astmkprof --mergedsize=13,13 --oversample=1 \
--customtable=interval.fits \
--output=image.fits
## View the created FITS image.
$ astscript-fits-view image.fits --ds9scale=minmax
@end example
Recall that if you want your image pixels to have the same values as the @code{MEAN} column in your profile, you should run MakeProfiles with @option{--mcolnocustprof}.
In case you want to build the profile using @ref{Generate radial profile}, be sure to use the @option{--oversample} option of @command{astscript-radial-profile}.
The higher the oversampling, the better your result will be.
For example you can run the following script to see the effect (also see @url{https://savannah.gnu.org/bugs/?65106, bug 65106}).
But don't take the oversampling too high: both the radial profile script and MakeProfiles will become slower and the precision of your results will decrease.
@verbatim
#!/bin/bash

# Function to avoid repeating code: first generate a radial profile
# with a certain oversampling, then build a 2D profile from it.
# The first argument is the oversampling, the second is the suffix.
gen_rad_make_2dprf () {
    # Generate the radial profile.
    radraw=$bdir/radial-profile-$2.fits
    astscript-radial-profile $prof -o$radraw \
                             --oversample=$1 \
                             --zeroisnotblank

    # Generate the custom table format.
    custraw=$bdir/customtable-$2.fits
    asttable $radraw -c'arith RADIUS sorted-to-interval',MEAN \
             -o$custraw

    # Build the 2D profile.
    prof2draw=$bdir/prof2d-$2.fits
    echo "1 $xc $yc 8 30 0 0 1 0 1" \
        | astmkprof --customtable=$custraw \
                    --mergedsize=$xw,$yw \
                    --output=$prof2draw \
                    --mcolnocustprof \
                    --oversample=1 \
                    --clearcanvas \
                    --mode=img
}

# Directory to hold built files.
bdir=build
if ! [ -d $bdir ]; then mkdir $bdir; fi

# Build a Gaussian profile in the center of an image to start with.
prof=$bdir/prof.fits
astmkprof --kernel=gaussian,2,5 -o$prof

# Find the center pixel of the image.
xw=$(astfits $prof --keyvalue=NAXIS1 --quiet)
yw=$(astfits $prof --keyvalue=NAXIS2 --quiet)
xc=$(echo $xw | awk '{print int($1/2)+1}')
yc=$(echo $yw | awk '{print int($1/2)+1}')

# Generate two 2D radial profiles, one with an oversampling of 1
# and another with an oversampling of 5.
gen_rad_make_2dprf 1 "raw"
gen_rad_make_2dprf 5 "oversample"

# View the two images beside each other:
astscript-fits-view $bdir/prof2d-raw.fits \
                    $bdir/prof2d-oversample.fits
@end verbatim
@item --customtablehdu INT/STR
The HDU/extension in the FITS file given to @option{--customtable}.
@item --customimg=STR[,STR]
A custom FITS image that should be used for the @code{custom-img} profiles (see the description of @option{--fcol} in @ref{MakeProfiles catalog}).
Multiple files can be given to this option (separated by a comma), and this option can be called multiple times itself (useful when many custom image profiles should be added).
If the HDU of the images are different, you can use @option{--customimghdu} (described below).
Through the ``radius'' column, MakeProfiles will know which one of the images given to this option should be used in each row.
For example, let's assume your input catalog (@file{cat.fits}) has the following contents (output of first command below), and you call MakeProfiles like the second command below to insert four profiles into the background @file{back.fits} image.
The first profile below is a S@'ersic profile (with an @option{--fcol}, or 4-th column, code of @code{1}).
So MakeProfiles builds the pixels of the first profile, and all column values are meaningful.
However, the second, third and fourth inserted objects are custom images (with an @option{--fcol} code of @code{10}).
For the custom image profiles, you see that the radius column has values of @code{1} or @code{2}.
This tells MakeProfiles to use the first image given to @option{--customimg} (or @file{gal-1.fits}) for the second and fourth inserted objects.
The second image given to @option{--customimg} (or @file{gal-2.fits}) will be used for the third inserted object.
Finally, all three custom image profiles have different magnitudes, and the values in @option{--ncol}, @option{--pcol}, @option{--qcol} and @option{--tcol} are ignored.
@example
$ asttable cat.fits
1 53.15506 -27.785165 1 20 1 20 0.6 25 5
2 53.15602 -27.777887 10 1 0 0 0 22 0
3 53.16440 -27.775876 10 2 0 0 0 24 0
4 53.16849 -27.787406 10 1 0 0 0 23 0
$ astmkprof cat.fits --mode=wcs --zeropoint=25.68 \
            --background=back.fits --output=out.fits \
            --customimg=gal-1.fits --customimg=gal-2.fits
@end example
@item --customimghdu=INT/STR
The HDU(s) of the images given to @option{--customimg}.
If this option is only called once, but @option{--customimg} is called many times, MakeProfiles will assume that all images given to @option{--customimg} have the same HDU.
Otherwise (if the number of HDUs is equal to the number of images), then each image will use its corresponding HDU.
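For example, continuing the hypothetical @option{--customimg} command above, if @file{gal-1.fits} keeps its image in HDU number 1, but @file{gal-2.fits} keeps it in an extension named @code{IMG}, a sketch like the following could be used:

@example
$ astmkprof cat.fits --mode=wcs --zeropoint=25.68 \
            --background=back.fits --output=out.fits \
            --customimg=gal-1.fits --customimg=gal-2.fits \
            --customimghdu=1 --customimghdu=IMG
@end example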
@item -X INT,INT
@itemx --shift=INT,INT
Shift all the profiles and enlarge the image along each dimension.
To better understand this option, please see @mymath{n} in @ref{If convolving afterwards}.
This is useful when you want to convolve the image afterwards.
If you are using an external PSF, be sure to oversample it to the same scale used for creating the mock images.
If a background image is specified, any possible value to this option is ignored.
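For example, the hypothetical call below will enlarge the output canvas and shift all profile centers by 20 pixels along each dimension, so a later convolution does not suffer from edge effects:

@example
$ astmkprof catalog.txt --mergedsize=200,200 --shift=20,20
@end example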
@item -c
@itemx --prepforconv
Shift all the profiles and enlarge the image based on half the width of the first Moffat or Gaussian profile in the catalog, considering any possible oversampling (see @ref{If convolving afterwards}).
@option{--prepforconv} is only checked and possibly activated if both values to @option{--shift} are zero (after reading the command-line and configuration files).
If a background image is specified, any possible value to this option is ignored.
@item -z FLT
@itemx --zeropoint=FLT
The zero point magnitude of the input.
For more on the zero point magnitude, see @ref{Brightness flux magnitude}.
@item -w FLT
@itemx --circumwidth=FLT
The width of the circumference if the profile is to be an elliptical circumference or annulus.
See the explanations for this type of profile in @option{--fcol}.
@item -R
@itemx --replace
Do not add the pixels of each profile over the background or other profiles; replace the values instead.
By default, when two profiles overlap, the final pixel value is the sum of all the profiles that overlap on that pixel.
This is the expected situation when dealing with physical object profiles like galaxies or stars/PSF.
However, when MakeProfiles is used to build integer labeled images (for example, in @ref{Aperture photometry}), this is not the expected situation: the sum of two labels will be a new label.
With this option, the pixels are not added but the largest (maximum) value over that pixel is used.
Because the maximum operator is independent of the order of values, the output is also thread-safe.
@end table
@node MakeProfiles output dataset, MakeProfiles log file, MakeProfiles profile settings, Invoking astmkprof
@subsubsection MakeProfiles output dataset
MakeProfiles takes an input catalog and uses the basic properties defined there to build a dataset, for example, a 2D image containing the profiles in the catalog.
In @ref{MakeProfiles catalog} and @ref{MakeProfiles profile settings}, the catalog and profile settings were discussed.
The options of this section allow you to configure the output dataset (or the canvas that will host the built profiles).
@table @option
@item -k FITS
@itemx --background=FITS
A background image FITS file to build the profiles on.
The extension that contains the image should be specified with the @option{--backhdu} option, see below.
When a background image is specified, it will be used to derive all the information about the output image.
Hence, the following options will be ignored: @option{--mergedsize}, @option{--oversample}, @option{--crpix}, @option{--crval} (generally, all other WCS related parameters) and the output's data type (see @option{--type} in @ref{Input output options}).
The background image will act like a canvas to build the profiles on: profile pixel values will be summed with the background image pixel values.
With the @option{--replace} option you can disable this behavior and replace the profile pixels with the background pixels.
If you want to use all the image information above, except for the pixel values (you want to have a blank canvas to build the profiles on, based on an input image), you can call @option{--clearcanvas}, to set all the input image's pixels to zero before starting to build the profiles over it (this is done in memory after reading the input, so nothing will happen to your input file).
@item -B STR/INT
@itemx --backhdu=STR/INT
The header data unit (HDU) of the file given to @option{--background}.
@item -C
@itemx --clearcanvas
When an input image is specified (with the @option{--background} option), set all its pixels to 0.0 immediately after reading it into memory.
Effectively, this will allow you to use all its properties (described under the @option{--background} option), without having to worry about the pixel values.
@option{--clearcanvas} can come in handy in many situations, for example, if you want to create a labeled image (segmentation map) for creating a catalog (see @ref{MakeCatalog}).
In other cases, you might have modeled the objects in an image and want to create them on the same frame, but without the original pixel values.
@item -E STR/INT,FLT[,FLT,[...]]
@itemx --kernel=STR/INT,FLT[,FLT,[...]]
Only build one kernel profile with the parameters given as the values to this option.
The different values must be separated by a comma (@key{,}).
The first value identifies the radial function of the profile, either through a string or through a number (see description of @option{--fcol} in @ref{MakeProfiles catalog}).
Each radial function needs a different total number of parameters: the S@'ersic and Moffat functions need 3 parameters (radial parameter, S@'ersic index or Moffat @mymath{\beta}, and truncation radius).
The Gaussian function needs two parameters: radial and truncation radius.
The point function does not need any parameters and flat and circumference profiles just need one parameter (truncation radius).
The PSF or kernel is a unique (and highly constrained) type of profile: the sum of its pixels must be one, its center must be the center of the central pixel (in an image with an odd number of pixels on each side), and commonly it is circular, so its axis ratio and position angle are one and zero respectively.
Kernels are commonly necessary for various data analysis and data manipulation steps (for example, see @ref{Convolve} and @ref{NoiseChisel}).
Because of this it is inconvenient to define a catalog with one row and many zero valued columns (for all the non-necessary parameters).
Hence, with this option, it is possible to create a kernel with MakeProfiles without the need to create a catalog.
Here are some examples:
@table @option
@item --kernel=moffat,3,2.8,5
A Moffat kernel with FWHM of 3 pixels, @mymath{\beta=2.8} which is truncated at 5 times the FWHM.
@item --kernel=gaussian,2,3
A circular Gaussian kernel with FWHM of 2 pixels and truncated at 3 times
the FWHM.
@end table
This option may also be used to create a 3D kernel.
To do that, two small modifications are necessary: add a @code{-3d} (or @code{-3D}) to the profile name (for example, @code{moffat-3d}) and add a number (axis-ratio along the third dimension) to the end of the parameters for all profiles except @code{point}.
The main reason behind providing an axis ratio in the third dimension is that in 3D astronomical datasets, commonly the third dimension does not have the same nature (units/sampling) as the first and second.
For example, in IFU (optical) or radio data cubes, the first and second dimensions are commonly spatial/angular positions (like RA and Dec) but the third dimension is wavelength or frequency (in units of Angstroms or Hertz).
Because of this different nature (which also affects the processing), it may be necessary for the kernel to have a different extent in that direction.
If the 3rd dimension axis ratio is equal to @mymath{1.0}, then the kernel will be a spheroid.
If it is smaller than @mymath{1.0}, the kernel will be button-shaped: extended less in the third dimension.
However, when it is larger than @mymath{1.0}, the kernel will be bullet-shaped: extended more in the third dimension.
In the latter case, the radial parameter will correspond to the length along the 3rd dimension.
For example, let's have a look at the two examples above but in 3D:
@table @option
@item --kernel=moffat-3d,3,2.8,5,0.5
An ellipsoid Moffat kernel with FWHM of 3 pixels, @mymath{\beta=2.8} which is truncated at 5 times the FWHM.
The ellipsoid is circular in the first two dimensions, but in the third dimension its extent is half the first two.
@item --kernel=gaussian-3d,2,3,1
A spherical Gaussian kernel with FWHM of 2 pixels and truncated at 3 times
the FWHM.
@end table
Of course, if a specific kernel is needed that does not fit the constraints imposed by this option, you can always use a catalog to define any arbitrary kernel.
Just call the @option{--individual} and @option{--nomerged} options to make sure that it is built as a separate file (individually) and no ``merged'' image of the input profiles is created.
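As a rough sketch of that last point (all values below are hypothetical), the single-row catalog below defines an elliptical Moffat profile that is only built individually.
Following the discussion of PSFs in @ref{MakeProfiles catalog}, the magnitude column is set to the zero point (explicitly given as 0 here) and the center has no fractional part, so the pixels of the output (written to @file{0_ellipkernel.fits}) sum to unity:

@example
$ echo "1 1 1 2 3 2.8 45 0.7 0 5" \
      | astmkprof --mode=img --individual --nomerged \
                  --zeropoint=0 --oversample=1 -o ellipkernel.fits
@end example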
@item -x INT,INT
@itemx --mergedsize=INT,INT
The number of pixels along each axis of the output, in FITS order.
This is before over-sampling.
For example, if you call MakeProfiles with @option{--mergedsize=100,150 --oversample=5} (assuming no shift due to later convolution), then the final image will have 500 pixels along the first axis and 750 along the second.
Fractions are acceptable as values for each dimension, however, they must reduce to an integer, so @option{--mergedsize=150/3,300/3} is acceptable but @option{--mergedsize=150/4,300/4} is not.
When viewing a FITS image in DS9, the first FITS dimension is in the horizontal direction and the second is vertical.
As an example, the image created with the example above will have 500 pixels horizontally and 750 pixels vertically.
If a background image is specified, this option is ignored.
@item -s INT
@itemx --oversample=INT
The scale to over-sample the profiles and final image.
If the given value is not an odd number, it will be incremented by one, see @ref{Oversampling}.
Note that this @option{--oversample} will remain active even if an input image is specified.
If your input catalog is based on the background image, be sure to set @option{--oversample=1}.
@item --psfinimg
Build the possibly existing PSF profiles (Moffat or Gaussian) in the catalog into the final image.
By default they are built separately so you can convolve your images with them, thus their magnitude and positions are ignored.
With this option, they will be built in the final image like every other galaxy profile.
To have a final PSF in your image, make a point profile where you want the PSF and after convolution it will be the PSF.
@item -i
@itemx --individual
@cindex Individual profiles
@cindex Build individual profiles
If this option is called, each profile is created in a separate FITS file within the same directory as the output, with the row number of the profile (starting from zero) in the name.
The file for each row's profile will be in the same directory as the final combined image of all the profiles and will have the final image's name as a suffix.
So for example, if the final combined image is named @file{./out/fromcatalog.fits}, then the first profile that will be created with this option will be named @file{./out/0_fromcatalog.fits}.
Since each image only has one full profile out to the truncation radius, the profile is centered, and so only the sub-pixel position of the profile center is important for the outputs of this option.
The output will have an odd number of pixels.
If there is no oversampling, the central pixel will contain the profile center.
If the value to @option{--oversample} is larger than unity, then the profile center is on any of the central @option{--oversample}'d pixels depending on the fractional value of the profile center.
If the fractional value is larger than half, it is on the bottom half of the central region.
This is due to the FITS definition of a real number position: The center of a pixel has fractional value @mymath{0.00} so each pixel contains these fractions: .5 -- .75 -- .00 (pixel center) -- .25 -- .5.
@item -m
@itemx --nomerged
Do not make a merged image.
By default after making the profiles, they are added to a final image with side lengths specified by @option{--mergedsize} if they overlap with it.
@end table
@noindent
The options below can be used to define the world coordinate system (WCS) properties of the MakeProfiles outputs.
The option names are deliberately chosen to be the same as the FITS standard WCS keywords.
See Section 8 of @url{https://doi.org/10.1051/0004-6361/201015362, Pence et al [2010]} for a short introduction to WCS in the FITS standard@footnote{The world coordinate standard in FITS is a very beautiful and powerful concept to link/associate datasets with the outside world (other datasets).
The description in the FITS standard (link above) only touches the tip of the iceberg.
To learn more please see @url{https://doi.org/10.1051/0004-6361:20021326, Greisen and Calabretta [2002]}, @url{https://doi.org/10.1051/0004-6361:20021327, Calabretta and Greisen [2002]}, @url{https://doi.org/10.1051/0004-6361:20053818, Greisen et al. [2006]}, and @url{http://www.atnf.csiro.au/people/mcalabre/WCS/dcs_20040422.pdf, Calabretta et al.}}.
If you look into the headers of a FITS image with WCS for example, you will see all these names but in uppercase and with numbers to represent the dimensions, for example, @code{CRPIX1} and @code{PC2_1}.
You can see the FITS headers with Gnuastro's @ref{Fits} program using a command like this: @command{$ astfits -p image.fits}.
If the values given to any of these options do not correspond to the number of dimensions in the output dataset, then no WCS information will be added.
Also recall that if you use the @option{--background} option, all of these options are ignored: if the image given to @option{--background} does not have any WCS, the output of MakeProfiles will also not have any WCS, even if these options are given@footnote{If you want to add profiles @emph{and} WCS over the background image (to produce your output), you need more than one command:
1. You should use @option{--mergedsize} in MakeProfiles to manually set the output number of pixels equal to your desired background image (so the background is zero).
In this mode, you can use these WCS-related options to define the WCS.
2. Then use Arithmetic to add the pixels of your mock image to the background (see @ref{Arithmetic}).}.
@table @option
@item --crpix=FLT,FLT
The pixel coordinates of the WCS reference point.
Fractions are acceptable for the values of this option.
@item --crval=FLT,FLT
The WCS coordinates of the reference point.
Fractions are acceptable for the values of this option.
The comma-separated values can either be in degrees (a single number), or sexagesimal (@code{_h_m_} for RA, @code{_d_m_} for Dec, or @code{_:_:_} for both).
In any case, the final value that will be written in the @code{CRVAL} keyword will be a floating point number in degrees (according to the FITS standard).
@item --cdelt=FLT,FLT
The resolution (size of one data-unit or pixel in WCS units) of the non-oversampled dataset.
Fractions are acceptable for the values of this option.
@item --pc=FLT,FLT,FLT,FLT
The PC matrix of the WCS rotation, see the FITS standard (link above) to better understand the PC matrix.
@item --cunit=STR,STR
The units of each WCS axis, for example, @code{deg}.
Note that these values are part of the FITS standard (link above).
MakeProfiles will not complain if you use non-standard values, but later usage of them might cause trouble.
@item --ctype=STR,STR
The type of each WCS axis, for example, @code{RA---TAN} and @code{DEC--TAN}.
Note that these values are part of the FITS standard (link above).
MakeProfiles will not complain if you use non-standard values, but later usage of them might cause trouble.
@end table
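@noindent
For example, the hypothetical sketch below uses several of the options above to define a simple TAN-projection WCS on the output canvas (all numerical values are arbitrary):

@example
$ astmkprof catalog.txt --mergedsize=1000,1000 \
            --crpix=500.5,500.5 --crval=53.16,-27.78 \
            --cdelt=0.0002,0.0002 --pc=-1,0,0,1 \
            --cunit=deg,deg --ctype=RA---TAN,DEC--TAN
@end example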
@node MakeProfiles log file, , MakeProfiles output dataset, Invoking astmkprof
@subsubsection MakeProfiles log file
Besides the final merged dataset of all the profiles, or the individual datasets (see @ref{MakeProfiles output dataset}), if the @option{--log} option is called, MakeProfiles will also create a log file in the current directory (where you run MakeProfiles).
See @ref{Common options} for a full description of @option{--log} and other options that are shared between all Gnuastro programs.
The values for each column are explained in the first few commented lines of the log file (starting with the @code{#} character).
Here is a more complete description.
@itemize
@item
An ID (row number of profile in input catalog).
@item
The total magnitude of the profile in the output dataset.
When the profile does not completely overlap with the output dataset, this will be different from your input magnitude.
@item
The number of pixels (in the oversampled image) which used Monte Carlo integration and not the central pixel value, see @ref{Sampling from a function}.
@item
The fraction of flux in the Monte Carlo integrated pixels.
@item
If an individual image was created, this column will have a value of @code{1}, otherwise it will have a value of @code{0}.
@end itemize
@node High-level calculations, Installed scripts, Data modeling, Top
@chapter High-level calculations
After the reduction of raw data (for example, with the programs in @ref{Data manipulation}) you will have reduced images/data ready for processing/analyzing (for example, with the programs in @ref{Data analysis}).
But the processed/analyzed data (or catalogs) are still not enough to derive any scientific result.
Even higher-level analysis is still needed to convert the observed magnitudes, sizes or volumes into physical quantities that we associate with each catalog entry or detected object; that is the purpose of the tools in this section.
@menu
* CosmicCalculator:: Calculate cosmological variables
@end menu
@node CosmicCalculator, , High-level calculations, High-level calculations
@section CosmicCalculator
To derive higher-level information regarding our sources in extra-galactic astronomy, cosmological calculations are necessary.
In Gnuastro, CosmicCalculator is in charge of such calculations.
Before discussing how CosmicCalculator is called and operates (in @ref{Invoking astcosmiccal}), it is important to provide a rough but mostly self-sufficient review of the basics and the equations used in the analysis.
In @ref{Distance on a 2D curved space}, the basic idea of understanding distances in a curved and expanding 2D universe (which we can visualize) is reviewed.
Having solidified the concepts there, in @ref{Extending distance concepts to 3D}, the formalism is extended to the 3D universe we are trying to study in our research.
The focus here is obtaining a physical insight into these equations (mainly for the use in real observational studies).
There are many books that thoroughly derive and prove all the equations with all possible initial conditions and assumptions for any abstract universe; interested readers can study those books.
@menu
* Distance on a 2D curved space:: Distances in 2D for simplicity.
* Extending distance concepts to 3D:: Going to 3D (our real universe).
* Invoking astcosmiccal:: How to run CosmicCalculator.
@end menu
@node Distance on a 2D curved space, Extending distance concepts to 3D, CosmicCalculator, CosmicCalculator
@subsection Distance on a 2D curved space
The observations to date (for example, the Planck 2015 results) have not measured@footnote{The observations are interpreted under the assumption of uniform curvature.
For a relativistic alternative to dark energy (and maybe also some part of dark matter), non-uniform curvature may even be more critical, but that is beyond the scope of this brief explanation.} the presence of significant curvature in the universe.
However, to be generic (and to allow its measurement if it does in fact exist), it is very important to create a framework that allows non-zero uniform curvature.
However, this section is not intended to be a fully thorough and mathematically complete derivation of these concepts.
There are many references available for such reviews that go deep into the abstract mathematical proofs.
The emphasis here is on visualization of the concepts for a beginner.
As 3D beings, it is difficult for us to mentally create (visualize) a picture of the curvature of a 3D volume.
Hence, here we will assume a 2D surface/space and discuss distances on that 2D surface when it is flat and when it is curved.
Once the concepts have been created/visualized here, we will extend them, in @ref{Extending distance concepts to 3D}, to a real 3D spatial @emph{slice} of the Universe we live in and hope to study.
To be more understandable (actively discuss from an observer's point of view) let's assume there's an imaginary 2D creature living on the 2D space (which @emph{might} be curved in 3D).
Here, we will be working with this creature in its efforts to analyze distances in its 2D universe.
The start of the analysis might seem too mundane, but since it is difficult to imagine a 3D curved space, it is important to review all the very basic concepts thoroughly for an easy transition to a universe that is more difficult to visualize (a curved 3D space embedded in 4D).
To start, let's assume a static (not expanding or shrinking), flat 2D surface similar to @ref{flatplane} and that the 2D creature is observing its universe from point @mymath{A}.
One of the most basic ways to parameterize this space is through the Cartesian coordinates (@mymath{x}, @mymath{y}).
In @ref{flatplane}, the basic axes of these two coordinates are plotted.
An infinitesimal change in the direction of each axis is written as @mymath{dx} and @mymath{dy}.
For each point, the infinitesimal changes are parallel with the respective axes and are not shown for clarity.
Another very useful way of parameterizing this space is through polar coordinates.
For each point, we define a radius (@mymath{r}) and angle (@mymath{\phi}) from a fixed (but arbitrary) reference axis.
In @ref{flatplane} the infinitesimal changes for each polar coordinate are plotted for a random point and a dashed circle is shown for all points with the same radius.
@float Figure,flatplane
@center@image{gnuastro-figures/flatplane, 10cm, , }
@caption{Two dimensional Cartesian and polar coordinates on a flat plane.}
@end float
Assuming an object is placed at a certain position, which can be parameterized as @mymath{(x,y)}, or @mymath{(r,\phi)}, a general infinitesimal change in its position will place it in the coordinates @mymath{(x+dx,y+dy)}, or @mymath{(r+dr,\phi+d\phi)}.
The distance (on the flat 2D surface) that is covered by this infinitesimal change in the static universe (@mymath{ds_s}, the subscript signifies the static nature of this universe) can be written as:
@dispmath{ds_s^2=dx^2+dy^2=dr^2+r^2d\phi^2}
The main question is this: how can the 2D creature incorporate the (possible) curvature in its universe when it's calculating distances? The universe that it lives in might equally be a curved surface like @ref{sphereandplane}.
The answer to this question, but for a 3D being (us), is the whole purpose of this discussion.
Here, we want to give the 2D creature (and later, ourselves) the tools to measure distances if the space (that hosts the objects) is curved.
@ref{sphereandplane} assumes a spherical shell with radius @mymath{R} as the curved 2D plane for simplicity.
The 2D plane is tangent to the spherical shell and only touches it at @mymath{A}.
This idea will be generalized later.
The first step in measuring the distance in a curved space is to imagine a third dimension along the @mymath{z} axis as shown in @ref{sphereandplane}.
For simplicity, the @mymath{z} axis is assumed to pass through the center of the spherical shell.
Our imaginary 2D creature cannot visualize the third dimension or a curved 2D surface within it, so the remainder of this discussion is purely abstract for it (similar to us having difficulty in visualizing a 3D curved space in 4D).
But since we are 3D creatures, we have the advantage of visualizing the following steps.
Fortunately the 2D creature is already familiar with our mathematical constructs, so it can follow our reasoning.
With the third axis added, a generic infinitesimal change over @emph{the full} 3D space corresponds to the distance:
@dispmath{ds_s^2=dx^2+dy^2+dz^2=dr^2+r^2d\phi^2+dz^2.}
@float Figure,sphereandplane
@center@image{gnuastro-figures/sphereandplane, 10cm, , }
@caption{2D spherical shell (centered on @mymath{O}) and flat plane (light gray) tangent to it at point @mymath{A}.}
@end float
It is very important to recognize that this change of distance is for @emph{any} point in the 3D space, not just those changes that occur on the 2D spherical shell of @ref{sphereandplane}.
Recall that our 2D friend can only do measurements on the 2D surfaces, not the full 3D space.
So we have to constrain this general change to any change on the 2D spherical shell.
To do that, let's look at the arbitrary point @mymath{P} on the 2D spherical shell.
Its image (@mymath{P'}) on the flat plane is also displayed.
From the dark gray triangle, we see that
@dispmath{\sin\theta={r\over R},\quad\cos\theta={R-z\over R}.}

These relations allow the 2D creature to find the value of @mymath{z} (an abstract dimension for it) as a function of @mymath{r} (distance on a flat 2D plane, which it can visualize) and thus eliminate @mymath{z}.
From @mymath{\sin^2\theta+\cos^2\theta=1}, we get @mymath{z^2-2Rz+r^2=0} and solving for @mymath{z}, we find:
@dispmath{z=R\left(1\pm\sqrt{1-{r^2\over R^2}}\right).}
The @mymath{\pm} can be understood from @ref{sphereandplane}: For each @mymath{r}, there are two points on the sphere, one in the upper hemisphere and one in the lower hemisphere.
An infinitesimal change in @mymath{r}, will create the following infinitesimal change in @mymath{z}:
@dispmath{dz={\mp r\over R}\left(1\over
\sqrt{1-{r^2/R^2}}\right)dr.}

Substituting this for @mymath{dz} in the @mymath{ds_s^2} equation above (the sign is irrelevant because @mymath{dz} is squared), we get:
@dispmath{ds_s^2={dr^2\over 1-r^2/R^2}+r^2d\phi^2.}
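@noindent
To verify this substitution: squaring the relation for @mymath{dz} gives @mymath{dz^2=r^2dr^2/(R^2-r^2)}, so

@dispmath{dr^2+dz^2=dr^2\left(1+{r^2\over R^2-r^2}\right)={dr^2\over 1-r^2/R^2}.}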
The derivation above was done for a spherical shell of radius @mymath{R} as a curved 2D surface.
To generalize it to any surface, we can define @mymath{K=1/R^2} as the curvature parameter.
Then the general infinitesimal change in a static universe can be written as:
@dispmath{ds_s^2={dr^2\over 1-Kr^2}+r^2d\phi^2.}
Therefore, when @mymath{K>0} (and curvature is the same everywhere), we have a finite universe, where @mymath{r} cannot become larger than @mymath{R} as in @ref{sphereandplane}.
When @mymath{K=0}, we have a flat plane (@ref{flatplane}) and a negative @mymath{K} will correspond to an imaginary @mymath{R}.
The latter two cases may be infinite in area (which is not a simple concept, but mathematically can be modeled with @mymath{r} extending infinitely), or have a finite area (like a cylinder, which is flat everywhere with @mymath{ds_s^2={dx^2 + dy^2}}, but finite in size along one direction).
@cindex Proper distance
A very important issue that can be discussed now (while we are still in 2D and can actually visualize things) is that @mymath{\overrightarrow{r}} is tangent to the curved space at the observer's position.
In other words, it is on the gray flat surface of @ref{sphereandplane}, even when the universe is curved: @mymath{\overrightarrow{r}=P'-A}.
Therefore for the point @mymath{P} on a curved space, the raw coordinate @mymath{r} is the distance to @mymath{P'}, not @mymath{P}.
The distance to the point @mymath{P} (at a specific coordinate @mymath{r} on the flat plane) over the curved surface (thick line in @ref{sphereandplane}) is called the @emph{proper distance} and is displayed with @mymath{l}.
For the specific example of @ref{sphereandplane}, the proper distance can be calculated with: @mymath{l=R\theta} (@mymath{\theta} is in radians).
Using the @mymath{\sin\theta} relation found above, we can find @mymath{l} as a function of @mymath{r}:
@dispmath{\theta=\sin^{-1}\left({r\over R}\right)\quad\rightarrow\quad
l(r)=R\sin^{-1}\left({r\over R}\right)}
@mymath{R} is just an arbitrary constant and can be directly found from @mymath{K}, so for cleaner equations, it is common practice to set @mymath{R=1}, which gives: @mymath{l(r)=\sin^{-1}r}.
Also note that when @mymath{R=1}, then @mymath{l=\theta}.
Generally, depending on the curvature, in a @emph{static} universe the proper distance can be written as a function of the coordinate @mymath{r} as (from now on we are assuming @mymath{R=1}):
@dispmath{l(r)=\sin^{-1}(r)\quad(K>0),\quad\quad
l(r)=r\quad(K=0),\quad\quad l(r)=\sinh^{-1}(r)\quad(K<0).}

With @mymath{l}, the infinitesimal change of distance can be written in a simpler and more abstract form of
@dispmath{ds_s^2=dl^2+r^2d\phi^2.}
@cindex Comoving distance
Until now, we had assumed a static universe (not changing with time).
But our observations so far appear to indicate that the universe is expanding (it is not static).
Since there is no reason to expect the observed expansion is unique to our particular position in the universe, we expect the universe to be expanding at all points with the same rate at the same time.
Therefore, to add a time dependence to our distance measurements, we can include a multiplicative scaling factor, which is a function of time: @mymath{a(t)}.
The functional form of @mymath{a(t)} comes from the cosmology: the physics we assume for it (general relativity), and the choice of whether the universe is uniform (`homogeneous') in density and curvature, or inhomogeneous.
In this section, the functional form of @mymath{a(t)} is irrelevant, so we can avoid these issues.
With this scaling factor, the proper distance will also depend on time.
As the universe expands, the distance between two given points will shift to larger values.
We thus define a distance measure, or coordinate, that is independent of time and thus does not `move'.
We call it the @emph{comoving distance} and display it with @mymath{\chi} such that: @mymath{l(r,t)=\chi(r)a(t)}.
We have therefore shifted the @mymath{r} dependence of the proper distance we derived above for a static universe to the comoving distance:
@dispmath{\chi(r)=\sin^{-1}(r)\quad(K>0),\quad\quad
\chi(r)=r\quad(K=0),\quad\quad \chi(r)=\sinh^{-1}(r)\quad(K<0).}
Therefore, @mymath{\chi(r)} is the proper distance to an object at a specific reference time: @mymath{t=t_r} (the @mymath{r} subscript signifies ``reference'') when @mymath{a(t_r)=1}.
At any arbitrary moment (@mymath{t\neq{t_r}}) before or after @mymath{t_r}, the proper distance to the object can be scaled with @mymath{a(t)}.
Measuring the change of distance in a time-dependent (expanding) universe only makes sense if we can add up space and time@footnote{In other words, making our space-time consistent with Minkowski space-time geometry.
In this geometry, different observers at a given point (event) in space-time split up space-time into `space' and `time' in different ways, just like people at the same spatial position can make different choices of splitting up a map into `left--right' and `up--down'.
This model is well supported by twentieth and twenty-first century observations.}.
But we can only add bits of space and time together if we measure them in the same units: with a conversion constant (similar to how 1000 is used to convert a kilometer into meters).
Experimentally, we find strong support for the hypothesis that this conversion constant is the speed of light (or gravitational waves@footnote{The speed of gravitational waves was recently found to be very similar to that of light in vacuum, see LIGO Collaboration @url{https://arxiv.org/abs/1710.05834,2017}.}) in a vacuum.
This speed is postulated to be constant@footnote{In @emph{natural units}, speed is measured in units of the speed of light in vacuum.} and is almost always written as @mymath{c}.
We can thus parameterize the change in distance on an expanding 2D surface as
@dispmath{ds^2=c^2dt^2-a^2(t)ds_s^2 = c^2dt^2-a^2(t)(d\chi^2+r^2d\phi^2).}
@node Extending distance concepts to 3D, Invoking astcosmiccal, Distance on a 2D curved space, CosmicCalculator
@subsection Extending distance concepts to 3D
The concepts of @ref{Distance on a 2D curved space} are here extended to a 3D space that @emph{might} be curved.
We can start with the generic infinitesimal distance in a static 3D universe, but this time in spherical coordinates instead of polar coordinates.
@mymath{\theta} is shown in @ref{sphereandplane}, but here we are 3D beings, positioned at @mymath{O} (the center of the sphere), and the point @mymath{O} is tangent to a 4D-sphere.
In our 3D space, a generic infinitesimal displacement will correspond to the following distance in spherical coordinates:
@dispmath{ds_s^2=dx^2+dy^2+dz^2=dr^2+r^2(d\theta^2+\sin^2{\theta}d\phi^2).}
Like the 2D creature before, we now have to assume an abstract dimension which we cannot visualize easily.
Let's call the fourth dimension @mymath{w}, then the general change in coordinates in the @emph{full} four dimensional space will be:
@dispmath{ds_s^2=dr^2+r^2(d\theta^2+\sin^2{\theta}d\phi^2)+dw^2.}
@noindent
But we can only work on a 3D curved space, so following exactly the same steps and conventions as our 2D friend, we arrive at:
@dispmath{ds_s^2={dr^2\over 1-Kr^2}+r^2(d\theta^2+\sin^2{\theta}d\phi^2).}
@noindent
In a non-static universe (with a scale factor @mymath{a(t)}), the distance can be written as:
@dispmath{ds^2=c^2dt^2-a^2(t)[d\chi^2+r^2(d\theta^2+\sin^2{\theta}d\phi^2)].}
@c@dispmath{H(z){\equiv}\left(\dot{a}\over a\right)(z)=H_0E(z) }
@c@dispmath{E(z)=[ \Omega_{\Lambda,0} + \Omega_{C,0}(1+z)^2 +
@c\Omega_{m,0}(1+z)^3 + \Omega_{r,0}(1+z)^4 ]^{1/2}}
@c Let's take @mymath{r} to be the radial coordinate of the emitting
@c source, which emitted its light at redshift $z$. Then the comoving
@c distance of this object would be:
@c@dispmath{ \chi(r)={c\over H_0a_0}\int_0^z{dz'\over E(z')} }
@c@noindent
@c So the proper distance at the current time to that object is:
@c @mymath{a_0\chi(r)}, therefore the angular diameter distance
@c (@mymath{d_A}) and luminosity distance (@mymath{d_L}) can be written
@c as:
@c@dispmath{ d_A={a_0\chi(r)\over 1+z}, \quad d_L=a_0\chi(r)(1+z) }
@node Invoking astcosmiccal, , Extending distance concepts to 3D, CosmicCalculator
@subsection Invoking CosmicCalculator
CosmicCalculator will calculate cosmological variables based on the input parameters.
The executable name is @file{astcosmiccal} with the following general template:
@example
$ astcosmiccal [OPTION...] ...
@end example
@noindent
One line examples:
@example
## Print basic cosmological properties at redshift 2.5:
$ astcosmiccal -z2.5
## Only print comoving volume over 4pi steradian to z (Mpc^3):
$ astcosmiccal --redshift=0.8 --volume
## Print redshift and age of universe when Lyman-alpha line is
## at 6000 angstrom (another way to specify redshift).
$ astcosmiccal --obsline=Ly-alpha,6000 --age
## Print luminosity distance, angular diameter distance and age
## of universe in one row at redshift 0.4
$ astcosmiccal -z0.4 -LAg
## Assume Lambda and matter density of 0.7 and 0.3 and print
## basic cosmological parameters for redshift 2.1:
$ astcosmiccal -l0.7 -m0.3 -z2.1
## Print wavelength of all pre-defined spectral lines when
## Lyman-alpha is observed at 4000 Angstroms.
$ astcosmiccal --obsline=Ly-alpha,4000 --listlinesatz
@end example
The input parameters (current matter density, etc.) can be given as command-line options or in the configuration files, see @ref{Configuration files}.
For a definition of the different parameters, please see the sections prior to this.
If no redshift is given, CosmicCalculator will just print its input parameters and abort.
For a full list of the input options, please see @ref{CosmicCalculator input options}.
Without any particular output requested (and only a given redshift), CosmicCalculator will print all basic cosmological calculations (one per line) with some explanations before each.
This can be good when you want a general feeling of the conditions at a specific redshift.
Alternatively, if any specific calculation(s) are requested (it is possible to call more than one), only the requested value(s) will be calculated and printed with a single space between them.
In this case, no description or units will be printed.
See @ref{CosmicCalculator basic cosmology calculations} for the full list of these options along with some explanation of when/how they can be useful.
Another common operation in observational cosmology is dealing with spectral lines at different redshifts.
CosmicCalculator also has features to help in such situations, please see @ref{CosmicCalculator spectral line calculations}.
@menu
* CosmicCalculator input options:: Options to specify input conditions.
* CosmicCalculator basic cosmology calculations:: Such as distance modulus and distances.
* CosmicCalculator spectral line calculations:: How they get affected by redshift.
@end menu
@node CosmicCalculator input options, CosmicCalculator basic cosmology calculations, Invoking astcosmiccal, Invoking astcosmiccal
@subsubsection CosmicCalculator input options
The inputs to CosmicCalculator can be specified with the following options:
@table @option
@item -z FLT
@itemx --redshift=FLT
The redshift of interest.
There are two other ways that you can specify the target redshift:
1) Spectral lines and their observed wavelengths, see @option{--obsline}.
2) Velocity, see @option{--velocity}.
Hence this option cannot be called with @option{--obsline} or @option{--velocity}.
@item -y FLT
@itemx --velocity=FLT
Input velocity in km/s.
The given value will be converted to redshift internally, and used in any subsequent calculation.
This option is thus an alternative to @code{--redshift} or @code{--obsline}; it cannot be used with them.
The conversion will be done with the more general and accurate relativistic equation of @mymath{1+z=\sqrt{(c+v)/(c-v)}}, not the simplified @mymath{z\approx v/c}.
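For example, the sketch below prints the redshift that CosmicCalculator derives for a source receding at 30,000 km/s (through @option{--usedredshift}, see @ref{CosmicCalculator basic cosmology calculations}):

@example
$ astcosmiccal --velocity=30000 --usedredshift
@end example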
@item -H FLT
@itemx --H0=FLT
Current expansion rate (in km sec@mymath{^{-1}} Mpc@mymath{^{-1}}).
@item -l FLT
@itemx --olambda=FLT
Cosmological constant density divided by the critical density in the current Universe (@mymath{\Omega_{\Lambda,0}}).
@item -m FLT
@itemx --omatter=FLT
Matter (including massive neutrinos) density divided by the critical density in the current Universe (@mymath{\Omega_{m,0}}).
@item -r FLT
@itemx --oradiation=FLT
Radiation density divided by the critical density in the current Universe (@mymath{\Omega_{r,0}}).
@item -O STR/FLT,FLT
@itemx --obsline=STR/FLT,FLT
@cindex Rest-frame wavelength
@cindex Wavelength, rest-frame
Find the redshift to use in next steps based on the rest-frame and observed wavelengths of a line.
This option is thus an alternative to @code{--redshift} or @code{--velocity}; it cannot be used with them.
The first argument identifies the line.
It can be one of the standard names, or any rest-frame wavelength in Angstroms.
The second argument is the observed wavelength of that line.
For example, @option{--obsline=Ly-alpha,6000} is the same as @option{--obsline=1215.64,6000}.
Wavelengths are assumed to be in Angstroms by default (other units can be selected with @option{--lineunit}, see @ref{CosmicCalculator spectral line calculations}).
The list of pre-defined names for the lines in Gnuastro's database is available by running
@example
$ astcosmiccal --listlines
@end example
@end table
@node CosmicCalculator basic cosmology calculations, CosmicCalculator spectral line calculations, CosmicCalculator input options, Invoking astcosmiccal
@subsubsection CosmicCalculator basic cosmology calculations
By default, when no specific calculations are requested, CosmicCalculator will print a complete set of all its calculations (one line for each calculation, see @ref{Invoking astcosmiccal}).
The full list of calculations can be useful when you do not want any specific value, but just a general view.
In other contexts (for example, in a batch script or during a discussion), you know exactly what you want and do not want to be distracted by all the extra information.
You can use any number of the options described below in any order.
When any of these options are requested, CosmicCalculator's output will just be a single line with a single space between the (possibly) multiple values.
In the example below, only the tangential distance covered by one arc-second (in kpc), the absolute magnitude conversion, and the age of the universe at redshift 2 are printed (recall that you can merge short options together, see @ref{Options}).
@example
$ astcosmiccal -z2 -sag
8.585046 44.819248 3.289979
@end example
Here is one example of using this feature in scripts: the two lines below can be added to a script to compute and later use the comoving volume at a given redshift:
@example
z=3.12
vol=$(astcosmiccal --redshift=$z --volume)
@end example
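To do this for many objects, you can loop over the redshifts of a catalog (a sketch, assuming a hypothetical table @file{cat.fits} with a @code{Z} column):

@example
for z in $(asttable cat.fits -cZ); do
    astcosmiccal --redshift=$z --volume
done
@end example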
@cindex GNU Grep
@noindent
In a script, this operation might be necessary for a large number of objects (for example, the galaxies in a catalog).
So the fact that all the other default calculations are ignored will also help you get to your result faster.
If you are indeed dealing with many (for example, thousands of) redshifts, using CosmicCalculator is not the best/fastest solution, because it has to go through all the configuration files and preparations for each invocation.
To get the best efficiency (least overhead), we recommend using Gnuastro's cosmology library (see @ref{Cosmology library}).
CosmicCalculator also calls the library functions defined there for its calculations, so you get the same result with no overhead.
Gnuastro also has libraries for easily reading tables into a C program, see @ref{Table input output}.
Afterwards, you can easily build and run your C program for the particular processing with @ref{BuildProgram}.
If you just want to inspect the value of a variable visually, the description (which comes with units) might be more useful.
In such cases, the following command might be better.
The other calculations will also be done, but they are so fast that you will not notice them on modern computers (the time it takes your eye to focus on the result is usually longer than the processing: a fraction of a second).
@example
$ astcosmiccal --redshift=0.832 | grep volume
@end example
The full list of CosmicCalculator's specific calculations is given below in two groups: basic cosmology calculations and those related to spectral lines.
In case you have forgotten the units, you can use the @option{--help} option, which lists the units along with a short description.
@table @option
@item -e
@itemx --usedredshift
The redshift that was used in this run.
In many cases this is the main input parameter to CosmicCalculator, but it is useful in others.
For example, in combination with @option{--obsline} (where you give an observed and rest-frame wavelength and would like to know the redshift) or with @option{--velocity} (where you specify the velocity instead of redshift).
Another example is when you run CosmicCalculator in a loop, while changing the redshift and you want to keep the redshift value with the resulting calculation.
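As a sketch of such a loop (assuming a hypothetical plain-text file @file{redshifts.txt} with one redshift per line), the commands below print the used redshift next to the age of the universe at that redshift:

@example
for z in $(cat redshifts.txt); do
    astcosmiccal --redshift=$z -eg
done
@end example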
@item -Y
@itemx --usedvelocity
The velocity (in km/s) that was used in this run.
The conversion from redshift will be done with the more general and accurate relativistic equation of @mymath{1+z=\sqrt{(c+v)/(c-v)}}, not the simplified @mymath{z\approx v/c}.
@item -G
@itemx --agenow
The current age of the universe (given the input parameters) in Ga (Giga-annum, or billion years).
@item -C
@itemx --criticaldensitynow
The current critical density (given the input parameters) in grams per cubic centimeter (@mymath{g/cm^3}).
@item -d
@itemx --properdistance
The proper distance (at the current time, i.e. in comoving units) to an object at the given redshift in Megaparsecs (Mpc).
See @ref{Distance on a 2D curved space} for a description of the proper distance.
@item -A
@itemx --angulardiamdist
The angular diameter distance to an object at a given redshift in Megaparsecs (Mpc).
@item -s
@itemx --arcsectandist
The tangential distance covered by 1 arc-second at a given redshift in physical (not comoving) kilo-parsecs (kpc).
This can be useful when trying to estimate the resolution or pixel scale of an instrument (usually in units of arc-seconds) required for a galaxy of a given physical size at a given redshift.
Multiplying the result by 3600 will, of course, give the (tangential) length covered by an arc subtending one degree at that redshift, but it will still be in physical units.
However, at the one-degree scale, the comoving separation for redshifts from 1 to 10 is about 100 Mpc (give or take 50% or so for nearly @mymath{\Lambda{}}CDM models).
In other words, this is the cosmic-web scale, where comoving units usually make more sense than physical units, for example, for large-scale structure correlation functions or the baryon acoustic oscillation scale.
If comoving units are appropriate, as will typically be the case at the degree scale, you must also multiply the result by @mymath{(1+z)}.
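For example, the command below prints the physical kpc covered by one arc-second at @mymath{z=2} (with your default cosmology configuration):

@example
$ astcosmiccal -z2 --arcsectandist
@end example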
@item -L
@itemx --luminositydist
The luminosity distance to an object at the given redshift in Megaparsecs (Mpc).
@item -u
@itemx --distancemodulus
@cindex Distance modulus
@cindex Absolute magnitude
@cindex Magnitude (absolute)
@cindex Bolometric luminosity
The bolometric (across the full electromagnetic spectrum) distance modulus (@mymath{DM}) at the given redshift assuming no intermediate absorption.
The distance modulus allows the conversion of an observed magnitude (@mymath{m}, distance dependent) to an absolute magnitude (@mymath{M}, independent of distance; equivalent to placing all objects at the same distance of 10 parsecs).
In other words, @mymath{DM=m-M}, or @mymath{M=m-DM}.
From the absolute magnitude, we can derive the luminosity of the source; see @code{mag-to-luminosity} in @ref{Unit conversion operators}.
The two conditions above are very important to remember when using this option:
@itemize
@item
In practice we do not observe the bolometric magnitude of an object: any instrument's hardware is limited to a certain wavelength range and incoming photons outside that range will not be measured.
Therefore, as regards the filter and spectrum, the distance modulus can be used in the following two cases:
@itemize
@item
At smaller distances (where the filter on the observed spectrum covers approximately the same region on the rest frame spectrum).
@item
The observed bolometric magnitude of the source has been calculated (based on model-fitting, or combining data from many surveys to cover the whole electromagnetic spectrum) and is being used as input.
@end itemize
At higher distances, it is important to account for ``K-correction'' because the filter's rest frame coverage over the rest frame spectrum of the source will decrease (compared to the filter's observed coverage in the source's observed spectrum).
For details see Hogg et al. @url{https://arxiv.org/abs/astro-ph/0210394,2002} and Blanton and Roweis @url{https://ui.adsabs.harvard.edu/abs/2007AJ....133..734B,2007}.
A @emph{simplified} correction (assuming a flat SED) to the distance modulus is available in the @option{--absmagconv} option below.
@item
@cindex ISM (inter-stellar medium)
@cindex Inter-stellar medium (ISM)
Intermediate absorption (from the source to your telescope) can happen in multiple stages.
For example, the Earth is located inside the Milky Way, which has an interstellar medium (ISM) that absorbs some of the flux coming from extra-galactic sources behind it.
After measuring the magnitude of your source, it is therefore important to find the Milky Way extinction in the direction of your source and correct your measurement for it.
Inside Gnuastro, the Query program gives you direct access to the NED Extinction Calculator as described in @ref{Available databases}.
@end itemize
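For example, the sketch below (with a hypothetical observed magnitude of 22.5 at @mymath{z=0.4}) computes the absolute magnitude through @mymath{M=m-DM}, letting AWK do the subtraction:

@example
dm=$(astcosmiccal -z0.4 --distancemodulus)
echo 22.5 $dm | awk '@{print $1-$2@}'
@end example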
@item -a
@itemx --absmagconv
Corrected distance modulus: accounting for the thinner width of the filter at higher redshift by subtracting @mymath{2.5\log{(1+z)}} from the distance modulus (assuming a flat SED for the galaxy).
However, no astronomical object has a flat SED across all wavelengths, so this should be taken as a zero-th order K-correction: just a crude statistical approximation that may badly under- or over-estimate the actual value in special cases (for example, when the omitted region has/misses a strong spectral feature).
See the description of @option{--distancemodulus} for more.
@item -g
@itemx --age
Age of the universe at given redshift in Ga (Giga-annum, or billion years).
@item -b
@itemx --lookbacktime
The look-back time to a given redshift in Ga (Giga-annum, or billion years).
The look-back time at a given redshift is defined as the current age of the universe (@option{--agenow}) minus the age of the universe at the given redshift.
@item -c
@itemx --criticaldensity
The critical density at the given redshift in grams per cubic centimeter (@mymath{g/cm^3}).
@item -v
@itemx --volume
The comoving volume in cubic Megaparsecs (Mpc@mymath{^3}) up to the desired redshift, based on the input parameters.
@end table
@node CosmicCalculator spectral line calculations, , CosmicCalculator basic cosmology calculations, Invoking astcosmiccal
@subsubsection CosmicCalculator spectral line calculations
@cindex Rest frame wavelength
At different redshifts, observed spectral lines are shifted compared to their rest frame wavelengths with this simple relation: @mymath{\lambda_{obs}=\lambda_{rest}(1+z)}.
Although this relation is very simple and can be applied to one line in your head (or with a simple calculator!), it slowly becomes tiring when dealing with many lines or redshifts, or when some precision is necessary.
The options in this section are thus provided to greatly simplify the usage of this simple equation, and also help by storing a list of pre-defined spectral line wavelengths.
For example, if you want to know the wavelength of the @mymath{H\alpha} line (at 6562.8 Angstroms in rest frame), when @mymath{Ly\alpha} is at 8000 Angstroms, you can call CosmicCalculator like the first example below.
And if you want the wavelength of all pre-defined spectral lines at this redshift, you can use the second command.
@example
$ astcosmiccal --obsline=lyalpha,8000 --lineatz=halpha
$ astcosmiccal --obsline=lyalpha,8000 --listlinesatz
@end example
Below you can see the printed/output calculations of CosmicCalculator that are related to spectral lines.
Note that @option{--obsline} is an input parameter, so it is discussed (with the full list of known lines) in @ref{CosmicCalculator input options}.
@table @option
@item --listlines
List the pre-defined rest frame spectral line wavelengths and their names on standard output, then abort CosmicCalculator.
The units of the displayed wavelengths for each line can be determined with @option{--lineunit} (see below).
When this option is given, other operations on the command-line will be ignored.
This is convenient when you forget the specific name of the spectral line used within Gnuastro, or when you forget the exact wavelength of a certain line.
These names can be used with the options that deal with spectral lines, for example, @option{--obsline} and @option{--lineatz} (@ref{CosmicCalculator basic cosmology calculations}).
The format of the output list is a two-column table, with Gnuastro's text table format (see @ref{Gnuastro text table format}).
Therefore, if you are only looking for lines in a specific range, you can pipe the output into Gnuastro's table program and use its @option{--range} option on the @code{wavelength} (first) column.
For example, if you only want to see the lines between 4000 and 6000 Angstroms, you can run this command:
@example
$ astcosmiccal --listlines \
| asttable --range=wavelength,4000,6000
@end example
@noindent
And if you want to use the list later and have it as a table in a file, you can easily add the @option{--output} (or @option{-o}) option to the @command{asttable} command, and specify the filename, for example, @option{--output=lines.fits} or @option{--output=lines.txt}.
@item --listlinesatz
Similar to @option{--listlines} (above), but the printed wavelength is not in the rest frame, but redshifted to the given redshift.
Recall that the redshift can be specified by @option{--redshift} directly or by @option{--obsline}, see @ref{CosmicCalculator input options}.
For an example usage of this option, see @ref{Viewing spectra and redshifted lines}.
@item -i STR/FLT
@itemx --lineatz=STR/FLT
The wavelength of the specified line at the redshift given to CosmicCalculator.
The line can be specified either by its name or directly as a number (its wavelength).
The units of the displayed wavelengths for each line can be determined with @option{--lineunit} (see below).
To get the list of pre-defined names for the lines and their wavelength, you can use the @option{--listlines} option, see @ref{CosmicCalculator input options}.
In the former case (when a name is given), the returned number is in units of Angstroms.
In the latter (when a number is given), the returned value is in the same units as the input number (assuming it is a wavelength).
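For example, the command below prints the observed wavelength of the H-alpha line at a redshift of 2 (in Angstroms, since the line is given by name):

@example
$ astcosmiccal --redshift=2 --lineatz=halpha
@end example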
@item --lineunit=STR
The units used to display the line wavelengths above.
It can take the following four values.
If you need any other unit, please contact us at @code{bug-gnuastro@@gnu.org}.
@table @code
@item m
Meter.
@item micro-m
Micrometer or @mymath{10^{-6}m}.
@item nano-m
Nanometer, or @mymath{10^{-9}m}.
@item angstrom
Angstrom or @mymath{10^{-10}m}; the default unit when this option is not called.
@end table
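For example, the command below is similar to the @option{--lineatz} example above, but prints the result in micrometers instead of the default Angstroms:

@example
$ astcosmiccal --redshift=2 --lineatz=halpha --lineunit=micro-m
@end example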
@end table
@node Installed scripts, Makefile extensions, High-level calculations, Top
@chapter Installed scripts
Gnuastro's programs (introduced in previous chapters) are designed to be highly modular and thus contain lower-level operations on the data.
However, certain higher-level procedures are also shared between many contexts.
For example, a sequence of calls to multiple Gnuastro programs, or a special way of running a program and treating the output.
To facilitate such higher-level data analysis, Gnuastro also installs some scripts on your system with the @code{astscript-} prefix (in contrast to the other programs, which only have the @code{ast} prefix).
@cindex GNU Bash
@cindex Portable shell
@cindex Shell, portable
Like all of Gnuastro's source code, these scripts are also heavily commented.
They are written in portable shell (the language of the command-line environment), which does not need compilation.
Therefore, if you open the installed scripts in a text editor, you can actually read them@footnote{Gnuastro's installed programs (those only starting with @code{ast}) are not human-readable.
They are written in C and need to be compiled before execution.
Compilation optimizes the steps into the low-level hardware CPU instructions/language to improve efficiency.
Because compiled programs do not need an interpreter like Bash on every run, they are much faster and more independent than scripts.
To read the source code of the programs, look into the @file{bin/progname} directory of Gnuastro's source (@ref{Downloading the source}).
If you would like to read more about why C was chosen for the programs, please see @ref{Why C}.}.
For example, with this command (just replace @code{nano} with your favorite text editor, like @command{emacs} or @command{vim}):
@example
$ nano $(which astscript-NAME)
@end example
Shell scripting is the same language that you use when typing on the command-line.
Therefore shell scripting is much more widely known and used than C (the language of the other Gnuastro programs).
Because Gnuastro's installed scripts do higher-level operations, customizing these scripts for a special project will be more common than customizing the programs.
These scripts also accept options and are in many ways similar to the programs (see @ref{Common options}) with some minor differences:
@itemize
@item
Currently they do not accept configuration files themselves.
However, the configuration files of the Gnuastro programs they call are indeed parsed and used by those programs.
As a result, they do not have the following options: @option{--checkconfig}, @option{--config}, @option{--lastconfig}, @option{--onlyversion}, @option{--printparams}, @option{--setdirconf} and @option{--setusrconf}.
@item
They do not directly allocate any memory, so there is no @option{--minmapsize}.
@item
They do not have an independent @option{--usage} option: when called with @option{--usage}, they just recommend running @option{--help}.
@item
The output of @option{--help} is not configurable like the programs (see @ref{--help}).
@item
@cindex GNU AWK
@cindex GNU SED
The scripts will commonly use your installed shell and other basic command-line tools (for example, AWK or SED).
Different systems have different versions and implementations of these basic tools (for example, GNU/Linux systems use GNU Bash, GNU AWK and GNU SED, which are far more advanced and up to date than the minimalist AWK and SED of most other systems).
Therefore, unexpected errors in these tools might come up when you run these scripts on non-GNU/Linux operating systems.
If you do confront such strange errors, please submit a bug report so we fix it as soon as possible (see @ref{Report a bug}).
@end itemize
@menu
* Sort FITS files by night:: Sort many files by date.
* Generate radial profile:: Radial profile of an object in an image.
* SAO DS9 region files from table:: Create ds9 region file from a table.
* Viewing FITS file contents with DS9 or TOPCAT:: Open DS9 (images/cubes) or TOPCAT (tables).
* Zero point estimation:: Zero point of an image from reference catalog or image(s).
* Pointing pattern simulation:: Simulate a coadd with a given series of pointings.
* Color images with gray faint regions:: Color for bright pixels and grayscale for faint.
* PSF construction and subtraction:: Set of scripts to create extended PSF of an image.
@end menu
@node Sort FITS files by night, Generate radial profile, Installed scripts, Installed scripts
@section Sort FITS files by night
@cindex Calendar
FITS images usually contain (several) keywords for preserving important dates.
In particular, for lower-level data, this is usually the observation date and time (for example, stored in the @code{DATE-OBS} keyword value).
When analyzing observed datasets, many calibration steps (like the dark, bias or flat-field), are commonly calculated on a per-observing-night basis.
However, the FITS standard's date format (@code{YYYY-MM-DDThh:mm:ss.ddd}) is based on the western (Gregorian) calendar.
Dates that are stored in this format are complicated for automatic processing: a night starts in the final hours of one calendar day, and extends to the early hours of the next calendar day.
As a result, to identify datasets from one night, we commonly need to search for two dates.
However, calendar peculiarities can make this identification very difficult.
For example, when an observation is done on the night separating two months (like the night starting on March 31st and going into April 1st), or two years (like the night starting on December 31st 2018 and going into January 1st, 2019).
To account for such situations, it is necessary to keep track of how many days are in a month, and leap years, etc.
@cindex Unix epoch time
@cindex Time, Unix epoch
@cindex Epoch, Unix time
Gnuastro's @file{astscript-sort-by-night} script is created to help in such important scenarios.
It uses @ref{Fits} to convert the FITS date format into the Unix epoch time (number of seconds since 00:00:00 of January 1st, 1970), using the @option{--datetosec} option.
The Unix epoch time is a single number (integer, if not given in sub-second precision), enabling easy comparison and sorting of dates after January 1st, 1970.
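For example, you can do the conversion manually with the Fits program (a sketch, assuming a hypothetical @file{input.fits} with the date in the @code{DATE-OBS} keyword of HDU number 1):

@example
$ astfits input.fits -h1 --datetosec=DATE-OBS
@end example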
You can use this script as a basis for making a much more highly customized sorting script.
Here are some examples:
@itemize
@item
If you need to copy the files, but only need a single extension (not the whole file), you can add a step just before the making of the symbolic links, or copies, and change it to only copy a certain extension of the FITS file using the Fits program's @option{--copy} option, see @ref{HDU information and manipulation}.
@item
If you need to classify the files with finer detail (for example, the purpose of the dataset), you can add a step just before the making of the symbolic links, or copies, to specify a file-name prefix based on certain other keyword values in the files.
For example, when the FITS files have a keyword to specify if the dataset is a science, bias, or flat-field image.
You can read it and add a @code{sci-}, @code{bias-}, or @code{flat-} prefix to the created files (after the @option{--prefix}) automatically.
For example, let's assume the observing mode is stored in the hypothetical @code{MODE} keyword, which can have three values of @code{BIAS-IMAGE}, @code{SCIENCE-IMAGE} and @code{FLAT-EXP}.
With the step below, you can generate a mode-prefix, and add it to the generated link/copy names (just correct the filename and extension of the first line to the script's variables):
@example
modepref=$(astfits infile.fits -h1 \
               | sed -e"s/'/ /g" \
               | awk '$1=="MODE"@{ \
                        if($3=="BIAS-IMAGE") print "bias-"; \
                        else if($3=="SCIENCE-IMAGE") print "sci-"; \
                        else if($3=="FLAT-EXP") print "flat-"; \
                        else @{print $3, "NOT recognized"; exit 1@} @}')
@end example
@cindex GNU AWK
@cindex GNU Sed
Here is a description of it.
We first use @command{astfits} to print all the keywords in extension @code{1} of @file{infile.fits}.
In the FITS standard, string values (that we are assuming here) are placed in single quotes (@key{'}) which are annoying in this context/use-case.
Therefore, we pipe the output of @command{astfits} into @command{sed} to remove all such quotes (substituting them with a blank space).
The result is then piped to AWK for giving us the final mode-prefix: with @code{$1=="MODE"}, we ask AWK to only consider the line where the first column is @code{MODE}.
There is an equal sign between the key name and value, so the value is the third column (@code{$3} in AWK).
We thus use a simple @code{if-else} structure to look into this value and print our custom prefix based on it.
The output of AWK is then stored in the @code{modepref} shell variable which you can add to the link/copy name.
With the solution above, the increment of the file counter for each night will be independent of the mode.
If you want the counter to be mode-dependent, you can add a different counter for each mode and use that counter instead of the generic counter for each night (based on the value of @code{modepref}).
But we will leave the implementation of this step to you as an exercise.
@end itemize
@menu
* Invoking astscript-sort-by-night:: Inputs and outputs to this script.
@end menu
@node Invoking astscript-sort-by-night, , Sort FITS files by night, Sort FITS files by night
@subsection Invoking astscript-sort-by-night
This installed script will read a FITS date formatted value from the given keyword, and classify the input FITS files into individual nights.
For more on installed scripts, please see @ref{Installed scripts}.
This script can be used with the following general template:
@example
$ astscript-sort-by-night [OPTION...] FITS-files
@end example
@noindent
One line examples:
@example
## Use the DATE-OBS keyword
$ astscript-sort-by-night --key=DATE-OBS /path/to/data/*.fits
## Make links to the input files with the `img-' prefix
$ astscript-sort-by-night --link --prefix=img- /path/to/data/*.fits
@end example
This script will look into a HDU/extension (@option{--hdu}) for a keyword (@option{--key}) in the given FITS files and interpret the value as a date.
The inputs will be separated by ``nights'' (11:00 a.m. to the next day's 10:59:59 a.m., spanning two calendar days; the exact hour can be set with @option{--hour}).
The default output is a list of all the input files along with the following two columns: night number and file number in that night (sorted by time).
With @option{--link} a symbolic link will be made (one for each input) that contains the night number, and number of file in that night (sorted by time), see the description of @option{--link} for more.
When @option{--copy} is used instead, a copy of each input will be made instead of a symbolic link.
Below you can see one example where all the @file{target-*.fits} files in the @file{data} directory should be separated by observing night according to the @code{DATE-OBS} keyword value in their second extension (number @code{1}, recall that HDU counting starts from 0).
You can see the output after the @code{ls} command.
@example
$ astscript-sort-by-night -pimg- -h1 -kDATE-OBS data/target-*.fits
$ ls
img-n1-1.fits img-n1-2.fits img-n2-1.fits ...
@end example
The outputs can be placed in a different (already existing) directory by including that directory's name in the @option{--prefix} value, for example, @option{--prefix=sorted/img-} will put them all under the @file{sorted} directory.
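For example (a sketch of this scenario, combining the options introduced above):

@example
$ mkdir sorted
$ astscript-sort-by-night --copy --prefix=sorted/img- \
                          --key=DATE-OBS data/target-*.fits
@end example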
This script can be configured like all Gnuastro's programs (through command-line options, see @ref{Common options}), with some minor differences that are described in @ref{Installed scripts}.
The particular options to this script are listed below:
@table @option
@item -h STR
@itemx --hdu=STR
The HDU/extension to use in all the given FITS files.
All of the given FITS files must have this extension.
@item -k STR
@itemx --key=STR
The keyword name that contains the FITS date format to classify/sort by.
@item -H FLT
@itemx --hour=FLT
The hour that defines the next ``night''.
By default, all times before 11:00 a.m. are considered to belong to the previous calendar night.
If a sub-hour value is necessary, it should be given in units of hours, for example, @option{--hour=9.5} corresponds to 9:30 a.m.
@cartouche
@noindent
@cindex Time zone
@cindex UTC (Universal time coordinate)
@cindex Universal time coordinate (UTC)
@strong{Dealing with time zones:}
The time that is recorded in @option{--key} may be in UTC (Universal Time Coordinate).
However, the organization of the images taken during the night depends on the local time.
It is possible to take this into account by setting the @option{--hour} option to the local time in UTC.
For example, consider a set of images taken in Auckland (New Zealand, UTC+12) during different nights.
If you want to classify these images by night, you have to know at which time (in UTC time) the Sun rises (or any other separator/definition of a different night).
For example, if your observing night finishes before 9:00 a.m. in Auckland, you can use @option{--hour=21}, because the local time of 9:00 a.m. in Auckland corresponds to 21:00 UTC.
@end cartouche
@item -l
@itemx --link
Create a symbolic link for each input FITS file.
This option cannot be used with @option{--copy}.
The link will have a standard name in the following format (variable parts are written in @code{CAPITAL} letters and described after it):
@example
PnN-I.fits
@end example
@table @code
@item P
This is the value given to @option{--prefix}.
By default, its value is @code{./} (to store the links in the directory this script was run in).
See the description of @code{--prefix} for more.
@item N
This is the night-counter: starting from 1.
@code{N} is just incremented by 1 for the next night, no matter how many nights (without any dataset) there are between two subsequent observing nights (it is just an identifier for each night, which you can easily map to different calendar nights).
@item I
File counter in that night, sorted by time.
@end table
@item -c
@itemx --copy
Make a copy of each input FITS file with the standard naming convention described in @option{--link}.
With this option, instead of making a link, a copy is made.
This option cannot be used with @option{--link}.
@item -p STR
@itemx --prefix=STR
Prefix to add before the night-identifier of each newly created link or copy.
This option is thus only relevant with the @option{--copy} or @option{--link} options.
See the description of @option{--link} for how it is used.
For example, with @option{--prefix=img-}, all the created file names in the current directory will start with @code{img-}, making outputs like @file{img-n1-1.fits} or @file{img-n3-42.fits}.
@option{--prefix} can also be used to store the links/copies in another directory relative to the directory this script is being run in (it must already exist).
For example, @code{--prefix=/path/to/processing/img-} will put all the links/copies in the @file{/path/to/processing} directory, and the files (in that directory) will all start with @file{img-}.
@item --stdintimeout=INT
Number of micro-seconds to wait for standard input within this script.
This does not correspond to the general inputs of the script; inputs to the script should always be given as files.
However, within the script, pipes are often used to pass the output of one program to another.
The value given to this option will be passed to those internal pipes.
When running this script, if you confront an error, saying ``No input!'', you should be able to fix it by giving a larger number to this option (the default value is 10000000 micro-seconds or 10 seconds).
@end table
@node Generate radial profile, SAO DS9 region files from table, Sort FITS files by night, Installed scripts
@section Generate radial profile
@cindex Radial profile
@cindex Profile, radial
The one-dimensional radial profile of an object is an important parameter in many aspects of astronomical image processing.
For example, you may want to study how the light of a galaxy is distributed as a function of the radial distance from its center.
In other cases, the radial profile of a star can show the PSF (see @ref{PSF}).
Gnuastro's @file{astscript-radial-profile} script is created to obtain such radial profiles for one object within an image.
This script uses @ref{MakeProfiles} to generate elliptical apertures with the values equal to the distance from the center of the object and @ref{MakeCatalog} for measuring the values over the apertures.
@menu
* Invoking astscript-radial-profile:: How to call astscript-radial-profile
@end menu
@node Invoking astscript-radial-profile, , Generate radial profile, Generate radial profile
@subsection Invoking astscript-radial-profile
This installed script will measure the radial profile of an object within an image.
A general overview of this script has been published in Infante-Sainz et al. @url{https://arxiv.org/abs/2401.05303,2024}; please cite it if this script proves useful in your research.
For more on installed scripts, please see @ref{Installed scripts}.
This script can be used with the following general template:
@example
$ astscript-radial-profile [OPTION...] FITS-file
@end example
@noindent
Examples:
@example
## Generate the radial profile with default options (assuming the
## object is in the center of the image, and using the mean).
$ astscript-radial-profile image.fits
## Generate the radial profile centered at x=44 and y=37 (in pixels),
## up to a radial distance of 19 pixels, use the mean value.
$ astscript-radial-profile image.fits --center=44,37 --rmax=19
## Generate a 2D polar plot with the same properties as above:
$ astscript-radial-profile image.fits --center=44,37 --rmax=19 \
--polar
## Generate the radial profile centered at x=44 and y=37 (in pixels),
## up to a radial distance of 100 pixels, compute sigma clipped
## mean and standard deviation (sigclip-mean and sigclip-std) using
## 5 sigma and 0.1 tolerance (default is 3 sigma and 0.2 tolerance).
$ astscript-radial-profile image.fits --center=44,37 --rmax=100 \
--sigmaclip=5,0.1 \
--measure=sigclip-mean,sigclip-std
## Generate the radial profile centered at RA=20.53751695,
## DEC=0.9454292263, up to a radial distance of 88 pixels,
## axis ratio equal to 0.32, and position angle of 148 deg.
## Name the output table as `radial-profile.fits'
$ astscript-radial-profile image.fits --mode=wcs \
--center=20.53751695,0.9454292263 \
--rmax=88 --axis-ratio=0.32 \
--position-angle=148 -oradial-profile.fits
## Generate the radial profile centered at RA=40.062675270971,
## DEC=-8.1511992735126, up to a radial distance of 20 pixels,
## and calculate the SNR using the INPUT-NO-SKY and SKY-STD
## extensions of the NoiseChisel output file.
$ astscript-radial-profile image_detected.fits -hINPUT-NO-SKY \
--mode=wcs --measure=sn \
--center=40.062675270971,-8.1511992735126 \
--rmax=20 --stdhdu=SKY_STD
## Generate the radial profile centered at RA=40.062675270971,
## DEC=-8.1511992735126, up to a radial distance of 20 pixels,
## and compute the SNR with a fixed value for std, std=10.
$ astscript-radial-profile image.fits -h1 --mode=wcs --rmax=20 \
--center=40.062675270971,-8.1511992735126 \
--measure=sn --instd=10
## Generate the radial profile centered at X=1201, Y=1201 pixels, up
## to a radial distance of 20 pixels and compute the median and the
## SNR using the first extension of sky-std.fits as the dataset for std
## values.
$ astscript-radial-profile image.fits -h1 --mode=img --rmax=20 \
--center=1201,1201 --measure=median,sn \
--instd=sky-std.fits
@end example
This installed script will read a FITS image and will use it as the basis for constructing the radial profile.
The output radial profile is a table (FITS or plain-text) containing the radial distance from the center in the first column and the requested measurements in the other columns (mean, median, sigclip-mean, sigclip-median, etc.).
To measure the radial profile, this script needs to generate temporary files.
All these temporary files will be created within the directory given to the @option{--tmpdir} option.
When @option{--tmpdir} is not called, a temporary directory (with a name based on the inputs) will be created in the running directory.
If the directory does not exist at run-time, this script will create it.
After the output is created, this script will delete the directory by default, unless you call the @option{--keeptmp} option.
With the default options, the script will generate a circular radial profile using the mean value and centered at the center of the image.
In order to have more flexibility, several options are available to configure the desired radial profile.
In this sense, you can change the center position, the maximum radius, the axis ratio and the position angle (elliptical apertures are considered), the operator for obtaining the profiles, and others (described below).
@cartouche
@noindent
@strong{Debug your profile:} to debug your results, especially close to the center of your object, you can see the radial distance associated to every pixel in your input.
To do this, use @option{--keeptmp} to keep the temporary files, and compare @file{crop.fits} (crop of your input image centered on your desired coordinate) with @file{apertures.fits} (radial distance of each pixel).
@end cartouche
@cartouche
@noindent
@strong{Finding properties of your elliptical target:} you want to measure the radial profile of a galaxy, but do not know its exact location, position angle or axis ratio.
To obtain these values, you can use @ref{NoiseChisel} to detect signal in the image, feed it to @ref{Segment} to do basic segmentation, then use @ref{MakeCatalog} to measure the center (@option{--x} and @option{--y} in MakeCatalog), axis ratio (@option{--axis-ratio}) and position angle (@option{--position-angle}).
@end cartouche
@cartouche
@noindent
@strong{Masking other sources:} The image of an astronomical object will usually contain many other sources along with your main target.
A crude solution is to use sigma-clipped measurements for the profile.
However, sigma-clipped measurements can easily be biased when the number of sources at each radial distance increases at larger distances.
Therefore a robust solution is to mask all other detections within the image.
You can use @ref{NoiseChisel} and @ref{Segment} to detect and segment the sources, then set all pixels that do not belong to your target to blank using @ref{Arithmetic} (in particular, its @code{where} operator).
@end cartouche
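For example, the sketch below blanks all pixels that do not belong to the target before measuring the profile (an assumption here: the target's label in Segment's @code{OBJECTS} extension is 1; check @file{seg.fits} to find the actual label):

@example
$ astnoisechisel image.fits --output=det.fits
$ astsegment det.fits --output=seg.fits
$ astarithmetic image.fits -h1 seg.fits -hOBJECTS \
                1 ne nan where --output=masked.fits
@end example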
@table @option
@item -h STR
@itemx --hdu=STR
The HDU/extension of the input image to use.
@item -o STR
@itemx --output=STR
Filename of measured radial profile.
It can be either a FITS table, or plain-text table (determined from your given file name suffix).
@item -c FLT[,FLT[,...]]
@itemx --center=FLT[,FLT[,...]]
The central position of the radial profile.
This option is used for placing the center of the profiles.
This parameter is used in @ref{Crop} to center and crop the region.
The positions along each dimension must be separated by a comma (@key{,}) and fractions are also acceptable.
The number of values given to this option must be the same as the dimensions of the input dataset.
The units of the coordinates are read based on the value to the @option{--mode} option, see below.
@item -O STR
@itemx --mode=STR
Interpret the center position of the object (values given to @option{--center}) in image or WCS coordinates.
This option thus accepts only two values: @option{img} or @option{wcs}.
By default, it is @option{--mode=img}.
@item -R FLT
@itemx --rmax=FLT
Maximum radius for the radial profile (in pixels).
By default, the radial profile will be computed up to a radial distance equal to the maximum radius that fits into the image (assuming circular shape).
@item -P INT
@itemx --precision=INT
The precision (number of digits after the decimal point) in resolving the radius.
The default value is @option{--precision=0} (or @option{-P0}), and the value cannot be larger than @option{6}.
A higher precision is primarily useful when the very central few pixels are important for you.
A larger precision will over-resolve larger radial regions, causing scatter to significantly affect the measurements.
For example, in the command below, we will generate the radial profile of an imaginary source (at RA,DEC of 1.23,4.567) and check the output without setting a precision:
@example
$ astscript-radial-profile image.fits --center=1.23,4.567 \
--mode=wcs --measure=mean,area --rmax=10 \
--output=radial.fits --quiet
$ asttable radial.fits --head=10 -ffixed -p4
0.0000 0.0139 1
1.0000 0.0048 8
2.0000 0.0023 16
3.0000 0.0015 20
4.0000 0.0011 24
5.0000 0.0008 40
6.0000 0.0006 36
7.0000 0.0005 48
8.0000 0.0004 56
9.0000 0.0003 56
@end example
Let's repeat the command above, but use a precision of 3 to resolve finer details of the radial profile, while only printing the top 10 rows of the profile:
@example
$ astscript-radial-profile image.fits --center=1.23,4.567 \
--mode=wcs --measure=mean,area --rmax=10 \
--precision=3 --output=radial.fits --quiet
$ asttable radial.fits --head=10 -ffixed -p4
0.0000 0.0139 1
1.0000 0.0056 4
1.4140 0.0040 4
2.0000 0.0027 4
2.2360 0.0024 8
2.8280 0.0018 4
3.0000 0.0017 4
3.1620 0.0016 8
3.6050 0.0013 8
4.0000 0.0011 4
@end example
Do you see how many more radii have been added?
Between 1.0 and 2.0, we now have one extra radius, between 2.0 to 3.0, we have two new radii and so on.
If you go to larger and larger radii, you will notice that they get resolved into many sub-components and the number of pixels used in each measurement will not be significant (you can already see that in the comparison above).
This has two problems:
1. Statistically, the scatter at larger radii (where the signal-to-noise ratio is usually low) will make it hard to interpret the profile.
2. Technically, the output table will have many more rows!
@cartouche
@noindent
@strong{Use higher precision only for small radii:} If you want to look at the whole profile (or the outer parts!), don't set the precision, the default mode is usually more than enough!
But when you are targeting the very central few pixels (usually less than a pixel radius of 5), use a higher precision.
@end cartouche
@item -v INT
@itemx --oversample=INT
Oversample the input dataset by the factor given to this option.
Therefore, if you set @option{--rmax=20} for example, and @option{--oversample=5}, your output will have 100 rows (without @option{--oversample}, it will only have 20 rows).
Unless the object is heavily undersampled (the pixels are larger than the actual object), this method provides a much more accurate result, and there will be a sufficient number of pixels to obtain the profile accurately.
Due to the discrete nature of pixels, if you use this option to oversample your profile, set @option{--precision=0}.
Otherwise, your profile will become step-like (with several radii having a single value).
@item -u INT
@itemx --undersample=INT
Undersample the input dataset by the number given to this option.
This option is for considering apertures larger than the original pixel size (by default, the aperture size is 1 pixel).
For example, if a radial profile computed by default has 100 different radii (apertures of 1 pixel width), with @option{--undersample=2} the radial profile will be computed over apertures of 2 pixels, so the final radial profile will have 50 different radii.
This option is good when you want to measure over a larger number of pixels to improve each measurement.
@item -Q FLT
@itemx --axis-ratio=FLT
The axis ratio of the apertures (minor axis divided by the major axis in a 2D ellipse).
By default (when this option is not given), the radial profile will be circular (axis ratio of 1).
This parameter is used as the option @option{--qcol} in the generation of the apertures with @command{astmkprof}.
@item -p FLT
@itemx --position-angle=FLT
The position angle (in degrees) of the profiles relative to the first FITS axis (horizontal when viewed in SAO DS9).
By default, it is @option{--position-angle=0}, which means that the semi-major axis of the profiles will be parallel to the first FITS axis.
@item -a FLT,FLT
@itemx --azimuth=FLT,FLT
@cindex Wedge (radial profile)
@cindex Azimuthal range (radial profile)
Limit the profile to the given azimuthal angle range (in degrees, from 0 to 360) from the major axis (defined by @option{--position-angle}) of each call to this option.
The radial profile will therefore be created on a wedge-like shape, not the full circle/ellipse.
The pixel containing the center of the profile will always be included in the profile (because it contains all azimuthal angles!).
If the first angle is @emph{smaller} than the second (for example, @option{--azimuth=10,80}), the region between, or @emph{inside}, the two angles will be used.
Otherwise (for example, @option{--azimuth=80,10}), the region @emph{outside} the two angles will be used.
The latter case can be useful when you want to ignore part of the 2D shape (for example, due to a bright star that can be contaminating it).
This option can be called more than once; for example, @option{--azimuth=80,100 --azimuth=260,280}.
In such cases, all given azimuthal ranges will be used.
In the example above, two wedge-like shapes will be used to construct the profile: between angles of 80 to 100 degrees, and between 260 and 280 degrees.
You can see the shape of the region used by adding the @option{--keeptmp} option.
Afterwards, you can view the @file{values.fits} and @file{apertures.fits} files of the temporary directory with a FITS image viewer like @ref{SAO DS9}.
You can use @ref{Viewing FITS file contents with DS9 or TOPCAT} to open them together in one instance of DS9, with both frames matched and locked (for easy comparison in case you want to zoom-in or out).
For example, see the commands below (based on your target object, just change the image name, center, position angle, etc.):
@example
## Generate the radial profile
$ astscript-radial-profile image.fits --center=1.234,6.789 \
--mode=wcs --rmax=50 --position-angle=20 \
--axis-ratio=0.8 --azimuth=95,150 --keeptmp \
--tmpdir=radial-tmp
## Visually check the values and apertures used.
$ astscript-fits-view radial-tmp/values.fits \
radial-tmp/apertures.fits
@end example
@item -g
@itemx --polar
Generate a 2D polar-plot image in the second HDU of the output (called @code{POLAR-PLOT}).
A polar plot is a projection of the original pixel grid into polar coordinates (where the horizontal axis is the azimuthal angle and the vertical axis is the radius).
By default it assumes the full azimuthal range (from 0 to 360 degrees); if a narrower azimuthal range is desired, use @option{--azimuth} (for example @option{--azimuth=30,50} to only generate the polar plot between 30 and 50 degrees of azimuth).
The output image contains WCS information to map the pixel coordinates into the polar coordinates.
This is especially useful when the azimuthal range is not the full range: the first pixel in the horizontal axis is not 0 degrees.
Currently, the polar plot cannot be used with the @option{--oversample} and @option{--undersample} options (please get in touch with us if you need it).
Until it is implemented, you can use the @option{--scale} option of @ref{Warp} to do the oversampling of the input image yourself and generate the polar plot from that.
A comprehensive overview of this option has been published in Eskandarlou et al. @url{https://arxiv.org/abs/2406.14619,2024}.
If this option proves useful in your research, please cite that paper.
@item -m STR
@itemx --measure=STR
The operator for measuring the values over each radial distance.
The values given to this option will be directly passed to @ref{MakeCatalog}.
As a consequence, all MakeCatalog measurements like the magnitude, magnitude error, median, mean, signal-to-noise ratio (S/N), std, surface brightness, sigclip-mean, and sigclip-number can be used here.
For a full list of MakeCatalog's measurements, please run @command{astmkcatalog --help} or see @ref{MakeCatalog measurements on each label}.
Multiple values can be given to this option, each separated by a comma.
This option can also be called multiple times.
@cartouche
@noindent
@strong{Masking background/foreground objects:} For crude rejection of outliers, you can use sigma-clipping using MakeCatalog measurements like @option{--sigclip-mean} or @option{--sigclip-mean-sb} (see @ref{MakeCatalog measurements on each label}).
To properly mask the effect of background/foreground objects from your target object's radial profile, you can use @command{astscript-psf-stamp} script, see @ref{Invoking astscript-psf-stamp}, and feed it the output of @ref{Segment}.
This script will mask unwanted objects from the image that is later used to measure the radial profile.
@end cartouche
Some measurements by MakeCatalog require a per-pixel sky standard deviation (for example, magnitude error or S/N).
Therefore, when asking for such measurements, use the @option{--instd} option (described below) to specify the sky standard deviation over each pixel.
For other measurements like the magnitude or surface brightness, MakeCatalog will need a zero point, which you can set with the @option{--zeropoint} option.
For example, by setting @option{--measure=mean,sigclip-mean --measure=median}, the mean, sigma-clipped mean and median values will be computed.
The output radial profile will have four columns in this order: radial distance, mean, sigma-clipped mean, and median.
By default (when this option is not given), the mean of all pixels at each radial position will be computed.
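For example, the command below is a sketch that asks for one measurement needing the zero point (magnitude) along with one needing the standard deviation (S/N); the file names, center and values here are hypothetical:

@example
$ astscript-radial-profile image.fits --mode=img \
           --center=512,512 --rmax=50 \
           --measure=magnitude,sn \
           --zeropoint=22.5 --instd=sky-std.fits --stdhdu=1
@end example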
@item -s FLT,FLT
@itemx --sigmaclip=FLT,FLT
Sigma clipping parameters: only relevant if sigma-clipping operators are requested by @option{--measure}.
For more on sigma-clipping, see @ref{Sigma clipping}.
If given, the value to this option is directly passed to the @option{--sigmaclip} option of @ref{MakeCatalog}, see @ref{MakeCatalog inputs and basic settings}.
By default (when this option is not given), the default values within MakeCatalog will be used.
To see the default value of this option in MakeCatalog, you can run this command:
@example
$ astmkcatalog -P | grep " sigmaclip "
@end example
@item -z FLT
@itemx --zeropoint=FLT
The zero point of the input dataset.
This is necessary when you request measurements like magnitude or surface brightness.
@item -Z
@itemx --zeroisnotblank
Account for zero-valued pixels in the profile.
By default, such pixels are not considered (when this script crops the necessary region of the image before generating the profile).
The long format of this option is identical to a similarly named option in Crop (see @ref{Invoking astcrop}).
When this option is called, it is passed directly to Crop, therefore the zero-valued pixels are not considered as blank and used in the profile creation.
@item -i FLT/STR
@itemx --instd=FLT/STR
Sky standard deviation as a single number (FLT) or as the filename (STR) containing the image with the std value for each pixel (the HDU within the file should be given to the @option{--stdhdu} option mentioned below).
This is only necessary when the requested measurement (value given to @option{--measure}) by MakeCatalog needs the Standard deviation (for example, the signal-to-noise ratio or magnitude error).
If your measurements do not require a standard deviation, it is best to ignore this option (because it will slow down the script).
@item -d INT/STR
@itemx --stdhdu=INT/STR
HDU/extension of the sky standard deviation image specified with @option{--instd}.
@item -t STR
@itemx --tmpdir=STR
Several intermediate files are necessary to obtain the radial profile.
All of these temporary files are saved in a temporary directory.
With this option, you can directly specify this directory.
By default (when this option is not called), it will be built in the running directory and given an input-based name.
If the directory does not exist at run-time, this script will create it.
Once the radial profile has been obtained, this directory is removed.
You can disable the deletion of the temporary directory with the @option{--keeptmp} option.
@item -k
@itemx --keeptmp
Do not delete the temporary directory (see description of @option{--tmpdir} above).
This option is useful for debugging.
For example, to check that the profiles generated for obtaining the radial profile have the desired center, shape and orientation.
@item --cite
Give BibTeX and acknowledgment information for citing this script within your paper.
For more, see @ref{Operating mode options}.
@end table
@node SAO DS9 region files from table, Viewing FITS file contents with DS9 or TOPCAT, Generate radial profile, Installed scripts
@section SAO DS9 region files from table
Once your desired catalog (containing the positions of some objects) is created (for example, with @ref{MakeCatalog}, @ref{Match}, or @ref{Table}), it often happens that you want to see your selected objects on an image to get a feeling of their spatial properties.
For example, you want to see their positions relative to each other.
In this section we describe a simple installed script that is provided within Gnuastro for converting your given columns to an SAO DS9 region file to help in this process.
SAO DS9@footnote{@url{http://ds9.si.edu}} is one of the most common FITS image visualization tools in astronomy and is free software.
@menu
* Invoking astscript-ds9-region:: How to call astscript-ds9-region
@end menu
@node Invoking astscript-ds9-region, , SAO DS9 region files from table, SAO DS9 region files from table
@subsection Invoking astscript-ds9-region
This installed script will read two positional columns within an input table and generate an SAO DS9 region file to visualize the position of the given objects over an image.
For more on installed scripts, see @ref{Installed scripts}.
This script can be used with the following general template:
@example
## Use the RA and DEC columns of 'table.fits' for the region file.
$ astscript-ds9-region table.fits --column=RA,DEC \
--output=ds9.reg
## Select objects with a magnitude between 18 to 20, and generate the
## region file directly (through a pipe), each region with radius of
## 0.5 arcseconds.
$ asttable table.fits --range=MAG,18:20 --column=RA,DEC \
| astscript-ds9-region --column=1,2 --radius=0.5
## With the first command, select objects with a magnitude of 25 to 26
## as red regions in 'bright.reg'. With the second command, select
## objects with a magnitude between 28 to 29 as a green region and
## show both.
$ asttable cat.fits --range=MAG_F160W,25:26 -cRA,DEC \
| astscript-ds9-region -c1,2 --color=red -obright.reg
$ asttable cat.fits --range=MAG_F160W,28:29 -cRA,DEC \
| astscript-ds9-region -c1,2 --color=green \
--command="ds9 image.fits -regions bright.reg"
@end example
The input can either be passed as a named file, or from standard input (a pipe).
Only the @option{--column} option is mandatory (to specify the input table columns): two columns from the input table must be specified, either by name (recommended) or number.
You can optionally also specify the region's radius, width and color of the regions with the @option{--radius}, @option{--width} and @option{--color} options, otherwise default values will be used for these (described under each option).
The created region file will be written into the file name given to @option{--output}.
When @option{--output} is not called, the default name of @file{ds9.reg} will be used (in the running directory).
If the file exists before calling this script, it will be overwritten, unless you pass the @option{--dontdelete} option.
Optionally you can also use the @option{--command} option to give the full command that should be run to execute SAO DS9 (see example above and description below).
In this mode, the created region file will be deleted once DS9 is closed (unless you pass the @option{--dontdelete} option).
A full description of each option is given below.
@table @option
@item -h INT/STR
@itemx --hdu=INT/STR
The HDU of the input table when a named FITS file is given as input.
The HDU (or extension) can be either a name or number (counting from zero).
For more on this option, see @ref{Input output options}.
@item -c STR,STR
@itemx --column=STR,STR
Identifiers of the two positional columns to use in the DS9 region file from the table.
They can either be in WCS (RA and Dec) or image (pixel) coordinates.
The mode can be specified with the @option{--mode} option, described below.
@item -n STR
@itemx --namecol=STR
The column containing the name (or label) of each region.
The type of the column (numeric or a character-based string) is irrelevant: you can use both types of columns as a name or label for the region.
This feature is useful when you need to recognize each region with a certain ID or property (for example, magnitude or redshift).
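For example, with the sketch below (assuming @file{cat.fits} contains columns called @code{RA}, @code{DEC} and @code{MAG}), each region will be labeled with the object's magnitude:

@example
$ astscript-ds9-region cat.fits --column=RA,DEC \
           --namecol=MAG --output=labeled.reg
@end example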
@item -m wcs|img
@itemx --mode=wcs|img
The coordinate system of the positional columns (it can be either @option{--mode=wcs} or @option{--mode=img}).
In the WCS mode, the values within the columns are interpreted to be RA and Dec.
In the image mode, they are interpreted to be pixel X and Y positions.
This option also affects the interpretation of the value given to @option{--radius}.
When this option is not explicitly given, the columns are assumed to be in WCS mode.
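For example, assuming your table contains pixel positions in columns called @code{X} and @code{Y}, the sketch below will build regions with a radius of 5 pixels:

@example
$ astscript-ds9-region table.fits --column=X,Y \
           --mode=img --radius=5
@end example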
@item -C STR
@itemx --color=STR
The color to use for created regions.
This value is directly interpreted by SAO DS9 when it opens the region file, so it must be recognizable by DS9.
As of SAO DS9 8.2, the recognized color names are @code{black}, @code{white}, @code{red}, @code{green}, @code{blue}, @code{cyan}, @code{magenta} and @code{yellow}.
The default color (when this option is not called) is @code{green}.
@item -w INT
@itemx --width=INT
The line width of the regions.
This value is directly interpreted by SAO DS9 when it opens the region file, so it must be recognizable by DS9.
The default value is @code{1}.
@item -r FLT
@itemx --radius=FLT
The radius of all the regions.
In WCS mode, the radius is assumed to be in arc-seconds, in image mode, it is in pixel units.
If this option is not explicitly given, the default radius is 1 arc-second in WCS mode, and 3 pixels in image mode.
@item -s INT
@itemx --fontsize=INT
The size of the display font for the name column (only relevant when @option{--namecol} is given).
When not given, the default font size of 12 points will be applied.
@item --dontdelete
If the output file name exists, abort the program and do not over-write the contents of the file.
This option is thus good if you want to avoid accidentally writing over an important file.
Also, do not delete the created region file when @option{--command} is given (by default, when @option{--command} is given, the created region file will be deleted after SAO DS9 closes).
@item -o STR
@itemx --output=STR
Write the created SAO DS9 region file into the name given to this option.
If not explicitly given on the command-line, a default name of @file{ds9.reg} will be used.
If the file already exists, it will be over-written; you can avoid the deletion (or over-writing) of an existing file with the @option{--dontdelete} option.
@item --command="STR"
After creating the region file, run the string given to this option as a command-line command.
The SAO DS9 region command will be appended to the end of the given command.
Because the command will most likely contain white-space characters, it is recommended to put the given string in double quotations.
For example, let's assume @option{--command="ds9 image.fits -zscale"}.
After making the region file (assuming it is called @file{ds9.reg}), the following command will be executed:
@example
ds9 image.fits -zscale -regions ds9.reg
@end example
You can customize all aspects of SAO DS9 with its command-line options, therefore the value of this option can be as long and complicated as you like.
For example, if you also want the image to fit into the window, this option will be: @command{--command="ds9 image.fits -zscale -zoom to fit"}.
You can see the SAO DS9 command-line descriptions by clicking on the ``Help'' menu and selecting ``Reference Manual''.
In the opened window, click on ``Command Line Options''.
@end table
@node Viewing FITS file contents with DS9 or TOPCAT, Zero point estimation, SAO DS9 region files from table, Installed scripts
@section Viewing FITS file contents with DS9 or TOPCAT
@cindex Multi-Extension FITS
@cindex Opening multi-extension FITS
The FITS definition allows for multiple extensions (or HDUs) inside one FITS file.
Each HDU can have a completely independent dataset inside of it.
One HDU can be a table, another can be an image and another can be another independent image.
For example, each image HDU can be one CCD of a multi-CCD camera, or in processed images one can be the deep science image and the next can be its weight map, alternatively, one HDU can be an image, and another can be the catalog/table of objects within it.
The most common software for viewing FITS images is SAO DS9 (see @ref{SAO DS9}) and for plotting tables, TOPCAT is the most commonly used tool in astronomy (see @ref{TOPCAT}).
After installing them (as described in the respective appendix linked in the previous sentence), you can open any number of FITS images or tables with DS9 or TOPCAT with the commands below:
@example
$ ds9 image-a.fits image-b.fits
$ topcat table-a.fits table-b.fits
@end example
But usually the default mode is not enough.
For example, in DS9, the window can be too small (not covering the height of your monitor), you probably want to match and lock multiple images, you have a favorite color map that you prefer to use, or you may want to open a multi-extension FITS file as a cube.
Using the simple commands above, you need to manually do all these in the DS9 window once it opens and this can take several tens of seconds (which is enough to distract you from what you wanted to inspect).
For example, if you have a multi-extension file containing 2D images, one way to load and switch between each 2D extension is to take the following steps in the SAO DS9 window: @clicksequence{``File''@click{}``Open Other''@click{}``Open Multi Ext Cube''} and then choose the Multi extension FITS file in your computer's file structure.
@cindex @option{-mecube} (DS9)
The method above is a little tedious to do every time you want to view a multi-extension FITS file.
A different series of steps is also necessary if the extensions are 3D data cubes (since they are already cubes, they should be opened as multiple frames).
Furthermore, if you have multiple images and want to ``match'' and ``lock'' them (so when you zoom-in to one, all get zoomed-in), you will need several other sequences of menus and clicks.
Fortunately SAO DS9 also provides command-line options that you can use to specify a particular behavior before/after opening a file.
One of those options is @option{-mecube} which opens a FITS image as a multi-extension data cube (treating each 2D extension as a slice in a 3D cube).
This allows you to flip through the extensions easily while keeping all the settings similar.
Just to avoid confusion, note that SAO DS9 does not follow the GNU style of separating long and short options as explained in @ref{Arguments and options}.
In the GNU style, this `long' (multi-character) option should have been called like @option{--mecube}, but SAO DS9 follows its own conventions.
For example, try running @command{ds9 -mecube foo.fits} to see the effect (for example, on the output of @ref{NoiseChisel}).
If the file has multiple extensions, a small window will also be opened along with the main DS9 window.
This small window allows you to slide through the image extensions of @file{foo.fits}.
If @file{foo.fits} only consists of one extension, then SAO DS9 will open as usual.
On the other hand, for visualizing the contents of tables (that are also commonly stored in the FITS format), you need to call a different software (most commonly, people use TOPCAT, see @ref{TOPCAT}).
And to make things more inconvenient, by default both of these are only installed as command-line software, so while you are navigating in your GUI, you need to open a terminal there, and run these commands.
All of the issues above are the founding purpose of the installed script that is introduced in @ref{Invoking astscript-fits-view}.
@menu
* Invoking astscript-fits-view:: How to call this script
@end menu
@node Invoking astscript-fits-view, , Viewing FITS file contents with DS9 or TOPCAT, Viewing FITS file contents with DS9 or TOPCAT
@subsection Invoking astscript-fits-view
Given any number of FITS files, this script will either open SAO DS9 (for images or cubes) or TOPCAT (for tables) to visualize their contents in a graphic user interface (GUI).
For more on installed scripts, see @ref{Installed scripts}.
This script can be used with the following general template:
@example
$ astscript-fits-view [OPTION] input.fits [input-b.fits ...]
@end example
@noindent
One line examples:
@example
## Call TOPCAT to load all the input FITS tables.
$ astscript-fits-view table-*.fits
## Call SAO DS9 to open all the input FITS images.
$ astscript-fits-view image-*.fits
@end example
This script will use Gnuastro's @ref{Fits} program to see if the file is a table or image.
If the first input file contains an image HDU, then the sequence of files will be given to @ref{SAO DS9}.
Otherwise, the input(s) will be given to @ref{TOPCAT} to visualize (plot) as tables.
When opening DS9 it will also inspect the dimensionality of the first image HDU of the first input and open it slightly differently when the input is 2D or 3D:
@table @asis
@item 2D
DS9's @option{-mecube} will be used to open all the 2D extensions of each input file as a ``Multi-extension cube''.
A ``Cube'' window will also be opened with DS9 that can be used to slide/flip through the extensions.
When multiple files are given, each file will be in one ``frame''.
@item 3D
DS9's @option{-multiframe} option will be used to open all the extensions in a separate ``frame'' (since each input is already a 3D cube, the @option{-mecube} option can be confusing).
To flip through the extensions (while keeping the slice fixed), click the ``frame'' button on the top row of buttons, then use the last four buttons of the bottom row ("first", "previous", "next" and "last") to change between the extensions.
If multiple files are given, there will be a separate frame for each HDU of each input (each HDU's name or number will be put in square brackets after the file name).
@end table
@cartouche
@noindent
@strong{Double-clicking on FITS file to open DS9 or TOPCAT:} for graphic user interfaces (GUIs) that follow the freedesktop.org standards (including GNOME, KDE Plasma, or Xfce), Gnuastro installs a @file{fits-view.desktop} file to instruct your GUI to call this script for opening FITS files when you click on them.
To activate this feature take the following steps:
@enumerate
@item
Run the following command, while replacing @code{PREFIX}.
If you do not know what to put in @code{PREFIX}, run @command{which astfits} on the command-line, and extract @code{PREFIX} from the output (the string before @file{/bin/astfits}).
For more, see @ref{Installation directory}.
@example
ln -sf PREFIX/share/gnuastro/astscript-fits-view.desktop \
~/.local/share/applications/
@end example
@item
Right-click on a FITS file, and choose these items in order (based on GNOME, may be different in KDE or Xfce): @clicksequence{``Open with other application''@click{}``View all applications''@click{}``astscript-fits-view''}.
@end enumerate
@end cartouche
@cartouche
@noindent
@strong{TOPCAT on 4K GUI:} by default, the desktop file that you install for this script (to be able to double click on a FITS file and open it in DS9 or TOPCAT) doesn't include the @option{--topcat4k} option.
If you want to have this option when double-clicking on a FITS file, open @file{~/.local/share/applications/astscript-fits-view.desktop} in your favorite text editor, comment the default @code{Exec} line and un-comment the line with this option.
@end cartouche
@noindent
This script takes the following options:
@table @option
@item -h STR
@itemx --hdu=STR
The HDU(s), or extension(s), of the input dataset(s) to display.
The value can be the HDU name (a string) or number (the first HDU is counted from 0).
If there are multiple inputs, this option needs to be called multiple times: the first input will be opened with the first call to this option, the second input with the second call, and so on.
If you want to open the same HDU of all your inputs, you do not need to repeat this option; use @option{--globalhdu} instead.
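For example, the command below will open the second extension of @file{a.fits} and the extension called @code{SKY} of @file{b.fits} (the file and HDU names here are hypothetical):

@example
$ astscript-fits-view a.fits b.fits --hdu=1 --hdu=SKY
@end example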
@item -g STR
@itemx --globalhdu=STR
The HDU/extension name or number to use for all inputs.
Note that HDU counting starts from 0.
If @option{--hdu} is called, it takes precedence over this option.
@item -p STR
@itemx --prefix=STR
Directory to search for SAO DS9 or TOPCAT's executables (assumed to be @command{ds9} and @command{topcat}).
If not called they will be assumed to be present in your @file{PATH} (see @ref{Installation directory}).
If you do not have them already installed, their installation directories are available in @ref{SAO DS9} and @ref{TOPCAT} (they can be installed in non-system-wide locations that do not require administrator/root permissions).
@item -s STR
@itemx --ds9scale=STR
The string to give to DS9's @option{-scale} option.
You can use this option to use a different scaling.
The Fits-view script will place @option{-scale} before your given string when calling DS9.
If you do not call this option, the default behavior is to call DS9 with @option{-scale mode zscale} (in other words, @option{--ds9scale="mode zscale"} is the default of this script).
The Fits-view script has the following aliases to simplify the calling of this option (and avoid the double-quotations and @code{mode} in the example above):
@table @option
@item zscale
or @option{--ds9scale=zscale} equivalent to @option{--ds9scale="mode zscale"}.
@item minmax
or @option{--ds9scale=minmax} equivalent to @option{--ds9scale="mode minmax"}.
@end table
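For example, either of the two equivalent commands below will scale the image between its minimum and maximum values:

@example
$ astscript-fits-view image.fits --ds9scale=minmax
$ astscript-fits-view image.fits --ds9scale="mode minmax"
@end example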
@item -c FLT,FLT
@itemx --ds9center=FLT,FLT
The central coordinate for DS9's view of the FITS image after it opens.
This is equivalent to the ``Pan'' button in DS9.
The nature of the coordinates will be determined by the @option{--ds9mode} option that is described below.
@item -r STR[,STR[,STR]]
@itemx --ds9region=STR[,STR[,STR]]
Name of DS9 region file(s) to load within the DS9 window once it opens.
Any number of region files can be loaded with a single call to this option (separated by commas).
@item -O img/wcs
@itemx --ds9mode=img/wcs
The coordinate system (or mode) to interpret the values given to @option{--ds9center}.
This can either be @option{img} (or DS9's ``Image'' coordinates) or @option{wcs} (or DS9's ``wcs icrs'' coordinates).
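For example, the sketch below (with hypothetical coordinates) will open DS9 with its view panned to the given sky position:

@example
$ astscript-fits-view image.fits --ds9mode=wcs \
           --ds9center=53.1625,-27.7914
@end example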
@item -g INTxINT
@itemx --ds9geometry=INTxINT
The initial DS9 window geometry (value to DS9's @option{-geometry} option).
@item -m
@itemx --ds9colorbarmulti
Do not show a single color bar for all the loaded images.
By default this script will call DS9 in a way that a single color bar is shown for any number of images.
A single color bar is preferred for two reasons: 1) when there are a lot of images, separate color bars consume a large fraction of the display area; 2) the color bars are locked by this script, so there is no difference between them!
With this option, you can have separate color bars under each image.
@item -e STR
@itemx --ds9extra=STR
Any other custom options you would like to directly pass to DS9.
This option can be called multiple times.
The string(s) given to this option will be the final component of the DS9 command that this script generates.
This allows you to over-ride/customize previously set options (for example in large scripts).
Besides enabling all DS9 options that don't have a high-level wrapper in this script, you can also use this option to re-order the way the options of this script call DS9.
This is because this script ignores the order in which you call its options, and will always put the high-level functions in a fixed order.
Before calling DS9, this script will print the command that it executes so you can see the order explicitly before DS9 opens.
Since DS9 does not recognize a @key{=} between the option name and value, a @key{SPACE} is usually necessary.
Therefore, be sure to put them in a single string within double quotes (so the spaces are preserved and passed to DS9 correctly).
For example usage of this option within this manual, see @ref{Column statistics color-magnitude diagram} or @ref{Subtracting the PSF}.
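As a simple demonstration, the sketch below passes DS9's @option{-grid yes} option through this script, so the image opens with a coordinate grid drawn over it:

@example
$ astscript-fits-view image.fits --ds9extra="-grid yes"
@end example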
@item -k
@itemx --topcat4k
@cindex 4K monitors
Scale the TOPCAT window by 2 so it becomes easily usable on smaller 4K monitors (for example laptops).
By default, TOPCAT will use the native pixels, which can become prohibitively small on such monitors.
@end table
@node Zero point estimation, Pointing pattern simulation, Viewing FITS file contents with DS9 or TOPCAT, Installed scripts
@section Zero point estimation
@cindex Zero point
@cindex Calibration
@cindex Astrometry
Through the ``zero point'', we are able to give physical units to the pixel values of an image (often in units of ``counts'' or ADUs) and thus compare them with other images (as well as measurements that are done on them).
The zero point is therefore an important calibration of pixel values (as astrometry is a calibration of the pixel positions).
The fundamental concepts behind the zero point are described in @ref{Brightness flux magnitude}.
We will therefore not go deeper into the basics here and stick to the practical aspects of it.
The purpose of Gnuastro's @command{astscript-zeropoint} script is to obtain the zero point of an image using a reference image or a reference catalog.
The reference image must have a known zero point, and the reference catalog should contain the known positions and magnitudes of stars; in either case, the reference should of course have some spatial overlap (on the sky) with the input image.
The reference data (catalog or image) serve as the anchor point to which your input image will be calibrated (a zero point will be found for it).
By comparing the instrumental magnitudes of the stars with their cataloged (standard) magnitudes, the differences can be calculated and used to determine the zero point.
For example, when using a reference image, the script will take the following steps:
@enumerate
@item
Download the Gaia catalog that overlaps with the input image using Gnuastro's Query program (see @ref{Query}).
This is done to determine the stars within the image@footnote{Stars have an almost identical shape in the image (as opposed to galaxies for example), using confirmed stars will produce a more reliable result.}.
@item
Perform aperture photometry@footnote{For a complete tutorial on aperture photometry, see @ref{Aperture photometry}.} with @ref{MakeProfiles} and @ref{MakeCatalog}.
We will assume a zero point of 0 for the input image.
If the reference is an image, aperture photometry should also be performed on it.
@item
Match the two catalogs@footnote{For a tutorial on matching catalogs, see @ref{Matching catalogs}.} with @ref{Match}.
@item
The difference between the input and reference magnitudes should be independent of the magnitude of the stars.
This does not hold when the stars are saturated in one/both the images (giving us a bright-limit for the magnitude range to use) or for stars fainter than a certain magnitude, where the signal-to-noise ratio drops significantly in one/both images (giving us a faint limit for the magnitude range to use).
@item
Since the input image was assigned a zero point of zero, the magnitude difference shown above (within the reliable magnitude range) corresponds to the zero point of the input image.
@end enumerate
In the ``Tutorials'' chapter of this Gnuastro book, there are two tutorials dedicated to the usage of this script.
The first uses an image as a reference (@ref{Zero point tutorial with reference image}) and the second uses a catalog (@ref{Zero point tutorial with reference catalog}).
For the full set of options and a detailed description of each, see @ref{Invoking astscript-zeropoint}.
@menu
* Invoking astscript-zeropoint:: How to call the script
@end menu
@node Invoking astscript-zeropoint, , Zero point estimation, Zero point estimation
@subsection Invoking astscript-zeropoint
This installed script will calculate the zero point of an input image to calibrate it.
A general overview of this script has been published in Eskandarlou et al. @url{https://arxiv.org/abs/2312.04263,2023}; please cite it if this script proves useful in your research.
The reference can be an image or a catalog (which has been previously calibrated).
The executable name is @command{astscript-zeropoint}, with the following general template:
@example
## Using a reference image in four apertures.
$ astscript-zeropoint image.fits --hdu=1 \
--refimgs=ref-img1.fits,ref-img2.fits \
--refimgshdu=1,1 \
--refimgszp=22.5,22.5 \
--aperarcsec=1.5,2,2.5,3 \
--magnituderange=16,18 \
--output=output.fits
## Using a reference catalog
$ astscript-zeropoint image.fits --hdu=1 \
--refcat=cat.fits \
--refcathdu=1 \
--aperarcsec=1.5,2,2.5,3 \
--magnituderange=16,18 \
--output=output.fits
@end example
To learn more about the core concepts behind the zero point, please see @ref{Brightness flux magnitude}.
For a practical review of how to optimally use this script and ways to interpret its results, we have two tutorials: @ref{Zero point tutorial with reference image} and @ref{Zero point tutorial with reference catalog}.
To find the zero point of your input image, this script can use a reference image (that already has a zero point) or a reference catalog (that just has magnitudes).
In any case, it is mandatory to identify at least one aperture for aperture photometry over the image (using @option{--aperarcsec}).
If reference image(s) is(are) given, it is mandatory to specify its(their) zero point(s) using the @option{--refimgszp} option (it can take a separate value for each reference image).
When a catalog is given, it should already contain the magnitudes of the objects (you can specify which column to use).
This script will not estimate the zero point based on all the objects in the reference image or catalog.
It will first query the Gaia database and only select objects that have a significant parallax (because Gaia's algorithms sometimes confuse galaxies and stars based on pure morphology).
You can bypass this step (which needs internet connection and can only be used on real data, not simulations) using the @option{--starcat} option described in @ref{zero point options}.
This script will then match the catalog of stars (either through Gaia or @option{--starcat}) with the reference catalog and only use them.
If the reference is an image, it will simply use the stars catalog to do aperture photometry.
By default, this script will estimate the number of available threads and run all independent steps in parallel on those threads.
To control this behavior (and allow it to only run on a certain number of threads), you can use the @option{--numthreads} option.
During its operation, this script will build a temporary directory in the running directory that will be deleted once it is finished.
The @option{--tmpdir} option can be used to manually place that temporary directory anywhere in your file system.
The @option{--keeptmp} option can be used to stop the deletion of that directory (useful for when you want to debug the script or better understand what it does).
@menu
* zero point output:: Format of the output.
* zero point options:: List and details of options.
@end menu
@node zero point output, zero point options, Invoking astscript-zeropoint, Invoking astscript-zeropoint
@subsubsection astscript-zeropoint output
The output will be a multi-extension FITS table.
The first table in the output gives the zero point and its standard deviation for all the requested apertures.
This gives you the ability to inspect them and select the best.
The other table(s) give the exact measurements for each star that was used (if you use @option{--keepzpap}, this will be for all your apertures; if not, only for the aperture with the smallest standard deviation).
For a full tutorial on how to interpret the output of this script, see @ref{Zero point tutorial with reference image}.
If you just want the estimated zero point with the least standard deviation, this script will write it as a FITS keyword in the first table of the output.
@table @code
@item ZPAPER
Read as ``Zero Point APERture''.
This shows the aperture radius (in arcseconds) that had the smallest standard deviation in the estimated zero points.
@item ZPVALUE
The zero point estimation for the aperture of @code{ZPAPER}.
@item ZPSTD
The standard deviation of the zero point (for all the stars used, within the aperture of @code{ZPAPER}).
@item ZPMAGMIN
The minimum (brightest) magnitude used to estimate the zero point.
@item ZPMAGMAX
The maximum (faintest) magnitude used to estimate the zero point.
@end table
@noindent
A simple way to see these keywords, or to read the value of one, is shown below.
For more on viewing and manipulating FITS keywords, see @ref{Keyword inspection and manipulation}.
@example
## See all the keywords written by this script (that start with 'ZP')
$ astfits out.fits -h1 | grep ^ZP
## If you just want the zero point
$ astfits jplus-zeropoint.fits -h1 --keyvalue=ZPVALUE
@end example
@node zero point options, , zero point output, Invoking astscript-zeropoint
@subsubsection astscript-zeropoint options
All the operating phases of this script can be customized through the options below.
@table @option
@item -h STR/INT
@itemx --hdu=STR/INT
The HDU/extension of the input image to use.
@item -o STR
@itemx --output=STR
The name of the output file produced by this script.
See @ref{zero point output} for the format of its contents.
@item -N INT
@itemx --numthreads=INT
The number of threads to use.
By default this script will attempt to find the number of available threads at run-time and will use them.
@item -a FLT,[FLT]
@itemx --aperarcsec=FLT,[FLT]
The radius/radii (in arc seconds) of aperture(s) used in aperture photometry of the input image.
This option can take many values (to check different apertures and find the best for a more accurate zero point estimation).
If a reference image is used, the same aperture radii will be used for aperture photometry there.
@item -M FLT,FLT
@itemx --magnituderange=FLT,FLT
Range of the magnitude for finding the best aperture and zero point.
Very bright stars get saturated and fainter stars are affected too much by noise.
Therefore, it is important to limit the range of magnitudes used in estimating the zero point.
A full tutorial is given in @ref{Zero point tutorial with reference image}.
@item -S STR
@itemx --starcat=STR
Name of catalog containing the RA and Dec of positions for aperture photometry in the input image and reference (catalog or image).
If not given, the Gaia database will be queried for all stars that overlap with the input image (see @ref{Available databases}).
This option is therefore useful in the following scenarios (among others):
@itemize
@item
No internet connection.
@item
Many images having a major overlap in the sky, making it inefficient to query Gaia for every image separately: you can query the larger area (containing all the images) once, and directly give the downloaded table to all the separate runs of this script.
Especially if the field is wide, the download time can be the slowest part of this script.
@item
In simulations (where you have a pre-defined list of stars).
@end itemize
Through the @option{--starcathdu}, @option{--starcatra} and @option{--starcatdec} options described below, you can specify the HDU, RA column and Dec column within this file.
The reference image or catalog probably has many objects that are not stars.
But it is only stars that have the same shape (the PSF) across the image@footnote{The PSF itself can vary across the field of view; but that is second-order for this analysis.}.
Therefore, whether the positions come from Gaia or from @option{--starcat}, only confirmed stars are used for the aperture photometry.
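For example, in the second scenario above, you can query Gaia once with Gnuastro's Query program and feed the same downloaded table to all the separate runs of this script (the sketch below uses hypothetical coordinates, file and column names):

@example
## Download Gaia DR3 positions over a 1 degree radius.
$ astquery gaia --dataset=dr3 --center=53.16,-27.79 \
           --radius=1 --column=ra,dec --output=stars.fits

## Use the downloaded table for the zero point estimation.
$ astscript-zeropoint image.fits --hdu=1 \
           --starcat=stars.fits --refcat=cat.fits \
           --refcathdu=1 --refcatra=RA --refcatdec=DEC \
           --refcatmag=MAG --aperarcsec=2 \
           --magnituderange=16,18 --output=zp.fits
@end example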
@item --starcathdu=STR/INT
The HDU name or number in file given to @option{--starcat} (described above) that contains the table of RA and Dec positions for aperture photometry.
If not given, it is assumed that the table is in HDU number 1 (counting from 0).
@item --starcatra=STR/INT
The column name or number (in the table given to @option{--starcat}) that contains the Right Ascension.
@item --starcatdec=STR/INT
The column name or number (in the table given to @option{--starcat}) that contains the Declination.
@item -c STR
@itemx --refcat=STR
Reference catalog used to estimate the zero point of the input image.
This option is mutually exclusive with (cannot be given at the same time as) @option{--refimgs}.
This catalog should have RA, Dec and Magnitude of the stars (that match with Gaia or @option{--starcat}).
@item -C STR/INT
@itemx --refcathdu=STR/INT
The HDU/extension of the reference catalog (given to @option{--refcat}).
@item -r STR
@itemx --refcatra=STR
Right Ascension column name of the reference catalog.
@item -d STR
@itemx --refcatdec=STR
Declination column name of the reference catalog.
@item -m STR
@itemx --refcatmag=STR
Magnitude column name of the reference catalog.
@item -s FLT
@itemx --matchradius=FLT
Radius (in arc seconds) for matching the stars with the reference catalog.
By default it is 0.2 arc seconds.
@item -R STR,[STR]
@itemx --refimgs=STR,[STR]
Reference image(s) for estimating the zero point.
This option can take any number of separate file names, separated by a comma.
The HDUs of the reference images should be given to the @option{--refimgshdu} option.
In case the images are in separate HDUs of the same file, you need to repeat the file name here.
This option is mutually exclusive with (cannot be given at the same time as) @option{--refcat}.
@item -H STR/INT
@itemx --refimgshdu=STR/INT
HDU/extension name or number of the reference image(s).
The number of values given to this option should be the same as the number of reference image(s).
@item -z FLT,[FLT]
@itemx --refimgszp=FLT,[FLT]
Zero point of the reference image(s).
The number of values given to this should be the same as the number of names given to @option{--refimgs}.
@item -K
@itemx --keepzpap
Keep the table of separate zero points found for each star for all apertures.
By default, this table is only present for the aperture that had the least standard deviation in the estimated zero point.
@item -t STR
@itemx --tmpdir=STR
Directory to keep temporary files during the execution of the script.
If the directory does not exist at run-time, this script will create it.
By default, upon completion of the script, this directory will be deleted.
However, if you would like to keep the intermediate files, you can use the @option{--keeptmp} option.
@item -k
@itemx --keeptmp
Do not remove the temporary directory (see the description of @option{--tmpdir} above).
This option is useful for debugging and checking the outputs of internal steps.
@item --mksrc=STR
Use a non-standard Makefile for the script to call.
This option is primarily useful during the development of this script and its Makefile, not for normal/regular usage.
So if you are not developing this script, you can safely ignore this option.
When this option is given, the default installed Makefile will not be used: the file given to this option will be read by @command{make} (within the script) instead.
@item --cite
Give BibTeX and acknowledgment information for citing this script within your paper.
For more, see @ref{Operating mode options}.
@end table
@node Pointing pattern simulation, Color images with gray faint regions, Zero point estimation, Installed scripts
@section Pointing pattern simulation
@cindex Depth of data
Astronomical images are often composed of many single exposures.
When the science topic does not depend on the time of observation (for example galaxy evolution), after completing the observations, we coadd those single exposures into one ``deep'' image.
Designing the strategy to take those single exposures is therefore a very important aspect of planning your astronomical observation.
There are many reasons for taking many short exposures instead of one long exposure:
@itemize
@item
Modern astronomical telescopes have very high precision (with pixels that are often much smaller than an arc-second, or 1/3600 of a degree).
However, due to the Earth's rotation, the sky appears to move at the high speed of roughly 15 degrees every hour!
Keeping the (often very large!) telescopes tracking this fast-moving sky is not easy, such that most cannot maintain accurate tracking for more than about 10 minutes.
@item
@cindex Seeing
For ground-based observations, the turbulence of the atmosphere changes very fast (on the scale of minutes!).
So if you plan to observe for 10 minutes and the seeing is good at the start of your observation, it may happen that by the 8th minute it becomes bad.
This will affect the quality of your final exposure!
@item
@cindex Vignetting
When an exposure is taken, the instrument/environment imprint a lot of artifacts on it.
One common example that we also see in normal cameras is @url{https://en.wikipedia.org/wiki/Vignetting, vignetting}, where the center receives a larger fraction of the incoming light than the periphery.
In order to characterize and remove such artifacts (which depend on many factors at the precision that we need in astronomy!), we need to take many exposures of our science target.
@item
By taking many exposures we can build a coadd that has a higher resolution; this is often done in under-sampled data, like those in the Hubble Space Telescope (HST) or James Webb Space Telescope (JWST).
@item
The scientific target can be larger than the field of view of your telescope and camera.
@end itemize
@cindex Pointing
In the jargon of observational astronomers, each exposure is also known as a ``dither'' (literally/generally meaning ``trembling'' or ``vibration'').
This name was chosen because two exposures are not usually taken on exactly the same position of the sky (known as ``pointing'').
In order to improve all the items above, we often move the center of the field of view from one exposure to the next.
In most cases this movement is small compared to the field of view, so most of the central part of the final coadd has a fixed depth, but the edges are shallower (conveying a sense of vibration).
When the spacing between pointings is large, they are known as an ``offset''.
A ``pointing'' is used to refer to either a dither or an offset.
@cindex Exposure map
For example see Figures 3 and 4 of Illingworth et al. @url{https://arxiv.org/pdf/1305.1931.pdf,2013} which show the exposures that went into the XDF survey.
The pointing pattern can also be large compared to the field of view, for example see Figure 1 of Trujillo et al. @url{https://arxiv.org/pdf/2109.07478.pdf,2021}, which shows the pointing strategy for the LIGHTS survey.
These types of images (where each pixel contains the number of exposures, or time, that were used in it) are known as exposure maps.
The pointing pattern therefore is strongly defined by the science case (high-level purpose of the observation) and your telescope's field of view.
For example, the XDF survey is focused on very high redshift (most distant!) galaxies.
These are very small objects and within that small footprint (of just 1 arcmin) we have thousands of them.
However, the LIGHTS survey is focused on the halos of large nearby galaxies (that can be more than 10 arcminutes wide!).
This script (one of Gnuastro's @ref{Installed scripts}) is designed to simplify the process of selecting the best pointing pattern for your observation strategy; it is described in detail in @ref{Invoking astscript-pointing-simulate}.
For a practical tutorial on using this script, see @ref{Pointing pattern design}.
@menu
* Invoking astscript-pointing-simulate:: Options and running mode.
@end menu
@node Invoking astscript-pointing-simulate, , Pointing pattern simulation, Pointing pattern simulation
@subsection Invoking astscript-pointing-simulate
This installed script will simulate a final coadded image from a certain pointing pattern (given as a table).
A general overview of this script has been published in @url{https://ui.adsabs.harvard.edu/abs/2023RNAAS...7..211A,Akhlaghi (2023)}; please cite it if this script proves useful in your research.
The executable name is @file{astscript-pointing-simulate}, with the following general template:
@example
$ astscript-pointing-simulate [OPTION...] pointings.fits
@end example
@noindent
Examples (for a tutorial, see @ref{Pointing pattern design}):
@example
$ astscript-pointing-simulate pointing.fits --output=coadd.fits \
--img=image.fits --center=10,10 --width=1,1
@end example
The default output of this script is a coadded image that results from placing the given image (given to @option{--img}) in the pointings of a pointing pattern.
The Right Ascension (RA) and Declination (Dec) of each pointing is given in the main input catalog (@file{pointing.fits} in the example above).
The center and width of the final coadd (both in degrees by default) should be specified with the @option{--center} and @option{--width} options respectively.
Therefore, in order to successfully run, this script at least needs the following four inputs:
@table @asis
@item Pointing positions
A table containing the RA and Dec of each pointing (the only input argument).
The actual column names that contain them can be set with the @option{--racol} and @option{--deccol} options (see below).
@item An image
This is used for its distortion and rotation; its pixel values and position on the sky will be ignored.
The file containing the image should be given to the @option{--img} option.
@item Coadd's central coordinate
The central RA and Dec of the finally produced coadd (given to the @option{--center} option).
@item Coadd's width
The width (in degrees) of the final coadd (given to the @option{--width} option).
@end table
This script will ignore the pixel values of the reference image (given to @option{--img}) and its reference coordinates (the values of @code{CRVAL1} and @code{CRVAL2} in its WCS keywords).
For each pointing, this script will put the given RA and Dec into the @code{CRVAL1} and @code{CRVAL2} keywords of a copy of the input (not changing the input in any way), and reset that copy's pixel values to 1.
The script will then warp the modified copy into the final pixel grid (correcting any rotation and distortions that are used from the original input).
This process is done for all the pointing points in parallel.
Finally, all the exposures in the pointing list are coadded together to produce an exposure map (showing how many exposures go into each pixel of the final coadd).
Except for the table of pointing positions, the rest of the inputs and settings are configured through @ref{Options}, just note the restrictions in @ref{Installed scripts}.
@table @option
@item -o STR
@itemx --output=STR
Name of the output.
The output is an image of the requested size (@option{--width}) and position (@option{--center}) in the sky, but each pixel will contain the number of exposures that go into it after the pointing has been done.
See description above for more.
@item -h STR/INT
@itemx --hdu=STR/INT
The name or counter (counting from zero; from the FITS standard) of the HDU containing the table of pointing positions (the file name of this table is the main input argument to this script).
For more, see the description of this option in @ref{Input output options}.
@item -i STR
@itemx --img=STR
The reference image.
The pixel values and central location in this image will be ignored by the script.
The only relevant information within this script is the WCS properties (except for @code{CRVAL1} and @code{CRVAL2}, which connect it to a certain position on the sky) and the image size.
See the description above for more.
@item -H STR/INT
@itemx --imghdu=STR/INT
The name or counter (counting from zero; from the FITS standard) of the HDU containing the reference image (file name should be given to the @option{--img} option).
If not given, a default value of @code{1} is assumed; so this is not a mandatory option.
@item -r STR/INT
@itemx --racol=STR/INT
The name or counter (counting from 1; from the FITS standard) of the column containing the Right Ascension (RA) of each pointing to be used in the pointing pattern.
The file containing the table is given to this script as its only argument.
@item -d STR/INT
@itemx --deccol=STR/INT
The name or counter (counting from 1; from the FITS standard) of the column containing the Declination (Dec) of each pointing to be used in the pointing pattern.
The file containing the table is given to this script as its only argument.
@item -C FLT,FLT
@itemx --center=FLT,FLT
The central RA and Declination of the final coadd in degrees.
@item -w FLT,FLT
@itemx --width=FLT,FLT
The width of the final coadd in degrees.
If @option{--widthinpix} is given, the two values given to this option will be interpreted as a number of pixels.
@item --widthinpix
Interpret the values given to @option{--width} as the number of pixels along each dimension, and not as degrees.
@item --ctype=STR,STR
The projection of the output coadd (@code{CTYPEi} keyword in the FITS WCS standard).
For more, see the description of the same option in @ref{Align pixels with WCS considering distortions}.
@item --hook-warp-before='STR'
Command to run before warping each exposure into the output pixel grid.
By default, the exposure is immediately warped to the final pixel grid, but in some scenarios it is necessary to do some operations on the exposure before warping (for example account for vignetting; see @ref{Accounting for non-exposed pixels}).
The warping of each exposure is done in parallel by default; therefore there are pre-defined variables that you should use for the input and output file names of your command:
@table @code
@item $EXPOSURE
Input: name of file with the same size as the reference image with all pixels having a fixed value of 1.
The WCS has also been corrected based on the pointing pattern.
@item $TOWARP
Output: name of the expected output of your hook.
If it is not created by your script, the script will complain and abort.
This file will be given to Warp to be warped into the output pixel grid.
@end table
For an example of using hooks with an extended discussion, see @ref{Pointing pattern design} and @ref{Accounting for non-exposed pixels}.
To develop your command, you can use @command{--hook-warp-before='...; echo GOOD; exit 1'} (where @code{...} can be replaced by any command) and run the script on a single thread (with @option{--numthreads=1}) to produce a single file and simplify the checking that your desired operation works as expected.
All the files will be within the temporary directory (see @option{--tmpdir}).
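For example, the sketch below only demonstrates the plumbing of @code{$EXPOSURE} and @code{$TOWARP}: the hook multiplies each exposure by 1 with Gnuastro's Arithmetic program, so its output is identical to its input (the @option{-g1} assumes the exposure image is in HDU 1 of the temporary file):

@example
$ astscript-pointing-simulate pointings.fits --img=image.fits \
           --center=10,10 --width=1,1 --output=coadd.fits \
           --hook-warp-before='astarithmetic $EXPOSURE -g1 1 x \
                                             --output=$TOWARP'
@end example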
@item --hook-warp-after='STR'
Command to run after the warp of each exposure into the output pixel grid, but before the coadding of all exposures.
For more on hooks, see the description of @code{--hook-warp-before}, @ref{Pointing pattern design} and @ref{Accounting for non-exposed pixels}.
@table @code
@item $WARPED
Input: name of file containing the warped exposure in the output pixel grid.
@item $TOWARP
Output: name of the expected output of your hook.
If it is not created by your script, the script will complain and abort.
This file (from every exposure) will be coadded with the others into the final output.
@end table
@item --coadd-operator="STR"
The operator to use for coadding the warped individual exposures into the final output of this script.
For the full list, see @ref{Coadding operators}.
By default it is the @code{sum} operator (to produce an output exposure map).
For an example usage, see the tutorial in @ref{Pointing pattern design}.
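For example, with the sketch below, each pixel of the output will contain the mean of the warped exposures covering it, instead of their sum:

@example
$ astscript-pointing-simulate pointings.fits --img=image.fits \
           --center=10,10 --width=1,1 \
           --coadd-operator=mean
@end example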
@item --mksrc=STR
Use a non-standard Makefile for the script to call.
This option is primarily useful during the development of this script and its Makefile, not for normal/regular usage.
So if you are not developing this script, you can safely ignore this option.
When this option is given, the default installed Makefile will not be used: the file given to this option will be read by @command{make} (within the script) instead.
@item -t STR
@itemx --tmpdir=STR
Name of directory containing temporary files.
If not given, a temporary directory will be created in the running directory with a long name using some of the input options.
By default, this temporary directory will be deleted after the output is created.
You can disable the deletion of the temporary directory (useful for debugging!) with the @option{--keeptmp} option.
Using this option has multiple benefits in larger pipelines:
@itemize
@item
You can avoid conflicts in case the used inputs in the default name are the same.
@item
You can put this directory somewhere else in the running file system to avoid mixing output files with your source, or to use other storage hardware that are mounted on the running file system.
@end itemize
@item -k
@itemx --keeptmp
Keep the temporary directory (and do not delete it).
@item -?
@itemx --help
Print a list of all the options, along with a short description and context for the program.
For more, see @ref{Operating mode options}.
@item -N INT
@itemx --numthreads=INT
The number of threads to use for parallel operations (warping the input into the different pointing points).
If not given (by default), the script will try to find the number of available threads on the running system and use that.
For more, see @ref{Operating mode options}.
@item --cite
Give BibTeX and acknowledgment information for citing this script within your paper.
For more, see @ref{Operating mode options}.
@item -q
@itemx --quiet
Do not print the series of commands or their outputs in the terminal.
For more, see @ref{Operating mode options}.
@item -V
@itemx --version
Print the version of the running Gnuastro along with a copyright notice and list of authors that contributed to this script.
For more, see @ref{Operating mode options}.
@end table
@node Color images with gray faint regions, PSF construction and subtraction, Pointing pattern simulation, Installed scripts
@section Color images with gray faint regions
Typical astronomical images have a very wide range of pixel values and generally, it is difficult to show the entire dynamic range in a color image.
For example, by using @ref{ConvertType}, it is possible to obtain a color image with three FITS images as each of the Red-Green-Blue (or RGB) color channels.
However, depending on the pixel distribution, it could be very difficult to see the different regions together (faint and bright objects at the same time).
In something like DS9, you end up changing the color map parameters to see the regions you are most interested in.
The reason is that images usually have a lot of faint pixels (near to the sky background or noise values), and few bright pixels (corresponding to the center of stars, galaxies, etc.) that can be millions of times brighter!
As a consequence, by considering the images without any modification, it is extremely hard to visualize the entire range of values in a color image.
This is because standard color formats like JPEG, TIFF or PDF are defined as 8-bit integer precision, while astronomical data are usually 32-bit floating point!
To solve this issue, it is possible to perform some transformations of the images and then obtain the color image.
This is actually what the current script does: it makes some non-linear transformations and then uses Gnuastro's ConvertType to generate the color image.
There are several parameters and options for changing the final output; they are described in @ref{Invoking astscript-color-faint-gray}.
A full tutorial describing this script with actual data is available in @ref{Color images with full dynamic range}.
A general overview of this script is published in Infante-Sainz et al. @url{https://arxiv.org/abs/2401.03814,2024}; please cite it if this script proves useful in your research.
@menu
* Invoking astscript-color-faint-gray:: Details of options and arguments.
@end menu
@node Invoking astscript-color-faint-gray, , Color images with gray faint regions, Color images with gray faint regions
@subsection Invoking astscript-color-faint-gray
This installed script will combine several input images into a single color image that visualizes the full dynamic range.
The executable name is @file{astscript-color-faint-gray}, with the following general template:
@example
$ astscript-color-faint-gray [OPTION...] r.fits g.fits b.fits
@end example
@noindent
Examples (for a tutorial, see @ref{Color images with full dynamic range}):
@example
## Generate a color image from three images with default options.
$ astscript-color-faint-gray r.fits g.fits b.fits -g1 \
           --output=color.pdf
## Generate a color image, consider the minimum value to be zero.
$ astscript-color-faint-gray r.fits g.fits b.fits -g1 \
           --minimum=0.0 --output=color.jpg
## Generate a color image considering different zero points, minimum
## values, weights, and also increasing the contrast.
$ astscript-color-faint-gray r.fits g.fits b.fits -g1 \
           -z=22.4 -z=25.5 -z=24.6 \
           -m=-0.1 -m=0.0 -m=0.1 \
           -w=1 -w=2 -w=3 \
           --contrast=3 \
           --output=color.tiff
@end example
This script takes three input images to generate an RGB color image as the output.
The order of the images matters: the reddest (longest wavelength) filter first (R), an intermediate wavelength filter second (G), and the bluest (shortest wavelength) filter last (B).
In astronomy, these can be any filters (for example from infrared, radio, optical or X-ray); the ``RGB'' designation comes from the general definition of colors (see @url{https://en.wikipedia.org/wiki/RGB_color_spaces}).
These images are internally manipulated by a series of non-linear transformations and normalizations to homogenize them, and they are finally combined into a color image.
In general, for typical astronomical images, the default output is an image with bright pixels in color and noise pixels in black.
The option @option{--minimum} sets the minimum value to be shown and is a key parameter; it is usually a value close to the sky background level.
The current non-linear transformation is from Lupton et al. @url{https://ui.adsabs.harvard.edu/abs/2004PASP..116..133L, 2004}, which we call the ``asinh'' transformation.
The two important parameters that control this transformation are @option{--qbright} and @option{--stretch}.
With the option @option{--coloronly}, it is possible to generate a color image with the background in black: bright pixels in color and the sky background (or noise) values in black.
It is also possible to provide a fourth image (K) that will be used for showing the gray region: R, G, B, K.
The generation of a good color image usually requires several trials, so we encourage you to experiment with the different parameters.
After some testing, we find it useful to follow the steps below.
For a more complete description of the logic of the process, see the dedicated tutorial in @ref{Color images with full dynamic range}.
@enumerate
@item
Use the default options to estimate the parameters.
By running the script with no options at all, it will estimate the parameters and they will be printed on the command-line.
@item
Select a good sky background value of the images.
If the sky background has been subtracted, a minimum value of zero could be a good option: @option{--minimum=0.0}.
@item
Focus on the bright regions to tweak @option{--qbright} and @option{--stretch}.
First, try low values of @option{--qbright} to show the bright parts.
Then, adjust @option{--stretch} to show the fainter regions around bright parts.
Overall, play with these two parameters to show the color regions appropriately.
@item
Change @option{--colorval} to separate the color and black regions.
This is the lowest value of the threshold image that is shown in color.
@item
Change @option{--grayval} to separate the black and gray regions.
This is the highest value of the threshold image that is shown in gray.
@item
Use @option{--checkparams} to check the pixel value distributions.
@item
Use @option{--keeptmp} to keep the temporary directory (and thus the threshold image) for inspection.
@end enumerate
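For example, the iterative process above may look like the following series of commands (all option values here are hypothetical illustrations; they must be tuned for each dataset):
@example
## 1. Run with defaults to estimate and print the parameters.
$ astscript-color-faint-gray r.fits g.fits b.fits -g1
## 2. Fix the minimum (for sky-subtracted inputs) and tweak the
##    bright regions with low '--qbright' and a '--stretch'.
$ astscript-color-faint-gray r.fits g.fits b.fits -g1 \
           --minimum=0.0 --qbright=0.05 --stretch=100 \
           --output=color.pdf
## 3. Tune the color/black/gray separation and inspect the
##    distributions and the temporary threshold image.
$ astscript-color-faint-gray r.fits g.fits b.fits -g1 \
           --minimum=0.0 --qbright=0.05 --stretch=100 \
           --colorval=60 --grayval=25 \
           --checkparams --keeptmp --output=color.pdf
@end example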
@noindent
A full description of each option is given below:
@table @code
@item -h
@itemx --hdu=STR/INT
Input HDU name or counter (counting from 0) for each input FITS file.
If the same HDU should be used from all the FITS files, you can use the @option{--globalhdu} option described below to avoid repeating this option.
@item -g
@itemx --globalhdu=STR/INT
Use the value given to this option (a HDU name or a counter, starting from 0) for the HDU identifier of all the input FITS files.
This is useful when all the inputs are distributed in different files, but have the same HDU in those files.
@item -o
@itemx --output=STR
Output color image name.
The output can be in any of the recognized output formats of ConvertType (including PDF, JPEG and TIFF).
@item -m
@itemx --minimum=FLT
Minimum value to be mapped for each of the R, G, B, and K FITS images.
If a single value is given to this option, it will be used for all the input images.
This parameter controls the smallest visualized pixel value.
In general, it is a good decision to set this value close to the sky background level.
This value can dramatically change the output color image (especially when there are large negative values in the image that you do not intend to visualize).
@item -Z
@itemx --zeropoint=FLT
Zero point value for each of the R, G, B, and K FITS images.
If a single value is given, it is used for all the input images.
Internally, the zero point values are used to transform the pixel values into units of Janskys.
The units are not important for a color image, but the fact that the images are photometrically calibrated is important for obtaining an output color image whose color distribution is realistic.
@item -w
@itemx --weight=FLT
Relative weight for each R, G, B channel.
With this parameter, it is possible to change the importance of each channel to modify the color balance of the image.
For example, @option{-w=1 -w=2 -w=5} indicates that the B band will be 5 times more important than the R band, and that the G band will be 2 times more important than the R band.
In this particular example, the combination will be done as @mymath{\rm{colored}=(1{\times}\rm{R}+2{\times}\rm{G}+5{\times}\rm{B})/(1 + 2 + 5)=0.125{\times}\rm{R} + 0.250{\times}\rm{G} + 0.625{\times}\rm{B}}.
In principle, a color image should recreate ``real'' colors, but ``real'' is a very subjective matter and with this option, it is possible to change the color balance and make it more aesthetically interesting.
However, be careful to avoid confusing the viewers of your image and report the weights with the filters you used for each channel.
It is up to the user to use this parameter carefully.
@item -Q
@itemx --qbright=FLT
It is one of the parameters that control the asinh transformation.
It should be used in combination with @option{--stretch}.
In general, it has to be set to low values to better show the brightest regions.
Afterwards, adjust @option{--stretch} to set the linear stretch (show the intermediate/faint structures).
@item -s
@itemx --stretch=FLT
It is one of the parameters that control the asinh transformation.
It should be used in combination with @option{--qbright}.
It is used for bringing out the faint/intermediate bright structures of the image that are shown linearly.
In general, this parameter is chosen after setting @option{--qbright} to a low value.
@cartouche
@noindent
@strong{The asinh transformation.}
The asinh transformation is done on the coadded R, G, B image.
It consists of modifying the coadded image (@mymath{I}) in order to show the entire dynamic range appropriately, following the expression: @mymath{f(I) = asinh(} @option{qbright} @mymath{\cdot} @option{stretch} @mymath{\cdot I) /} @option{qbright}.
See @ref{Color images with full dynamic range} for a complete tutorial that shows the intricacies of this transformation with step-by-step examples.
@end cartouche
@item --coloronly
By default, the fainter parts of the image are shown in grayscale (not color, since colored noise is not too informative).
With this option, the output image will be fully in color with the background (noise pixels) in black.
@item --colorval=FLT
The value that separates the color and black regions.
It can take values from 100 (all pixels shown in color) to 0 (all pixels shown in black).
Check the histogram @code{FOR COLOR and GRAY THRESHOLDS} with the option @option{--checkparams} for selecting a good value.
@item --grayval=FLT
This parameter defines the value that separates the black and gray regions.
It can take values from 100 (all pixels shown in black) to 0 (all pixels shown in white).
Check the histogram @code{FOR COLOR and GRAY THRESHOLDS} with the option @option{--checkparams} to select the value.
@item --regions=STR
Labeled image, identifying the pixels to use for color (value 2), those to use for black (value 1) and those to use for gray (value 0).
When this option is given, the @option{--colorval} and @option{--grayval} options will be ignored.
This gives you the freedom to select the pixels to show in color, black or gray based on any criteria that are relevant for your purpose.
For an example of using this option to get a physically motivated threshold, see @ref{Manually setting color-black-gray regions}.
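For example (a sketch, not part of the script itself), assuming a hypothetical reference image @file{ref.fits} that you want to threshold at the values 10 and 100, such a labeled image can be built with Gnuastro's Arithmetic: the sum of the two conditions gives 2 (color) above 100, 1 (black) between 10 and 100, and 0 (gray) elsewhere:
@example
$ astarithmetic ref.fits -g1 set-i \
                i 10 gt i 100 gt + uint8 \
                --output=regions.fits
$ astscript-color-faint-gray r.fits g.fits b.fits -g1 \
           --regions=regions.fits --output=color.pdf
@end example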
@item -r
@itemx --reghdu=STR/INT
HDU name or counter (counting from 0) of the region image given to @option{--regions}.
@cartouche
@noindent
@strong{IMPORTANT NOTE.}
The options @option{--colorval} and @option{--grayval} are related to each other.
They are defined from the threshold image (an image generated in the temporary directory) named @file{colorgray_threshold.fits}.
By default, this image is computed by coadding and then asinh-transforming the three R, G, B channels.
Its pixel values range from 100 (brightest) to 0 (faintest).
The @option{--colorval} value computed by default is the median of this image.
Pixels above this value are shown in color.
Pixels below this value are shown in gray.
Regions of pure black can be defined with the @option{--grayval} option if its value is between 0 and @option{--colorval}.
In other words: the color region consists of the pixels between 100 and @option{--colorval}; the pure black region of the pixels between @option{--colorval} and @option{--grayval}; and the gray region of the pixels between @option{--grayval} and 0.
If a fourth image is provided as the ``K'' channel, then this image is used as the threshold image.
See @ref{Color images with full dynamic range} for a complete tutorial.
@end cartouche
@item --colorkernelfwhm=FLT
Gaussian kernel FWHM (in pixels) for convolving the color regions.
Sometimes, a convolution of the color regions (bright pixels) is desired to further increase their signal-to-noise ratio (but make them look smoother).
With this option, the kernel will be created internally and convolved with the colored regions.
@item --graykernelfwhm=FLT
Gaussian kernel FWHM (in pixels) for convolving the background image.
Sometimes, a convolution of the background image is necessary to smooth the noisier regions and increase their signal-to-noise ratios.
With this option, the kernel will be created internally and convolved with the background (gray) regions.
@item -b
@itemx --bias=FLT
Change the brightness of the final image.
By increasing this value, a pedestal value will be added to the color image.
This option is rarely useful; it is more common to use @option{--contrast}, see below.
@item -c
@itemx --contrast=FLT
Change the contrast of the final image.
The transformation is: @mymath{\rm{output}=\rm{contrast}\times{image}+brightness}.
@item -G
@itemx --gamma=FLT
Gamma exponent value for a gamma transformation.
This transformation is not linear: @mymath{\rm{output}=\rm{image}^{gamma}}.
This option overrides @option{--bias} and @option{--contrast}.
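For example, if the fainter parts of the output look too dark, a gamma value below unity will brighten them non-linearly (the value of 0.6 below is only an illustration; the proper value is found by trial and error):
@example
$ astscript-color-faint-gray r.fits g.fits b.fits -g1 \
           --gamma=0.6 --output=color.pdf
@end example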
@item --markoptions=STR
Options to draw marks on the final output image.
Anything given to this option is passed directly to ConvertType in order to draw marks on the output image.
For example, if you construct a table named @file{marks.txt} that contains the columns: x, y, shape, size, axis ratio, angle, and color; you can execute the script with the following option: @option{--markoptions="--marks=marks.txt --markcoords=x,y --markshape=shape --marksize=size,axisratio --markrotate=angle --markcolor=color"}.
See @ref{Drawing with vector graphics} for more information on how to draw markers and @ref{Weights contrast markers and other customizations} for a tutorial.
@item --checkparams
Print the statistics of intermediate images that are used for estimating the parameters.
This option is useful to decide the optimum set of parameters.
@item --keeptmp
Do not remove the temporary directory.
This is useful for debugging and checking the outputs of internal steps.
@item --cite
Give BibTeX and acknowledgment information for citing this script within your paper.
For more, see @ref{Operating mode options}.
@item -q
@itemx --quiet
Do not print the series of commands or their outputs in the terminal.
For more, see @ref{Operating mode options}.
@item -V
@itemx --version
Print the version of the running Gnuastro along with a copyright notice and list of authors that contributed to this script.
For more, see @ref{Operating mode options}.
@end table
@node PSF construction and subtraction, , Color images with gray faint regions, Installed scripts
@section PSF construction and subtraction
The point spread function (PSF) describes how the light of a point-like source is affected by several optical scattering effects (atmosphere, telescope, instrument, etc.).
Since the light of all astrophysical sources undergoes all these effects, having a good characterization of the PSF is fundamental to any astronomical analysis (of small and large objects alike).
In some situations@footnote{An example scenario where a parametric PSF may be enough: you are only interested in very small, high-redshift objects that only extend over a handful of pixels.} a parametric (analytical) model is sufficient for the PSF (such as a Gaussian or Moffat, see @ref{PSF}).
However, once you are interested in objects that are larger than a handful of pixels, it is almost impossible to find an analytic function to adequately characterize the PSF.
Therefore, it is necessary to obtain an empirical (non-parametric) and extended PSF.
In this section we describe a set of installed scripts in Gnuastro that will let you construct the non-parametric PSF using point-like sources.
They allow you to derive the PSF from the same astronomical images that the science is derived from (without assuming any analytical function).
The scripts are based on the concepts described in Infante-Sainz et al. @url{https://arxiv.org/abs/1911.01430,2020} and further elaborated in Eskandarlou et al. @url{https://arxiv.org/abs/2510.12940,2025}, see the flow-charts and figures of the latter in particular for a visual companion to many of the steps below.
But to be complete, we first give a summary of the logic and overview of their combined usage in @ref{Overview of the PSF scripts}.
Furthermore, before going into the technical details of each script, we encourage you to go through the tutorial that is devoted to this at @ref{Building the extended PSF}.
The tutorial uses a real dataset and includes all the logic and reasoning behind every step of the usage in every installed script.
@menu
* Overview of the PSF scripts:: Summary of concepts and methods
* Invoking astscript-psf-select-stars:: Select good starts within an image.
* Invoking astscript-psf-stamp:: Make a stamp of each star to coadd.
* Invoking astscript-psf-unite:: Merge coadds of different regions of PSF.
* Invoking astscript-psf-scale-factor:: Calculate factor to scale PSF to star.
* Invoking astscript-psf-subtract:: Put the PSF in the image to subtract.
@end menu
@node Overview of the PSF scripts, Invoking astscript-psf-select-stars, PSF construction and subtraction, PSF construction and subtraction
@subsection Overview of the PSF scripts
To obtain an extended and non-parametric PSF, several steps are necessary and we will go through them here.
The fundamental ideas of the following methodology are thoroughly described in Infante-Sainz et al. @url{https://arxiv.org/abs/1911.01430,2020} and Eskandarlou et al. @url{https://arxiv.org/abs/2510.12940,2025}.
A full tutorial is also available in @ref{Building the extended PSF}.
The tutorial will go through the full process on a pre-selected dataset, but will describe the logic behind every step in a way that can easily be modified/generalized to other datasets.
This section is basically just a summary of that tutorial.
We could have put all these steps into one large program (installed script), however this would introduce several problems.
The most prominent of these problems are:
@itemize
@item
The command would require @emph{many} options, making it very complex to run every time.
@item
You usually have many stars in an image, and many of the steps can be optimized or parallelized depending on the particular analysis scenario.
Predicting all the possible optimizations for all the possible usage scenarios would make the code extremely complex (filled with many unforeseen bugs!).
@end itemize
Therefore, following the modularity principle of software engineering, after several years of working on this, we have broken the full job into the smallest number of independent steps as separate scripts.
All scripts are independent of each other, meaning that you are free to use them as you wish (for example, using only some of them, using another program for a certain step, using them for other purposes, or running independent parts in parallel).
For constructing the PSF from your dataset, the first step is to obtain a catalog of stars within it (you cannot use galaxies to build the PSF!).
But you cannot blindly use all the stars either!
For example, we do not want contamination from other bright, and nearby objects.
The first script below is therefore designed for selecting only good star candidates in your image.
It will use different criteria, for example, good parallax (where available, to avoid confusion with galaxies), not being near to bright stars, axis ratio, etc.
For more on this script, see @ref{Invoking astscript-psf-select-stars}.
Once the catalog of stars is constructed, another script is in charge of making appropriate stamps of the stars.
Each stamp is a cropped image of the star with the desired size, normalization of the flux, and mask of the contaminant objects.
For more on this script, see @ref{Invoking astscript-psf-stamp}.
After obtaining a set of star stamps, they can be coadded for obtaining the combined PSF from many stars (for example, with @ref{Coadding operators}).
In the combined PSF, the masked background objects of each star's image will be covered and the signal-to-noise ratio will increase, giving a very nice view of the ``clean'' PSF.
However, it is usually necessary to obtain different regions of the same PSF from different stars.
For example, to construct the far outer wings of the PSF, it is necessary to consider very bright stars.
However, these stars will be saturated in their innermost regions, and immediately outside the saturation level they will be deformed by non-linearity effects.
Consequently, fainter stars are necessary for the inner regions.
Therefore, you need to repeat the steps above for certain stars (in a certain magnitude range) to obtain the PSF in certain radial ranges.
For example, in Infante-Sainz et al. @url{https://arxiv.org/abs/1911.01430,2020} and Eskandarlou et al. @url{https://arxiv.org/abs/2510.12940,2025} the final PSF was constructed from three regions (and thus, using stars from three ranges in magnitude).
In other cases, we even needed four groups of stars!
But in the example dataset from the tutorial, only two groups are necessary (see @ref{Building the extended PSF}).
Once clean coadds of different parts of the PSF have been constructed through the steps above, it is necessary to blend them all into one.
This is done by finding a common radial region in both, and scaling the inner region by a factor to add with the outer region.
This is not trivial, therefore, a third script is in charge of it, see @ref{Invoking astscript-psf-unite}.
Having constructed the PSF as described above (or by any other procedure), it can be scaled to the magnitude of the various stars in the image to be subtracted (thus removing the extended/bright wings and better showing the background objects of interest).
Note that the absolute flux of a PSF is meaningless (and in fact, it is usually normalized to have a total sum of unity!), so it should be scaled.
We therefore have another script that will calculate the scale (multiplication) factor of the PSF for each star.
For more on the scaling script, see @ref{Invoking astscript-psf-scale-factor}.
Once the flux factor has been computed, a final script is in charge of placing the scaled PSF over the proper location in the image, and subtracting it.
It is also possible to only obtain the modeled star by the PSF.
For more on the scaling and positioning script, see @ref{Invoking astscript-psf-subtract}.
As mentioned above, in the following sections, each script has its own documentation and list of options for very detailed customization (if necessary).
But if you are new to these scripts, before continuing, we recommend that you do the tutorial @ref{Building the extended PSF}.
Just do not forget to run every command, and try to tweak its steps based on the logic to nicely understand it.
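To make the combined usage of the scripts more concrete, the sketch below strings them together for a single-region PSF (all option values, file names and column orders here are hypothetical place-holders; the tutorial shows how to choose them on a real dataset):
@example
## Select good stars, then cut and normalize a stamp for each.
$ astscript-psf-select-stars image.fits \
           --magnituderange=6,10 --mindistdeg=0.02 \
           --output=stars.fits
$ asttable stars.fits | while read -r ra dec mag; do \
    astscript-psf-stamp image.fits --mode=wcs \
           --center=$ra,$dec --normradii=20,30 \
           --output=stamp-"$ra"-"$dec".fits; done

## Coadd all the stamps into one PSF image (see Coadding
## operators of Arithmetic).
$ astarithmetic stamp-*.fits $(ls stamp-*.fits | wc -l) \
                median -g1 --output=psf.fits
@end example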
@node Invoking astscript-psf-select-stars, Invoking astscript-psf-stamp, Overview of the PSF scripts, PSF construction and subtraction
@subsection Invoking astscript-psf-select-stars
This installed script will select good star candidates for constructing a PSF.
It will consider stars within a given range of magnitudes without nearby contaminant objects.
To do that, it allows the user to specify the different options described here.
A complete tutorial is available to show the operation of this script as a modular component to extract the PSF of a dataset: @ref{Building the extended PSF}.
The executable name is @file{astscript-psf-select-stars}, with the following general template:
@example
$ astscript-psf-select-stars [OPTION...] FITS-file
@end example
@noindent
Examples:
@example
## Select all stars within 'image.fits' with magnitude in range
## of 6 to 10; only keeping those that are less than 0.02 degrees
## from other nearby stars.
$ astscript-psf-select-stars image.fits \
           --magnituderange=6,10 --mindistdeg=0.02
@end example
The input of this script is an image, and the output is a catalog of stars with magnitude in the requested range of magnitudes (provided with @option{--magnituderange}).
The output catalog will also only contain stars that are sufficiently distant (@option{--mindistdeg}) from all other brighter, and some fainter stars.
It is possible to use a different dataset with the option @option{--dataset} (by default, the Gaia DR3 dataset is used).
All stars that are up to @option{--faintmagdiff} magnitudes fainter than the faint limit will also be accounted for when selecting good stars.
The @option{--magnituderange} and @option{--mindistdeg} options are mandatory: if either is not given, the script will abort.
The output of this script is a file whose name can be specified with the (optional) @option{--output} option.
If not given, an automatically generated name will be used for the output.
A full description of each option is given below.
@table @option
@item -h STR/INT
@itemx --hdu=STR/INT
The HDU/extension of the input image to use.
@item -S STR
@itemx --segmented=STR
Optional segmentation file obtained by @ref{Segment}.
It should have two extensions (@option{CLUMPS} and @option{OBJECTS}).
If given, a catalog of @option{CLUMPS} will be computed and matched with the Gaia catalog to reject those objects that are too elliptical (see @option{--minaxisratio}).
The matching will occur on an aperture (in degrees) specified by @option{--matchaperturedeg}.
@item -a FLT
@itemx --matchaperturedeg=FLT
This option determines the aperture (in degrees) for matching the Gaia catalog with the clumps catalog produced from the segmentation image given to @option{--segmented}.
The default value is 10 arc-seconds.
@item -c STR
@itemx --catalog=STR
Optional reference catalog to use for selecting stars (instead of querying an external catalog like Gaia).
When this option is given, @option{--dataset} (described below) will be ignored and no internet connection will be necessary.
@item -d STR
@itemx --dataset=STR
Optional dataset to query (see @ref{Query}).
It should contain the database and dataset entries to pass to Query.
Its value will be immediately given to @command{astquery}.
By default, its value is @code{gaia --dataset=dr3} (so it connects to the Gaia database and requests the data release 3).
For example, if you want to use VizieR's Gaia DR3 instead (for example due to a maintenance on ESA's Gaia servers), you should use @option{--dataset="vizier --dataset=gaiadr3"}.
It is possible to specify a different dataset from which the catalog is downloaded.
In that case, the necessary column names may also differ, so you also have to set @option{--refcatra}, @option{--refcatdec} and @option{--field}.
See their description for more.
@item -r STR
@itemx --refcatra=STR
The name of the column containing the Right Ascension (RA) in the requested dataset (@option{--dataset}).
If this option is not given, the default value of @code{ra} is assumed.
@item -d STR
@itemx --refcatdec=STR
The name of the column containing the Declination (Dec) in the requested dataset (@option{--dataset}).
If this option is not given, the default value of @code{dec} is assumed.
@item -f STR
@itemx --field=STR
The name of the column containing the magnitude in the requested dataset (@option{--dataset}).
The output will only contain stars that have a value in this column, between the values given to @option{--magnituderange} (see below).
By default, the value of this option is @option{phot_g_mean_mag} (that corresponds to the name of the magnitude of the G-band in the Gaia catalog).
@item -m FLT,FLT
@itemx --magnituderange=FLT,FLT
The acceptable range of values for the column in @option{--field}.
This option is mandatory and no default value is assumed.
@item -p STR,STR
@itemx --parallaxanderrorcolumn=STR,STR
With this option the user can provide the parallax and parallax error column names in the requested dataset.
When given, the output will only contain stars for which the parallax value is smaller than three times the parallax error.
If the user does not provide this option, the script will not use parallax information for selecting the stars.
In the case of Gaia, if you want to use parallax to further limit the good stars, you can pass @option{parallax,parallax_error}.
@item -D FLT
@itemx --mindistdeg=FLT
Stars with nearby bright stars closer than this distance are rejected.
The default value is 1 arc minute.
For fainter stars (when constructing the center of the PSF), you should decrease the value.
@item -b INT
@itemx --brightmag=INT
The bright magnitude limit for the stars used in the proximity check (it should be brighter than the bright limit of @option{--magnituderange}).
The basic idea is this: if a user asks for stars with magnitude 6 to 10 and one of those stars is near a magnitude 3 star, that star (with a magnitude of 6 to 10) should be rejected because it is contaminated.
But since the catalog is constrained to stars of magnitudes 6-10, the star with magnitude 3 is not present and cannot be compared with!
Therefore, when considering proximity to nearby stars, it is important to use a larger magnitude range than the user's requested magnitude range for good stars.
The acceptable proximity is defined by @option{--mindistdeg}.
With this option, you specify the brightest limit for the proximity check.
The default value is a magnitude of @mymath{-10}, so you'll rarely need to change or customize this option!
The faint limit of the proximity check is specified by @option{--faintmagdiff}.
As the name suggests, this is a ``diff'' or relative value.
The default value is 4.
Therefore if the user wants to build the PSF with stars in the magnitude range of 6 to 10, the faintest stars used for the proximity check will have a magnitude of 14: @mymath{10+4}.
In summary, by default, the proximity check will be done with stars in the magnitude range @mymath{-10} to @mymath{14}.
@item -F INT
@itemx --faintmagdiff=INT
The magnitude difference of the faintest star used for proximity checks to the faintest limit of @option{--magnituderange}.
For more, see the description of @option{--brightmag}.
@item -Q FLT
@itemx --minaxisratio=FLT
Minimum acceptable axis ratio for the selected stars.
In other words, only stars with an axis ratio between @option{--minaxisratio} and 1.0 will be selected.
Default value is @option{--minaxisratio=0.9}.
Recall that the axis ratio is only used when you also give a segmented image with @option{--segmented}.
@item -t
@itemx --tmpdir
Directory to keep temporary files during the execution of the script.
If the directory does not exist at run-time, this script will create it.
By default, upon completion of the script, this directory will be deleted.
However, if you would like to keep the intermediate files, you can use the @option{--keeptmp} option.
@item -k
@itemx --keeptmp
Do not remove the temporary directory (see the description of @option{--tmpdir} above).
This option is useful for debugging and checking the outputs of internal steps.
@item -o STR
@itemx --output=STR
The output name of the final catalog containing good stars.
@end table
@node Invoking astscript-psf-stamp, Invoking astscript-psf-unite, Invoking astscript-psf-select-stars, PSF construction and subtraction
@subsection Invoking astscript-psf-stamp
This installed script will generate a stamp of fixed size, centered at the provided coordinates (performing sub-pixel re-gridding if necessary) and normalized at a certain normalization radius.
Optionally, it will also mask all the other background sources.
A complete tutorial is available to show the operation of this script as a modular component to extract the PSF of a dataset: @ref{Building the extended PSF}.
The executable name is @file{astscript-psf-stamp}, with the following general template:
@example
$ astscript-psf-stamp [OPTION...] FITS-file
@end example
@noindent
Examples:
@example
## Make a stamp around (x,y)=(53,69) of width=151 pixels.
## Normalize the stamp within the radii 20 and 30 pixels.
$ astscript-psf-stamp image.fits --mode=img \
           --center=53,69 --widthinpix=151,151 --normradii=20,30 \
           --output=stamp.fits
## Iterate over a catalog with positions of stars that are
## in the input image. Use WCS coordinates.
$ asttable catalog.fits | while read -r ra dec mag; do \
    astscript-psf-stamp image.fits \
           --mode=wcs \
           --center=$ra,$dec \
           --normradii=20,30 \
           --widthinpix=150,150 \
           --output=stamp-"$ra"-"$dec".fits; done
@end example
The input is an image from which the stamps of the stars are constructed.
The output image will have the following properties:
@itemize
@item
A certain width (specified by @option{--widthinpix} in pixels).
@item
Centered at the coordinate specified by the option @option{--center} (it can be in image/pixel or WCS coordinates, see @option{--mode}).
If no center is specified, then it is assumed that the object of interest is already in the center of the image.
@item
If the given coordinate has sub-pixel elements (for example, pixel coordinates 1.234,4.567), the pixel grid of the output will be warped so your given coordinate falls in the center of the central pixel of the final output.
This is very important for building the central parts of the PSF, but not too effective for the middle or outer parts (to speed up the program in such cases, you can disable it with the @option{--nocentering} option).
@item
Normalized by the value computed within the ring around the center (at a radial distance between the two radii specified by the option @option{--normradii}).
If no normalization ring is considered, the output will not be normalized.
@end itemize
In the following cases, this script will produce a fully NaN-valued stamp (of the size given to @option{--widthinpix}).
A fully NaN image can safely be used with the coadding operators of Arithmetic (see @ref{Coadding operators}) because they will be ignored.
In case you do not want to waste storage with fully NaN images, you can compress them with @code{gzip --best output.fits}, and give the resulting @file{.fits.gz} file to Arithmetic (as in the sketch after the following list).
@itemize
@item
The requested box (center coordinate with desired width) is not within the input image at all.
@item
If a normalization radius is requested, and all the pixels within the normalization radii are NaN.
Here are some scenarios in which this can happen:
1) You have a saturated star (where the saturated pixels are NaN), and your normalization radii fall within the saturated region.
2) The star is outside the image by more than your larger normalization radius (so there are no pixels for doing the normalization), but the full stamp width still overlaps part of the image.
@end itemize
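For example, assuming some fully NaN stamps were compressed as suggested above, both the normal and the compressed stamps can be given to Arithmetic's coadding operators together (the file names below are hypothetical):
@example
$ astarithmetic stamp-*.fits stamp-*.fits.gz \
                $(ls stamp-*.fits* | wc -l) \
                median -g1 --output=psf.fits
@end example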
@noindent
The full set of options are listed below for optimal customization in different scenarios:
@table @option
@item -h STR
@itemx --hdu=STR
The HDU/extension of the input image to use.
@item -O STR
@itemx --mode=STR
Interpret the center position of the object (values given to @option{--center}) in image or WCS coordinates.
This option thus accepts only two values: @option{img} or @option{wcs}.
@item -c FLT,FLT
@itemx --center=FLT,FLT
The central position of the object.
This option is used for placing the center of the stamp.
This parameter is used in @ref{Crop} to center and crop the image.
The positions along each dimension must be separated by a comma (@key{,}).
The units of the coordinates are read based on the value to the @option{--mode} option, see the examples above.
The given coordinate for the central value can have sub-pixel elements (for example, it falls on coordinate 123.4,567.8 of the input image pixel grid).
In such cases, after cropping, this script will use Gnuastro's @ref{Warp} to shift (or translate) the pixel grid by @mymath{-0.4} pixels along the horizontal and @mymath{1-0.8=0.2} pixels along the vertical.
Finally the newly added pixels (due to the warping) will be trimmed to have your desired coordinate exactly in the center of the central pixel of the output.
This is very important (critical!) when you are constructing the central part of the PSF.
But for the outer parts it is not too effective, so to avoid wasting time for the warping, you can simply use @option{--nocentering} to disable it.
@item -d
@itemx --nocentering
Do not do the sub-pixel centering to a new pixel grid.
See the description of the @option{--center} option for more.
@item -W INT,INT
@itemx --widthinpix=INT,INT
Size (width) of the output image stamp in pixels.
The size of the output image will always be an odd number of pixels.
As a consequence, if the user specifies an even number, the final size will be the specified size plus 1 pixel.
This is necessary to place the specified coordinate (given to @option{--center}) in the center of the central pixel.
This is very important (and necessary) in the case of the centers of stars, therefore a sub-pixel translation will be performed internally to ensure this.
@item -n FLT,FLT
@itemx --normradii=FLT,FLT
Minimum and maximum radius of ring to normalize the image.
This option takes two values, separated by a comma (@key{,}).
The first value is the inner radius, the second is the outer radius.
@item -S STR
@itemx --segment=STR
Optional filename of a segmentation image from Segment's output (must contain the @code{CLUMPS} and @code{OBJECTS} HDUs).
For more on the definition of ``objects'' and ``clumps'', see @ref{Segment}.
If given, Segment's output is used to mask all background sources from the large foreground object (a bright star):
@itemize
@item
Objects that are not the central object.
@item
Clumps (within the central object) that are not the central clump.
@end itemize
The result is that all objects and clumps that contaminate the central source are masked, while the diffuse flux of the central object remains.
The labels of the non-masked object and clump are kept in the header of the output image.
The keywords are @code{CLABEL} and @code{OLABEL}.
If no segmentation image is used, then their values are set to @code{none}.
@item -T FLT
@itemx --snthresh=FLT
Mask all the pixels below the given signal-to-noise ratio (S/N) threshold.
This option is only valid with the @option{--segment} option (it will use the @code{SKY_STD} extension of Segment's output, see @ref{Segment output}).
This threshold is applied prior to the possible normalization or centering of the stamp.
After all pixels below the given threshold are masked, the mask is also dilated by one level to avoid isolated single pixels above the threshold (which are mainly due to noise when the threshold is low).
After applying the signal-to-noise threshold (if it is requested), any extra pixels that are not connected to the central target are also masked.
Such pixels can remain in rivers between bright clumps and will cause problems in the final coadd if they are not masked.
This is useful for increasing the S/N of inner parts of each region of the finally coadded PSF.
As the stars (that are to be coadded) become fainter, the S/N of their outer parts (at a fixed radius) decreases.
The coadd of a higher-S/N image with a lower-S/N image will have an S/N that is lower than the higher one.
But we can still use the inner parts of those fainter stars (that have sufficiently high S/N).
@item -N STR
@itemx --normop=STR
The operator for measuring the values within the ring defined by the option @option{--normradii}.
The operator given to this option will be directly passed to the radial profile script @file{astscript-radial-profile}, see @ref{Generate radial profile}.
As a consequence, all MakeCatalog measurements (median, mean, sigclip-mean, sigclip-number, etc.) can be used here.
For a full list of MakeCatalog's measurements, please run @command{astmkcatalog --help}.
The final normalization value is saved into the header of the output image with the keyword @code{NORMVAL}.
If no normalization is done, then the value is set to @code{1.0}.
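For example, to normalize each stamp by the sigma-clipped median of the ring (an illustration; see the @option{--sigmaclip} option below for the clipping parameters):
@example
$ astscript-psf-stamp image.fits --mode=img --center=53,69 \
           --normradii=20,30 --normop=sigclip-median \
           --sigmaclip=3,0.1 --output=stamp.fits
@end example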
@item -Q FLT
@itemx --axis-ratio=FLT
The axis ratio of the radial profiles for computing the normalization value.
By default (when this option is not given), the radial profile will be circular (axis ratio of 1).
This parameter is used directly in the @file{astscript-radial-profile} script.
@item -p FLT
@itemx --position-angle=FLT
The position angle (in degrees) of the profiles relative to the first FITS axis (horizontal when viewed in SAO DS9).
By default, it is @option{--position-angle=0}, which means that the semi-major axis of the profiles will be parallel to the first FITS axis.
This parameter is used directly in the @file{astscript-radial-profile} script.
@item -s FLT,FLT
@itemx --sigmaclip=FLT,FLT
Sigma clipping parameters: only relevant if sigma-clipping operators are requested by @option{--normop}.
For more on sigma-clipping, see @ref{Sigma clipping}.
@item -t
@itemx --tmpdir
Directory to keep temporary files during the execution of the script.
If the directory does not exist at run-time, this script will create it.
By default, upon completion of the script, this directory will be deleted.
However, if you would like to keep the intermediate files, you can use the @option{--keeptmp} option.
@item -k
@itemx --keeptmp
Do not remove the temporary directory (see description of @option{--keeptmp}).
This option is useful for debugging and checking the outputs of internal steps.
@item -o STR
@itemx --output=STR
Filename of stamp image.
By default the name of the stamp will be a combination of the input image name, the name of the script, and the coordinates of the center.
For example, if the input image is named @file{image.fits} and the center is @option{--center=33,78}, then the output name will be @file{image_stamp_33_78.fits}.
The main reason for setting this name is to have a unique name for each stamp by default.
@end table
@node Invoking astscript-psf-unite, Invoking astscript-psf-scale-factor, Invoking astscript-psf-stamp, PSF construction and subtraction
@subsection Invoking astscript-psf-unite
This installed script will join two PSF images at a given radius.
This operation is commonly used when merging (uniting) the inner and outer parts of the PSF.
A complete tutorial is available to show the operation of this script as a modular component to extract the PSF of a dataset: @ref{Building the extended PSF}.
The executable name is @file{astscript-psf-unite}, with the following general template:
@example
$ astscript-psf-unite [OPTION...] FITS-file
@end example
@noindent
Examples:
@example
## Multiply inner.fits by 3 and place it in the center of
## outer.fits (within a radius of 25 pixels).
$ astscript-psf-unite outer.fits \
           --inner=inner.fits --scale=3 \
           --radius=25 --output=joined.fits
## Same as the example above, but considering an
## ellipse (instead of a circle).
$ astscript-psf-unite outer.fits \
           --inner=inner.fits --scale=3 \
           --radius=25 --axis-ratio=0.5 \
           --position-angle=40 --output=joined.fits
@end example
The junction is done by considering the input image as the outer part.
The central part is specified by the FITS image given to @option{--inner}, and it is multiplied by the factor @option{--scale}.
All pixels within @option{--radius} (in pixels) of the center of the outer part are then replaced with the inner image.
The scale factor to multiply with the inner part has to be explicitly provided (see the description of @option{--scale} below).
Note that this script assumes that the PSF is centered in both images.
More options are available with the goal of obtaining a good junction.
A full description of each option is given below.
@table @option
@item -h STR
@itemx --hdu=STR
The HDU/extension of the input image to use.
@item -i STR
@itemx --inner=STR
Filename of the inner PSF.
This image is considered to be the central part of the PSF.
It will be cropped at the radius specified by the option @option{--radius}, and multiplied by the factor specified by @option{--scale}.
After that, it will be appended to the outer part (input image).
@item -I STR
@itemx --innerhdu=STR
The HDU/extension of the inner PSF (option @option{--inner}).
@item -f FLT
@itemx --scale=FLT
Factor by which the inner part (@option{--inner}) is multiplied.
This factor is necessary to put the two different parts of the PSF at the same flux level.
A convenient way of obtaining this value is by using the script @file{astscript-psf-scale-factor}, see @ref{Invoking astscript-psf-scale-factor}.
There is also a full tutorial on using all the @command{astscript-psf-*} installed scripts together, see @ref{Building the extended PSF}.
We recommend doing that tutorial before starting to work on your own datasets.
@item -r FLT
@itemx --radius=FLT
Radius (in pixels) at which the junction of the images is done.
All pixels in the outer image within this radius (from its center) will be replaced with the pixels of the inner image (that has been scaled).
By default, a circle is assumed for the shape of the inner region, but this can be tweaked with @option{--axis-ratio} and @option{--position-angle} (see below).
@item -Q FLT
@itemx --axis-ratio=FLT
Axis ratio of ellipse to define the inner region.
By default this option has a value of 1.0, so all central pixels (of the outer image) within a circle of radius @option{--radius} are replaced with the scaled inner image pixels.
With this option, you can customize the shape of pixels to take from the inner and outer profiles.
For a PSF, it will usually not be necessary to change this option:
even if the PSF is non-circular, the inner and outer parts will both have the same ellipticity.
So if the scale factor is chosen accurately, the exact shape used to select which pixels of the inner image replace those of the outer image becomes irrelevant.
@item -p FLT
@itemx --position-angle=FLT
Position angle of the ellipse (in degrees) to define which central pixels of the outer image to replace with the scaled inner image.
Similar to @option{--axis-ratio} (see above).
@item -t
@itemx --tmpdir
Directory to keep temporary files during the execution of the script.
If the directory does not exist at run-time, this script will create it.
By default, upon completion of the script, this directory will be deleted.
However, if you would like to keep the intermediate files, you can use the @option{--keeptmp} option.
@item -k
@itemx --keeptmp
Do not remove the temporary directory (see the description of @option{--tmpdir} above).
This option is useful for debugging and checking the outputs of internal steps.
@end table
@node Invoking astscript-psf-scale-factor, Invoking astscript-psf-subtract, Invoking astscript-psf-unite, PSF construction and subtraction
@subsection Invoking astscript-psf-scale-factor
This installed script will compute the multiplicative factor (scale) that is necessary to match the PSF to a given star.
The match in flux is done within a ring of pixels.
The standard deviation of that ring of pixels is also provided as an output.
It can also be used to compute the scale factor for matching the inner part of the PSF with the outer part during the creation of the PSF.
A complete tutorial is available to show the operation of this script as a modular component to extract the PSF of a dataset: @ref{Building the extended PSF}.
The executable name is @file{astscript-psf-scale-factor}, with the following general template:
@example
$ astscript-psf-scale-factor [OPTION...] FITS-file
@end example
@noindent
Examples:
@example
## Compute the scale factor for the object at (x,y)=(53,69) for
## the PSF (psf.fits). Compute it in the ring 20-30 pixels.
$ astscript-psf-scale-factor image.fits --mode=img \
           --center=53,69 --normradii=20,30 --psf=psf.fits
## Iterate over a catalog with RA,Dec positions of stars that are in
## the input image to compute their scale factors.
$ asttable catalog.fits | while read -r ra dec mag; do \
    astscript-psf-scale-factor image.fits \
           --mode=wcs \
           --psf=psf.fits \
           --center=$ra,$dec --quiet \
           --normradii=20,30 > scale-"$ra"-"$dec".txt; done
@end example
The input should be an image containing the star that you want to match in flux with the PSF.
The output will be two numbers that are printed on the command-line.
The first number is the multiplicative factor to scale the PSF image (given to @option{--psf}) to match in flux with the given star (which is located in @option{--center} coordinate of the input image).
The scale factor will be calculated within the ring of pixels specified by the option @option{--normradii}.
The second number is the standard deviation value (sigma-clipped) of such a ring of pixels.
The pixels within this ring are extracted from both the PSF and the input image.
For the input image, the ring is taken around the selected coordinate, while all other sources are masked (see @option{--segment}).
The finally selected pixels of the input image will then be divided by those of the PSF image.
This gives us an image containing one scale factor per pixel.
The finally reported value is the sigma-clipped median of all the scale factors in the finally-used pixels.
To fully understand the process on first usage, we recommend that you run this script with @option{--keeptmp} and inspect the files inside of the temporary directory.
The most common use-cases of this scale factor are:
@enumerate
@item
To find the factor for joining two different parts of the same PSF, see @ref{Invoking astscript-psf-unite}.
@item
When modeling a star in order to subtract it using the PSF, see @ref{Invoking astscript-psf-subtract}.
@end enumerate
For a full tutorial on how to use this script along with the other @command{astscript-psf-*} scripts in Gnuastro, please see @ref{Building the extended PSF}.
To allow full customizability, the following options are available with this script.
@table @option
@item -h STR
@itemx --hdu=STR
The HDU/extension of the input image to use.
@item -p STR
@itemx --psf=STR
Filename of the PSF image.
The PSF is assumed to be centered in this image.
@item -P STR
@itemx --psfhdu=STR
The HDU/extension of the PSF image.
@item -c FLT,FLT
@itemx --center=FLT,FLT
The central position of the object to scale with the PSF.
This parameter is passed to Gnuastro's Crop program to make a crop for further processing (see @ref{Crop}).
The positions along each dimension must be separated by a comma (@key{,}).
The units of the coordinates are interpreted based on the value to the @option{--mode} option (see below).
The given coordinate for the central value can have sub-pixel elements (for example, it falls on coordinate 123.4,567.8 of the input image pixel grid).
In such cases, after cropping, this script will use Gnuastro's @ref{Warp} to shift (or translate) the pixel grid by @mymath{-0.4} pixels along the horizontal and @mymath{1-0.8=0.2} pixels along the vertical.
Finally the newly added pixels (due to the warping) will be trimmed to have your desired coordinate exactly in the center of the central pixel of the output.
This is very important (critical!) when you are constructing the central part of the PSF.
But for the very far outer parts it may not be too effective (this should be checked), or the target object may already be centered at the requested coordinate.
In such cases, to avoid wasting time on the warping, you can simply use @option{--nocentering} to disable sub-pixel centering.
@item -d
@itemx --nocentering
Do not do the sub-pixel centering to a new pixel grid.
See the description of the @option{--center} option for more.
@item -O STR
@itemx --mode=STR
Interpret the center position of the object (values given to @option{--center}) in image or WCS coordinates.
This option thus accepts only two values: @option{img} or @option{wcs}.
@item -n INT,INT
@itemx --normradii=INT,INT
Inner (inclusive) and outer (exclusive) radii (in units of pixels) around the central position in which the scale factor is computed.
The option takes two values separated by a comma (@key{,}).
The first value is the inner radius, the second is the outer radius.
These two radii define a ring of pixels around the center that is used for obtaining the scale factor value.
@item -W INT,INT
@itemx --widthinpix=INT,INT
Size (width) of the image stamp in pixels.
This is an intermediate product computed internally by the script.
By default, the size of the stamp is automatically set to be as small as possible (i.e., two times the external radius of the ring specified by @option{--normradii}) to make the computation fast.
As a consequence, this option is only relevant for checking and testing that everything is fine (debugging; it will usually not be necessary).
@item -S STR
@itemx --segment=STR
Optional filename of a segmentation image from Segment's output (must contain the @code{CLUMPS} and @code{OBJECTS} HDUs).
For more on the definition of ``objects'' and ``clumps'', see @ref{Segment}.
If given, Segment's output is used to mask all background sources from the large foreground object (a bright star):
@itemize
@item
Objects that are not the central object.
@item
Clumps (within the central object) that are not the central clump.
@end itemize
The result is that all objects and clumps that contaminate the central source are masked, while the diffuse flux of the central object remains.
@item -s FLT,FLT
@itemx --sigmaclip=FLT,FLT
Sigma clipping parameters used in the end to find the final scale factor from the distribution of all pixels used.
For more on sigma-clipping, see @ref{Sigma clipping}.
@item -t
@itemx --tmpdir
Directory to keep temporary files during the execution of the script.
If the directory does not exist at run-time, this script will create it.
By default, upon completion of the script, this directory will be deleted.
However, if you would like to keep the intermediate files, you can use the @option{--keeptmp} option.
@item -k
@itemx --keeptmp
Do not remove the temporary directory (see the description of @option{--tmpdir} above).
This option is useful for debugging and checking the outputs of internal steps.
@end table
@node Invoking astscript-psf-subtract, , Invoking astscript-psf-scale-factor, PSF construction and subtraction
@subsection Invoking astscript-psf-subtract
This installed script will put the provided PSF into a given position within the input image (implementing sub-pixel adjustments where necessary), and then it will subtract it.
It is aimed at modeling and subtracting the scattered light field of an input image.
It is also possible to obtain only the star as modeled by the PSF (without subtracting it from the original image).
A complete tutorial is available to show the operation of this script as a modular component to extract the PSF of a dataset: @ref{Building the extended PSF}.
The executable name is @file{astscript-psf-subtract}, with the following general template:
@example
$ astscript-psf-subtract [OPTION...] FITS-file
@end example
@noindent
Examples:
@example
## Multiply the PSF (psf.fits) by 3 and subtract it from the
## input image (image.fits) at the pixel position (x,y)=(53,69).
$ astscript-psf-subtract image.fits \
           --psf=psf.fits \
           --mode=img \
           --scale=3 \
           --center=53,69 \
           --output=star-53_69.fits
## Iterate over a catalog with positions of stars that are
## in the input image. Use WCS coordinates.
$ asttable catalog.fits | while read -r ra dec mag; do
    scale=$(cat scale-"$ra"-"$dec".txt)
    astscript-psf-subtract image.fits \
           --mode=wcs \
           --psf=psf.fits \
           --scale=$scale \
           --center=$ra,$dec; done
@end example
The input is the image containing the star to be modeled.
The result is the same image but with the star (as modeled by the PSF) subtracted.
The modeling of the star is done with the PSF image specified with the option @option{--psf}, and flux-scaled with the option @option{--scale} at the position defined by @option{--center}.
Instead of obtaining the PSF-subtracted image, it is also possible to obtain the modeled star by the PSF.
To do that, use the option @option{--modelonly}.
With this option, the output will be an image of the same size as the original, with the flux-scaled PSF placed at the star's coordinates.
In this case, the regions not covered by the PSF are set to zero.
Note that this script works over individual objects.
As a consequence, to generate a scattered light field of many stars, it is necessary to make multiple calls.
A full description of each option is given below.
@table @option
@item -h STR
@itemx --hdu=STR
The HDU/extension of the input image to use.
@item -p STR
@itemx --psf=STR
Filename of the PSF image.
The PSF is assumed to be centered in this image.
@item -P STR
@itemx --psfhdu=STR
The HDU/extension of the PSF image.
@item -O STR
@itemx --mode=STR
Interpret the center position of the object (values given to @option{--center}) in image or WCS coordinates.
This option thus accepts only two values: @option{img} or @option{wcs}.
@item -c FLT,FLT
@itemx --center=FLT,FLT
The central position of the object.
This parameter is used in @ref{Crop} to center and crop the image.
The positions along each dimension must be separated by a comma (@key{,}).
The number of values given to this option must be the same as the dimensions of the input dataset.
The units of the coordinates are read based on the value to the @option{--mode} option, see above.
If the central position does not fall in the center of a pixel in the input image, the PSF is resampled with sub-pixel change in the pixel grid before subtraction.
@item -s FLT
@itemx --scale=FLT
Factor by which the PSF (@option{--psf}) is multiplied.
This factor is necessary to put the PSF with the desired flux level.
A convenient way of obtaining this value is by using the script @file{astscript-psf-scale-factor}, see @ref{Invoking astscript-psf-scale-factor}.
For a full tutorial on using the @command{astscript-psf-*} scripts together, see @ref{Building the extended PSF}.
@item -t
@itemx --tmpdir
Directory to keep temporary files during the execution of the script.
If the directory does not exist at run-time, this script will create it.
By default, upon completion of the script, this directory will be deleted.
However, if you would like to keep the intermediate files, you can use the @option{--keeptmp} option.
@item -k
@itemx --keeptmp
Do not remove the temporary directory (see the description of @option{--tmpdir} above).
This option is useful for debugging and checking the outputs of internal steps.
@item -m
@itemx --modelonly
Do not subtract the modeled star; output the PSF model of it instead.
This option is useful when you want to obtain the scattered light field given by the PSF-modeled star.
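For example, to obtain only the scaled PSF model of the star from the first example above (instead of subtracting it):
@example
$ astscript-psf-subtract image.fits --psf=psf.fits \
           --mode=img --center=53,69 --scale=3 \
           --modelonly --output=model-53_69.fits
@end example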
@end table
@node Makefile extensions, Library, Installed scripts, Top
@chapter Makefile extensions (for GNU Make)
@cindex Make
@url{https://en.wikipedia.org/wiki/Make_(software), Make} is a build automation tool.
It can greatly help manage your analysis workflow, even very complex projects with thousands of files and hundreds of processing steps.
In this book, we have discussed Make previously in the context of parallelization (see @ref{How to run simultaneous operations}).
For example, @url{http://maneage.org,Maneage} uses Make to organize complex and reproducible workflows, see Akhlaghi et al. @url{https://arxiv.org/abs/2006.03018,2021}.
@cindex GNU Make
GNU Make is the most common and powerful implementation of Make, with many unique additions to the core POSIX standard of Make.
One of those features is the ability to add extensions using a dynamic library (that Gnuastro provides).
For the details of this feature from GNU Make's own manual, see its @url{https://www.gnu.org/software/make/manual/html_node/Loading-Objects.html, Loading dynamic objects} section.
Through this feature, Gnuastro provides additional Make functions that are useful in the context of data analysis.
To use this feature, Gnuastro has to be built in shared library mode.
Gnuastro's Make extensions will not work if you build Gnuastro without shared libraries (for example, when you configure Gnuastro with @option{--disable-shared} or @option{--debug}).
@menu
* Loading the Gnuastro Make functions:: How to find and load Gnuastro's Make library.
* Makefile functions of Gnuastro:: The available functions.
@end menu
@node Loading the Gnuastro Make functions, Makefile functions of Gnuastro, Makefile extensions, Makefile extensions
@section Loading the Gnuastro Make functions
To load Gnuastro's Make functions in your Makefile, you should use the @command{load} command of GNU Make in your Makefile.
The load command should be given Gnuastro's @file{libgnuastro_make.so} dynamic library, which has been specifically written for being called by GNU Make.
The generic command looks like this (the @file{/PATH/TO} part should be changed):
@example
load /PATH/TO/lib/libgnuastro_make.so
@end example
@noindent
Here are the possible replacements of the @file{/PATH/TO} component:
@table @file
@item /usr/local
If you installed Gnuastro from source and did not use the @option{--prefix} option at configuration time, you should use this base directory.
@item /usr/
If you installed Gnuastro through your operating system's package manager, it is highly likely that Gnuastro's library is here.
@item ~/.local
If you installed Gnuastro from source, but used @option{--prefix} to install Gnuastro in your home directory (as described in @ref{Installation directory}).
@end table
If you cannot find @file{libgnuastro_make.so} in the locations above, the command below should give you its location.
It assumes that the libraries are in the same base directory as the programs (which is usually the case).
@example
$ which astfits | sed -e's|bin/astfits|lib/libgnuastro_make.so|'
@end example
The problem with the command above is that it is not inside the Makefile: the user has to run it separately, adding more complexity and therefore potential for bugs.
Therefore, for a more portable Makefile (one that runs independent of where the host's Gnuastro is installed), you can implement the command above in Make as below.
For the @code{:=}, see @ref{Makefile functions of Gnuastro}.
@example
gnuastro-prefix := $(subst bin/astfits,,$(shell which astfits))
load $(gnuastro-prefix)/lib/libgnuastro_make.so
@end example
@node Makefile functions of Gnuastro, , Loading the Gnuastro Make functions, Makefile extensions
@section Makefile functions of Gnuastro
All Gnuastro Make functions start with the @command{ast-} prefix (similar to the programs on the command-line, but with a dash).
After you have loaded Gnuastro's shared library for Makefiles within your Makefile, you can call these functions just like any Make function.
For instructions on how to load Gnuastro's Make functions, see @ref{Loading the Gnuastro Make functions}.
There are two types of Make functions in Gnuastro's Make extensions:
1) Basic operations on text that are more general than astronomy or Gnuastro (see @ref{Text functions for Makefiles}).
2) Operations that are directly related to astronomy (mostly FITS files) and Gnuastro (see @ref{Astronomy functions for Makefiles}).
@cartouche
@noindent
@strong{Difference between `@code{=}' and `@code{:=}' for variable definition} When you define a variable with `@code{=}', its value is expanded only when used, not when defined.
However, when you use `@code{:=}', it is immediately expanded when defined.
Therefore the location of a `@code{:=}' variable in the Makefile matters: if used before its definition, it will be empty!
Those defined by `@code{=}' can be used even before they are defined!
On the other hand, if your variable invokes functions (like @code{foreach} or @code{wildcard}), it is better to use `@code{:=}'.
Otherwise, each time the value is used, the function will be expanded again (possibly many times), and this will reduce the speed of your pipeline.
For more, see the @url{https://www.gnu.org/software/make/manual/html_node/Flavors.html, The two flavors of variables} in the GNU Make manual.
@end cartouche
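As a minimal demonstration of this difference (a generic GNU Make sketch, not specific to Gnuastro), consider the Makefile below.
@example
# 'lazy' is expanded when used; 'eager' is expanded right here,
# while 'greeting' is still empty.
lazy   = $(greeting) world
eager := $(greeting) world
greeting = hello
all:; @@echo "lazy: '$(lazy)'  eager: '$(eager)'"
@end example
Running @command{make} on this file prints @code{lazy: 'hello world'} but @code{eager: ' world'}: the @code{:=} variable was expanded before @code{greeting} had a value.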
@menu
* Text functions for Makefiles:: Basic text operations to supplement Make.
* Astronomy functions for Makefiles:: Astronomy/FITS related functions.
@end menu
@node Text functions for Makefiles, Astronomy functions for Makefiles, Makefile functions of Gnuastro, Makefile functions of Gnuastro
@subsection Text functions for Makefiles
The functions described below operate on simple strings (plain text).
They are therefore generic (not limited to astronomy/FITS), but because they are commonly necessary in astronomical data analysis pipelines and are not available anywhere else, we have included them in Gnuastro.
The names of these functions start with @code{ast-text-*} and each has a fully working example to demonstrate its usage.
@table @code
@item $(ast-text-to-upper STRING)
Returns the input string but with all characters in UPPER-CASE.
For example, the following minimal Makefile will print @code{FOOO BAAR UGGH}.
@example
load /usr/local/lib/libgnuastro_make.so
list = fOOo bAar UggH
ulist := $(ast-text-to-upper $(list))
all:; echo $(ulist)
@end example
@item $(ast-text-to-lower STRING)
Returns the input string but with all characters in lower-case.
For example, the following minimal Makefile will print @code{fooo baar uggh}.
@example
load /usr/local/lib/libgnuastro_make.so
list = fOOo bAar UggH
llist := $(ast-text-to-lower $(list))
all:; echo $(llist)
@end example
@item $(ast-text-contains STRING, TEXT)
Returns all white-space-separated words in @code{TEXT} that contain the @code{STRING}, removing any words that @emph{do not} match.
For example, the following minimal Makefile will only print @code{bAaz Aah} from the list.
@example
load /usr/local/lib/libgnuastro_make.so
list = fooo baar bAaz uggh Aah
all:
echo $(ast-text-contains Aa, $(list))
@end example
This can be thought of as Make's own @code{filter} function, had it accepted two @code{%} patterns (something like @code{$(filter %Aa%,$(list))} for the example above).
In fact, the first sentence describing this function is taken from the Make manual's first sentence describing @code{filter}!
Unfortunately however, Make's @code{filter} function only accepts a single @code{%}, not two.
@item $(ast-text-not-contains STRING, TEXT)
Returns all white-space-separated words in @code{TEXT} that @emph{do not} contain the @code{STRING}, removing any words that @emph{do} match.
This is the inverse of the @code{ast-text-contains} function.
For example, the following minimal Makefile will print @code{fooo baar uggh} from the list.
@example
load /usr/local/lib/libgnuastro_make.so
list = fooo baar bAaz uggh Aah
all:
echo $(ast-text-not-contains Aa, $(list))
@end example
@item $(ast-text-prev TARGET, LIST)
Returns the word in @code{LIST} that is previous to @code{TARGET}.
If @code{TARGET} is the first word of the list, or is not within it at all, this function will return an empty string (nothing).
If any of the arguments is an empty string (or only contains space characters like `@key{SPACE}', `@key{TAB}' or new-line), this function will return an empty string (having no effect in Make).
The simple example below shows a minimal usage scenario where the output will be @code{fooo}.
@example
load /usr/local/lib/libgnuastro_make.so
list = fooo baar bAaz uggh Aah
all:
echo $(ast-text-prev baar, $(list))
@end example
This function is useful in many other scenarios.
For example, one scenario where this function can be useful is when you want a list of higher-level targets to always be executed in sequence (even when Make is run in parallel), while their lower-level prerequisites are executed in parallel.
The fully working example below shows this in practice: the ``final'' target depends on the sub-components @file{a.fits}, @file{b.fits}, @file{c.fits} and @file{d.fits}.
But each one of these has seven dependencies (for example @file{a.fits} depends on the sub-sub-components @file{a-1.fits}, @file{a-2.fits}, @file{a-3.fits}, ...).
Without this function, Make will build all the sub-sub-components first, then the sub-components and ultimately the final target.
When the files are small and there aren't too many of them, this is not a problem.
But when you have hundreds or thousands of sub-sub-components, your computer may not have the capacity to hold them all in storage or RAM (during processing).
In such cases, you want the sub-components to be built in series, but the sub-sub-components of each sub-component to be built in parallel.
This function allows just this in an easy manner as below: the sub-sub-components of each sub-component depend on the previous sub-component.
To see the effect of this function put the example below in a @file{Makefile} and run @code{make -j12} (to simultaneously execute 12 jobs); then comment/remove this function (so there is no prerequisite in @code{$(subsubs)}) and re-run @code{make -j12}.
@example
# Basic settings
all: final
.SECONDEXPANSION:
load /usr/local/lib/libgnuastro_make.so
# 4 sub-components (alphabetic), each with 7
# sub-sub-components (numeric).
subids = a b c d
subsubids = 1 2 3 4 5 6 7
subs := $(foreach s, $(subids), $(s).fits)
subsubs := $(foreach s, $(subids), \
$(foreach ss, $(subsubids), \
$(s)-$(ss).fits))
# Build the sub-components:
$(subsubs): %.fits: $$(ast-text-prev \
$$(word 1, $$(subst -, ,%)).fits, \
$(subs))
@@echo "$@@: $^"
# Build the final components
$(subs): %.fits: $$(foreach s, $(subsubids), %-$$(s).fits)
@@echo "$@@: $^"
# Final
final: $(subs)
@@echo "$@@: $^"
@end example
As you see, when this function is present, the sub-sub-components of each sub-component are executed in parallel, while at each moment, only a single sub-component's prerequisites are being made.
Without this function, make first builds all the sub-sub-components, then goes to the sub-components.
There can be any level of components between these, allowing this operation to be as complex as necessary in your data analysis pipeline.
Unfortunately the @code{.NOTPARALLEL} target of GNU Make doesn't allow this level of customization.
@item $(ast-text-prev-batch TARGET, NUM, LIST)
Returns the previous batch of @code{NUM} words in @code{LIST} (in relation to the batch containing @code{TARGET}).
@code{NUM} will be interpreted as an unsigned integer and cannot be zero.
If any of the arguments is an empty string (or only contains space characters like `@key{SPACE}', `@key{TAB}' or new-line), this function will return an empty string (having no effect in Make).
In the special case that @code{NUM=1}, this is equivalent to the @code{ast-text-prev} function that is described above.
Here is one scenario where this function is useful: in astronomy, datasets can easily become very large.
Therefore, some Make recipes in your pipeline may require a lot of memory; such that executing them on all the available threads (for example 12 threads with @code{-j12}) will immediately occupy all your RAM, causing a crash in your pipeline.
However, let's assume that you have sufficient RAM to execute 4 targets of those recipes in parallel.
Therefore while you want all the other steps of your pipeline to be using all 12 threads, you want one rule to only build 4 targets at any time.
But before starting to use this function, also see @code{ast-text-prev-batch-by-ram}.
The example below demonstrates the usage of this function in a minimal working example of the scenario above: we want to build 15 targets, but in batches of 4 targets at a time, irrespective of how many threads Make was executed with.
@example
load /usr/local/lib/libgnuastro_make.so
.SECONDEXPANSION:
targets := $(foreach i,$(shell seq 15),a-$(i).fits)
all: $(targets)
$(targets): $$(ast-text-prev-batch $$@@,4,$(targets))
@@echo "$@@: $^"
@end example
@noindent
If you place the example above in a plain-text file called @file{Makefile} (correcting for the TAB at the start of the recipe), and run Make on 12 threads like below, you will see the following output.
The targets in each batch are not ordered (and the order may change in different runs) because they have been run in parallel.
@example
$ make -j12
a-1.fits:
a-3.fits:
a-2.fits:
a-4.fits:
a-5.fits: a-1.fits a-2.fits a-3.fits a-4.fits
a-6.fits: a-1.fits a-2.fits a-3.fits a-4.fits
a-8.fits: a-1.fits a-2.fits a-3.fits a-4.fits
a-7.fits: a-1.fits a-2.fits a-3.fits a-4.fits
a-9.fits: a-5.fits a-6.fits a-7.fits a-8.fits
a-11.fits: a-5.fits a-6.fits a-7.fits a-8.fits
a-12.fits: a-5.fits a-6.fits a-7.fits a-8.fits
a-10.fits: a-5.fits a-6.fits a-7.fits a-8.fits
a-13.fits: a-9.fits a-10.fits a-11.fits a-12.fits
a-15.fits: a-9.fits a-10.fits a-11.fits a-12.fits
a-14.fits: a-9.fits a-10.fits a-11.fits a-12.fits
@end example
Any other rule that is later added to this Makefile (as a prerequisite/parent of @code{targets}, or as a child of @code{targets}) will be run on 12 threads.
@item $(ast-text-prev-batch-by-ram TARGET, NEEDED_RAM_GB, LIST)
@cindex RAM
Similar to @code{ast-text-prev-batch}, but instead of taking the number of words/files in each batch, this function takes the maximum amount of RAM that is needed by one instance of the recipe.
Through the @code{NEEDED_RAM_GB} argument, you should specify the amount of RAM that a @emph{single} instance of the recipe in this rule needs.
If any of the arguments is an empty string (or only contains space characters like `@key{SPACE}', `@key{TAB}' or new-line), this function will return an empty string (having no effect in Make).
When the needed RAM is larger than the available RAM, only one job will be done at a time (similar to @code{ast-text-prev}).
The number of files in each batch is calculated internally by reading the available RAM on the system at the moment Make calls this function.
Therefore this function is more generalizable to different computers (with very different RAM and/or CPU threads).
But to avoid overlapping with other rules that may consume a lot of RAM, it is better to design your Makefile such that other rules are only executed once all instances of this rule have been completed.
For example, assume every instance of one rule in your Makefile requires a maximum of 5.2 GB of RAM during its execution, and your computer has 32 GB of RAM and 2 threads.
In this case, you do not need to manage the targets at all: at the worst moment your pipeline will consume 10.4GB of RAM (much smaller than the 32GB of RAM that you have).
However, suppose you later run the same pipeline on another machine with identical RAM, but 12 threads!
In this case, you would need @mymath{5.2\times12=62.4}GB of RAM; the new system does not have that much, causing your pipeline to crash.
If you used @code{ast-text-prev-batch} function (described above) to manage these hardware limitations, you would have to manually change the number on every new system; this is inconvenient, can cause many bugs, and requires manual intervention (not making your pipeline automatic).
The @code{ast-text-prev-batch-by-ram} function was designed as a solution to the problem above: it will read the amount of available RAM at the time that Make starts (before the recipes in your pipeline are actually executed).
From the value to @code{NEEDED_RAM_GB}, it will then estimate how many instances of that recipe can be executed in parallel without breaching the available RAM of the system.
Therefore it is important to not run another heavy RAM consumer on the system while your pipeline is being executed.
Note that this function reads the available RAM, not total RAM; it therefore accounts for the background operations of the operating system or graphic user environment that are running in parallel to your pipeline; and assumes they will remain at the same level.
The fully working example below shows the usage of this function in a scenario where we assume the recipe requires 4.2GB of RAM for each target.
@example
load /usr/local/lib/libgnuastro_make.so
.SECONDEXPANSION:
targets := $(foreach i,$(shell seq 13),$(i).fits)
all: $(targets)
$(targets): $$(ast-text-prev-batch-by-ram $$@@,4.2,$(targets))
@@echo "$@@: $^"
@end example
@noindent
Once the contents above are placed in a @file{Makefile} and you execute the command below in a system with about 27GB of available RAM (total RAM is 32GB; the 5GB difference is used by the operating system and other background programs), you will get an output like below.
@example
$ make -j12
1.fits:
2.fits:
3.fits:
4.fits:
5.fits:
6.fits:
7.fits: 1.fits 2.fits 3.fits 4.fits 5.fits 6.fits
8.fits: 1.fits 2.fits 3.fits 4.fits 5.fits 6.fits
11.fits: 1.fits 2.fits 3.fits 4.fits 5.fits 6.fits
10.fits: 1.fits 2.fits 3.fits 4.fits 5.fits 6.fits
9.fits: 1.fits 2.fits 3.fits 4.fits 5.fits 6.fits
12.fits: 1.fits 2.fits 3.fits 4.fits 5.fits 6.fits
13.fits: 7.fits 8.fits 9.fits 10.fits 11.fits 12.fits
@end example
Depending on the amount of available RAM on your system, you will get a different output.
To see the effect, you can decrease or increase the amount of required RAM (@code{4.2} in the example above).
@cartouche
@noindent
@cindex RAM usage (maximum)
@strong{What is the maximum RAM required by my command?} Put a `@code{/usr/bin/time --format=%M}' prefix before your full command (including any options and arguments).
For example like this for a call to Gnuastro's Warp program:
@example
/usr/bin/time --format=%M astwarp image.fits
@end example
After the regular outputs of the program, you will see a number on the last line.
This number is the maximum used RAM (in @emph{kilobytes}) during the execution of the program.
Later, you can convert this to Gigabytes (to feed into this function) by dividing it by @mymath{10^6}.
@end cartouche
@item $(ast-text-next TARGET, LIST)
Returns the word in @code{LIST} that is next to @code{TARGET}.
Its operation is very similar to the @code{ast-text-prev} function defined above.
The output of the example below (that uses this function) is @code{bAaz}.
@example
load /usr/local/lib/libgnuastro_make.so
list = fooo baar bAaz uggh Aah
all:
echo $(ast-text-next baar, $(list))
@end example
@item $(ast-text-next-words TARGET, NUM, LIST)
Returns the next @code{NUM} words in @code{LIST} (after @code{TARGET}).
The output of the example below (that uses this function) is @code{bAaz uggh}.
@example
load /usr/local/lib/libgnuastro_make.so
list = fooo baar bAaz uggh Aah
all:
echo $(ast-text-next-words baar, 2, $(list))
@end example
@end table
@node Astronomy functions for Makefiles, , Text functions for Makefiles, Makefile functions of Gnuastro
@subsection Astronomy functions for Makefiles
FITS files (the standard data format in astronomy) have unique features (header keywords and HDUs) that can greatly help designing workflows in Makefiles.
The Makefile extension functions of this section allow you to optimally use those features within your pipelines.
Besides FITS, when designing your workflow/pipeline with Gnuastro, there are also special features like version checking that simplify your design.
@table @code
@item $(ast-version-is STRING)
@cindex Reproducibility
Returns @code{1} if the version of the used Gnuastro is equal to @code{STRING}, and @code{0} otherwise.
This is useful/critical for obtaining reproducible results on different systems.
It can be used in combination with @url{https://www.gnu.org/software/make/manual/html_node/Conditionals.html, Conditionals in Make} to ensure the required version of Gnuastro is going to be used in your workflow.
For example, in the minimal working Makefile below, we are using it to specify if the default (first) target (@code{all}) should have any prerequisites (and let the workflow start), or if it should simply print a message (that the required version of Gnuastro isn't installed) and abort (without any prerequisites).
@example
load /usr/local/lib/libgnuastro_make.so
gnuastro-version = 0.19
ifeq ($(ast-version-is $(gnuastro-version)),1)
all: paper.pdf
else
all:; @@echo "Please use Gnuastro $(gnuastro-version)"
endif
result.fits: input.fits
astnoisechisel $< --output=$@@
paper.pdf: result.fits
pdflatex --halt-on-error paper.tex
@end example
@item $(ast-fits-with-keyvalue KEYNAME, KEYVALUES, HDU, FITS_FILES)
Will select only the FITS files (from a list of many in @code{FITS_FILES}, non-FITS files are ignored), where the @code{KEYNAME} keyword has the value(s) given in @code{KEYVALUES}.
Only the HDU given in the @code{HDU} argument will be checked.
According to the FITS standard, the keyword name is not case sensitive, but the keyword value is.
For example, if you have many FITS files in the @file{/datasets/images} directory, the minimal Makefile below will put those with a value of @code{BAR} or @code{BAZ} for the @code{FOO} keyword in HDU number @code{1} in the @code{selected} Make variable.
Notice how there is no comma between @code{BAR} and @code{BAZ}: you can specify any series of values.
@verbatim
load /usr/local/lib/libgnuastro_make.so
files := $(wildcard /datasets/images/*.fits)
selected := $(ast-fits-with-keyvalue FOO, BAR BAZ, 1, $(files))
all:
echo "Full: $(words $(files)) files";
echo "Selected: $(words $(selected)) files"
@end verbatim
@item $(ast-fits-unique-keyvalues KEYNAME, HDU, FITS_FILES)
Will return the unique values of the given FITS keyword (@code{KEYNAME}) in the given HDU of all the input FITS files (non-FITS files are ignored).
For example, after the commands below, the @code{keyvalues} variable will contain the unique values given to the @code{FOO} keyword in HDU number 1 of all the FITS files in @file{/datasets/images/*.fits}.
@example
files := $(wildcard /datasets/images/*.fits)
keyvalues := $(ast-fits-unique-keyvalues FOO, 1, $(files))
@end example
This is useful when you do not know the full range of values a-priori.
For example, let's assume that you are looking at a night's observations with a telescope and the purpose of the FITS image is written in the @code{OBJECT} keyword of the image (which we can assume is in HDU number 1).
This keyword can have the name of the various science targets (for example, @code{NGC123} and @code{M31}) and calibration targets (for example, @code{BIAS} and @code{FLAT}).
The list of science targets is different from project to project, such that in one night, you can observe multiple projects.
But the calibration frames have unique names.
Knowing the calibration keyword values, you can extract the science keyword values of the night with the command below (feeding the output of this function to Make's @code{filter-out} function).
@example
calib = BIAS FLAT
files := $(wildcard /datasets/images/*.fits)
science := $(filter-out $(calib), \
$(ast-fits-unique-keyvalues OBJECT, 1, $(files)))
@end example
The @code{science} variable will now contain the unique science targets that were observed in your selected FITS images.
You can use it to group the various exposures together in the next stages to make separate coadds of deep images for each science target (you can select FITS files based on their keyword values using the @code{ast-fits-with-keyvalue} function, which is described separately in this section).
@end table
@node Library, Developing, Makefile extensions, Top
@chapter Library
Each program in Gnuastro that was discussed in the prior chapters (or any program in general) is a collection of functions that is compiled into one executable file which can communicate directly with the outside world.
The outside world in this context is the operating system.
By communication, we mean that control is directly passed to a program from the operating system with a (possible) set of inputs and after it is finished, the program will pass control back to the operating system.
For programs written in C and C++, the unique @code{main} function is in charge of this communication.
Similar to a program, a library is also a collection of functions that is compiled into one executable file.
However, unlike programs, libraries do not have a @code{main} function.
Therefore they cannot communicate directly with the outside world.
This gives you the chance to write your own @code{main} function and call library functions from within it.
After compiling your program into a binary executable, you just have to @emph{link} it to the library and you are ready to run (execute) your program.
In this way, you can use Gnuastro at a much lower-level, and in combination with other libraries on your system, you can significantly boost your creativity.
This chapter starts with a basic introduction to libraries and how you can use them in @ref{Review of library fundamentals}.
The separate functions in the Gnuastro library are then introduced (classified by context) in @ref{Gnuastro library}.
If you end up routinely using a fixed set of library functions, with a well-defined input and output, it will be much more beneficial if you define a program for the job.
Therefore, in its @ref{Version controlled source}, Gnuastro comes with @ref{The TEMPLATE program} to easily define your own program(s).
@menu
* Review of library fundamentals:: Guide on libraries and linking.
* BuildProgram:: Link and run source files with this library.
* Gnuastro library:: Description of all library functions.
* Library demo programs:: Demonstration for using libraries.
@end menu
@node Review of library fundamentals, BuildProgram, Library, Library
@section Review of library fundamentals
Gnuastro's libraries are written in the C programming language.
In @ref{Why C}, we have thoroughly discussed the reasons behind this choice.
C was actually created to write Unix, thus understanding the way C works can greatly help in effectively using programs and libraries in all Unix-like operating systems.
Therefore, in the following subsections some important aspects of C, as it relates to libraries (and thus programs that depend on them) on Unix are reviewed.
First we will discuss header files in @ref{Headers} and then go onto @ref{Linking}.
This section finishes with @ref{Summary and example on libraries}.
If you are already familiar with these concepts, please skip this section and go directly to @ref{Gnuastro library}.
@cindex Modularity
In theory, a full operating system (or any software) can be written as one function.
Such a software would not need any headers or linking (that are discussed in the subsections below).
However, writing that single function and maintaining it (adding new features, fixing bugs, documentation, etc.) would be a programmer or scientist's worst nightmare! Furthermore, all the hard work that went into creating it cannot be reused in other software: every other programmer or scientist would have to re-invent the wheel.
The ultimate purpose behind libraries (which come with headers and have to be linked) is to address this problem and increase modularity: ``the degree to which a system's components may be separated and recombined'' (from Wikipedia).
The more modular the source code of a program or library, the easier maintaining it will be, and all the hard work that went into creating it can be reused for a wider range of problems.
@menu
* Headers:: Header files included in source.
* Linking:: Linking the compiled source files into one.
* Summary and example on libraries:: A summary and example on using libraries.
@end menu
@node Headers, Linking, Review of library fundamentals, Review of library fundamentals
@subsection Headers
@cindex Pre-Processor
C source code is read from top to bottom in the source file, therefore program components (for example, variables, data structures and functions) should all be @emph{defined} or @emph{declared} closer to the top of the source file: before they are used.
@emph{Defining} something in C or C++ is jargon for providing its full details.
@emph{Declaring} it, on the other-hand, is jargon for only providing the minimum information needed for the compiler to pass it temporarily and fill in the detailed definition later.
For a function, the @emph{declaration} only contains the inputs and their data-types along with the output's type@footnote{Recall that in C, functions only have one output.}.
The @emph{definition} adds to the declaration by including the exact details of what operations are done to the inputs to generate the output.
As an example, take this simple summation function:
@example
double
sum(double a, double b)
@{
return a + b;
@}
@end example
@noindent
What you see above is the @emph{definition} of this function: it shows you (and the compiler) exactly what it does to the two @code{double} type inputs and that the output also has a @code{double} type.
Note that a function's internal operations are rarely this simple and short; they can be arbitrarily long and complicated.
This unreasonably short and simple function was chosen here for ease of reading.
The declaration for this function is:
@example
double
sum(double a, double b);
@end example
@noindent
You can think of a function's declaration as a building's address in the city, and the definition as the building's complete blueprints.
When the compiler confronts a call to a function during its processing, it does not need to know anything about how the inputs are processed to generate the output.
Just as the postman does not need to know the inner structure of a building when delivering the mail.
The declaration (address) is enough.
Therefore, by @emph{declaring} the functions once at the start of the source files, we do not have to worry about where they are @emph{defined}: the definition may even come after the first call.
Even for a simple real-world operation (not a simple summation like above!), you will soon need many functions (for example, some for reading/preparing the inputs, some for the processing, and some for preparing the output).
Although it is technically possible, managing all the necessary functions in one file is not easy and is contrary to the modularity principle (see @ref{Review of library fundamentals}): for example, the functions for preparing the inputs could be reused in your other projects with different processing.
Therefore, as we will see later (in @ref{Linking}), the functions do not necessarily need to be defined in the source file where they are used.
As long as their definitions are ultimately linked to the final executable, everything will be fine.
For now, it is just important to remember that the functions that are called within one source file must be declared within the source file (declarations are mandatory), but not necessarily defined there.
In the spirit of modularity, it is common to define contextually similar functions in one source file.
For example, in Gnuastro, functions that calculate the median, mean and other statistical functions are defined in @file{lib/statistics.c}, while functions that deal directly with FITS files are defined in @file{lib/fits.c}.
Keeping the definition of similar functions in a separate file greatly helps their management and modularity, but this fact alone does not make things much easier for the caller's source code: recall that while definitions are optional, declarations are mandatory.
So if this was all, the caller would have to manually copy and paste (@emph{include}) all the declarations from the various source files into the file they are working on now.
To address this problem, programmers have adopted the header file convention: the header file of a source code contains all the declarations that a caller would need to be able to use any of its functions.
For example, in Gnuastro, @file{lib/statistics.c} (file containing function definitions) comes with @file{lib/gnuastro/statistics.h} (only containing function declarations).
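For example, the declaration of the @code{sum} function above could be placed in a header and used as below (a minimal sketch with illustrative file names; these files are not part of Gnuastro).
@example
/* sum.h: only the declaration. */
#ifndef SUM_H
#define SUM_H
double sum(double a, double b);
#endif

/* main.c: the caller only needs the declaration. */
#include <stdio.h>
#include "sum.h"
int
main(void)
@{
  printf("%f\n", sum(1.0, 2.0));
  return 0;
@}
@end example
The definition of @code{sum} can stay in a separate @file{sum.c} that is compiled and linked in later (see @ref{Linking}), for example with @command{gcc main.c sum.c -o main}.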
The discussion above was mainly focused on functions, however, there are many more programming constructs such as preprocessor macros and data structures.
Like functions, they also need to be known to the compiler when it confronts a call to them.
So the header file also contains their definitions or declarations when they are necessary for the functions.
@cindex Macro
@cindex Structures
@cindex Data structures
@cindex Pre-processor macros
Preprocessor macros (or macros for short) are replaced with their defined value by the preprocessor before compilation.
Conventionally they are written only in capital letters to be easily recognized.
It is just important to understand that the compiler does not see the macros; it sees their fixed values.
So when a header specifies macros you can do your programming without worrying about the actual values.
The standard C types (for example, @code{int}, or @code{float}) are very low-level and basic.
We can collect multiple C types into a @emph{structure} for a higher-level way to keep and pass-along data.
See @ref{Generic data container} for some examples of macros and data structures.
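As a generic C illustration of both constructs (not taken from Gnuastro's headers), see the short program below.
@example
#include <stdio.h>

/* Macro: the pre-processor replaces it with '3' before compilation. */
#define NUM_POINTS 3

/* Structure: several basic C types collected into one entity. */
struct point @{ double x, y; @};

int
main(void)
@{
  struct point p = @{1.5, 2.5@};
  printf("%d points; first: (%g, %g)\n", NUM_POINTS, p.x, p.y);
  return 0;
@}
@end example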
The contents in the header need to be @emph{include}d into the caller's source code with a special preprocessor command: @code{#include <path/to/header.h>}.
As the name suggests, the @emph{preprocessor} goes through the source code prior to the processor (or compiler).
One of its jobs is to include, or merge, the contents of files that are mentioned with this directive in the source code.
Therefore the compiler sees a single entity containing the contents of the main file and all the included files.
This allows you to include many (sometimes thousands of) declarations into your code with only one line.
Since the headers are also installed with the library into your system, you do not even need to keep a copy of them for each separate program, making things even more convenient.
Try opening some of the @file{.c} files in Gnuastro's @file{lib/} directory with a text editor to check out the include directives at the start of the file (after the copyright notice).
Let's take @file{lib/fits.c} as an example.
You will notice that Gnuastro's header files (like @file{gnuastro/fits.h}) are indeed within this directory (the @file{fits.h} file is in the @file{gnuastro/} directory).
You will notice that files like @file{stdio.h}, or @file{string.h} are not in this directory (or anywhere within Gnuastro).
On most systems the basic C header files (like @file{stdio.h} and @file{string.h} mentioned above) are located in @file{/usr/include/}@footnote{The @file{include/} directory name is taken from the pre-processor's @code{#include} directive, which is also the motivation behind the `I' in the @option{-I} option to the pre-processor.}.
Your compiler is configured to automatically search that directory (and possibly others), so you do not have to explicitly mention these directories.
Go ahead, look into the @file{/usr/include} directory and find @file{stdio.h} for example.
When the necessary header files are not in those standard directories, the preprocessor can also search in places other than the current directory.
You can specify those directories with this preprocessor option@footnote{Try running Gnuastro's @command{make} and find the directories given to the compiler with the @option{-I} option.}:
@table @option
@item -I DIR
``Add the directory @file{DIR} to the list of directories to be searched for header files.
Directories named by '-I' are searched before the standard system include directories.
If the directory @file{DIR} is a standard system include directory, the option is ignored to ensure that the default search order for system directories and the special treatment of system headers are not defeated...'' (quoted from the GNU Compiler Collection manual).
Note that the space between @key{I} and the directory is optional and commonly not used.
@end table
If the preprocessor cannot find the included files, it will abort with an error.
In fact a common error when building programs that depend on a library is that the compiler does not know where a library's header is (see @ref{Known issues}).
So you have to manually tell the compiler where to look for the library's headers with the @option{-I} option.
For a small software with one or two source files, this can be done manually (see @ref{Summary and example on libraries}).
However, to enhance modularity, Gnuastro (and most other programs/libraries) contain many source files, so the compiler is invoked many times@footnote{Nearly every command you see being executed after running @command{make} is one call to the compiler.}.
This makes manual addition or modification of this option practically impossible.
@cindex GNU build system
@cindex @command{CPPFLAGS}
To solve this problem, in the GNU build system, there are conventional environment variables for the various kinds of compiler options (or flags).
These environment variables are used in every call to the compiler (they can be empty).
The environment variable used for the C preprocessor (or CPP) is @command{CPPFLAGS}.
By giving @command{CPPFLAGS} a value once, you can be sure that each call to the compiler will be affected.
See @ref{Known issues} for an example of how to set this variable at configure time.
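For example, you can set @command{CPPFLAGS} for a single configure run like below (the directory is only an illustration):
@example
$ ./configure CPPFLAGS="-I/home/yourname/.local/include"
@end example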
@cindex GNU build system
As described in @ref{Installation directory}, you can select the top installation directory of a software using the GNU build system, when you @command{./configure} it.
All the separate components will be put in their separate sub-directory under that, for example, the programs, compiled libraries and library headers will go into @file{$prefix/bin} (replace @file{$prefix} with a directory), @file{$prefix/lib}, and @file{$prefix/include} respectively.
For enhanced modularity, libraries that contain diverse collections of functions (like GSL, WCSLIB, and Gnuastro), put their header files in a sub-directory unique to themselves.
For example, all Gnuastro's header files are installed in @file{$prefix/include/gnuastro}.
In your source code, you need to keep the library's sub-directory when including the headers from such libraries, for example, @code{#include <gnuastro/fits.h>}@footnote{the top @file{$prefix/include} directory is usually known to the compiler}.
Not all libraries need to follow this convention, for example, CFITSIO only has one header (@file{fitsio.h}) which is directly installed in @file{$prefix/include}.
@node Linking, Summary and example on libraries, Headers, Review of library fundamentals
@subsection Linking
@cindex GNU Libtool
To enhance modularity, similar functions are defined in one source file (with a @file{.c} suffix, see @ref{Headers} for more).
After running @command{make}, each human-readable, @file{.c} file is translated (or compiled) into a computer-readable ``object'' file (ending with @file{.o}).
Note that object files are also created when building programs, they are not particular to libraries.
Try opening Gnuastro's @file{lib/} and @file{bin/progname/} directories after running @command{make} to see these object files@footnote{Gnuastro uses GNU Libtool for portable library creation.
Libtool will also make a @file{.lo} file for each @file{.c} file when building libraries (@file{.lo} files are human-readable).}.
Afterwards, the object files are @emph{linked} together to create an executable program or a library.
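As a generic sketch of these two stages (with illustrative file names, outside of any build system), the commands below first compile two source files into object files, then link them into one executable:
@example
$ gcc -c main.c utils.c          ## Compile: makes main.o, utils.o.
$ gcc main.o utils.o -o myprog   ## Link: combine into one program.
@end example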
@cindex GNU Binutils
The object files contain the full definition of the functions in the respective @file{.c} file along with a list of any other function (or generally ``symbol'') that is referenced there.
To get a list of those functions you can use the @command{nm} program which is part of GNU Binutils.
For example, from the top Gnuastro directory, run:
@example
$ nm bin/arithmetic/arithmetic.o
@end example
@noindent
This will print a list of all the functions (more generally, `symbols') that were called within @file{bin/arithmetic/arithmetic.c} along with some further information (for example, a @code{T} in the second column shows that this function is actually defined here, @code{U} says that it is undefined here).
Try opening the @file{.c} file to check some of these functions for yourself. Run @command{info nm} for more information.
@cindex Linking
To recap, the @emph{compiler} created the separate object files mentioned above for each @file{.c} file.
The @emph{linker} will then combine all the symbols of the various object files (and libraries) into one program or library.
In the case of Arithmetic (a program) the contents of the object files in @file{bin/arithmetic/} are copied (and re-ordered) into one final executable file which we can run from the operating system.
@cindex Static linking
@cindex Linking: Static
@cindex Dynamic linking
@cindex Linking: Dynamic
There are two ways to @emph{link} all the necessary symbols: static and dynamic/shared.
When the symbols (computer-readable function definitions in most cases) are copied into the output, it is called @emph{static} linking.
When the symbols are kept in their original file and only a reference to them is kept in the executable, it is called @emph{dynamic}, or @emph{shared} linking.
Let's have a closer look at the executable to understand this better: we will assume you have built Gnuastro without any customization and installed Gnuastro into the default @file{/usr/local/} directory (see @ref{Installation directory}).
If you tried the @command{nm} command on one of Arithmetic's object files above, then with the command below you can confirm that all the functions that were defined in the object file above (had a @code{T} in the second column) are also defined in the @file{astarithmetic} executable:
@example
$ nm /usr/local/bin/astarithmetic
@end example
@noindent
These symbols/functions have been statically linked (copied) into the final executable.
But you will notice that there are still many undefined symbols in the executable (those with a @code{U} in the second column).
One class of such functions are Gnuastro's own library functions that start with `@code{gal_}':
@example
$ nm /usr/local/bin/astarithmetic | grep gal_
@end example
@cindex Plugin
@cindex GNU Libtool
@cindex Shared library
@cindex Library: shared
@cindex Dynamic linking
@cindex Linking: dynamic
These undefined symbols (functions) are present in another file and will be linked to the Arithmetic program every time you run it.
Therefore they are known as dynamically @emph{linked} libraries@footnote{Do not confuse dynamically @emph{linked} libraries with dynamically @emph{loaded} libraries.
The former (that is discussed here) are only loaded once at the program startup.
However, the latter can be loaded anytime during the program's execution, they are also known as plugins.}.
As we saw above, static linking is done when the executable is being built.
However, when a program is dynamically linked to a library, at build-time, the library's symbols are only checked with the available libraries: they are not actually copied into the program's executable.
Every time you run the program, the (dynamic) linker will be activated and will try to link the program to the installed library before the program starts.
If you want all the libraries to be statically linked to the executables, you have to tell Libtool (which Gnuastro uses for the linking) to disable shared libraries at configure time@footnote{Libtool is very commonly used.
Therefore, you can pass this option to the configure script of most programs using the GNU build system if you want static linking.}:
@example
$ ./configure --disable-shared
@end example
@noindent
Try configuring Gnuastro with the command above, then build and install it (as described in @ref{Quick start}).
Afterwards, check the @code{gal_} symbols in the installed Arithmetic executable like before.
You will see that they are actually copied this time (have a @code{T} in the second column).
If the second column does not convince you, look at the executable file size with the following command:
@example
$ ls -lh /usr/local/bin/astarithmetic
@end example
@noindent
It should be around 4.2 Megabytes with this static linking.
If you configure and build Gnuastro again with shared libraries enabled (which is the default), you will notice that it is roughly 100 Kilobytes!
This huge difference would have been very significant in the old days, but with the roughly Terabyte storage drives commonly in use today, it is negligible.
Fortunately, output file size is not the only benefit of dynamic linking: since it links to the libraries at run-time (rather than build-time), you do not have to rebuild a higher-level program or library when an update comes for one of the lower-level libraries it depends on.
You just install the new low-level library and it will automatically be used/linked next time in the programs that use it.
To be fair, this also creates a few complications@footnote{Both of these can be avoided by joining the mailing lists of the lower-level libraries and checking the changes in newer versions before installing them.
Updates that result in such behaviors are generally heavily emphasized in the release notes.}:
@itemize
@item
Reproducibility: Even though your high-level tool has the same version as before, with the updated library, you might not get the same results.
@item
Broken links: if some functions have been changed or removed in the updated library, then the linker will abort with an error at run-time.
Therefore you need to rebuild your higher-level program or library.
@end itemize
@cindex GNU C library
To see a list of all the shared libraries that are needed for a program or a shared library to run, you can use GNU C library's @command{ldd}@footnote{If your operating system is not using the GNU C library, you might need another tool.} program, for example:
@example
$ ldd /usr/local/bin/astarithmetic
@end example
Library file names (in their installation directory) start with a @file{lib} and their ending (suffix) shows if they are static (@file{.a}) or dynamic (@file{.so}), as described below.
The name of the library is in the middle of these two, for example, @file{libgsl.a} or @file{libgnuastro.a} (GSL and Gnuastro's static libraries), and @file{libgsl.so.23.0.0} or @file{libgnuastro.so.4.0.0} (GSL and Gnuastro's shared library, the numbers may be different).
@itemize
@item
A static library is known as an archive file and has the @file{.a} suffix.
A static library is not an executable file.
@item
@cindex Shared library versioning
@cindex Versioning: Shared library
A shared library ends with the @file{.so.X.Y.Z} suffix and is executable.
The three numbers in the suffix, describe the version of the shared library.
Shared library versions are defined to allow multiple versions of a shared library simultaneously on a system and to help detect possible updates in the library and programs that depend on it by the linker.
It is very important to mention that this version number is different from the software version number (see @ref{Version numbering}), so do not confuse the two.
See the ``Library interface versions'' chapter of GNU Libtool for more.
For each shared library, we also have two symbolic links ending with @file{.so.X} and @file{.so}.
They are automatically set by the installer, but you can change them (point them to another version of the library) when you have multiple versions of a library on your system.
@end itemize
@cindex GNU Libtool
Libraries that are built with GNU Libtool (including Gnuastro and its dependencies) build both static and dynamic libraries by default and install them in the @file{prefix/lib/} directory (for more on @file{prefix}, see @ref{Installation directory}).
In this way, programs depending on the libraries can link with them however they prefer.
See the contents of @file{/usr/local/lib} with the command below to see both the static and shared libraries available there, along with their executable nature and the symbolic links:
@example
$ ls -l /usr/local/lib/
@end example
To link with a library, the linker needs to know where to find the library.
@emph{At compilation time}, these locations can be passed to the linker with two separate options (see @ref{Summary and example on libraries} for an example) as described below.
You can see these options and their usage in practice while building Gnuastro (after running @command{make}):
@table @option
@item -L DIR
Will tell the linker to look into @file{DIR} for the libraries.
For example, @file{-L/usr/local/lib}, or @file{-L/home/yourname/.local/lib}.
You can make multiple calls to this option, so the linker looks into several directories at compilation time.
Note that the space between @key{L} and the directory is optional and commonly omitted (written as @option{-LDIR}).
@item -lLIBRARY
Specify the unique library identifier/name (not containing directory or shared/dynamic nature) to be linked with the executable.
As discussed above, library file names have fixed parts which must not be given to this option.
So @option{-lgsl} will guide the linker to either look for @file{libgsl.a} or @file{libgsl.so} (depending on the type of linking it is supposed to do).
You can link many libraries by repeated calls to this option.
@strong{Very important: } The place of this option on the compiler's command matters.
This is often a source of confusion for beginners, so let's assume you have asked the linker to link with library A using this option.
As soon as the linker confronts this option, it looks into the list of the undefined symbols it has found until that point and does a search in library A for any of those symbols.
If any pending undefined symbol is found in library A, it is used.
After the search in undefined symbols is complete, the contents of library A are completely discarded from the linker's memory.
Therefore, if a later object file or library uses an unlinked symbol in library A, the linker will abort after it has finished its search in all the input libraries or object files.
As an example, Gnuastro's @code{gal_fits_img_read} function depends on the @code{fits_read_pix} function of CFITSIO (specified with @option{-lcfitsio}, which in turn depends on the cURL library, called with @option{-lcurl}).
So the proper way to link something that uses this function is @option{-lgnuastro -lcfitsio -lcurl} (shown in a full command just after this table).
If you instead give @option{-lcfitsio -lgnuastro}, the linker will complain and abort.
To avoid such linking complexities when using Gnuastro's library, we recommend using @ref{BuildProgram}.
@end table
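For example, the full command to build a small program calling @code{gal_fits_img_read} could look like this (a sketch; the source and output names are illustrative):
@example
$ gcc myprogram.c -lgnuastro -lcfitsio -lcurl -lm -o myprogram
@end example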
If you have compiled and linked your program with a dynamic library, then the dynamic linker also needs to know the location of the libraries after building the program: @emph{every time} the program is run afterwards.
Therefore, it may happen that you do not get any errors when compiling/linking a program, but are unable to run your program because of a failure to find a library.
This happens because the dynamic linker has not found the dynamic library @emph{at run time}.
To find the dynamic libraries at run-time, the linker looks into the paths, or directories, in the @code{LD_LIBRARY_PATH} environment variable.
For a discussion on environment variables, especially search paths like @code{LD_LIBRARY_PATH}, and how you can add new directories to them, see @ref{Installation directory}.
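For example, to let the dynamic linker also search a library directory under your home at run-time (a sketch; see @ref{Installation directory} for making this permanent):
@example
$ export LD_LIBRARY_PATH="$LD_LIBRARY_PATH:/home/yourname/.local/lib"
@end example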
@node Summary and example on libraries, , Linking, Review of library fundamentals
@subsection Summary and example on libraries
After the mostly abstract discussions of @ref{Headers} and @ref{Linking}, we will give a small tutorial here.
But before that, let's recall the general steps of how your source code is prepared, compiled and linked to the libraries it depends on so you can run it:
@enumerate
@item
The @strong{preprocessor} includes the header (@file{.h}) files into the function definition (@file{.c}) files and expands preprocessor macros.
Generally the preprocessor prepares the human-readable source for compilation (reviewed in @ref{Headers}).
@item
The @strong{compiler} will translate (compile) the human-readable contents of each source (merged @file{.c} and the @file{.h} files, or generally the output of the preprocessor) into the computer-readable code of @file{.o} files.
@item
The @strong{linker} will link the called function definitions from various compiled files to create one unified object.
When the unified product has a @code{main} function, this function is the product's only entry point, enabling the operating system or user to directly interact with it, so the product is a program.
When the product does not have a @code{main} function, the linker's product is a library and its exported functions can be linked to other executables (it has many entry points).
@end enumerate
@cindex GCC: GNU Compiler Collection
@cindex GNU Compiler Collection (GCC)
The GNU Compiler Collection (or GCC for short) will do all three steps.
So as a first example, from Gnuastro's source, go to @file{tests/lib/}.
This directory contains the library tests; you can use these as simple tutorials.
For this demonstration, we will compile and run @file{arraymanip.c}.
This small program calls Gnuastro's library for some simple operations on an array (open it and have a look).
To compile this program, run this command inside the directory containing it.
@example
$ gcc arraymanip.c -lgnuastro -lm -o arraymanip
@end example
@noindent
The two @option{-lgnuastro} and @option{-lm} options (in this order) tell GCC to first link with the Gnuastro library and then with C's math library.
The @option{-o} option is used to specify the name of the output executable, without it the output file name will be @file{a.out} (on most OSs), independent of your input file name(s).
If your top Gnuastro installation directory (let's call it @file{$prefix}, see @ref{Installation directory}) is not recognized by GCC, you will get preprocessor errors for unknown header files.
Once you fix it, you will get linker errors for undefined functions.
To fix both, you should run GCC as follows: additionally telling it which directories it can find Gnuastro's headers and compiled library (see @ref{Headers} and @ref{Linking}):
@example
$ gcc -I$prefix/include -L$prefix/lib arraymanip.c -lgnuastro -lm \
-o arraymanip
@end example
@noindent
This single command has done all the preprocessor, compilation and linker operations.
Therefore no intermediate files (object files in particular) were created, only a single output executable was created.
You are now ready to run the program with:
@example
$ ./arraymanip
@end example
The Gnuastro functions called by this program only needed to be linked with the C math library.
But if your program needs WCS coordinate transformations, needs to read a FITS file, needs special math operations (which include its linear algebra operations), or you want it to run on multiple CPU threads, you also need to add these libraries in the call to GCC: @option{-lgnuastro -lwcs -lcfitsio -lgsl -lgslcblas -pthread -lm}.
In @ref{Gnuastro library}, where each function is documented, it is mentioned which libraries (if any) must also be linked when you call a function.
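For example, such a fully-loaded build command could look like this (a sketch, following the same pattern as the command above):
@example
$ gcc -I$prefix/include -L$prefix/lib myprogram.c \
      -lgnuastro -lwcs -lcfitsio -lgsl -lgslcblas -pthread -lm \
      -o myprogram
@end example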
If you feel all these linkings can be confusing, please consider Gnuastro's @ref{BuildProgram} program.
@node BuildProgram, Gnuastro library, Review of library fundamentals, Library
@section BuildProgram
The number and order of libraries that are necessary for linking a program with Gnuastro's library might be too confusing when you need to compile a small program for one particular job (with one source file).
BuildProgram will use the information gathered during configuring Gnuastro and link with all the appropriate libraries on your system.
This will allow you to easily compile, link and run programs that use Gnuastro's library with one simple command and not worry about which libraries to link to, or the linking order.
@cindex GNU Libtool
BuildProgram uses GNU Libtool to find the necessary libraries to link against (GNU Libtool is the same program that builds all of Gnuastro's libraries and programs when you run @code{make}).
So in the future, if Gnuastro's prerequisite libraries change or other libraries are added, you do not have to worry, you can just run BuildProgram and internal linking will be done correctly.
@cartouche
@noindent
@strong{BuildProgram requires GNU Libtool:} BuildProgram depends on GNU Libtool, other implementations do not have some necessary features.
If GNU Libtool is not available at Gnuastro's configure time, you will get a notice at the end of the configuration step and BuildProgram will not be built or installed.
Please see @ref{Optional dependencies} for more information.
@end cartouche
@menu
* Invoking astbuildprog:: Options and examples for using this program.
@end menu
@node Invoking astbuildprog, , BuildProgram, BuildProgram
@subsection Invoking BuildProgram
BuildProgram will compile and link a C source program with Gnuastro's library and all its dependencies, greatly facilitating the compilation and running of small programs that use Gnuastro's library.
The executable name is @file{astbuildprog} with the following general template:
@example
$ astbuildprog [OPTION...] C_SOURCE_FILE
@end example
@noindent
One line examples:
@example
## Compile, link and run `myprogram.c':
$ astbuildprog myprogram.c
## Similar to previous, but with optimization and compiler warnings:
$ astbuildprog -Wall -O2 myprogram.c
## Compile and link `myprogram.c', then run it with `image.fits'
## as its argument:
$ astbuildprog myprogram.c image.fits
## Also look in other directories for headers and linking:
$ astbuildprog -Lother -Iother/dir myprogram.c
## Just build (compile and link) `myprogram.c', do not run it:
$ astbuildprog --onlybuild myprogram.c
@end example
If BuildProgram is to run, it needs a C programming language source file as input.
By default it will compile and link the given source into a final executable program and run it.
The built executable name can be set with the optional @option{--output} option.
When no output name is set, BuildProgram will use Gnuastro's @ref{Automatic output} system to remove the suffix of the input source file (usually @file{.c}) and use the resulting name as the built program name.
For the full list of options that BuildProgram shares with other Gnuastro programs, see @ref{Common options}.
You may also use Gnuastro's @ref{Configuration files} to specify other libraries/headers to use for special directories and not have to type them in every time.
The C compiler can be chosen with the @option{--cc} option, or with environment variables; please see the description of @option{--cc} for more.
The two common @code{LDFLAGS} and @code{CPPFLAGS} environment variables are also checked and used in the build by default.
Note that they are placed after the values given to the corresponding options @option{--includedir} and @option{--linkdir}.
Therefore BuildProgram's own options take precedence.
Using environment variables can be disabled with the @option{--noenv} option.
Just note that BuildProgram also keeps the important flags from these environment variables in its configuration file.
Therefore, in many cases, even though you may have needed them to build Gnuastro, you will not need them in BuildProgram.
The first argument is considered to be the C source file that must be compiled and linked.
Any other arguments (non-option tokens on the command-line) will be passed onto the program when BuildProgram wants to run it.
Recall that by default BuildProgram will run the program after building it.
This behavior can be disabled with the @code{--onlybuild} option.
@cindex GNU Make
When the @option{--quiet} option (see @ref{Operating mode options}) is not called, BuildProgram will print the compilation and running commands.
Once your program grows and you break it up into multiple files (which are much more easily managed with Make), you can use the linking flags of the non-quiet output in your @code{Makefile}.
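For example, a hypothetical @file{Makefile} rule like the one below could re-use those flags (the flags here are only illustrative; copy the ones that BuildProgram prints on your own system):
@example
myprogram: main.o util.o
        gcc main.o util.o -L/usr/local/lib -lgnuastro \
            -lcfitsio -lwcs -lgsl -lgslcblas -lm -o myprogram
@end example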
@table @option
@item -c STR
@itemx --cc=STR
@cindex C compiler
@cindex Compiler, C
@cindex GCC: GNU Compiler Collection
@cindex GNU Compiler Collection (GCC)
The C compiler to use for the compilation; if this option is not given, environment variables will be used as described in the next paragraph.
If the compiler is in your system's search path, you can simply give its name, for example, @option{--cc=gcc}.
If it is not in your system's search path, you can give its full path, for example, @option{--cc=/path/to/your/custom/cc}.
If this option has no value after parsing the command-line and all configuration files (see @ref{Configuration file precedence}), then BuildProgram will look into the following environment variables in the given order: @code{CC}, then @code{GCC}.
If they are also not defined, BuildProgram will ultimately default to the @command{gcc} command which is present in many systems (sometimes as a link to other compilers).
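For example, assuming Clang is installed on your system, either of the hypothetical calls below would select it as the compiler:
@example
$ astbuildprog --cc=clang myprogram.c
$ CC=clang astbuildprog myprogram.c
@end example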
@item -I STR
@itemx --includedir=STR
@cindex GNU CPP
@cindex C preprocessor
Directory to search for files that you @code{#include} in your C program.
Note that headers relating to Gnuastro and its dependencies do not need this option.
This is only necessary if you want to use other headers.
It may be called multiple times and order matters.
This directory will be searched before those of Gnuastro's build and also the system search directories.
See @ref{Headers} for a thorough introduction.
From the GNU C preprocessor manual: ``Add the directory @code{STR} to the list of directories to be searched for header files.
Directories named by @option{-I} are searched before the standard system include directories.
If the directory @code{STR} is a standard system include directory, the option is ignored to ensure that the default search order for system directories and the special treatment of system headers are not defeated''.
@item -L STR
@itemx --linkdir=STR
@cindex GNU Libtool
Directory to search for compiled libraries to link the program with.
Note that all the directories that Gnuastro was built with will already be used by BuildProgram (GNU Libtool).
This option is only necessary if your libraries are in other directories.
Multiple calls to this option are possible and order matters.
This directory will be searched before those of Gnuastro's build and also the system search directories.
See @ref{Linking} for a thorough introduction.
@item -l STR
@itemx --linklib=STR
Library to link with your program.
Note that all the libraries that Gnuastro was built with will already be linked by BuildProgram (GNU Libtool).
This option is only necessary if you want to link with other libraries.
Multiple calls to this option are possible and order matters.
This library will be linked before Gnuastro's library or its dependencies.
See @ref{Linking} for a thorough introduction.
@item -O INT/STR
@itemx --optimize=INT/STR
@cindex Optimization
@cindex GCC: GNU Compiler Collection
@cindex GNU Compiler Collection (GCC)
Compiler optimization level: 0 (for no optimization, good debugging), 1, 2, 3 (for the highest level of optimizations).
From the GNU Compiler Collection (GCC) manual: ``Without any optimization option, the compiler's goal is to reduce the cost of compilation and to make debugging produce the expected results.
Statements are independent: if you stop the program with a break point between statements, you can then assign a new value to any variable or change the program counter to any other statement in the function and get exactly the results you expect from the source code.
Turning on optimization flags makes the compiler attempt to improve the performance and/or code size at the expense of compilation time and possibly the ability to debug the program.'' Please see your compiler's manual for the full list of acceptable values to this option.
@item -g
@itemx --debug
@cindex Debug
Emit extra information in the compiled binary for use by a debugger.
When calling this option, it is best to explicitly disable optimization with @option{-O0}.
To combine both options you can run @option{-gO0} (see @ref{Options} for how short options can be merged into one).
@item -W STR
@itemx --warning=STR
Print compiler warnings on command-line during compilation.
``Warnings are diagnostic messages that report constructions that are not inherently erroneous but that are risky or suggest there may have been an error.'' (from the GCC manual).
It is always recommended to compile your programs with warnings enabled.
All compiler warning options that start with @option{W} can also be given to this option in BuildProgram; see your compiler's manual for the full list.
Some of the most common values to this option are: @option{pedantic} (Warnings related to standard C) and @option{all} (all issues the compiler confronts).
@item -t STR
@itemx --tag=STR
The language configuration information.
Libtool can build objects and libraries in many languages.
In many cases, it can identify the language automatically, but when it does not you can use this option to explicitly notify Libtool of the language.
The acceptable values are: @code{CC} for C, @code{CXX} for C++, @code{GCJ} for Java, @code{F77} for Fortran 77, @code{FC} for Fortran, @code{GO} for Go and @code{RC} for Windows Resource.
Note that the Gnuastro library is not yet fully compatible with all these languages.
@item -b
@itemx --onlybuild
Only build the program, do not run it.
By default, the built program is immediately run afterwards.
@item -d
@itemx --deletecompiled
Delete the compiled binary file after running it.
This option is only relevant when the compiled program is run after being built.
In other words, it is only relevant when @option{--onlybuild} is not called.
It can be useful when you are busy testing a program or just want a fast result and the actual binary/compiled file is not of later use.
@item -a STR
@itemx --la=STR
Use the given @file{.la} file (Libtool control file) instead of the one that was produced from Gnuastro's configuration results.
The Libtool control file keeps all the necessary information for building and linking a program with a library built by Libtool.
The default @file{prefix/lib/libgnuastro.la} keeps all the information necessary to build a program using the Gnuastro library gathered during configure time (see @ref{Installation directory} for prefix).
This option is useful when you prefer to use another Libtool control file.
@item -e
@itemx --noenv
@cindex @code{CC}
@cindex @code{GCC}
@cindex @code{LDFLAGS}
@cindex @code{CPPFLAGS}
Do not use environment variables in the build, just use the values given to the options.
As described above, environment variables like @code{CC}, @code{GCC}, @code{LDFLAGS}, @code{CPPFLAGS} will be read by default and used in the build if they have been defined.
@end table
@node Gnuastro library, Library demo programs, BuildProgram, Library
@section Gnuastro library
Gnuastro library's programming constructs (function declarations, macros, data structures, or global variables) are classified by context into multiple header files (see @ref{Headers})@footnote{Within Gnuastro's source, all installed @file{.h} files in @file{lib/gnuastro/} are accompanied by a @file{.c} file in @file{lib/}.}.
In this section, the functions in each header will be discussed under a separate sub-section, which includes the name of the header.
Assuming a function declaration is in @file{headername.h}, you can include its declaration in your source code with:
@example
# include <gnuastro/headername.h>
@end example
@noindent
The names of all constructs in @file{headername.h} are prefixed with @code{gal_headername_} (or @code{GAL_HEADERNAME_} for macros).
The @code{gal_} prefix stands for @emph{G}NU @emph{A}stronomy @emph{L}ibrary.
Gnuastro library functions are compiled into a single file which can be linked on the command-line with the @option{-lgnuastro} option.
See @ref{Linking} and @ref{Summary and example on libraries} for an introduction on linking and some fully working examples of the libraries.
Gnuastro's library is a high-level library which depends on lower level libraries for some operations (see @ref{Dependencies}).
Therefore if at least one of Gnuastro's functions in your program uses functions from the dependencies, you will also need to link those dependencies after linking with Gnuastro.
See @ref{BuildProgram} for a convenient way to deal with the dependencies.
BuildProgram will take care of the libraries to link with your program (which uses the Gnuastro library), and can even run the built program afterwards.
Therefore it allows you to conveniently focus on your exciting science/research when using Gnuastro's libraries.
@cartouche
@noindent
@strong{Libraries are still under heavy development: } Gnuastro was initially created to be a collection of command-line programs.
However, as the programs and their shared functions grew, internal (not installed) libraries were added.
Since the 0.2 release, the libraries are installable.
Hence the libraries are currently under heavy development and will evolve significantly between releases, becoming more mature and stable in due time.
They will stabilize with the removal of this notice.
Check the @file{NEWS} file for interface changes.
If you use the Info version of this manual (see @ref{Info}), you do not have to worry: the documentation will correspond to your installed version.
@end cartouche
@menu
* Configuration information:: General information about library config.
* Multithreaded programming:: Tools for easy multi-threaded operations.
* Library data types:: Definitions and functions for types.
* Pointers:: Wrappers for easy working with pointers.
* Library blank values:: Blank values and functions to deal with them.
* Library data container:: General data container in Gnuastro.
* Dimensions:: Dealing with coordinates and dimensions.
* Linked lists:: Various types of linked lists.
* Array input output:: Reading and writing images or cubes.
* Table input output:: Reading and writing table columns.
* FITS files:: Working with FITS data.
* File input output:: Reading and writing to various file formats.
* World Coordinate System:: Dealing with the world coordinate system.
* Arithmetic on datasets:: Arithmetic operations on a dataset.
* Tessellation library:: Functions for working on tiles.
* Bounding box:: Finding the bounding box.
* Polygons:: Working with the vertices of a polygon.
* Qsort functions:: Helper functions for Qsort.
* K-d tree:: Space partitioning in K dimensions.
* Permutations:: Re-order (or permute) the values in a dataset.
* Matching:: Matching catalogs based on position.
* Statistical operations:: Functions for basic statistics.
* Fitting functions:: Fit independent and measured variables.
* Binary datasets:: Datasets that can only have values of 0 or 1.
* Labeled datasets:: Working with Segmented/labeled datasets.
* Convolution functions:: Library functions to do convolution.
* Pooling functions:: Reduce size of input by statistical methods.
* Interpolation:: Interpolate (over blank values possibly).
* Warp library:: Warp pixel grid to a new one.
* Color functions:: Definitions and operations related to colors.
* Git wrappers:: Wrappers for functions in libgit2.
* Python interface:: Functions to help in writing Python wrappers.
* Unit conversion library:: Converting between recognized units.
* Spectral lines library:: Functions for operating on Spectral lines.
* Cosmology library:: Cosmological calculations.
* SAO DS9 library:: Take inputs from files generated by SAO DS9.
@end menu
@node Configuration information, Multithreaded programming, Gnuastro library, Gnuastro library
@subsection Configuration information (@file{config.h})
The @file{gnuastro/config.h} header contains information about the full Gnuastro installation on your system.
Gnuastro developers should note that this is the only header that is not available within Gnuastro; it is only available to a Gnuastro library user @emph{after} installation.
Within Gnuastro, @file{config.h} (which is included in every Gnuastro @file{.c} file, see @ref{Coding conventions}) has more than enough information about the overall Gnuastro installation.
@deffn Macro GAL_CONFIG_VERSION
This macro can be used as a string literal@footnote{@url{https://en.wikipedia.org/wiki/String_literal}} containing the version of Gnuastro that is being used.
See @ref{Version numbering} for the version formats. For example:
@example
printf("Gnuastro version: %s\n", GAL_CONFIG_VERSION);
@end example
@noindent
or
@example
char *gnuastro_version=GAL_CONFIG_VERSION;
@end example
@end deffn
@deffn Macro GAL_CONFIG_HAVE_GSL_INTERP_STEFFEN
GNU Scientific Library (GSL) is a mandatory dependency of Gnuastro (see @ref{GNU Scientific Library}).
The Steffen interpolation function that can be used in Gnuastro was introduced in GSL version 2.0 (released in October 2015).
This macro will have a value of @code{1} if the host GSL contains this feature at configure time, and @code{0} otherwise.
@end deffn
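Since these macros only have values of @code{0} or @code{1}, a common pattern (shown in the minimal sketch below) is to use them with the C preprocessor, so your program also builds on hosts without the optional feature:
@example
#include <gnuastro/config.h>

#if GAL_CONFIG_HAVE_GSL_INTERP_STEFFEN
  /* Code using Steffen interpolation. */
#else
  /* Fall back to another interpolation method. */
#endif
@end example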
@deffn Macro GAL_CONFIG_HAVE_FITS_IS_REENTRANT
@cindex CFITSIO
This macro will have a value of 1 when the CFITSIO of the host system has the @code{fits_is_reentrant} function (available from CFITSIO version 3.30).
This function is used to see if CFITSIO was configured to read a FITS file simultaneously on different threads.
@end deffn
@deffn Macro GAL_CONFIG_HAVE_WCSLIB_VERSION
WCSLIB is the reference library for world coordinate system transformation (see @ref{WCSLIB} and @ref{World Coordinate System}).
However, only more recent versions of WCSLIB also provide its version number.
If the WCSLIB that is installed on the system provides its version (through the possibly existing @code{wcslib_version} function), this macro will have a value of one, otherwise it will have a value of zero.
@end deffn
@deffn Macro GAL_CONFIG_HAVE_WCSLIB_DIS_H
This macro has a value of 1 if the host's WCSLIB has the @file{wcslib/dis.h} header for distortion-related operations.
@end deffn
@deffn Macro GAL_CONFIG_HAVE_WCSLIB_MJDREF
This macro has a value of 1 if the host's WCSLIB reads and stores the @file{MJDREF} FITS header keyword as part of its core @code{wcsprm} structure.
@end deffn
@deffn Macro GAL_CONFIG_HAVE_WCSLIB_OBSFIX
This macro has a value of 1 if the host's WCSLIB supports the @code{OBSFIX} feature (used by @code{wcsfix} function to parse the input WCS for known errors).
@end deffn
@deffn Macro GAL_CONFIG_HAVE_PTHREAD_BARRIER
The POSIX threads standard defines barriers as an optional requirement.
Therefore, some operating systems choose not to include them.
As one of the checks during the @command{./configure} step, Gnuastro checks whether your system has POSIX thread barriers.
If so, this macro will have a value of @code{1}, otherwise it will have a value of @code{0}.
See @ref{Implementation of pthread_barrier} for more.
@end deffn
@cindex 32-bit
@cindex 64-bit
@cindex bit-32
@cindex bit-64
@deffn Macro GAL_CONFIG_SIZEOF_LONG
@deffnx Macro GAL_CONFIG_SIZEOF_SIZE_T
The size of (number of bytes in) the system's @code{long} and @code{size_t} types.
Their values are commonly either 4 or 8 for 32-bit and 64-bit systems.
You can also get this value with the expression `@code{sizeof(size_t)}' for example, without having to include this header.
@end deffn
@deffn Macro GAL_CONFIG_HAVE_LIBGIT2
Libgit2 is an optional dependency of Gnuastro (see @ref{Optional dependencies}).
When it is installed and detected at configure time, this macro will have a value of @code{1} (one).
Otherwise, it will have a value of @code{0} (zero).
Gnuastro also comes with some wrappers to make it easier to use libgit2 (see @ref{Git wrappers}).
@end deffn
@deffn Macro GAL_CONFIG_HAVE_PYTHON
Gnuastro can optionally provide a set of basic functions to facilitate wrapper libraries in Python (see @ref{Python interface}).
If a version of Python 3.X was found on the host system that has the necessary Numpy headers, this macro will be given a value of @code{1}.
Otherwise, it will be given a value of @code{0} and the Python interface functions will not be available in the host's Gnuastro library.
@end deffn
@deffn Macro GAL_CONFIG_HAVE_GNUMAKE_H
Gnuastro provides a set of GNU Make extension functions (see @ref{Makefile extensions}).
In order to use those, the host should have @file{gnumake.h} in its include paths.
This check is done at Gnuastro's configuration time.
If it was found, this macro is given a value of @code{1}, otherwise, it will have a value of @code{0}.
@end deffn
@node Multithreaded programming, Library data types, Configuration information, Gnuastro library
@subsection Multithreaded programming (@file{threads.h})
@cindex Multithreaded programming
In recent years, newer CPUs do not have significantly higher frequencies any
more. However, CPUs are being manufactured with more cores, enabling more
than one operation (thread) at each instant. This can be very useful to
speed up many aspects of processing and in particular image processing.
Most of the programs in Gnuastro utilize multi-threaded programming for the
CPU intensive processing steps. This can potentially lead to a significant
decrease in the running time of a program, see @ref{A note on threads}. In
terms of reading the code, you do not need to know anything about
multi-threaded programming. You can simply follow the case where only one
thread is to be used. In these cases, threads are not used and can be
completely ignored.
@cindex POSIX threads library
@cindex Lawrence Livermore National Laboratory
When the C language was defined (when the K&R book was written), using
threads was not common, so C's threading capabilities are not introduced
there. Gnuastro uses POSIX threads for multi-threaded programming, defined
in the @file{pthread.h} system-wide header. There are various resources for
learning to use POSIX threads. An excellent
@url{https://computing.llnl.gov/tutorials/pthreads/, tutorial} is provided
by the Lawrence Livermore National Laboratory, with abundant figures to
better understand the concepts, it is a very good start. The book
`Advanced programming in the Unix environment'@footnote{Do not let the title
scare you! The two chapters on Multi-threaded programming are very
self-sufficient and do not need any more knowledge than K&R.}, by Richard
Stevens and Stephen Rago, Addison-Wesley, 2013 (Third edition) also has two
chapters explaining the POSIX thread constructs which can be very helpful.
@cindex OpenMP
An alternative to POSIX threads was OpenMP, but POSIX threads are low
level, allowing much more control, while being easier to understand, see
@ref{Why C}. All the situations where threads are used in Gnuastro
currently are completely independent with no need of coordination between
the threads. Such problems are known as ``embarrassingly parallel''
problems. They are some of the simplest problems to solve with threads and
are also the ones that benefit most from them, see the LLNL
introduction@footnote{@url{https://computing.llnl.gov/tutorials/parallel_comp/}}.
One very useful POSIX thread concept is
@code{pthread_barrier}. Unfortunately, it is only an optional feature in
the POSIX standard, so some operating systems do not include it. Therefore
in @ref{Implementation of pthread_barrier}, we introduce our own
implementation. This is a rather technical section only necessary for more
technical readers and you can safely ignore it. Following that, we describe
the helper functions in this header that can greatly simplify writing a
multi-threaded program, see @ref{Gnuastro's thread related functions} for
more.
@menu
* Implementation of pthread_barrier:: Some systems do not have pthread_barrier
* Gnuastro's thread related functions:: Functions for managing threads.
@end menu
@node Implementation of pthread_barrier, Gnuastro's thread related functions, Multithreaded programming, Multithreaded programming
@subsubsection Implementation of @code{pthread_barrier}
@cindex POSIX threads
@cindex pthread_barrier
One optional feature of the POSIX Threads standard is the
@code{pthread_barrier} concept. It is a very useful high-level construct
that allows for independent threads to ``wait'' behind a ``barrier'' for
the rest after they finish. Barriers can thus greatly simplify the code in
a multi-threaded program, so they are heavily used in Gnuastro. However,
since it is an optional feature in the POSIX standard, some operating systems
do not include it. So to make Gnuastro portable, we have written our own
implementation of those @code{pthread_barrier} functions.
At @command{./configure} time, Gnuastro will check if
@code{pthread_barrier} constructs are available on your system or not. If
@code{pthread_barrier} is not available, our internal implementation will
be compiled into the Gnuastro library and the definitions and declarations
below will be usable in your code with @code{#include
<gnuastro/threads.h>}.
@deffn Type pthread_barrierattr_t
Type to specify the attributes of a POSIX threads barrier.
@end deffn
@deffn Type pthread_barrier_t
Structure defining the POSIX threads barrier.
@end deffn
@deftypefun int pthread_barrier_init (pthread_barrier_t @code{*b}, pthread_barrierattr_t @code{*attr}, unsigned int @code{limit})
Initialize the barrier @code{b}, with the attributes @code{attr} and total
@code{limit} (a number of) threads that must wait behind it. This function
must be called before spinning off threads.
@end deftypefun
@deftypefun int pthread_barrier_wait (pthread_barrier_t @code{*b})
This function is called within each thread, just before it is ready to
return. Once a thread's function hits this, it will ``wait'' until all the
other functions are also finished.
@end deftypefun
@deftypefun int pthread_barrier_destroy (pthread_barrier_t @code{*b})
Destroy all the information in the barrier structure. This should be called
by the function that spun-off the threads after all the threads have
finished.
@cartouche
@noindent
@strong{Destroy a barrier before re-using it:} It is very important to
destroy the barrier before (possibly) reusing it. This destroy function not
only destroys the internal structures, it also waits (in 1 microsecond
intervals, so you will not notice!) until all the threads do not need the
barrier structure any more. If you immediately start spinning off new
threads with a not-destroyed barrier, then the internal structure of the
remaining threads will get mixed with the new ones and you will get very
strange and apparently random errors that are extremely hard to debug.
@end cartouche
@end deftypefun
@node Gnuastro's thread related functions, , Implementation of pthread_barrier, Multithreaded programming
@subsubsection Gnuastro's thread related functions
@cindex POSIX Threads
The POSIX Threads functions offered in the C library are very low-level and offer a great range of control over the properties of the threads.
So if you are interested in customizing your tools for complicated thread applications, you are strongly encouraged to become familiar with them.
Some resources were introduced in @ref{Multithreaded programming}.
However, in many cases used in astronomical data analysis, you do not need communication between threads and each target operation can be done independently.
Since such operations are very common, Gnuastro provides the tools below to facilitate the creation and management of jobs without any particular knowledge of POSIX Threads for such operations.
The most interesting high-level functions of this section are @code{gal_threads_number} and @code{gal_threads_spin_off}, which respectively identify the number of threads available on the system and spin off threads.
You can see a demonstration of using these functions in @ref{Library demo - multi-threaded operation}.
@deftp {C @code{struct}} gal_threads_params
Structure keeping the parameters of each thread.
When each thread is created, a pointer to this structure is passed to it.
The @code{params} element can be the pointer to a structure defined by the user which contains all the necessary parameters to pass onto the worker function.
The rest of the elements within this structure are set internally by @code{gal_threads_spin_off} and are relevant to the worker function.
@example
struct gal_threads_params
@{
  size_t             id;     /* ID of this thread.                   */
  void              *params; /* User-identified pointer.             */
  size_t            *indexs; /* Target indices given to this thread. */
  pthread_barrier_t *b;      /* Barrier for all threads.             */
@};
@end example
@end deftp
@deftypefun size_t gal_threads_number ()
Return the number of threads that the operating system has available for your program.
This number is usually fixed for a single machine and does not change.
So this function is useful when you want to run your program on different machines (with different CPUs).
@end deftypefun
@deftypefun void gal_threads_spin_off (void @code{*(*worker)(void *)}, void @code{*caller_params}, size_t @code{numactions}, size_t @code{numthreads}, size_t @code{minmapsize}, int @code{quietmmap})
Distribute @code{numactions} jobs between @code{numthreads} threads and spin-off each thread by calling the @code{worker} function.
The @code{caller_params} pointer will also be passed to @code{worker} as part of the @code{gal_threads_params} structure.
For a fully working example of this function, please see @ref{Library demo - multi-threaded operation}.
If there are many jobs (millions or billions) to organize, memory issues may become important.
With @code{minmapsize} you can specify the minimum byte-size to allocate the necessary space in a memory-mapped file or alternatively in RAM.
If @code{quietmmap} is zero, then a warning will be printed upon creating a memory-mapped file.
For more on Gnuastro's memory management, see @ref{Memory management}.
@end deftypefun
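As a minimal sketch of how a @code{worker} function is commonly written (the @code{struct params} and the doubling operation are hypothetical placeholders for your own parameters and job):
@example
#include <gnuastro/blank.h>      /* For GAL_BLANK_SIZE_T. */
#include <gnuastro/threads.h>

/* Hypothetical structure with the caller's parameters. */
struct params @{ double *arr; size_t size; @};

static void *
worker_on_thread(void *in_prm)
@{
  /* Low-level definitions to be done first. */
  struct gal_threads_params *tprm=(struct gal_threads_params *)in_prm;
  struct params *p=(struct params *)tprm->params;
  size_t i;

  /* Go over the jobs (indices) given to this thread; every row
     of 'indexs' finishes with 'GAL_BLANK_SIZE_T'. */
  for(i=0; tprm->indexs[i]!=GAL_BLANK_SIZE_T; ++i)
    p->arr[ tprm->indexs[i] ] *= 2.0;   /* Placeholder "action". */

  /* Wait for all the other threads to finish, then return. */
  if(tprm->b) pthread_barrier_wait(tprm->b);
  return NULL;
@}
@end example
@noindent
Assuming @code{p} is a filled @code{struct params}, the caller would then be something like @code{gal_threads_spin_off(worker_on_thread, &p, p.size, gal_threads_number(), -1, 1)}.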
@deftypefun void gal_threads_attr_barrier_init (pthread_attr_t @code{*attr}, pthread_barrier_t @code{*b}, size_t @code{limit})
@cindex Detached threads
This is a low-level function in case you do not want to use @code{gal_threads_spin_off}.
It will initialize the general thread attribute @code{attr} and the barrier @code{b} with @code{limit} threads to wait behind the barrier.
For maximum efficiency, the threads initialized with this function will be detached.
Therefore no communication is possible between these threads and in particular @code{pthread_join} will not work on these threads.
You have to use the barrier constructs to wait for all threads to finish.
@end deftypefun
@deftypefun {char *} gal_threads_dist_in_threads (size_t @code{numactions}, size_t @code{numthreads}, size_t @code{minmapsize}, int @code{quietmmap}, size_t @code{**indexs}, size_t @code{*icols})
This is a low-level function in case you do not want to use @code{gal_threads_spin_off}.
The job of this function is to distribute @code{numactions} jobs/actions in @code{numthreads} threads.
To do this, it will assign each job an ID, ranging from 0 to @code{numactions}-1.
The output is the allocated @code{*indexs} array and the @code{*icols} number.
In memory, it is just a simple 1D array that has @code{numthreads} @mymath{\times} @code{*icols} elements.
But you can visualize it as a 2D array with @code{numthreads} rows and @code{*icols} columns.
For more on the logic of the distribution, see below.
@cindex RAM
@cindex Memory management
When you have millions/billions of jobs to distribute, @code{indexs} will become very large.
For memory management (when to use a memory-mapped file, and when to use RAM), you need to specify the @code{minmapsize} and @code{quietmmap} arguments.
For more on memory management, see @ref{Memory management}.
In general, if your distributed jobs will not be on the scale of billions (and you want everything to always be written in RAM), just set @code{minmapsize=-1} and @code{quietmmap=1}.
When @code{indexs} is actually in a memory-mapped file, this function will return a string containing the name of the file (that you can later give to @code{gal_pointer_mmap_free} to free/delete).
When @code{indexs} is in RAM, this function will return a @code{NULL} pointer.
So after you are finished with @code{indexs}, you can free it like this:
@example
char *mmapname;
int quietmmap=1;
size_t *indexs, thrdcols;
size_t numactions=5000, minmapsize=-1;
size_t numthreads=gal_threads_number();
/* Distribute the jobs. */
mmapname=gal_threads_dist_in_threads(numactions, numthreads,
                                     minmapsize, quietmmap,
                                     &indexs, &thrdcols);
/* Do any processing you want... */
/* Free the 'indexs' array. */
if(mmapname) gal_pointer_mmap_free(&mmapname, quietmmap);
else free(indexs);
@end example
Here is a brief description of the reasoning behind the @code{indexs} array and how the jobs are distributed.
Let's assume you have @mymath{A} actions (where there is only one function and the input values differ for each action) and @mymath{T} threads available to the system with @mymath{A>T} (common values for these two would be @mymath{A>1000} and @mymath{T<10}).
Spinning off a thread is not a cheap job and requires a significant number of CPU cycles.
Therefore, creating @mymath{A} threads is not the best way to address such a problem.
The most efficient way to manage the actions is such that only @mymath{T} threads are created, and each thread works on a list of actions identified for it in series (one after the other).
This way your CPU will get all the actions done with minimal overhead.
The purpose of this function is to do what we explained above: each row in the @code{indexs} array contains the indices of actions which must be done by one thread (so it has @code{numthreads} rows with @code{*icols} columns).
However, when using @code{indexs}, you do not have to know the number of columns.
It is guaranteed that all the rows finish with @code{GAL_BLANK_SIZE_T} (see @ref{Library blank values}).
The @code{GAL_BLANK_SIZE_T} macro plays a role very similar to a string's @code{\0}: every row finishes with this macro, so you can easily stop parsing the indices in the row as soon as you confront @code{GAL_BLANK_SIZE_T}.
For a real demonstration, please see the example program in @file{tests/lib/multithread.c}.
@end deftypefun
@node Library data types, Pointers, Multithreaded programming, Gnuastro library
@subsection Library data types (@file{type.h})
Data in astronomy can have many types: numeric (numbers) and strings
(names, identifiers). The former can also be divided into integers and
floats; see @ref{Numeric data types} for a thorough discussion of the
different numeric data types and which one is useful for different
contexts.
To deal with the very large diversity of types that are available (and
used in different contexts), in Gnuastro each type is identified with a
global integer variable with a fixed name; this variable is then passed
onto functions that can work on any type, or is stored in Gnuastro's
@ref{Generic data container} as one piece of meta-data.
The actual values within these integer constants are irrelevant and you
should never rely on them. When you need to check a type, explicitly use
the named variables in the table below. If you want to check with more
than one type, you can use C's @code{switch} statement.
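For example, here is a minimal sketch of such a @code{switch} statement
(the comments are placeholders for your own operations):
@example
switch(type)
  @{
  case GAL_TYPE_FLOAT32:
  case GAL_TYPE_FLOAT64:
    /* Operations for floating point types. */
    break;

  case GAL_TYPE_STRING:
    /* Operations for strings. */
    break;

  default:
    /* Operations for the remaining (integer) types. */
    break;
  @}
@end example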
Since Gnuastro heavily deals with file input-output, the types it
defines are fixed-width types; these types are portable to all systems
and are defined in the standard C header @file{stdint.h}. You do not
need to include this header: it is included by any Gnuastro header that
deals with the different types. However, the most commonly used types in
a C (or C++) program (for example, @code{int} or @code{long}) are not
defined by their exact width (storage size), but by their minimum
storage. So for example, on some systems, @code{int} may be 2 bytes
(16 bits, the minimum required by the standard) and on others it may be
4 bytes (32 bits, common in modern systems).
With every type, a unique ``blank'' value (or place-holder showing the
absence of data) can be defined. Please see @ref{Library blank values} for
constants that Gnuastro recognizes as a blank value for each type. See
@ref{Numeric data types} for more explanation on the limits and particular
aspects of each type.
@deffn {Global integer} GAL_TYPE_INVALID
This is just a place-holder to specifically mark that no type has been set.
@end deffn
@deffn {Global integer} GAL_TYPE_BIT
Identifier for a bit-stream. Currently no program in Gnuastro works
directly on bits, but features will be added in the future.
@end deffn
@deffn {Global integer} GAL_TYPE_UINT8
Identifier for an unsigned, 8-bit integer type: @code{uint8_t} (from
@file{stdint.h}), or an @code{unsigned char} in most modern systems.
@end deffn
@deffn {Global integer} GAL_TYPE_INT8
Identifier for a signed, 8-bit integer type: @code{int8_t} (from
@file{stdint.h}), or a @code{signed char} in most modern systems.
@end deffn
@deffn {Global integer} GAL_TYPE_UINT16
Identifier for an unsigned, 16-bit integer type: @code{uint16_t} (from
@file{stdint.h}), or an @code{unsigned short} in most modern systems.
@end deffn
@deffn {Global integer} GAL_TYPE_INT16
Identifier for a signed, 16-bit integer type: @code{int16_t} (from
@file{stdint.h}), or a @code{short} in most modern systems.
@end deffn
@deffn {Global integer} GAL_TYPE_UINT32
Identifier for an unsigned, 32-bit integer type: @code{uint32_t} (from
@file{stdint.h}), or an @code{unsigned int} in most modern systems.
@end deffn
@deffn {Global integer} GAL_TYPE_INT32
Identifier for a signed, 32-bit integer type: @code{int32_t} (from
@file{stdint.h}), or an @code{int} in most modern systems.
@end deffn
@deffn {Global integer} GAL_TYPE_UINT64
Identifier for an unsigned, 64-bit integer type: @code{uint64_t} (from
@file{stdint.h}), or an @code{unsigned long} in most modern 64-bit systems.
@end deffn
@deffn {Global integer} GAL_TYPE_INT64
Identifier for a signed, 64-bit integer type: @code{int64_t} (from
@file{stdint.h}), or a @code{long} in most modern 64-bit systems.
@end deffn
@deffn {Global integer} GAL_TYPE_INT
Identifier for an @code{int} type. This is just an alias to the
@code{int16} or @code{int32} types, depending on the system.
@end deffn
@deffn {Global integer} GAL_TYPE_UINT
Identifier for an @code{unsigned int} type. This is just an alias to
the @code{uint16} or @code{uint32} types, depending on the system.
@end deffn
@deffn {Global integer} GAL_TYPE_ULONG
Identifier for an @code{unsigned long} type. This is just an alias to
the @code{uint32} or @code{uint64} types for 32-bit or 64-bit systems
respectively.
@end deffn
@deffn {Global integer} GAL_TYPE_LONG
Identifier for a @code{long} type. This is just an alias to @code{int32},
or @code{int64} types for 32-bit, or 64-bit systems respectively.
@end deffn
@deffn {Global integer} GAL_TYPE_SIZE_T
Identifier for a @code{size_t} type. This is just an alias to
@code{uint32}, or @code{uint64} types for 32-bit, or 64-bit systems
respectively.
@end deffn
@deffn {Global integer} GAL_TYPE_FLOAT32
Identifier for a 32-bit single precision floating point type or
@code{float} in C.
@end deffn
@deffn {Global integer} GAL_TYPE_FLOAT64
Identifier for a 64-bit double precision floating point type or
@code{double} in C.
@end deffn
@deffn {Global integer} GAL_TYPE_COMPLEX32
Identifier for a complex number composed of two @code{float} types. Note
that the complex type is not yet fully implemented in all Gnuastro's
programs.
@end deffn
@deffn {Global integer} GAL_TYPE_COMPLEX64
Identifier for a complex number composed of two @code{double} types. Note
that the complex type is not yet fully implemented in all Gnuastro's
programs.
@end deffn
@deffn {Global integer} GAL_TYPE_STRING
Identifier for a string of characters (@code{char *}).
@end deffn
@deffn {Global integer} GAL_TYPE_STRLL
Identifier for a linked list of string of characters
(@code{gal_list_str_t}, see @ref{List of strings}).
@end deffn
@noindent
The functions below are defined to make working with the integer
constants above easier. In these functions, the constants above can be
used for the @code{type} input argument.
@deftypefun size_t gal_type_sizeof (uint8_t @code{type})
Return the number of bytes occupied by @code{type}.
Internally, this function uses C's @code{sizeof} operator to measure the size of each type.
For strings, this function will return the size of @code{char *}.
@end deftypefun
@deftypefun {char *} gal_type_name (uint8_t @code{type}, int @code{long_name})
Return a string literal that contains the name of @code{type}.
It can return both short and long formats of the type names (for example, @code{f32} and @code{float32}).
If @code{long_name} is non-zero, the long format will be returned, otherwise the short name will be returned.
The output string is statically allocated, so it should not be freed.
This function is the inverse of the @code{gal_type_from_name} function.
For the full list of names/strings that this function will return, see @ref{Numeric data types}.
@end deftypefun
@deftypefun uint8_t gal_type_from_name (char @code{*str})
Return the Gnuastro integer constant that corresponds to the string
@code{str}. This function is the inverse of the @code{gal_type_name}
function and accepts both the short and long formats of each type. For the
full list of names/strings that this function accepts, see @ref{Numeric
data types}.
@end deftypefun
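As a minimal sketch of how these functions fit together (the printed
output is what we would expect on common systems):
@example
uint8_t t=gal_type_from_name("float32"); /* GAL_TYPE_FLOAT32.    */
printf("%s is %zu bytes\n", gal_type_name(t, 1),
       gal_type_sizeof(t));              /* float32 is 4 bytes.  */
@end example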
@deftypefun void gal_type_min (uint8_t @code{type}, void @code{*in})
Put the minimum possible value of @code{type} in the space pointed to by
@code{in}. Since the value can have any type, this function does not return
anything; it assumes the space for the given type is available to @code{in}
and writes the value there. Here is one example:
@example
int32_t min;
gal_type_min(GAL_TYPE_INT32, &min);
@end example
@noindent
Note: do not use the minimum value as the blank value of a general
(initially unknown) type; please use the constants/functions provided in
@ref{Library blank values} for the definition and usage of blank values.
@end deftypefun
@deftypefun void gal_type_max (uint8_t @code{type}, void @code{*in})
Put the maximum possible value of @code{type} in the space pointed to by
@code{in}. Since the value can have any type, this function does not return
anything; it assumes the space for the given type is available to @code{in}
and writes the value there. Here is one example:
@example
uint16_t max;
gal_type_max(GAL_TYPE_INT16, &max);
@end example
@noindent
Note: do not use the maximum value as the blank value of a general
(initially unknown) type; please use the constants/functions provided in
@ref{Library blank values} for the definition and usage of blank values.
@end deftypefun
@deftypefun int gal_type_is_int (uint8_t @code{type})
Return 1 if the type is an integer (any width and any sign).
@end deftypefun
@deftypefun int gal_type_is_list (uint8_t @code{type})
Return 1 if the type is a linked list and zero otherwise.
@end deftypefun
@deftypefun int gal_type_out (int @code{first_type}, int @code{second_type})
Return the larger of the two given types which can be used for the type of
the output of an operation involving the two input types.
@end deftypefun
@deftypefun {char *} gal_type_bit_string (void @code{*in}, size_t @code{size})
Return the bit-string in the @code{size} bytes that @code{in} points
to. The string is dynamically allocated and must be freed afterwards. You
can use it to inspect the bits within one region of memory. Here is one
short example:
@example
int32_t a=2017;
char *bitstr=gal_type_bit_string(&a, 4);
printf("%d: %s (%X)\n", a, bitstr, a);
free(bitstr);
@end example
@noindent
which will produce:
@example
2017: 11100001000001110000000000000000 (7E1)
@end example
As the example above shows, the bit-string is not the most efficient way to
inspect bits. If you are familiar with hexadecimal notation, it is much
more compact, see @url{https://en.wikipedia.org/wiki/Hexadecimal}. You can
use @code{printf}'s @code{%x} or @code{%X} to print integers in hexadecimal
format.
@end deftypefun
@deftypefun {char *} gal_type_to_string (void @code{*ptr}, uint8_t @code{type}, int @code{quote_if_str_has_space});
Read the contents of the memory that @code{ptr} points to (assuming it
has type @code{type}) and print it into an allocated string which is
returned. If the memory is a string of characters and
@code{quote_if_str_has_space} is non-zero, the output string will have
double-quotes around it if it contains space characters. Also, note that
in this case, @code{ptr} must be a pointer to an array of characters (or
@code{char **}), as in the example below (which will put
@code{"sample string"} into @code{out}):
@example
char *out, *string="sample string";
out = gal_type_to_string(&string, GAL_TYPE_STRING, 1);
@end example
@end deftypefun
@deftypefun int gal_type_from_string (void @code{**out}, char @code{*string}, uint8_t @code{type})
Read a string as a given data type and put a pointer to it in @code{*out}.
When @code{*out!=NULL}, then it is assumed to be already allocated and the value will be simply put there.
If @code{*out==NULL}, then space will be allocated for the given type and the string will be read into that type.
Note that when we are dealing with a string type, @code{*out} should be interpreted as @code{char **} (one element in an array of pointers to different strings).
In other words, @code{out} should be @code{char ***}.
This function can be used to fill in arrays of numbers from strings (in an already allocated data structure), or add nodes to a linked list (if the type is a list type).
For an array, you have to pass the pointer to the @code{i}th element where you want the value to be stored, for example, @code{&(array[i])}.
If the string was successfully parsed to the requested type, this function will return a @code{0} (zero), otherwise it will return @code{1} (one).
This output format will help you check the status of the conversion in a code like the example below where we will try reading a string as a single precision floating point number.
@example
float out;
void *outptr=&out;
if( gal_type_from_string(&outptr, string, GAL_TYPE_FLOAT32) )
  @{
    fprintf(stderr, "%s could not be read as float32\n", string);
    exit(EXIT_FAILURE);
  @}
@end example
@noindent
When you need to read many numbers into an array, @code{out} would be an array, and you can simply increment @code{outptr=out+i} (where you increment @code{i}).
@end deftypefun
@deftypefun {void *} gal_type_string_to_number (char @code{*string}, uint8_t @code{*type})
Read @code{string} into the smallest type that can host the number; the allocated space for the number will be returned and the type of the number will be put into the memory that @code{type} points to.
If @code{string} could not be read as a number, this function will return @code{NULL}.
This function first calls the C library's @code{strtod} function to read @code{string} as a double-precision floating point number.
When successful, it will check the value to put it in the smallest numerical data type that can handle it; for example, @code{120} and @code{50000} will be read as a signed 8-bit integer and unsigned 16-bit integer types.
When reading as an integer, the C library's @code{strtol} function is used (in base-10) to parse the string again.
This re-parsing as an integer is necessary because integers with many digits (for example, the Unix epoch seconds) will not be accurately stored as a floating point and we cannot use the result of @code{strtod}.
When @code{string} is successfully parsed as a number @emph{and} there is @code{.} in @code{string}, it will force the number into floating point types.
For example, @code{"5"} is read as an integer, while @code{"5."} or @code{"5.0"}, or @code{"5.00"} will be read as a floating point (single-precision).
For floating point types, this function will count the number of significant digits and determine if the given string is single or double precision as described in @ref{Numeric data types}.
For integers, negative numbers will always be placed in signed types (as expected).
If a positive integer falls below the maximum of a signed type of a certain width, it will be signed (for example, @code{10} and @code{150} will be defined as a signed and unsigned 8-bit integer respectively).
In other words, even though @code{10} can be unsigned, it will be read as a signed 8-bit integer.
This is done to respect the C implicit type conversion in binary operators, where signed integers will be interpreted as unsigned, when the other operand is an unsigned integer of the same width.
For example, see the short program below.
It will print @code{-50 is larger than 100000} (which is wrong!).
This happens because when a negative number is parsed as an unsigned integer, its value effectively wraps around: @mymath{-50} becomes @mymath{4294967296-50=4294967246}, which is indeed larger than 100000 (recall that @mymath{4294967295} is the largest unsigned 32-bit integer, see @ref{Numeric data types}).
@example
#include <stdio.h>
#include <stdlib.h>
#include <stdint.h>
int
main(void)
@{
  int32_t a=-50;
  uint32_t b=100000;
  printf("%d is %s than %d\n", a,
         a>b ? "larger" : "less or equal", b);
  return 0;
@}
@end example
However, if we read 100000 as a signed 32-bit integer, there will not be any problem and the printed sentence will be logically correct (for someone who does not know anything about numeric data types: users of your programs).
For the advantages of integers, see @ref{Integer benefits and pitfalls}.
@end deftypefun
@node Pointers, Library blank values, Library data types, Gnuastro library
@subsection Pointers (@file{pointer.h})
@cindex Pointers
Pointers play an important role in the C programming language. As the name
suggests, they @emph{point} to a byte in memory (like an address in a
city). The C programming language gives you complete freedom in how to use
the byte (and the bytes that follow it). Pointers are thus a very powerful
feature of C. However, as the saying goes: ``With great power comes great
responsibility'', so they must be approached with care. The functions in
this header are not very complex, they are just wrappers over some basic
pointer functionality regarding pointer arithmetic and allocation (in
memory or HDD/SSD).
@deftypefun {void *} gal_pointer_increment (void @code{*pointer}, size_t @code{increment}, uint8_t @code{type})
Return a pointer to an element that is @code{increment} elements ahead of
@code{pointer}, assuming each element has type of @code{type}. For the type
codes, see @ref{Library data types}.
When working with the @code{array} elements of @code{gal_data_t}, we are
actually dealing with @code{void *} pointers. However, pointer arithmetic
does not apply to @code{void *}, because the system does not know how many
bytes there are in each element to increment the pointer respectively. This
function will use the given @code{type} to calculate where the incremented
element is located in memory.
@end deftypefun
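For example, assuming @code{data} is a @code{gal_data_t *} containing a
32-bit floating point dataset, the sketch below gets a usable pointer to
the element of its @code{array} with index 10:
@example
float *el10=gal_pointer_increment(data->array, 10,
                                  GAL_TYPE_FLOAT32);
@end example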
@deftypefun size_t gal_pointer_num_between (void @code{*earlier}, void @code{*later}, uint8_t @code{type})
Return the number of elements (in the given @code{type}) between
@code{earlier} and @code{later}. For the type codes, see @ref{Library data
types}).
@end deftypefun
@deftypefun {void *} gal_pointer_allocate (uint8_t @code{type}, size_t @code{size}, int @code{clear}, const char @code{*funcname}, const char @code{*varname})
Allocate an array of type @code{type} with @code{size} elements in RAM (for the type codes, see @ref{Library data types}).
If @code{clear!=0}, then the allocated space is set to zero (cleared).
This is effectively just a wrapper around C's @code{malloc} or @code{calloc} functions, but takes Gnuastro's integer type codes and will also abort with a clear error if the allocation was not successful.
The number of allocated bytes is the value given to @code{size} that is multiplied by the returned value of @code{gal_type_sizeof} for the given type.
So if you want to allocate space for an array of strings you should pass the type @code{GAL_TYPE_STRING}.
Otherwise, if you just want space for one string (for example, 6 bytes for @code{hello}, including the string-termination character), you should set the type @code{GAL_TYPE_UINT8}.
@cindex C99
When space cannot be allocated, this function will abort the program with a message containing the reason for the failure.
@code{funcname} (name of the function calling this function) and @code{varname} (name of variable that needs this space) will be used in this error message if they are not @code{NULL}.
In most modern compilers, you can use the generic @code{__func__} variable for @code{funcname}.
In this way, you do not have to manually copy and paste the function name or worry about it changing later (@code{__func__} was standardized in C99).
To avoid memory leaks, do not forget to free the allocated array (with C's @code{free} function) once you are done with it.
@end deftypefun
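For example, here is a minimal sketch of allocating (and later freeing)
a cleared array of 100 double-precision elements:
@example
double *arr=gal_pointer_allocate(GAL_TYPE_FLOAT64, 100, 1,
                                 __func__, "arr");
/* ... use 'arr' ... */
free(arr);
@end example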
@deftypefun {void *} gal_pointer_allocate_ram_or_mmap (uint8_t @code{type}, size_t @code{size}, int @code{clear}, size_t @code{minmapsize}, char @code{**mmapname}, int @code{quietmmap}, const char @code{*funcname}, const char @code{*varname})
Allocate the given space either in RAM or in a memory-mapped file.
This function is just a high-level wrapper to @code{gal_pointer_allocate} (to allocate in RAM) or @code{gal_pointer_mmap_allocate} (to use a memory-mapped file).
For more on memory management in Gnuastro, please see @ref{Memory management}.
The various arguments are more fully explained in the two functions above.
@end deftypefun
@deftypefun {void *} gal_pointer_mmap_allocate (size_t @code{size}, uint8_t @code{type}, int @code{clear}, char @code{**mmapname}, int @code{allocfailed})
Allocate the necessary space to keep @code{size} elements of type @code{type} in HDD/SSD (a file, not in RAM).
For the type codes, see @ref{Library data types}.
If @code{clear!=0}, then the allocated space will also be cleared.
The allocation is done using C's @code{mmap} function.
The name of the file containing the allocated space is an allocated string that will be put in @code{*mmapname}.
Note that the kernel does not allow an infinite number of memory mappings to files.
So it is not recommended to use this function with every allocation.
The best-case scenario to use this function is for arrays that are very large and can fill up the RAM.
Keep the smaller arrays in RAM, which is faster and can have a (theoretically) unlimited number of allocations.
When you are done with the dataset and do not need it anymore, do not use @code{free} (the dataset is not in RAM).
Just delete the file (and the allocated space for the filename) with the commands below, or simply use @code{gal_pointer_mmap_free}.
@example
remove(mmapname);
free(mmapname);
@end example
If @code{allocfailed!=0} and the memory mapping attempt fails, the warning message will say something like this (assuming you have tried something like @code{malloc} before calling this function): even though there was enough space in RAM, the previous attempts at allocation in RAM failed, so we tried memory mapping, but that also failed.
@end deftypefun
@deftypefun void gal_pointer_mmap_free (char @code{**mmapname}, int @code{quietmmap})
``Free'' (actually delete) the memory-mapped file that is named @code{*mmapname}, then free the string.
If @code{quietmmap} is zero, then a warning will be printed for the user to know that the given file has been deleted.
@end deftypefun
@node Library blank values, Library data container, Pointers, Gnuastro library
@subsection Library blank values (@file{blank.h})
When the position of an element in a dataset is important (for example, a pixel in an image), a place-holder is necessary for the element if we do not have a value to fill it with (for example, the CCD cannot read those pixels).
We cannot simply shift all the other pixels to fill in the one we have no value for.
In other cases, it often occurs that the field of sky that you are studying is not a clean rectangle to nicely fit into the boundaries of an image.
You need a way to separate the pixels outside your scientific field from those inside it.
Blank values act as these placeholders in a dataset.
They have no usable value but they have a position.
@cindex NaN
Every type needs a corresponding blank value (see @ref{Numeric data types} and @ref{Library data types}).
Floating point types have a unique value defined by IEEE, known as Not-a-Number (or NaN), which is recognized by the compiler.
However, integer and string types do not have any standard value.
For integers, in Gnuastro we take an extremum of the given type: for signed types (that allow negatives), the minimum possible value is used as blank and for unsigned types (that only accept positives), the maximum possible value is used.
To be generic and easy to read/write, we define a macro for these blank values and strongly encourage you to only use these macros, never making any assumption about the value of a type's blank.
@cindex NaN
The IEEE NaN value is defined to fail on any comparison, so if you are dealing with floating point types, you cannot use equality (a NaN will @emph{not} be equal to a NaN).
If you know your dataset is floating point, you can use the @code{isnan} function in C's @file{math.h} header.
For a description of numeric data types see @ref{Numeric data types}.
For the constants identifying integers, please see @ref{Library data types}.
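For example, here is a minimal sketch of skipping blank elements in a
single precision floating point array @code{arr} with @code{size}
elements (@code{use} is a hypothetical placeholder for your own
operation):
@example
#include <math.h>             /* For 'isnan'. */

size_t i;
for(i=0; i<size; ++i)
  if( !isnan(arr[i]) )        /* 'arr[i]' is not blank. */
    use(arr[i]);              /* Hypothetical operation. */
@end example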
@deffn {Global integer} GAL_BLANK_UINT8
Blank value for an unsigned, 8-bit integer.
@end deffn
@deffn {Global integer} GAL_BLANK_INT8
Blank value for a signed, 8-bit integer.
@end deffn
@deffn {Global integer} GAL_BLANK_UINT16
Blank value for an unsigned, 16-bit integer.
@end deffn
@deffn {Global integer} GAL_BLANK_INT16
Blank value for a signed, 16-bit integer.
@end deffn
@deffn {Global integer} GAL_BLANK_UINT32
Blank value for an unsigned, 32-bit integer.
@end deffn
@deffn {Global integer} GAL_BLANK_INT32
Blank value for a signed, 32-bit integer.
@end deffn
@deffn {Global integer} GAL_BLANK_UINT64
Blank value for an unsigned, 64-bit integer.
@end deffn
@deffn {Global integer} GAL_BLANK_INT64
Blank value for a signed, 64-bit integer.
@end deffn
@deffn {Global integer} GAL_BLANK_INT
Blank value for the @code{int} type (@code{int16_t} or @code{int32_t} depending on the system).
@end deffn
@deffn {Global integer} GAL_BLANK_UINT
Blank value for the @code{unsigned int} type (@code{uint16_t} or @code{uint32_t} depending on the system).
@end deffn
@deffn {Global integer} GAL_BLANK_LONG
Blank value for @code{long} type (@code{int32_t} or @code{int64_t} in 32-bit or 64-bit systems).
@end deffn
@deffn {Global integer} GAL_BLANK_ULONG
Blank value for @code{unsigned long} type (@code{uint32_t} or
@code{uint64_t} in 32-bit or 64-bit systems).
@end deffn
@deffn {Global integer} GAL_BLANK_SIZE_T
Blank value for @code{size_t} type (@code{uint32_t} or @code{uint64_t} in
32-bit or 64-bit systems).
@end deffn
@cindex NaN
@deffn {Global integer} GAL_BLANK_FLOAT32
Blank value for a single precision, 32-bit floating point type (IEEE NaN value).
@end deffn
@cindex NaN
@deffn {Global integer} GAL_BLANK_FLOAT64
Blank value for a double precision, 64-bit floating point type (IEEE NaN value).
@end deffn
@deffn {Global integer} GAL_BLANK_STRING
Blank value for string types (this is itself a string, it is not the
@code{NULL} pointer).
@end deffn
@noindent
The functions below can be used to work with blank pixels.
@deftypefun void gal_blank_write (void @code{*pointer}, uint8_t @code{type})
Write the blank value for the given @code{type} into the space that @code{pointer} points to.
This can be used when the space is already allocated (for example, one element in an array or a statically allocated variable).
@end deftypefun
@deftypefun {void *} gal_blank_alloc_write (uint8_t @code{type})
Allocate the space required to keep the blank for the given data type @code{type}, write the blank value into it and return the pointer to it.
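As a minimal sketch, the following snippet uses this function (and @code{gal_blank_write} above) to prepare blank values; the pointer returned by @code{gal_blank_alloc_write} must be freed later:
@example
float f;
void *blank;

gal_blank_write(&f, GAL_TYPE_FLOAT32);        /* 'f' is now NaN. */

blank=gal_blank_alloc_write(GAL_TYPE_INT32);  /* Allocated.      */
...                                           /* Use it, then:   */
free(blank);
@end example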
@end deftypefun
@deftypefun void gal_blank_initialize (gal_data_t @code{*input})
Initialize all the elements in the @code{input} dataset to the blank value that corresponds to its type.
If @code{input} is not a string, and is a tile over a larger dataset, only the region that the tile covers will be set to blank.
For strings, the full dataset will be initialized.
@end deftypefun
@deftypefun void gal_blank_initialize_array (void @code{*array}, size_t @code{size}, uint8_t @code{type})
Initialize all the elements in the @code{array} to the blank value that corresponds to its type (identified with @code{type}), assuming the array has @code{size} elements.
@end deftypefun
@deftypefun {char *} gal_blank_as_string (uint8_t @code{type}, int @code{width})
Write the blank value for the given data type @code{type} into a string and return it.
The space for the string is dynamically allocated so it must be freed after you are done with it.
If @code{width!=0}, then the final string will be padded with white space characters to reach the requested width (if it is initially shorter).
@end deftypefun
@deftypefun int gal_blank_is (void @code{*pointer}, uint8_t @code{type})
Return 1 if the contents of @code{pointer} (assuming it has type @code{type}) are blank.
Otherwise, return 0.
Note that this function only works on one element of the given type.
So if @code{pointer} is an array, only its first element will be checked.
Therefore for strings, the type of @code{pointer} is assumed to be @code{char *}.
To check if an array/dataset has blank elements or to find which elements in an array are blank, you can use @code{gal_blank_present} or @code{gal_blank_flag} respectively (described below).
@end deftypefun
@deftypefun int gal_blank_present (gal_data_t @code{*input}, int @code{updateflag})
Return 1 if the dataset has a blank value and zero if it does not.
Before checking the dataset, this function will look at @code{input}'s flags.
If the @code{GAL_DATA_FLAG_BLANK_CH} bit of @code{input->flag} is on, this function will not do any check and will just use the information in the flags.
This can greatly speed up processing when a dataset needs to be checked multiple times.
When the dataset's flags were not used and @code{updateflag} is non-zero, this function will set the flags appropriately to avoid having to re-check the dataset in future calls.
When @code{updateflags==0}, this function has no side-effects on the dataset: it will not toggle the flags.
If you want to re-check a dataset with the blank-value-check flag already set (for example, if you have made changes to it), then explicitly set the @code{GAL_DATA_FLAG_BLANK_CH} bit to zero before calling this function.
When there are no other flags, you can just set the flags to zero (@code{input->flag=0}), otherwise you can use this expression:
@example
input->flag &= ~GAL_DATA_FLAG_BLANK_CH;
@end example
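As a minimal usage sketch (letting the function update the flags, so later calls on the same dataset are fast):
@example
if( gal_blank_present(input, 1) )
  printf("Dataset has blank elements.\n");
@end example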
@end deftypefun
@deftypefun size_t gal_blank_number (gal_data_t @code{*input}, int @code{updateflag})
Return the number of blank elements in @code{input}.
If @code{updateflag!=0}, then the dataset blank keyword flags will be updated.
See the description of @code{gal_blank_present} (above) for more on these flags.
If @code{input==NULL}, then this function will return @code{GAL_BLANK_SIZE_T}.
@end deftypefun
@deftypefun {gal_data_t *} gal_blank_flag (gal_data_t @code{*input})
Return a ``flag'' dataset with the same size as the input, but with a @code{uint8_t} type that has a value of 1 for data elements that are blank and 0 for those that are not.
@end deftypefun
@deftypefun {gal_data_t *} gal_blank_flag_not (gal_data_t @code{*input})
Return a ``flag'' dataset with the same size as the input, but with a @code{uint8_t} type that has a value of 1 for data elements that are @emph{not} blank and 0 for those that are blank.
@end deftypefun
@deftypefun {size_t *} gal_blank_not_minmax_coords (gal_data_t @code{*input})
Find the minimum and maximum coordinates of the non-blank regions within the input dataset.
The coordinates are in C order: starting from 0, and with the slowest dimension being first.
The output is an allocated array (that should be freed later) with @mymath{2\times N} elements (@mymath{N} is the number of dimensions).
The first two elements contain the minimum and maximum of regions containing non-blank elements along the 0-th dimension (the slowest), the second two elements contain the next dimension's extrema; and so on.
When all the elements/pixels of the input are blank, all the elements of the output will have a value of 0.
@end deftypefun
@deftypefun {gal_data_t *} gal_blank_trim (gal_data_t @code{*input}, int @code{inplace})
Trim all the outer layers of blank values from the input dataset.
For example, in a 2D image, ``layers'' correspond to rows or columns that are fully blank and touch the edge of the image.
For a more complete description, see the description of the @code{trim} operator in @ref{Dimensionality changing operators}.
@end deftypefun
@deftypefun void gal_blank_flag_apply (gal_data_t @code{*input}, gal_data_t @code{*flag})
Set all non-zero and non-blank elements of @code{flag} to blank in @code{input}.
@code{flag} has to have an unsigned 8-bit type and be the same size as @code{input}.
@end deftypefun
@deftypefun void gal_blank_flag_remove (gal_data_t @code{*input}, gal_data_t @code{*flag})
Remove all elements within @code{input} that are flagged, convert it to a 1D dataset and adjust the size properly (the number of non-flagged elements).
In practice this function does not @code{realloc} the input array (see @code{gal_blank_remove_realloc} for shrinking/re-allocation); it just shifts the flagged elements to the end and adjusts the size elements of the @code{gal_data_t}, see @ref{Generic data container}.
Note that elements that are blank, but not flagged will not be removed.
This function will only remove flagged elements.
If all the elements were flagged, then @code{input->size} will be zero.
It is thus a good parameter to check after calling this function, to see whether there actually were any non-flagged elements in the input, and to take the appropriate measures.
This check is highly recommended because it will avoid strange bugs in later steps.
@end deftypefun
@deftypefun void gal_blank_remove (gal_data_t @code{*input}, int @code{free_if_all_blank})
Remove blank elements from a dataset, convert it to a 1D dataset, adjust the size properly (the number of non-blank elements), and toggle the blank-value-related bit-flags.
In practice this function does not @code{realloc} the input array (see @code{gal_blank_remove_realloc} for shrinking/re-allocation); it just shifts the blank elements to the end and adjusts the size elements of the @code{gal_data_t}, see @ref{Generic data container}.
If all the elements were blank, then @code{input->size} will be zero; if @code{free_if_all_blank} is also non-zero, @code{input->array} will be freed and set to @code{NULL}.
Therefore, the most generic parameter to check after calling this function is @code{input->size}: it shows whether there actually were any non-blank elements in the input.
This check is highly recommended in follow-up functions because it will avoid strange bugs in later steps of your code (such as reading a possibly freed space).
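For example, a typical usage sketch would be:
@example
gal_blank_remove(input, 1);
if(input->size==0)                /* All elements were blank. */
  fprintf(stderr, "warning: no non-blank elements\n");
@end example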
@end deftypefun
@deftypefun void gal_blank_remove_realloc (gal_data_t @code{*input})
Similar to @code{gal_blank_remove}, but also shrinks/re-allocates the dataset's allocated memory.
@end deftypefun
@deftypefun {gal_data_t *} gal_blank_remove_rows (gal_data_t @code{*columns}, gal_list_sizet_t @code{*column_indexs}, int @code{onlydim0})
Remove (in place) any row that has at least one blank value in any of the input columns and return a ``flag'' dataset (that should be freed later).
The input @code{columns} is a list of @code{gal_data_t}s (see @ref{List of gal_data_t}).
When @code{onlydim0!=0} the vector columns (with 2 dimensions) will not be checked for the presence of blank values.
After this function, all the elements in @code{columns} will still have the same size as each other, but if any of the searched columns has blank elements, all their sizes will decrease together.
The returned flag dataset has the same size as the original input dataset, with a type of @code{uint8_t}.
Every row that has been removed from the original dataset has a value of 1, and the rest have a value of 0.
When @code{column_indexs!=NULL}, only the columns whose index (counting from zero) is in @code{column_indexs} will be used to check for blank values (see @ref{List of size_t}).
Therefore, if you want to check all columns, just set this to @code{NULL}.
In any case (no matter which columns are checked for blanks), the selected rows from all columns will be removed.
@end deftypefun
@node Library data container, Dimensions, Library blank values, Gnuastro library
@subsection Data container (@file{data.h})
Astronomical datasets have various dimensions, for example, 1D spectra or table columns, 2D images, or 3D Integral field data cubes.
Datasets can also have various numeric data types, depending on the operation/purpose, for example, processed images are commonly stored in floating point format, but their mask images are integers (allowing bitwise flags to identify certain classes of pixels to keep or mask, see @ref{Numeric data types}).
Certain other information about a dataset is also commonly necessary, for example, the units of the dataset, its name, and some comments.
To deal with any generic dataset, Gnuastro defines the @code{gal_data_t} as input or output.
@menu
* Generic data container:: Definition of Gnuastro's generic container.
* Dataset allocation:: Allocate, initialize and free a dataset.
* Arrays of datasets:: Functions to help with array of datasets.
* Copying datasets:: Functions to copy a dataset to a new one.
@end menu
@node Generic data container, Dataset allocation, Library data container, Library data container
@subsubsection Generic data container (@code{gal_data_t})
To be able to deal with any dataset (various dimensions, numeric data types, units and higher-level structures), Gnuastro defines the @code{gal_data_t} type which is the input/output container of choice for many of Gnuastro library's functions.
It is defined in @file{gnuastro/data.h}.
If you will be using (`@code{#include}'ing) those libraries, you do not need to include this header explicitly: it is already included by any library header that uses @code{gal_data_t}.
@deftp {Type (C @code{struct})} gal_data_t
The main container for datasets in Gnuastro.
It can host data of any dimensions, with any numeric data type.
It is actually a structure, but @code{typedef}'d as a new type to avoid having to write the @code{struct} before any declaration.
The actual structure is shown below, followed by a description of each element.
@example
typedef struct gal_data_t
@{
void *restrict array; /* Basic array information. */
uint8_t type;
size_t ndim;
size_t *dsize;
size_t size;
int quietmmap;
char *mmapname;
size_t minmapsize;
int nwcs; /* WCS information. */
struct wcsprm *wcs;
uint8_t flag; /* Content description. */
int status;
char *name;
char *unit;
char *comment;
int disp_fmt; /* For text printing. */
int disp_width;
int disp_precision;
struct gal_data_t *next; /* For higher-level datasets. */
struct gal_data_t *block;
@} gal_data_t;
@end example
@end deftp
@noindent
The list below contains a description for each @code{gal_data_t} element.
@cindex @code{void *}
@table @code
@item void *restrict array
This is the pointer to the main array of the dataset containing the raw data (values).
All the other elements in this data-structure are actually meta-data enabling us to use/understand the series of values in this array.
It must allow data of any type (see @ref{Numeric data types}), so it is defined as a @code{void *} pointer.
A @code{void *} array is not directly usable in C, so you have to cast it to the proper type before using it; please see @ref{Library demo - reading a FITS image} for a demonstration.
@cindex @code{restrict}
@cindex C: @code{restrict}
The @code{restrict} keyword was formally introduced in C99 and is used to tell the compiler that at any moment only this pointer will modify what it points to (a pixel in an image for example)@footnote{Also see @url{https://en.wikipedia.org/wiki/Restrict}.}.
This extra piece of information can greatly help in compiler optimizations and thus the running time of the program.
But older compilers might not have this capability, so at @command{./configure} time, Gnuastro checks this feature and if the user's compiler does not support @code{restrict}, it will be removed from this definition.
@cindex Data type
@item uint8_t type
A fixed code (integer) used to identify the type of data in @code{array} (see @ref{Numeric data types}).
For the list of acceptable values to this variable, please see @ref{Library data types}.
@item size_t ndim
The dataset's number of dimensions.
@cindex FORTRAN
@cindex FITS standard
@cindex Standard, FITS
@item size_t *dsize
The size of the dataset along each dimension.
This is an array (with @code{ndim} elements), of positive integers in row-major order@footnote{Also see @url{https://en.wikipedia.org/wiki/Row-_and_column-major_order}.} (based on C).
When a data file is read into memory with Gnuastro's libraries, this array is dynamically allocated based on the number of dimensions that the dataset has.
It is important to remember that C's row-major ordering is the opposite of the FITS standard which is in column-major order: in the FITS standard the fastest dimension's size is specified by @code{NAXIS1}, and slower dimensions follow.
The FITS standard was defined mainly based on the FORTRAN language which is the opposite of C's approach to multi-dimensional arrays (and also starts counting from 1 not 0).
Hence if a FITS image has @code{NAXIS1==20} and @code{NAXIS2==50}, the @code{dsize} array must be filled with @code{dsize[0]==50} and @code{dsize[1]==20}.
The fastest dimension is the one that is contiguous in memory: to increment by one along that dimension, just go to the next element in the array.
As we go to slower dimensions, the number of memory cells we have to skip for an increment along that dimension becomes larger.
@item size_t size
The total number of elements in the dataset.
This is simply the product of all the values in the @code{dsize} array, so it is not an independent parameter.
However, low-level operations with the dataset (irrespective of its dimensions) commonly need this number, so this element is designed to avoid calculating it every time.
@item int quietmmap
When this value is zero and the dataset could not be allocated in RAM (see @code{mmapname} and @code{minmapsize} below), a warning will be printed to inform the user when the file is created and when it is deleted.
The warning includes the filename, the size in bytes, and the fact that this behavior can be toggled through the @option{--minmapsize} option in Gnuastro's programs.
@item char *mmapname
Name of file hosting the @code{mmap}'d contents of @code{array}.
If the value of this variable is @code{NULL}, then the contents of @code{array} are actually stored in RAM, not in a file on the HDD/SSD.
See the description of @code{minmapsize} below for more.
If a file is used, it will be kept in the @file{gnuastro_mmap} directory of the running directory.
Its name is randomly selected to allow multiple arrays at the same time, see description of @option{--minmapsize} in @ref{Processing options}.
When @code{gal_data_free} is called the randomly named file will be deleted.
@item size_t minmapsize
The minimum size of an array (in bytes) to store the contents of @code{array} as a file (on the non-volatile HDD/SSD), not in RAM.
This can be very useful for large datasets which can be very memory intensive and the user's RAM might not be sufficient to keep/process it.
A random filename is assigned to the array which is available in the @code{mmapname} element of @code{gal_data_t} (above), see there for more.
@code{minmapsize} is stored in each @code{gal_data_t}, so it can be passed on to subsequent/derived datasets.
See the description of the @option{--minmapsize} option in @ref{Processing options} for more on using this value.
@item int nwcs
The number of WCS coordinate representations (for WCSLIB).
@item struct wcsprm *wcs
The main WCSLIB structure keeping all the relevant information necessary for WCSLIB to do its processing and convert data-set positions into real-world positions.
When it is given a @code{NULL} value, all possible WCS calculations/measurements will be ignored.
@item uint8_t flag
Bitwise flags to describe general properties of the dataset.
The number of bytes available in this flag is stored in the @code{GAL_DATA_FLAG_SIZE} macro.
Note that you should use bitwise operators@footnote{See @url{https://en.wikipedia.org/wiki/Bitwise_operations_in_C}.} to check these flags.
The currently recognized bits are stored in these macros:
@table @code
@cindex Blank data
@item GAL_DATA_FLAG_BLANK_CH
Marking that the dataset has been checked for blank values or not.
When a dataset does not have any blank values, the @code{GAL_DATA_FLAG_HASBLANK} bit will be zero.
But upon initialization, all bits also get a value of zero.
Therefore, a checker needs this flag to see if the value in @code{GAL_DATA_FLAG_HASBLANK} is reliable (dataset has actually been parsed for a blank value) or not.
Also, if it is necessary to re-check the presence of blank values (for example, after you have modified the dataset), you just have to set this flag to zero and call a function like @code{gal_blank_present} to parse the dataset and check for blank values.
Note that for improved efficiency, when this flag is set, @code{gal_blank_present} will not actually parse the dataset, it will just use @code{GAL_DATA_FLAG_HASBLANK}.
@item GAL_DATA_FLAG_HASBLANK
This bit has a value of @code{1} when the given dataset has blank values.
If this bit is @code{0} and @code{GAL_DATA_FLAG_BLANK_CH} is @code{1}, then the dataset has been checked and it did not have any blank values, so there is no more need for further checks.
@item GAL_DATA_FLAG_SORT_CH
Marking that the dataset has already been checked for being sorted, and thus that the possible @code{0} values in @code{GAL_DATA_FLAG_SORTED_I} and @code{GAL_DATA_FLAG_SORTED_D} are meaningful.
The logic behind this is similar to that in @code{GAL_DATA_FLAG_BLANK_CH}.
@item GAL_DATA_FLAG_SORTED_I
This bit has a value of @code{1} when the given dataset is sorted in an increasing manner.
If this bit is @code{0} and @code{GAL_DATA_FLAG_SORT_CH} is @code{1}, then the dataset has been checked and was not sorted (increasing), so there is no more need for further checks.
@item GAL_DATA_FLAG_SORTED_D
This bit has a value of @code{1} when the given dataset is sorted in a decreasing manner.
If this bit is @code{0} and @code{GAL_DATA_FLAG_SORT_CH} is @code{1}, then the dataset has been checked and was not sorted (decreasing), so there is no more need for further checks.
@end table
The macro @code{GAL_DATA_FLAG_MAXFLAG} contains the largest internally used bit-position.
Using this macro with the bitwise shift operators, libraries/programs that depend on Gnuastro can define their own internal flags without conflicting with the internal flags discussed above, and without having to check the values manually on every release.
@item int status
A context-specific status value for this data structure.
This integer will not be set by Gnuastro's libraries.
You can use it to keep some additional information about the dataset (with integer constants) depending on your application.
@item char *name
The name of the dataset.
If the dataset is a multi-dimensional array and read/written as a FITS image, this will be the value in the @code{EXTNAME} FITS keyword.
If the dataset is a one-dimensional table column, this will be the column name.
If it is set to @code{NULL} (by default), it will be ignored.
@item char *unit
The units of the dataset (for example, @code{BUNIT} in the standard FITS keywords) that will be read from or written to files/tables along with the dataset.
If it is set to @code{NULL} (by default), it will be ignored.
@item char *comment
Any further explanation about the dataset which will be written to any output file if present.
@item int disp_fmt
Format to use when printing each element of the dataset to a plain text file; the acceptable values for this element are defined in @ref{Table input output}.
Based on C's @code{printf} standards.
@item int disp_width
Width to use when printing each element of the dataset to a plain text file; the acceptable values for this element are defined in @ref{Table input output}.
Based on C's @code{printf} standards.
@item int disp_precision
Precision to use when printing each element of the dataset to a plain text file; the acceptable values for this element are defined in @ref{Table input output}.
Based on C's @code{printf} standards.
@item gal_data_t *next
Through this pointer, you can link a @code{gal_data_t} to other related datasets; for example, the different columns in a table each have one @code{gal_data_t} associated with them, and they are linked to each other using this element.
There are several functions described below to facilitate using @code{gal_data_t} as a linked list.
See @ref{Linked lists} for more on these wonderful high-level constructs.
@item gal_data_t *block
Pointer to the start of the complete allocated block of memory.
When this pointer is not @code{NULL}, the dataset is not treated as a contiguous patch of memory.
Rather, it is seen as covering only a portion of the larger patch of memory that @code{block} points to.
See @ref{Tessellation library} for a more thorough explanation and functions to help work with tiles that are created from this pointer.
@end table
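@noindent
To make the elements above more concrete, here is a minimal sketch (assuming a 2D dataset @code{input} that is known to have a 32-bit floating point type) of how @code{array}, @code{type} and @code{dsize} are commonly used together; recall that @code{dsize} is in C's row-major order:
@example
size_t i, j;
float *arr=input->array;   /* Valid cast, since 'input->type'
                              is GAL_TYPE_FLOAT32.            */

for(i=0; i<input->dsize[0]; ++i)      /* Slower dimension.    */
  for(j=0; j<input->dsize[1]; ++j)    /* Faster dimension.    */
    arr[ i * input->dsize[1] + j ] = 0.0f;
@end example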
@node Dataset allocation, Arrays of datasets, Generic data container, Library data container
@subsubsection Dataset allocation
Gnuastro's main data container was defined in @ref{Generic data container}.
The functions listed in this section describe the most basic operations on @code{gal_data_t}: those related to allocation and freeing.
These functions are declared in @file{gnuastro/data.h} which is also visible from the function names (see @ref{Gnuastro library}).
@deftypefun {gal_data_t *} gal_data_alloc (void @code{*array}, uint8_t @code{type}, size_t @code{ndim}, size_t @code{*dsize}, struct wcsprm @code{*wcs}, int @code{clear}, size_t @code{minmapsize}, int @code{quietmmap}, char @code{*name}, char @code{*unit}, char @code{*comment})
Dynamically allocate a @code{gal_data_t} and initialize it with all the given values.
See the description of @code{gal_data_initialize} and @ref{Generic data container} for more information.
This function will often be the most frequently used because it allocates the @code{gal_data_t} hosting all the values @emph{and} initializes it.
Once you are done with the dataset, be sure to clean up all the allocated spaces with @code{gal_data_free}.
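For example, a 2D single-precision image with 100 rows and 200 columns (in FITS: @code{NAXIS1=200}, @code{NAXIS2=100}), cleared to zero, could be allocated like this minimal sketch (the name and unit strings are purely illustrative; passing @code{-1} for @code{minmapsize} gives the largest possible @code{size_t} value, so the array will effectively never be memory-mapped):
@example
gal_data_t *image;
size_t dsize[2]=@{100, 200@};     /* C order: slowest first. */

image=gal_data_alloc(NULL, GAL_TYPE_FLOAT32, 2, dsize,
                     NULL, 1, -1, 1, "IMAGE", "counts", NULL);
...
gal_data_free(image);
@end example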
@end deftypefun
@deftypefun void gal_data_initialize (gal_data_t @code{*data}, void @code{*array}, uint8_t @code{type}, size_t @code{ndim}, size_t @code{*dsize}, struct wcsprm @code{*wcs}, int @code{clear}, size_t @code{minmapsize}, int @code{quietmmap}, char @code{*name}, char @code{*unit}, char @code{*comment})
Initialize the given data structure (@code{data}) with all the given values.
Note that the raw input @code{gal_data_t} must already have been allocated before calling this function.
For a description of each variable see @ref{Generic data container}.
It will set the values and do the necessary allocations.
If they are not @code{NULL}, all input arrays (@code{dsize}, @code{wcs}, @code{name}, @code{unit}, @code{comment}) are separately copied (allocated) by this function for usage in @code{data}, so you can safely use one value to initialize many datasets or use statically allocated variables in this function call.
Once you are done with the dataset, you can free all the allocated spaces with @code{gal_data_free_contents}.
If @code{array} is not @code{NULL}, it will be directly copied into @code{data->array} (based on the total number of elements calculated from @code{dsize}) and no new space will be allocated for the array of this dataset, this has many low-level advantages and can be used to work on regions of a dataset instead of the whole allocated array (see the description under @code{block} in @ref{Generic data container} for one example).
If the given pointer is not the start of an allocated block of memory or it is used in multiple datasets, be sure to set it to @code{NULL} (with @code{data->array=NULL}) before cleaning up with @code{gal_data_free_contents}.
@code{ndim} may be zero.
In this case no allocation will occur, @code{data->array} and @code{data->dsize} will be set to @code{NULL} and @code{data->size} will be zero.
However (when necessary) @code{dsize} must not have any zero values (a dimension of length zero is not defined).
@end deftypefun
@deftypefun {gal_data_t *} gal_data_alloc_empty (size_t @code{ndim}, size_t @code{minmapsize}, int @code{quietmmap})
Allocate an empty dataset with a certain number of dimensions, but no @code{array} component.
The @code{size} element will be set to zero and the @code{dsize} array will be properly allocated (based on the number of dimensions), but all elements will be zero.
This is useful in scenarios where you just need a @code{gal_data_t} for metadata.
@end deftypefun
@deftypefun void gal_data_free_contents (gal_data_t @code{*data})
Free all the non-@code{NULL} pointers in @code{gal_data_t} except for @code{next} and @code{block}.
All freed arrays are set to @code{NULL}.
If @code{data} is actually a tile (@code{data->block!=NULL}, see @ref{Tessellation library}), then @code{data->array} is not freed.
For a complete description of @code{gal_data_t} and its contents, see @ref{Generic data container}.
@end deftypefun
@deftypefun void gal_data_free (gal_data_t @code{*data})
Free all the non-@code{NULL} pointers in @code{gal_data_t}, then free the actual data structure.
@end deftypefun
@node Arrays of datasets, Copying datasets, Dataset allocation, Library data container
@subsubsection Arrays of datasets
Gnuastro's generic data container (@code{gal_data_t}) is a very versatile structure that can be used in many higher-level contexts.
One such higher-level construct is an array of @code{gal_data_t} structures to simplify the allocation (and later cleaning) of several @code{gal_data_t}s that are related.
For example, each column in a table is usually represented by one @code{gal_data_t} (so it has its own name, data type, units, etc.).
A table (with many columns) can be seen as an array of @code{gal_data_t}s (when the number of columns is known a-priori).
The functions below are defined to create a cleared array of data structures and to free them when they are no longer necessary.
These functions are declared in @file{gnuastro/data.h} which is also visible from the function names (see @ref{Gnuastro library}).
@deftypefun {gal_data_t *} gal_data_array_calloc (size_t @code{size})
Allocate an array of @code{gal_data_t} with @code{size} elements.
This function will also initialize all the values (@code{NULL} for pointers and 0 for other types).
You can use @code{gal_data_initialize} to fill each element of the array afterwards.
The following code snippet is one example of doing this.
@example
size_t i;
gal_data_t *dataarr;
dataarr=gal_data_array_calloc(10);
for(i=0;i<10;++i) gal_data_initialize(&dataarr[i], ...);
...
gal_data_array_free(dataarr, 10, 1);
@end example
@end deftypefun
@deftypefun void gal_data_array_free (gal_data_t @code{*dataarr}, size_t @code{num}, int @code{free_array})
Free all the @code{num} elements within @code{dataarr} and the actual allocated array.
If @code{free_array} is not zero, then the @code{array} element of all the datasets will also be freed, see @ref{Generic data container}.
@end deftypefun
@deftypefun {gal_data_t **} gal_data_array_ptr_calloc (size_t @code{size})
Allocate an array of pointers to Gnuastro's generic data structure and initialize all pointers to @code{NULL}.
This is useful when you want to allocate individual datasets later (for example, with @code{gal_data_alloc}).
@end deftypefun
@deftypefun void gal_data_array_ptr_free (gal_data_t @code{**dataptr}, size_t @code{size}, int @code{free_array})
Free all the individual datasets within the elements of @code{dataptr}, then free @code{dataptr} itself (the array of pointers that was probably allocated with @code{gal_data_array_ptr_calloc}).
@end deftypefun
@node Copying datasets, , Arrays of datasets, Library data container
@subsubsection Copying datasets
The functions in this section describe Gnuastro's facilities to copy a given dataset into another.
The new dataset can have a different type (including a string), or it can already be allocated (in which case only the values will be written into it).
In all these cases, if the input dataset is a tile or a list, only the data within the given tile, or the given node in a list, are copied.
If the input is a list, the @code{next} pointer will also be copied to the output, see @ref{List of gal_data_t}.
In many of the functions here, it is possible to copy the dataset to a new numeric data type (see @ref{Numeric data types}).
In such cases, Gnuastro's library will use C's native type conversion.
So if you are converting to a smaller type, it is up to you to make sure that the values fit into the output type.
@deftypefun {gal_data_t *} gal_data_copy (gal_data_t @code{*in})
Return a new dataset that is a copy of @code{in}; all of @code{in}'s meta-data will also be copied into the output, except for @code{block}.
If the dataset is a tile/list, only the given tile/node will be copied; however, the @code{next} pointer will also be copied.
@end deftypefun
@deftypefun {gal_data_t *} gal_data_copy_to_new_type (gal_data_t @code{*in}, uint8_t @code{newtype})
Return a copy of the dataset @code{in}, converted to @code{newtype}, see @ref{Library data types} for Gnuastro library's type identifiers.
The returned dataset will have the same meta-data as the input, except for its type (and @code{block}).
If the dataset is a tile/list, only the given tile/node will be copied; however, the @code{next} pointer will also be copied.
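For example, a minimal sketch converting an integer dataset to single-precision floating point (the caution above, about values fitting into smaller types, applies in the other direction):
@example
gal_data_t *copy;

copy=gal_data_copy_to_new_type(in, GAL_TYPE_FLOAT32);
...
gal_data_free(copy);
@end example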
@end deftypefun
@deftypefun {gal_data_t *} gal_data_copy_to_new_type_free (gal_data_t @code{*in}, uint8_t @code{newtype})
Return a copy of the dataset @code{in} that is converted to @code{newtype} and free the input dataset.
See @ref{Library data types} for Gnuastro library's type identifiers.
The returned dataset will have the same meta-data as the input (including @code{next}), except for its type.
Note that if the input is a tile within a larger block, it will not be freed.
This function is similar to @code{gal_data_copy_to_new_type}, except that it will free the input dataset.
@end deftypefun
@deftypefun {void} gal_data_copy_to_allocated (gal_data_t @code{*in}, gal_data_t @code{*out})
Copy the contents of the array in @code{in} into the already allocated array in @code{out}.
The types of the input and output may be different, type conversion will be done internally.
When @code{in->size != out->size} this function will behave as follows:
@table @code
@item out->size < in->size
This function will not re-allocate the necessary space; it will abort with an error, so please check the sizes before calling it.
@item out->size > in->size
This function will overwrite @code{out->size} and @code{out->dsize} with the respective values of @code{in}.
So if you want to use a pre-allocated space/dataset multiple times with varying input sizes, be sure to reset @code{out->size} before every call to this function (as in the sketch after this list).
@end table
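A minimal sketch of re-using one pre-allocated output for several inputs (here, @code{in1} and @code{in2} are hypothetical datasets that are not larger than the original allocation of @code{out}):
@example
size_t allocated=out->size;    /* Originally allocated size.   */

gal_data_copy_to_allocated(in1, out);  /* 'out->size' may now  */
...                                    /* be smaller.          */

out->size=allocated;           /* Reset before the next input. */
gal_data_copy_to_allocated(in2, out);
@end example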
@end deftypefun
@deftypefun {gal_data_t *} gal_data_copy_string_to_number (char @code{*string})
Read @code{string} into the smallest type that can store the value (see @ref{Numeric data types}).
This function is just a wrapper around @code{gal_type_string_to_number}, but will put the value into a single-element dataset.
@end deftypefun
@deftypefun {void} gal_data_append_second_array_to_first_free (gal_data_t @code{*a}, gal_data_t @code{*b})
Append the contents of @code{b->array} to those of @code{a->array} (into a newly allocated array that replaces @code{a->array}).
Finally, free @code{b}.
Both inputs must have the same type and must be single-dimensional.
@end deftypefun
@node Dimensions, Linked lists, Library data container, Gnuastro library
@subsection Dimensions (@file{dimension.h})
An array is a contiguous region of memory.
Hence, at the lowest level, every element of an array just has one single-valued position: the number of elements that lie between it and the first element in the array.
This is also known as the @emph{index} of the element within the array.
A dataset's number of dimensions is a high-level abstraction (meta-data) that we project onto that contiguous patch of memory.
When the array is interpreted as a one-dimensional dataset, this index is also the @emph{coordinate} of the element.
But once we associate the patch of memory with a higher dimension, there must also be one coordinate for each dimension.
The functions and macros in this section provide you with the tools to convert an index into a coordinate and vice-versa, along with several related tasks, for example, dealing with the neighbors of an element in a multi-dimensional context.
@deftypefun size_t gal_dimension_total_size (size_t @code{ndim}, size_t @code{*dsize})
Return the total number of elements for a dataset with @code{ndim} dimensions that has @code{dsize} elements along each dimension.
@end deftypefun
@deftypefun int gal_dimension_is_different (gal_data_t @code{*first}, gal_data_t @code{*second})
Return @code{1} (one) if the two datasets do not have the same size along all dimensions.
This function will also return @code{1} when the number of dimensions of the two datasets are different.
@end deftypefun
@deftypefun {size_t *} gal_dimension_increment (size_t @code{ndim}, size_t @code{*dsize})
Return an allocated array that has the number of elements necessary to increment an index along every dimension.
For example, along the fastest dimension (last element in the @code{dsize} and returned arrays), the value is @code{1} (one).
@end deftypefun
@deftypefun size_t gal_dimension_num_neighbors (size_t @code{ndim})
Return the maximum number of neighbors (with any connectivity) that a data element can have in @code{ndim} dimensions.
Effectively, this function just returns @mymath{3^n-1} (where @mymath{n} is the number of dimensions).
@end deftypefun
@deffn {Function-like macro} GAL_DIMENSION_FLT_TO_INT (@code{FLT})
Calculate the integer pixel position that the floating point @code{FLT} number belongs to.
In the FITS format (and thus in Gnuastro), the center of each pixel is placed on an integer (not its edge), so the pixel which hosts a floating point number cannot simply be found with internal type conversion.
@end deffn
@deftypefun void gal_dimension_add_coords (size_t @code{*c1}, size_t @code{*c2}, size_t @code{*out}, size_t @code{ndim})
For every dimension, add the coordinates in @code{c1} with @code{c2} and put the result into @code{out}.
In other words, for dimension @code{i} run @code{out[i]=c1[i]+c2[i];}.
Hence @code{out} may be equal to any one of @code{c1} or @code{c2}.
@end deftypefun
@deftypefun size_t gal_dimension_coord_to_index (size_t @code{ndim}, size_t @code{*dsize}, size_t @code{*coord})
Return the index (counting from zero) from the coordinates in @code{coord}
(counting from zero) assuming the dataset has @code{ndim} elements and the
size of the dataset along each dimension is in the @code{dsize} array.
@end deftypefun
@deftypefun void gal_dimension_index_to_coord (size_t @code{index}, size_t @code{ndim}, size_t @code{*dsize}, size_t @code{*coord})
Fill in the @code{coord} array with the coordinates that correspond to @code{index} assuming the dataset has @code{ndim} elements and the size of the dataset along each dimension is in the @code{dsize} array.
Note that both @code{index} and each value in @code{coord} are assumed to start from @code{0} (zero).
Also note that the space which @code{coord} points to must already be allocated before calling this function.
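As a minimal sketch, for a 2D dataset with @code{dsize[0]==50} and @code{dsize[1]==20}, this function and @code{gal_dimension_coord_to_index} (above) are each other's inverse:
@example
size_t index, coord[2];
size_t dsize[2]=@{50, 20@};

gal_dimension_index_to_coord(215, 2, dsize, coord);
                      /* coord[0]==10, coord[1]==15       */

index=gal_dimension_coord_to_index(2, dsize, coord);
                      /* index==215 (from 10*20+15).      */
@end example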
@end deftypefun
@deftypefun size_t gal_dimension_dist_manhattan (size_t @code{*a}, size_t @code{*b}, size_t @code{ndim})
@cindex Manhattan distance
@cindex Distance, Manhattan
Return the Manhattan distance (see @url{https://en.wikipedia.org/wiki/Taxicab_geometry, Wikipedia}) between the two coordinates @code{a} and @code{b} (each an array of @code{ndim} elements).
@end deftypefun
@deftypefun float gal_dimension_dist_radial (size_t @code{*a}, size_t @code{*b}, size_t @code{ndim})
Return the radial distance between the two coordinates @code{a} and
@code{b} (each an array of @code{ndim} elements).
@end deftypefun
@deftypefun float gal_dimension_dist_elliptical (double @code{*center}, double @code{*pa_deg}, double @code{*q}, size_t @code{ndim}, double @code{*point})
@cindex Ellipse
@cindex Ellipsoid
@cindex Axis ratio
@cindex Position angle
@cindex Elliptical distance
@cindex Ellipsoidal distance
@cindex Distance, elliptical/ellipsoidal
Return the elliptical/ellipsoidal distance of the single point @code{point} (containing @code{ndim} values: coordinates of the point in each dimension) from an ellipse that is defined by @code{center}, @code{pa_deg} and @code{q}.
@code{center} is the coordinates of the ellipse center (also with @code{ndim} elements).
@code{pa_deg} is the position angle in degrees (the angle of the semi-major axis from the first dimension in a 2D ellipse) and @code{q} is the axis ratio.
In a 2D ellipse, @code{pa_deg} and @code{q} are single-element arrays.
However, in a 3D ellipsoid, @code{pa_deg} must have three elements, and @code{q} must have two elements.
For more see @ref{Defining an ellipse and ellipsoid}.
@end deftypefun
@deftypefun {gal_data_t *} gal_dimension_collapse_sum (gal_data_t @code{*in}, size_t @code{c_dim}, gal_data_t @code{*weight})
Collapse the input dataset (@code{in}) along the given dimension (@code{c_dim}, in C definition: starting from zero, from the slowest dimension), by summing all elements in that direction.
If @code{weight!=NULL}, it must be a single-dimensional array, with the same size as the dimension to be collapsed.
The respective weight will be multiplied to each element during the collapse.
For generality, the returned dataset will have a @code{GAL_TYPE_FLOAT64} type.
See @ref{Copying datasets} for converting the returned dataset to a desired type.
Also, for more on the application of this function, see the Arithmetic program's @option{collapse-sum} operator (which uses this function) in @ref{Arithmetic operators}.
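For example, a minimal sketch that collapses a 2D image along its slowest dimension (without weights) and converts the result to a 32-bit floating point type:
@example
gal_data_t *collapsed;

collapsed=gal_dimension_collapse_sum(image, 0, NULL);
collapsed=gal_data_copy_to_new_type_free(collapsed,
                                         GAL_TYPE_FLOAT32);
@end example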
@end deftypefun
@deftypefun {gal_data_t *} gal_dimension_collapse_mean (gal_data_t @code{*in}, size_t @code{c_dim}, gal_data_t @code{*weight})
Similar to @code{gal_dimension_collapse_sum} (above), but the collapse will
be done by calculating the mean along the requested dimension, not summing
over it.
@end deftypefun
@deftypefun {gal_data_t *} gal_dimension_collapse_number (gal_data_t @code{*in}, size_t @code{c_dim})
Collapse the input dataset (@code{in}) along the given dimension (@code{c_dim}, in C definition: starting from zero, from the slowest dimension), by counting how many non-blank elements there are along that dimension.
For generality, the returned dataset will have a @code{GAL_TYPE_INT32} type.
See @ref{Copying datasets} for converting the returned dataset to a desired type.
Also, for more on the application of this function, see the Arithmetic program's @option{collapse-number} operator (which uses this function) in @ref{Arithmetic operators}.
@end deftypefun
@deftypefun {gal_data_t *} gal_dimension_collapse_minmax (gal_data_t @code{*in}, size_t @code{c_dim}, int @code{max1_min0})
Collapse the input dataset (@code{in}) along the given dimension (@code{c_dim}, in C definition: starting from zero, from the slowest dimension), by using the largest/smallest non-blank value along that dimension.
If @code{max1_min0} is non-zero, then the collapsed dataset will have the maximum value along the given dimension and if it is zero, the minimum.
@end deftypefun
@deftypefun {gal_data_t *} gal_dimension_collapse_median (gal_data_t @code{*in}, size_t @code{c_dim}, size_t @code{numthreads}, size_t @code{minmapsize}, int @code{quietmmap})
Collapse the input dataset (@code{in}) along the given dimension (@code{c_dim}, in C definition: starting from zero, from the slowest dimension), by finding the median non-blank value along that dimension.
Since the median involves sorting, this operator benefits from many threads (which needs to be set with @code{numthreads}).
For more on @code{minmapsize} and @code{quietmmap} see @ref{Memory management}.
@end deftypefun
@deftypefun {gal_data_t *} gal_dimension_collapse_sclip_std (gal_data_t @code{*in}, size_t @code{c_dim}, float @code{multip}, float @code{param}, size_t @code{numthreads}, size_t @code{minmapsize}, int @code{quietmmap})
Collapse the input dataset (@code{in}) along the given dimension (@code{c_dim}, in C definition: starting from zero, from the slowest dimension), by finding the standard deviation of pixels along that dimension after sigma-clipping.
Since sigma-clipping involves sorting, this operator benefits from many threads (which needs to be set with @code{numthreads}).
For more on @code{minmapsize} and @code{quietmmap} see @ref{Memory management}.
For more on sigma clipping, see @ref{Sigma clipping}.
@end deftypefun
@deftypefun {gal_data_t *} gal_dimension_collapse_sclip_fill_std (gal_data_t @code{*in}, size_t @code{c_dim}, float @code{multip}, float @code{param}, size_t @code{numthreads}, size_t @code{minmapsize}, int @code{quietmmap})
Similar to @code{gal_dimension_collapse_sclip_std}, but with filled re-clipping (see @ref{Contiguous outliers}).
@end deftypefun
@deftypefun {gal_data_t *} gal_dimension_collapse_sclip_mad (gal_data_t @code{*in}, size_t @code{c_dim}, float @code{multip}, float @code{param}, size_t @code{numthreads}, size_t @code{minmapsize}, int @code{quietmmap})
Collapse the input dataset (@code{in}) along the given dimension (@code{c_dim}, in C definition: starting from zero, from the slowest dimension), by finding the median absolute deviation (MAD) of pixels along that dimension after sigma-clipping.
Since sigma-clipping involves sorting, this operator benefits from many threads (which needs to be set with @code{numthreads}).
For more on @code{minmapsize} and @code{quietmmap} see @ref{Memory management}.
For more on sigma clipping, see @ref{Sigma clipping}.
@end deftypefun
@deftypefun {gal_data_t *} gal_dimension_collapse_sclip_fill_mad (gal_data_t @code{*in}, size_t @code{c_dim}, float @code{multip}, float @code{param}, size_t @code{numthreads}, size_t @code{minmapsize}, int @code{quietmmap})
Similar to @code{gal_dimension_collapse_sclip_mad}, but with filled re-clipping (see @ref{Contiguous outliers}).
@end deftypefun
@deftypefun {gal_data_t *} gal_dimension_collapse_sclip_mean (gal_data_t @code{*in}, size_t @code{c_dim}, float @code{multip}, float @code{param}, size_t @code{numthreads}, size_t @code{minmapsize}, int @code{quietmmap})
Collapse the input dataset (@code{in}) along the given dimension (@code{c_dim}, in C definition: starting from zero, from the slowest dimension), by finding the mean of pixels along that dimension after sigma-clipping.
Since sigma-clipping involves sorting, this operator benefits from many threads (which needs to be set with @code{numthreads}).
For more on @code{minmapsize} and @code{quietmmap} see @ref{Memory management}.
For more on sigma clipping, see @ref{Sigma clipping}.
@end deftypefun
@deftypefun {gal_data_t *} gal_dimension_collapse_sclip_fill_mean (gal_data_t @code{*in}, size_t @code{c_dim}, float @code{multip}, float @code{param}, size_t @code{numthreads}, size_t @code{minmapsize}, int @code{quietmmap})
Similar to @code{gal_dimension_collapse_sclip_mean}, but with filled re-clipping (see @ref{Contiguous outliers}).
@end deftypefun
@deftypefun {gal_data_t *} gal_dimension_collapse_sclip_median (gal_data_t @code{*in}, size_t @code{c_dim}, float @code{multip}, float @code{param}, size_t @code{numthreads}, size_t @code{minmapsize}, int @code{quietmmap})
Collapse the input dataset (@code{in}) along the given dimension (@code{c_dim}, in C definition: starting from zero, from the slowest dimension), by finding the median of pixels along that dimension after sigma-clipping.
Since sigma-clipping involves sorting, this operator benefits from many threads (which needs to be set with @code{numthreads}).
For more on @code{minmapsize} and @code{quietmmap} see @ref{Memory management}.
For more on sigma clipping, see @ref{Sigma clipping}.
@end deftypefun
@deftypefun {gal_data_t *} gal_dimension_collapse_sclip_fill_median (gal_data_t @code{*in}, size_t @code{c_dim}, float @code{multip}, float @code{param}, size_t @code{numthreads}, size_t @code{minmapsize}, int @code{quietmmap})
Similar to @code{gal_dimension_collapse_sclip_median}, but with filled re-clipping (see @ref{Contiguous outliers}).
@end deftypefun
@deftypefun {gal_data_t *} gal_dimension_collapse_sclip_number (gal_data_t @code{*in}, size_t @code{c_dim}, float @code{multip}, float @code{param}, size_t @code{numthreads}, size_t @code{minmapsize}, int @code{quietmmap})
Collapse the input dataset (@code{in}) along the given dimension (@code{c_dim}, in C definition: starting from zero, from the slowest dimension), by finding the number of pixels along that dimension after sigma-clipping.
Since sigma-clipping involves sorting, this operator benefits from many threads (which needs to be set with @code{numthreads}).
For more on @code{minmapsize} and @code{quietmmap} see @ref{Memory management}.
For more on sigma clipping, see @ref{Sigma clipping}.
@end deftypefun
@deftypefun {gal_data_t *} gal_dimension_collapse_sclip_fill_number (gal_data_t @code{*in}, size_t @code{c_dim}, float @code{multip}, float @code{param}, size_t @code{numthreads}, size_t @code{minmapsize}, int @code{quietmmap})
Similar to @code{gal_dimension_collapse_sclip_number}, but with filled re-clipping (see @ref{Contiguous outliers}).
@end deftypefun
@deftypefun {gal_data_t *} gal_dimension_collapse_mclip_std (gal_data_t @code{*in}, size_t @code{c_dim}, float @code{multip}, float @code{param}, size_t @code{numthreads}, size_t @code{minmapsize}, int @code{quietmmap})
Collapse the input dataset (@code{in}) along the given dimension (@code{c_dim}, in C definition: starting from zero, from the slowest dimension), by finding the standard deviation of pixels along that dimension after median absolute deviation (MAD) clipping.
Since MAD-clipping involves sorting, this operator benefits from many threads (which needs to be set with @code{numthreads}).
For more on @code{minmapsize} and @code{quietmmap} see @ref{Memory management}.
For more on MAD-clipping, see @ref{MAD clipping}.
@end deftypefun
@deftypefun {gal_data_t *} gal_dimension_collapse_mclip_fill_std (gal_data_t @code{*in}, size_t @code{c_dim}, float @code{multip}, float @code{param}, size_t @code{numthreads}, size_t @code{minmapsize}, int @code{quietmmap})
Similar to @code{gal_dimension_collapse_mclip_std}, but with filled re-clipping (see @ref{Contiguous outliers}).
@end deftypefun
@deftypefun {gal_data_t *} gal_dimension_collapse_mclip_mad (gal_data_t @code{*in}, size_t @code{c_dim}, float @code{multip}, float @code{param}, size_t @code{numthreads}, size_t @code{minmapsize}, int @code{quietmmap})
Collapse the input dataset (@code{in}) along the given dimension (@code{c_dim}, in C definition: starting from zero, from the slowest dimension), by finding the median absolute deviation (MAD) of pixels along that dimension after median absolute deviation (MAD) clipping.
Since MAD-clipping involves sorting, this operator benefits from many threads (which needs to be set with @code{numthreads}).
For more on @code{minmapsize} and @code{quietmmap} see @ref{Memory management}.
For more on MAD-clipping, see @ref{MAD clipping}.
@end deftypefun
@deftypefun {gal_data_t *} gal_dimension_collapse_mclip_fill_mad (gal_data_t @code{*in}, size_t @code{c_dim}, float @code{multip}, float @code{param}, size_t @code{numthreads}, size_t @code{minmapsize}, int @code{quietmmap})
Similar to @code{gal_dimension_collapse_mclip_mad}, but with filled re-clipping (see @ref{Contiguous outliers}).
@end deftypefun
@deftypefun {gal_data_t *} gal_dimension_collapse_mclip_mean (gal_data_t @code{*in}, size_t @code{c_dim}, float @code{multip}, float @code{param}, size_t @code{numthreads}, size_t @code{minmapsize}, int @code{quietmmap})
Collapse the input dataset (@code{in}) along the given dimension (@code{c_dim}, in C definition: starting from zero, from the slowest dimension), by finding the mean of pixels along that dimension after median absolute deviation (MAD) clipping.
Since MAD-clipping involves sorting, this operator benefits from many threads (which needs to be set with @code{numthreads}).
For more on @code{minmapsize} and @code{quietmmap} see @ref{Memory management}.
For more on MAD-clipping, see @ref{MAD clipping}.
@end deftypefun
@deftypefun {gal_data_t *} gal_dimension_collapse_mclip_fill_mean (gal_data_t @code{*in}, size_t @code{c_dim}, float @code{multip}, float @code{param}, size_t @code{numthreads}, size_t @code{minmapsize}, int @code{quietmmap})
Similar to @code{gal_dimension_collapse_mclip_mean}, but with filled re-clipping (see @ref{Contiguous outliers}).
@end deftypefun
@deftypefun {gal_data_t *} gal_dimension_collapse_mclip_median (gal_data_t @code{*in}, size_t @code{c_dim}, float @code{multip}, float @code{param}, size_t @code{numthreads}, size_t @code{minmapsize}, int @code{quietmmap})
Collapse the input dataset (@code{in}) along the given dimension (@code{c_dim}, in C definition: starting from zero, from the slowest dimension), by finding the median of pixels along that dimension after median absolute deviation (MAD) clipping.
Since MAD-clipping involves sorting, this operator benefits from many threads (which needs to be set with @code{numthreads}).
For more on @code{minmapsize} and @code{quietmmap} see @ref{Memory management}.
For more on MAD-clipping, see @ref{MAD clipping}.
@end deftypefun
@deftypefun {gal_data_t *} gal_dimension_collapse_mclip_fill_median (gal_data_t @code{*in}, size_t @code{c_dim}, float @code{multip}, float @code{param}, size_t @code{numthreads}, size_t @code{minmapsize}, int @code{quietmmap})
Similar to @code{gal_dimension_collapse_mclip_median}, but with filled re-clipping (see @ref{Contiguous outliers}).
@end deftypefun
@deftypefun {gal_data_t *} gal_dimension_collapse_mclip_number (gal_data_t @code{*in}, size_t @code{c_dim}, float @code{multip}, float @code{param}, size_t @code{numthreads}, size_t @code{minmapsize}, int @code{quietmmap})
Collapse the input dataset (@code{in}) along the given dimension (@code{c_dim}, in C definition: starting from zero, from the slowest dimension), by finding the number of pixels along that dimension after median absolute deviation (MAD) clipping.
Since MAD-clipping involves sorting, this operator benefits from many threads (which needs to be set with @code{numthreads}).
For more on @code{minmapsize} and @code{quietmmap} see @ref{Memory management}.
For more on MAD-clipping, see @ref{MAD clipping}.
@end deftypefun
@deftypefun {gal_data_t *} gal_dimension_collapse_mclip_fill_number (gal_data_t @code{*in}, size_t @code{c_dim}, float @code{multip}, float @code{param}, size_t @code{numthreads}, size_t @code{minmapsize}, int @code{quietmmap})
Similar to @code{gal_dimension_collapse_mclip_number}, but with filled re-clipping (see @ref{Contiguous outliers}).
@end deftypefun
@deftypefun size_t gal_dimension_remove_extra (size_t @code{ndim}, size_t @code{*dsize}, struct wcsprm @code{*wcs})
Remove extra dimensions (those that only have a length of 1) from the basic size information of a dataset.
If the total number of elements (in all dimensions) is 1, this function will not remove anything, because an ``extra'' dimension is only meaningful when some dimensions have a length greater than one.
@code{ndim} is the number of dimensions and @code{dsize} is an array with @code{ndim} elements containing the size along each dimension in the C dimension order.
When @code{wcs!=NULL}, the respective dimension will also be removed from the WCS.
This function will return the new number of dimensions and the @code{dsize} elements will contain the length along each new dimension.
@end deftypefun
@deffn {Function-like macro} GAL_DIMENSION_NEIGHBOR_OP (@code{index}, @code{ndim}, @code{dsize}, @code{connectivity}, @code{dinc}, @code{operation})
Parse the neighbors of the element located at @code{index} and do the requested operation on them.
This is defined as a macro to allow easy definition of any operation on the neighbors of a given element without having to use loops within your source code (the loops are implemented by this macro).
For an example of using this function, please see @ref{Library demo - inspecting neighbors}.
The input arguments to this function-like macro are described below:
@table @code
@item index
Distance of this element from the first element in the array on a contiguous patch of memory (starting from 0); see the discussion above.
@item ndim
The number of dimensions associated with the contiguous patch of memory.
@item dsize
The full array size along each dimension.
This must be an array and is assumed to have the same number of elements as @code{ndim}.
See the discussion under the same element in @ref{Generic data container}.
@item connectivity
Most distant neighbors to consider.
Depending on the number of dimensions, different neighbors may be defined for each element.
This function-like macro distinguishes between these different neighbors with this argument.
It has a value between @code{1} (one) and @code{ndim}.
For example, in a 2D dataset, 4-connected neighbors have a connectivity of @code{1} and 8-connected neighbors have a connectivity of @code{2}.
Note that this is inclusive, so in this example, a connectivity of @code{2} will also include connectivity @code{1} neighbors.
@item dinc
An array keeping the length necessary to increment along each dimension.
You can make this array with the following function.
Just do not forget to free the array after you are done with it:
@example
size_t *dinc=gal_dimension_increment(ndim, dsize);
free(dinc);
@end example
Although @code{dinc} only depends on @code{ndim} and @code{dsize}, it must be defined outside this function-like macro: it involves memory allocation, which would hurt performance if repeated for every element.
@item operation
Any C operation that you would like to do on the neighbor.
This macro will fill the @code{nind} variable that can be used as the index of the neighbor that is currently being studied.
It is defined as `@code{size_t nind;}'.
Your given @code{operation} will be repeated once for every neighbor of this element.
See the example in @ref{Library demo - inspecting neighbors} for a fully working demo.
@end table
This macro works fully within its own @code{@{@}} block and except for the @code{nind} variable that shows the neighbor's index, all the variables within this macro's block start with @code{gdn_}.
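As a rough sketch of the calling pattern (assuming @code{index}, @code{ndim} and @code{dsize} are already defined and @code{array} is the input's @code{float} array), the sum of the 8-connected neighbors of one pixel in a 2D image can be found like this:
@example
float sum=0.0f;
size_t *dinc=gal_dimension_increment(ndim, dsize);
GAL_DIMENSION_NEIGHBOR_OP(index, ndim, dsize, 2, dinc,
                          @{sum += array[nind];@});
free(dinc);
@end example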
@end deffn
@node Linked lists, Array input output, Dimensions, Gnuastro library
@subsection Linked lists (@file{list.h})
@cindex Array
@cindex Linked list
An array is a contiguous region of memory that is very efficient and easy to use for recording and later accessing any random element as fast as any other.
This makes arrays the primary data container when you have many elements (for example, an image, which has millions of pixels).
One major problem with an array is that the number of elements that go into it must be known in advance, and adding or removing an element requires shifting all the elements after it.
For example, if you want to remove the 3rd element in a 1000-element array, all 997 subsequent elements have to be pulled back by one position; the reverse happens when you need to add an element.
In many contexts such situations never come up; for example, you do not want to shift all the pixels in an image by one or two pixels from some random position: their positions have scientific value.
But in other contexts you will find yourself frequently adding/removing an a-priori unknown number of elements.
Linked lists (or @emph{lists} for short) are the data-container of choice in such situations.
As in a chain, each @emph{node} in a list is an independent C structure, keeping its own data along with pointer(s) to its immediate neighbor(s).
Below, you can see one simple linked list node structure along with an ASCII art schematic of how we can use the @code{next} pointer to add any number of elements to the list that we want.
By convention, a list is terminated when @code{next} is the @code{NULL} pointer.
@c The second and last lines lines are pushed line space forward, because
@c the `@{' at the start of the line is only seen as `{' in the book.
@example
struct list_float /* --------- --------- */
@{ /* | Value | | Value | */
float value; /* | --- | | --- | */
struct list_float *next; /* | next-|--> | next-|--> NULL */
@} /* --------- --------- */
@end example
The schematic shows another great advantage of linked lists: it is very easy to add or remove/pop a node anywhere in the list.
If you want to modify the first node, you just have to change one pointer.
If it is in the middle, you just have to change two.
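For example, here is a minimal sketch of removing a node from the middle (assuming @code{prev} points to the node just before the one to remove; error checking omitted):
@example
struct list_float *del=prev->next; /* Node to remove.      */
prev->next=del->next;              /* Bypass the old node. */
free(del);                         /* Clean up its memory. */
@end example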
You initially define a variable of this type with a @code{NULL} pointer as shown below:
@example
struct list_float *list=NULL;
@end example
@noindent
To add or remove/pop a node from the list you can use functions provided
for the respective type in the sections below.
@cindex last-in-first-out
@cindex first-in-first-out
@noindent
When you add an element to the list, it is conventionally added to the ``top'' of the list: the general list pointer will point to the newly created node, which will point to the previously created node and so on.
So when you ``pop'' from the top of the list, you are actually retrieving the last value you put in and changing the list pointer to the next youngest node.
This is thus known as a ``last-in-first-out'' list.
This is the most efficient type of linked list (easier to implement and faster to process).
Alternatively, you can add each newly created node at the end of the list.
If you do that, you will get a ``first-in-first-out'' list.
But that will force you to go through the whole list for each new element that is created (this will slow down the processing)@footnote{A better way to get a first-in-first-out is to first keep the data as last-in-first-out until they are all read.
Afterwards, reverse the list by popping each node and immediately add it to the new list.
This practically reverses the last-in-first-out list to a first-in-first-out one.
All the list types discussed in this chapter have a function with a @code{_reverse} suffix for this job.}.
The node example above creates the simplest kind of list.
We can also define each node with two pointers, to both the next and previous neighbors; this is called a ``doubly linked list''.
In general, lists are very powerful and simple constructs that can be very useful.
But going into more detail would be out of the scope of this short introduction in this book.
@url{https://en.wikipedia.org/wiki/Linked_list, Wikipedia} has a nice and more thorough discussion of the various types of lists.
To appreciate/use the beauty and elegance of these powerful constructs even further, see Chapter 2 (Information Structures, in volume 1) of Donald Knuth's ``The art of computer programming''.
In this section we will review the functions and structures that are available in Gnuastro for working on lists.
They differ by the type of data that each node can keep.
For each linked-list node structure, we will first introduce the structure, then the functions for working on the structure.
All these structures and functions are defined and declared in @file{gnuastro/list.h}.
@menu
* List of strings:: Simply linked list of strings.
* List of int32_t:: Simply linked list of int32_ts.
* List of size_t:: Simply linked list of size_ts.
* List of float:: Simply linked list of floats.
* List of double:: Simply linked list of doubles.
* List of size_t and double:: Simply linked list of a size_t and double.
* List of two size_ts and a double:: Simply linked list of two size_ts and a double.
* List of void:: Simply linked list of void * pointers.
* Ordered list of size_t:: Simply linked, ordered list of size_t.
* Doubly linked ordered list of size_t:: Definition and functions.
* List of gal_data_t:: Simply linked list of Gnuastro's generic datatype.
@end menu
@node List of strings, List of int32_t, Linked lists, Linked lists
@subsubsection List of strings
Probably one of the most common lists you will be using is a list of strings.
They are the best tools when you are reading the user's inputs, or when adding comments to the output files.
Below you can see Gnuastro's string list type and several functions to help in adding, removing/popping, reversing and freeing the list.
@deftp {Type (C @code{struct})} gal_list_str_t
A single node in a list containing a string of characters.
@example
typedef struct gal_list_str_t
@{
char *v;
struct gal_list_str_t *next;
@} gal_list_str_t;
@end example
@end deftp
@deftypefun void gal_list_str_add (gal_list_str_t @code{**list}, char @code{*value}, int @code{allocate})
Add a new node to the list of strings (@code{list}) and update it.
The new node will contain the string @code{value}.
If @code{allocate} is not zero, space will be allocated specifically for the string of the new node and the contents of @code{value} will be copied into it.
This can be useful when your string may be changed later in the program, but you want your list to remain.
Here is one short/simple example of initializing and adding elements to a string list:
@example
gal_list_str_t *list=NULL;
gal_list_str_add(&list, "bottom of list.", 1);
gal_list_str_add(&list, "second last element of list.", 1);
@end example
@end deftypefun
@deftypefun {char *} gal_list_str_pop (gal_list_str_t @code{**list})
Pop the top element of @code{list}, change @code{list} to point to the next node in the list, and return the string that was in the popped node.
If @code{*list==NULL}, then this function will also return a @code{NULL} pointer.
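For example, a minimal sketch that pops, prints and frees every string of a list (assuming the strings were allocated, as in the example above):
@example
char *str;
while( (str=gal_list_str_pop(&list))!=NULL )
  @{
    printf("%s\n", str);
    free(str);
  @}
@end example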
@end deftypefun
@deftypefun size_t gal_list_str_number (gal_list_str_t @code{*list})
Return the number of nodes in @code{list}.
@end deftypefun
@deftypefun {gal_list_str_t *} gal_list_str_last (gal_list_str_t @code{*list})
Return a pointer to the last node in @code{list}.
@end deftypefun
@deftypefun void gal_list_str_print (gal_list_str_t @code{*list})
Print the strings within each node of @code{*list} on the standard output in the same order that they are stored.
Each string is printed on one line.
This function is mainly good for checking/debugging your program.
For program outputs, it is best to make your own implementation with a better, more user-friendly, format.
For example, see the following code snippet.
@example
size_t i=0;
gal_list_str_t *tmp;
for(tmp=list; tmp!=NULL; tmp=tmp->next)
printf("String %zu: %s\n", ++i, tmp->v);
@end example
@end deftypefun
@deftypefun void gal_list_str_reverse (gal_list_str_t @code{**list})
Reverse the order of the list such that the top node in the list before calling this function becomes the bottom node after it.
@end deftypefun
@deftypefun void gal_list_str_free (gal_list_str_t @code{*list}, int @code{freevalue})
Free every node in @code{list}.
If @code{freevalue} is not zero, also free the string within the nodes.
@end deftypefun
@deftypefun {gal_list_str_t *} gal_list_str_extract (char @code{*string})
Extract space-separated components of the input string.
If any space character should be kept (and not considered as a delimiter between two tokens), precede it with a backslash (@code{\}).
Note that in C source code, a single backslash within a string literal must itself be written as two backslashes (@code{"\\"}):
@example
gal_list_str_extract("bottom of\\ list");
@end example
@end deftypefun
@deftypefun {char *} gal_list_str_cat (gal_list_str_t @code{*list}, char @code{delimiter})
Concatenate (append) the input list of strings into a single string where each node is separated from the next with the given @code{delimiter}.
The space for the output string is allocated by this function and should be freed when you have finished with it.
If the delimiter character is present within any of the elements, a backslash (@code{\}) will be printed before it in the output string.
This is necessary; otherwise, a function like @code{gal_list_str_extract} would not be able to extract the elements back into separate nodes of a list.
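For example, a minimal sketch that joins the list built in the description of @code{gal_list_str_add} into one space-delimited string:
@example
char *cat=gal_list_str_cat(list, ' ');
printf("%s\n", cat);
free(cat);
@end example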
@end deftypefun
@node List of int32_t, List of size_t, List of strings, Linked lists
@subsubsection List of @code{int32_t}
Signed integers are the best types when you are dealing with positive or negative integers.
They are generally useful in many contexts, for example, when you want to keep the order of a series of states (each state stored as a given number in an @code{enum}, for example).
On many modern systems, @code{int32_t} is just an alias for @code{int}, so you can use them interchangeably.
To make sure, check the size of @code{int} on your system:
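@example
#include <stdio.h>

int main(void)
@{
  /* On most modern platforms this prints 4 (bytes): 32 bits. */
  printf("sizeof(int) = %zu\n", sizeof(int));
  return 0;
@}
@end example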
@deftp {Type (C @code{struct})} gal_list_i32_t
A single node in a list containing a 32-bit signed integer (see
@ref{Numeric data types}).
@example
typedef struct gal_list_i32_t
@{
int32_t v;
struct gal_list_i32_t *next;
@} gal_list_i32_t;
@end example
@end deftp
@deftypefun void gal_list_i32_add (gal_list_i32_t @code{**list}, int32_t @code{value})
Add a new node (containing @code{value}) to the top of the @code{list} of @code{int32_t}s (@code{int32_t} is equal to @code{int} on many modern systems), and update @code{list}.
Here is one short example of initializing and adding elements to such a list:
@example
gal_list_i32_t *list=NULL;
gal_list_i32_add(&list, 52);
gal_list_i32_add(&list, -4);
@end example
@end deftypefun
@deftypefun {int32_t} gal_list_i32_pop (gal_list_i32_t @code{**list})
Pop the top element of @code{list} and return the value.
This function will also change @code{list} to point to the next node in the list.
If @code{*list==NULL}, then this function will also return @code{GAL_BLANK_INT32} (see @ref{Library blank values}).
@end deftypefun
@deftypefun size_t gal_list_i32_number (gal_list_i32_t @code{*list})
Return the number of nodes in @code{list}.
@end deftypefun
@deftypefun {gal_list_i32_t *} gal_list_i32_last (gal_list_i32_t @code{*list})
Return a pointer to the last node in @code{list}.
@end deftypefun
@deftypefun void gal_list_i32_print (gal_list_i32_t @code{*list})
Print the integers within each node of @code{*list} on the standard output in the same order that they are stored.
Each integer is printed on one line.
This function is mainly good for checking/debugging your program.
For program outputs, it is best to make your own implementation with a better, more user-friendly format.
For example, see the following code snippet.
You can also modify it to print all values in one line, etc., depending on the context of your program.
@example
size_t i=0;
gal_list_i32_t *tmp;
for(tmp=list; tmp!=NULL; tmp=tmp->next)
printf("Number %zu: %s\n", ++i, tmp->v);
@end example
@end deftypefun
@deftypefun void gal_list_i32_reverse (gal_list_i32_t @code{**list})
Reverse the order of the list such that the top node in the list before
calling this function becomes the bottom node after it.
@end deftypefun
@deftypefun {int32_t *} gal_list_i32_to_array (gal_list_i32_t @code{*list}, int @code{reverse}, size_t @code{*num})
Dynamically allocate an array and fill it with the values in @code{list}.
The function will return a pointer to the allocated array and put the number of elements in the array into the @code{num} pointer.
If @code{reverse} has a non-zero value, the array will be filled in the opposite order of elements in @code{list}.
This function can be useful after you have finished reading an initially unknown number of values and want to put them in an array for easy random access.
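For example, a minimal sketch (where @code{list} is an already-built list of @code{int32_t}s):
@example
size_t i, num;
int32_t *arr=gal_list_i32_to_array(list, 0, &num);
for(i=0; i<num; ++i)
  printf("arr[%zu] = %d\n", i, arr[i]);
free(arr);
@end example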
@end deftypefun
@deftypefun void gal_list_i32_free (gal_list_i32_t @code{*list})
Free every node in @code{list}.
@end deftypefun
@node List of size_t, List of float, List of int32_t, Linked lists
@subsubsection List of @code{size_t}
The @code{size_t} type is a unique type in C: as the name suggests it is defined to store sizes, or more accurately, the distances between memory locations.
Hence it is always positive (an @code{unsigned} type) and it is directly related to the addressable space on the host system: on 32-bit and 64-bit systems it is an alias for @code{uint32_t} and @code{uint64_t}, respectively (see @ref{Numeric data types}).
@code{size_t} is the default compiler type to index an array (recall that an array index in C is just a pointer increment of a given @emph{size}).
Since it is unsigned, it is a great type for counting (where negative is not defined), you are always sure it will never exceed the system's (virtual) memory and since its name has the word ``size'' inside it, it provides a good level of documentation@footnote{So you know that a variable of this type is not used to store some generic state for example.}.
In Gnuastro, we do all counting and array indexing with this type, so this list is very handy.
As discussed above, @code{size_t} maps to different types on different machines, so a portable way to print them with @code{printf} is to use C99's @code{%zu} format.
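For example:
@example
size_t counter=100;
printf("Counter: %zu\n", counter);
@end example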
@deftp {Type (C @code{struct})} gal_list_sizet_t
A single node in a list containing a @code{size_t} value (which maps to
@code{uint32_t} or @code{uint64_t} on 32-bit and 64-bit systems), see
@ref{Numeric data types}.
@example
typedef struct gal_list_sizet_t
@{
size_t v;
struct gal_list_sizet_t *next;
@} gal_list_sizet_t;
@end example
@end deftp
@deftypefun void gal_list_sizet_add (gal_list_sizet_t @code{**list}, size_t @code{value})
Add a new node (containing @code{value}) to the top of the @code{list} of @code{size_t}s and update @code{list}.
Here is one short example of initializing and adding elements to such a list:
@example
gal_list_sizet_t *list=NULL;
gal_list_sizet_add(&list, 45493);
gal_list_sizet_add(&list, 930484);
@end example
@end deftypefun
@deftypefun {size_t} gal_list_sizet_pop (gal_list_sizet_t @code{**list})
Pop the top element of @code{list} and return the value.
This function will also change @code{list} to point to the next node in the list.
If @code{*list==NULL}, then this function will also return @code{GAL_BLANK_SIZE_T} (see @ref{Library blank values}).
@end deftypefun
@deftypefun size_t gal_list_sizet_number (gal_list_sizet_t @code{*list})
Return the number of nodes in @code{list}.
@end deftypefun
@deftypefun {gal_list_sizet_t *} gal_list_sizet_last (gal_list_sizet_t @code{*list})
Return a pointer to the last node in @code{list}.
@end deftypefun
@deftypefun void gal_list_sizet_print (gal_list_sizet_t @code{*list})
Print the values within each node of @code{*list} on the standard output in the same order that they are stored.
Each integer is printed on one line.
This function is mainly good for checking/debugging your program.
For program outputs, it is best to make your own implementation with a better, more user-friendly format.
For example, see the following code snippet.
You can also modify it to print all values in one line, etc., depending on the context of your program.
@example
size_t i=0;
gal_list_sizet_t *tmp;
for(tmp=list; tmp!=NULL; tmp=tmp->next)
printf("Number %zu: %zu\n", ++i, tmp->v);
@end example
@end deftypefun
@deftypefun void gal_list_sizet_reverse (gal_list_sizet_t @code{**list})
Reverse the order of the list such that the top node in the list before
calling this function becomes the bottom node after it.
@end deftypefun
@deftypefun {size_t *} gal_list_sizet_to_array (gal_list_sizet_t @code{*list}, int @code{reverse}, size_t @code{*num})
Dynamically allocate an array and fill it with the values in @code{list}.
The function will return a pointer to the allocated array and put the number of elements in the array into the @code{num} pointer.
If @code{reverse} has a non-zero value, the array will be filled in the inverse of the order of elements in @code{list}.
This function can be useful after you have finished reading an initially unknown number of values and want to put them in an array for easy random access.
@end deftypefun
@deftypefun void gal_list_sizet_free (gal_list_sizet_t @code{*list})
Free every node in @code{list}.
@end deftypefun
@node List of float, List of double, List of size_t, Linked lists
@subsubsection List of @code{float}
Single precision floating point numbers can accurately store real numbers up to about 7.2 decimal digits and only consume 4 bytes (32 bits) of memory, see @ref{Numeric data types}.
Since astronomical data rarely reach that level of precision, single precision floating points are the type of choice for storing and reading data.
However, when processing the data, it is best to use double precision floating points (since errors propagate).
@deftp {Type (C @code{struct})} gal_list_f32_t
A single node in a list containing a 32-bit single precision @code{float}
value: see @ref{Numeric data types}.
@example
typedef struct gal_list_f32_t
@{
float v;
struct gal_list_f32_t *next;
@} gal_list_f32_t;
@end example
@end deftp
@deftypefun void gal_list_f32_add (gal_list_f32_t @code{**list}, float @code{value})
Add a new node (containing @code{value}) to the top of the @code{list} of @code{float}s and update @code{list}.
Here is one short example of initializing and adding elements to such a list:
@example
gal_list_f32_t *list=NULL;
gal_list_f32_add(&list, 3.89);
gal_list_f32_add(&list, 1.23e-20);
@end example
@end deftypefun
@deftypefun {float} gal_list_f32_pop (gal_list_f32_t @code{**list})
Pop the top element of @code{list} and return the value.
This function will also change @code{list} to point to the next node in the list.
If @code{*list==NULL}, then this function will return @code{GAL_BLANK_FLOAT32} (NaN, see @ref{Library blank values}).
@end deftypefun
@deftypefun size_t gal_list_f32_number (gal_list_f32_t @code{*list})
Return the number of nodes in @code{list}.
@end deftypefun
@deftypefun {gal_list_f32_t *} gal_list_f32_last (gal_list_f32_t @code{*list})
Return a pointer to the last node in @code{list}.
@end deftypefun
@deftypefun void gal_list_f32_print (gal_list_f32_t @code{*list})
Print the values within each node of @code{*list} on the standard output in the same order that they are stored.
Each floating point number is printed on one line.
This function is mainly good for checking/debugging your program.
For program outputs, it is best to make your own implementation with a better, more user-friendly format.
For example, see the following code snippet.
You can also modify it to print all values in one line, etc., depending on the context of your program.
@example
size_t i=0;
gal_list_f32_t *tmp;
for(tmp=list; tmp!=NULL; tmp=tmp->next)
printf("Number %zu: %f\n", ++i, tmp->v);
@end example
@end deftypefun
@deftypefun void gal_list_f32_reverse (gal_list_f32_t @code{**list})
Reverse the order of the list such that the top node in the list before
calling this function becomes the bottom node after it.
@end deftypefun
@deftypefun {float *} gal_list_f32_to_array (gal_list_f32_t @code{*list}, int @code{reverse}, size_t @code{*num})
Dynamically allocate an array and fill it with the values in @code{list}.
The function will return a pointer to the allocated array and put the number of elements in the array into the @code{num} pointer.
If @code{reverse} has a non-zero value, the array will be filled in the inverse of the order of elements in @code{list}.
This function can be useful after you have finished reading an initially unknown number of values and want to put them in an array for easy random access.
@end deftypefun
@deftypefun void gal_list_f32_free (gal_list_f32_t @code{*list})
Free every node in @code{list}.
@end deftypefun
@node List of double, List of size_t and double, List of float, Linked lists
@subsubsection List of @code{double}
Double precision floating point numbers can accurately store real numbers up to about 15.9 decimal digits and consume 8 bytes (64 bits) of memory, see @ref{Numeric data types}.
This level of precision makes them very good for serious processing in the middle of a program's execution: in many cases, the propagation of errors will still be insignificant compared to actual observational errors in a data set.
But since they consume 8 bytes and more CPU processing power, they are often not the best choice for storing and transferring data.
@deftp {Type (C @code{struct})} gal_list_f64_t
A single node in a list containing a 64-bit double precision @code{double}
value: see @ref{Numeric data types}.
@example
typedef struct gal_list_f64_t
@{
double v;
struct gal_list_f64_t *next;
@} gal_list_f64_t;
@end example
@end deftp
@deftypefun void gal_list_f64_add (gal_list_f64_t @code{**list}, double @code{value})
Add a new node (containing @code{value}) to the top of the @code{list} of @code{double}s and update @code{list}.
Here is one short example of initializing and adding elements to such a list:
@example
gal_list_f64_t *list=NULL;
gal_list_f64_add(&list, 3.8129395763193);
gal_list_f64_add(&list, 1.239378923931e-20);
@end example
@end deftypefun
@deftypefun {double} gal_list_f64_pop (gal_list_f64_t @code{**list})
Pop the top element of @code{list} and return the value.
This function will also change @code{list} to point to the next node in the list.
If @code{*list==NULL}, then this function will return @code{GAL_BLANK_FLOAT64} (NaN, see @ref{Library blank values}).
@end deftypefun
@deftypefun size_t gal_list_f64_number (gal_list_f64_t @code{*list})
Return the number of nodes in @code{list}.
@end deftypefun
@deftypefun {gal_list_f64_t *} gal_list_f64_last (gal_list_f64_t @code{*list})
Return a pointer to the last node in @code{list}.
@end deftypefun
@deftypefun void gal_list_f64_print (gal_list_f64_t @code{*list})
Print the values within each node of @code{*list} on the standard output in the same order that they are stored.
Each floating point number is printed on one line.
This function is mainly good for checking/debugging your program.
For program outputs, it is best to make your own implementation with a better, more user-friendly format.
For example, see the following code snippet.
You can also modify it to print all values in one line, etc., depending on the context of your program.
@example
size_t i=0;
gal_list_f64_t *tmp;
for(tmp=list; tmp!=NULL; tmp=tmp->next)
printf("Number %zu: %f\n", ++i, tmp->v);
@end example
@end deftypefun
@deftypefun void gal_list_f64_reverse (gal_list_f64_t @code{**list})
Reverse the order of the list such that the top node in the list before calling this function becomes the bottom node after it.
@end deftypefun
@deftypefun {double *} gal_list_f64_to_array (gal_list_f64_t @code{*list}, int @code{reverse}, size_t @code{*num})
Dynamically allocate an array and fill it with the values in @code{list}.
The function will return a pointer to the allocated array and put the number of elements in the array into the @code{num} pointer.
If @code{reverse} has a non-zero value, the array will be filled in the inverse of the order of elements in @code{list}.
This function can be useful after you have finished reading an initially unknown number of values and want to put them in an array for easy random access.
@end deftypefun
@deftypefun {gal_data_t *} gal_list_f64_to_data (gal_list_f64_t @code{*list}, uint8_t @code{type}, size_t @code{minmapsize}, int @code{quietmmap})
Write the values in the given @code{list} into a @code{gal_data_t} dataset of the requested @code{type}.
The order of the values in the dataset will be the same as the order from the top of the list.
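For example, a minimal sketch that writes an already-built list of doubles into a 32-bit floating point dataset:
@example
gal_data_t *data;
data=gal_list_f64_to_data(list, GAL_TYPE_FLOAT32, -1, 1);
@end example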
@end deftypefun
@deftypefun void gal_list_f64_free (gal_list_f64_t @code{*list})
Free every node in @code{list}.
@end deftypefun
@node List of size_t and double, List of two size_ts and a double, List of double, Linked lists
@subsubsection List of @code{size_t} and @code{double}
The list structure below is useful when you need to keep floating point values and indices (most common usage of @code{size_t} in Gnuastro) together within a higher-level program.
@deftp {Type (C @code{struct})} gal_list_sizetf64_t
Each node in this singly-linked list contains a @code{size_t} value and a double-precision floating point value.
@end deftp
@example
typedef struct gal_list_sizetf64_t
@{
size_t i;
double v;
struct gal_list_sizetf64_t *next;
@} gal_list_sizetf64_t;
@end example
@deftypefun void gal_list_sizetf64_add (gal_list_sizetf64_t @code{**list}, size_t @code{index}, double @code{value})
Allocate space for a new node in @code{list}, and store @code{index} and @code{value} into it.
As a simply-linked list, the added node will be placed at the top of the list.
@end deftypefun
@deftypefun void gal_list_sizetf64_pop (gal_list_sizetf64_t @code{**list}, size_t @code{*index}, double @code{*value})
Pop a node from the top of @code{list}, return the node's index and value in the respective places that @code{index} and @code{value} point to.
This function will also free the allocated space of the popped node; after this function, @code{list} will point to the next node in the list.
@end deftypefun
@deftypefun void gal_list_sizetf64_free (gal_list_sizetf64_t @code{**list})
Free any element within the input list.
@end deftypefun
@node List of two size_ts and a double, List of void, List of size_t and double, Linked lists
@subsubsection List of two @code{size_t}s and a @code{double}
When it is necessary to keep two indices and a double value, the structure and respective functions below can be used.
@deftp {Type (C @code{struct})} gal_list_sizetsizetf64_t
Each node in this singly-linked list contains two @code{size_t} values and a double-precision floating point value.
@end deftp
@example
typedef struct gal_list_sizetsizetf64_t
@{
  size_t i;
  size_t j;
  double v;
  struct gal_list_sizetsizetf64_t *next;
@} gal_list_sizetsizetf64_t;
@end example
@deftypefun void gal_list_sizetsizetf64_add (gal_list_sizetsizetf64_t @code{**list}, size_t @code{i}, size_t @code{j}, double @code{v})
Allocate space for a new node in @code{list}, and store @code{i}, @code{j} and @code{v} into it.
As a simply-linked list, the added node will be placed at the top of the list.
@end deftypefun
@deftypefun void gal_list_sizetsizetf64_pop (gal_list_sizetsizetf64_t @code{**list}, size_t @code{*i}, size_t @code{*j}, double @code{*v})
Pop a node from the top of @code{list}, return the node's index and value in the respective places that @code{i}, @code{j} and @code{v} point to.
This function will also free the allocated space of the popped node; after this function, @code{list} will point to the next node in the list.
@end deftypefun
@deftypefun void gal_list_sizetsizetf64_free (gal_list_sizetsizetf64_t @code{**list})
Free any element within the input list.
@end deftypefun
@node List of void, Ordered list of size_t, List of two size_ts and a double, Linked lists
@subsubsection List of @code{void *}
In C, @code{void *} is the most generic pointer.
Usually pointers are associated with the type of content they point to.
For example, @code{int *} means a pointer to an integer.
This ancillary information about the contents of the memory location is very useful for the compiler, catching bad errors and also documentation (it helps the reader see what the address in memory actually contains).
However, @code{void *} is just a raw address (pointer), it contains no information on the contents it points to.
These properties make the @code{void *} very useful when you want to treat the contents of an address in different ways.
You can use the @code{void *} list defined in this section and its function on any kind of data: for example, you can use it to keep a list of custom data structures that you have built for your own separate program.
Each node in the list can keep anything and this gives you great versatility.
But in using @code{void *}, please beware that ``with great power comes great responsibility''.
@deftp {Type (C @code{struct})} gal_list_void_t
A single node in a list containing a @code{void *} pointer.
@example
typedef struct gal_list_void_t
@{
void *v;
struct gal_list_void_t *next;
@} gal_list_void_t;
@end example
@end deftp
@deftypefun void gal_list_void_add (gal_list_void_t @code{**list}, void @code{*value})
Add a new node (containing @code{value}) to the top of the @code{list} of @code{void *}s and update @code{list}.
Here is one short example of initializing and adding elements to such a list:
@example
gal_list_void_t *list=NULL;
gal_list_void_add(&list, some_pointer);
gal_list_void_add(&list, another_pointer);
@end example
@end deftypefun
@deftypefun {void *} gal_list_void_pop (gal_list_void_t @code{**list})
Pop the top element of @code{list} and return the value.
This function will also change @code{list} to point to the next node in the list.
If @code{*list==NULL}, then this function will return @code{NULL}.
@end deftypefun
@deftypefun size_t gal_list_void_number (gal_list_void_t @code{*list})
Return the number of nodes in @code{list}.
@end deftypefun
@deftypefun {gal_list_void_t *} gal_list_void_last (gal_list_void_t @code{*list})
Return a pointer to the last node in @code{list}.
@end deftypefun
@deftypefun void gal_list_void_reverse (gal_list_void_t @code{**list})
Reverse the order of the list such that the top node in the list before calling this function becomes the bottom node after it.
@end deftypefun
@deftypefun void gal_list_void_free (gal_list_void_t @code{*list})
Free every node in @code{list}.
@end deftypefun
@node Ordered list of size_t, Doubly linked ordered list of size_t, List of void, Linked lists
@subsubsection Ordered list of @code{size_t}
Positions/sizes in a dataset are conventionally stored in the @code{size_t} type (see @ref{List of size_t}) and it sometimes occurs that you want to parse and read the values in a specific order.
For example, you may want to start from one pixel and add pixels to the list based on their distance to that pixel, so that every time you pop an element from the list, you know it is the nearest one that has not yet been studied.
The @code{gal_list_osizet_t} type and its functions in this section are designed to facilitate such operations.
@deftp {Type (C @code{struct})} gal_list_osizet_t
@cindex @code{size_t}
Each node in this singly-linked list contains a @code{size_t} value and a floating point value.
The floating point value is used as a reference to add new nodes in a sorted manner.
At any moment, the first popped node in this list will have the smallest @code{tosort} value, and subsequent nodes will have larger @code{tosort} values.
@end deftp
@example
typedef struct gal_list_osizet_t
@{
size_t v; /* The actual value. */
float s; /* The parameter to sort by. */
struct gal_list_osizet_t *next;
@} gal_list_osizet_t;
@end example
@deftypefun void gal_list_osizet_add (gal_list_osizet_t @code{**list}, size_t @code{value}, float @code{tosort})
Allocate space for a new node in @code{list}, and store @code{value} and @code{tosort} into it.
The new node will not necessarily be at the ``top'' of the list.
If @code{*list!=NULL}, then the @code{tosort} values of existing nodes is inspected and the given node is placed in the list such that the top element (which is popped with @code{gal_list_osizet_pop}) has the smallest @code{tosort} value.
@end deftypefun
@deftypefun size_t gal_list_osizet_pop (gal_list_osizet_t @code{**list}, float @code{*sortvalue})
Pop a node from the top of @code{list}, return the node's @code{value} and put its sort value in the space that @code{sortvalue} points to.
This function will also free the allocated space for the popped node and after this function, @code{list} will point to the next node (which has a larger @code{tosort} element).
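For example, a minimal sketch that queues two pixel indices by their distance to a reference pixel and pops the nearest one first:
@example
float dist;
size_t nearest;
gal_list_osizet_t *queue=NULL;
gal_list_osizet_add(&queue, 100, 3.2);      /* value, tosort  */
gal_list_osizet_add(&queue, 501, 1.1);
nearest=gal_list_osizet_pop(&queue, &dist); /* 501 (dist=1.1) */
@end example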
@end deftypefun
@deftypefun void gal_list_osizet_to_sizet_free (gal_list_osizet_t @code{*in}, gal_list_sizet_t @code{**out})
Convert the ordered list of @code{size_t}s into an ordinary @code{size_t} linked list.
This can be useful when all the elements have been added and you just need to pop-out elements and do not care about the sorting values any more.
After the conversion is done, this function will free the input list.
Note that the @code{out} list does not have to be empty.
If it already contains some nodes, the new nodes will be added on top of them.
@end deftypefun
@node Doubly linked ordered list of size_t, List of gal_data_t, Ordered list of size_t, Linked lists
@subsubsection Doubly linked ordered list of @code{size_t}
An ordered list of indices is required in many contexts, one example was discussed at the beginning of @ref{Ordered list of size_t}.
But the list that was introduced there only has one point of entry: you can always only parse the list from smallest to largest.
In this section, the doubly-linked @code{gal_list_dosizet_t} node is defined which will allow us to parse the values in ascending or descending order.
@deftp {Type (C @code{struct})} gal_list_dosizet_t
@cindex @code{size_t}
Doubly-linked, ordered @code{size_t} list node structure.
Each node in this Doubly-linked list contains a @code{size_t} value and a floating point value.
The floating point value is used as a reference to add new nodes in a sorted manner.
In the functions here, this linked list can be pointed to by two pointers (largest and smallest) with the following format:
@example
largest pointer
|
NULL <-- (v0,s0) <--> (v1,s1) <--> ... (vn,sn) --> NULL
|
smallest pointer
@end example
At any moment, the two pointers will point to the nodes containing the ``largest'' and ``smallest'' values and the rest of the nodes will be sorted.
This is useful when an unknown number of nodes are being added continuously and during the operations it is important to have the nodes in a sorted format.
@example
typedef struct gal_list_dosizet_t
@{
size_t v; /* The actual value. */
float s; /* The parameter to sort by. */
struct gal_list_dosizet_t *prev;
struct gal_list_dosizet_t *next;
@} gal_list_dosizet_t;
@end example
@end deftp
@deftypefun void gal_list_dosizet_add (gal_list_dosizet_t @code{**largest}, gal_list_dosizet_t @code{**smallest}, size_t @code{value}, float @code{tosort})
Allocate space for a new node in the doubly linked list, and store @code{value} and @code{tosort} into it.
If the list is empty, both @code{largest} and @code{smallest} must be @code{NULL}.
@end deftypefun
@deftypefun size_t gal_list_dosizet_pop_smallest (gal_list_dosizet_t @code{**largest}, gal_list_dosizet_t @code{**smallest}, float @code{*tosort})
Pop the value with the smallest reference from the doubly linked list and store the reference into the space pointed to by @code{tosort}.
Note that even though only the smallest pointer will be popped, when there was only one node in the list, the @code{largest} pointer also has to change, so we need both.
@end deftypefun
@deftypefun void gal_list_dosizet_print (gal_list_dosizet_t @code{*largest}, gal_list_dosizet_t @code{*smallest})
Print the largest and smallest values sequentially until the list is parsed.
@end deftypefun
@deftypefun void gal_list_dosizet_to_sizet (gal_list_dosizet_t @code{*in}, gal_list_sizet_t @code{**out})
Convert the doubly linked, ordered @code{size_t} list into a singly-linked list of @code{size_t}.
@end deftypefun
@deftypefun void gal_list_dosizet_free (gal_list_dosizet_t @code{*largest})
Free the doubly linked, ordered @code{size_t} list.
@end deftypefun
@node List of gal_data_t, , Doubly linked ordered list of size_t, Linked lists
@subsubsection List of @code{gal_data_t}
Gnuastro's generic data container has a @code{next} element which enables it to be used as a singly-linked list (see @ref{Generic data container}).
The ability to connect the different data containers offers great advantages.
For example, each column in a table is an independent dataset: with its own name, units and numeric data type (see @ref{Numeric data types}).
Another application is in tessellating an input dataset into separate tiles, or only studying particular regions (tiles) of a larger dataset (see @ref{Tessellation} and @ref{Tessellation library}).
Each independent tile over the dataset can be connected to the others as a linked list and thus any number of tiles can be represented with one variable.
@deftypefun void gal_list_data_add (gal_data_t @code{**list}, gal_data_t @code{*newnode})
Add an already allocated dataset (@code{newnode}) to the top of @code{list}.
Note that if @code{newnode->next!=NULL} (@code{newnode} is itself a list), then @code{list} will be added to its end.
In this example multiple images are linked together as a list:
@example
int quietmmap=1;
size_t minmapsize=-1;
gal_data_t *tmp, *list=NULL;
tmp = gal_fits_img_read("file1.fits", "1", minmapsize, quietmmap,
NULL);
gal_list_data_add( &list, tmp );
tmp = gal_fits_img_read("file2.fits", "1", minmapsize, quietmmap,
NULL);
gal_list_data_add( &list, tmp );
@end example
@end deftypefun
@deftypefun void gal_list_data_add_alloc (gal_data_t @code{**list}, void @code{*array}, uint8_t @code{type}, size_t @code{ndim}, size_t @code{*dsize}, struct wcsprm @code{*wcs}, int @code{clear}, size_t @code{minmapsize}, int @code{quietmmap}, char @code{*name}, char @code{*unit}, char @code{*comment})
Allocate a new dataset (with @code{gal_data_alloc} in @ref{Dataset allocation}) and put it as the first element of @code{list}.
Note that if this is the first node to be added to the list, @code{list} must be @code{NULL}.
@end deftypefun
@deftypefun {gal_data_t *} gal_list_data_pop (gal_data_t @code{**list})
Pop the top node from @code{list} and return it.
@end deftypefun
@deftypefun void gal_list_data_remove (gal_data_t @code{**list}, gal_data_t @code{*node})
Remove @code{node} from the given @code{list}.
After finding the given node, this function will just set @code{node->next=NULL} and correct the @code{next} node of its previous element to its next element (thus ``removing'' it from the list).
If @code{node} does not exist in the list, this function will not make any change to the list.
@end deftypefun
@deftypefun {gal_data_t *} gal_list_data_select_by_name (gal_data_t @code{*list}, char @code{*name})
Select the dataset within the list that has a @code{name} element identical (case-sensitive) to the given @code{name}.
If not found, a @code{NULL} pointer will be returned.
Note that this dataset will not be popped from the list, only a pointer to it will be returned and if you free it or change its @code{next} element, it may harm your original list.
@end deftypefun
@deftypefun {gal_data_t *} gal_list_data_select_by_id (gal_data_t @code{*table}, char @code{*idstr}, size_t @code{*index})
Select the dataset within the list that can be identified with the string given to @code{idstr} (which can be a counter, starting from 1, or a name).
If not found, a @code{NULL} pointer will be returned.
Note that this dataset will not be popped from the list, only a pointer to it will be returned and if you free it or change its @code{next} element, it may harm your original list.
@end deftypefun
@deftypefun void gal_list_data_reverse (gal_data_t @code{**list})
Reverse the order of the list such that the top node in the list before calling this function becomes the bottom node after it.
@end deftypefun
@deftypefun {gal_data_t **} gal_list_data_to_array_ptr (gal_data_t @code{*list}, size_t @code{*num})
Allocate and return an array of @code{gal_data_t *} pointers with the same number of elements as the nodes in @code{list}.
The pointers will be put in the same order that the list is parsed.
Hence the N-th element in the array will point to the same dataset that the N-th node in the list points to.
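For example, a minimal sketch for random access to the nodes of an already-built @code{list}:
@example
size_t i, num;
gal_data_t **arr=gal_list_data_to_array_ptr(list, &num);
for(i=0; i<num; ++i)
  printf("Dataset %zu has %zu elements.\n", i+1, arr[i]->size);
free(arr);
@end example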
@end deftypefun
@deftypefun size_t gal_list_data_number (gal_data_t @code{*list})
Return the number of nodes in @code{list}.
@end deftypefun
@deftypefun {gal_data_t *} gal_list_data_last (gal_data_t @code{*list})
Return a pointer to the last node in @code{list}.
@end deftypefun
@deftypefun void gal_list_data_free (gal_data_t @code{*list})
Free all the datasets in @code{list} along with all the allocated spaces in
each.
@end deftypefun
@node Array input output, Table input output, Linked lists, Gnuastro library
@subsection Array input output
Getting arrays (commonly images or cubes) from a file into your program or writing them after the processing into an output file are some of the most common operations.
The functions in this section are designed for such operations with the known file types.
The functions here are thus just wrappers around the lower-level, file-type specific functions of this library, for example, @ref{FITS files} or @ref{TIFF files}.
If the file type of the input/output file is already known, you can use the functions in those sections respectively.
@deftypefun int gal_array_name_recognized (char @code{*filename})
Return 1 if the given file name corresponds to one of the recognized file
types for reading arrays.
@end deftypefun
@deftypefun int gal_array_name_recognized_multiext (char @code{*filename})
Return 1 if the given file name corresponds to one of the recognized file
types for reading arrays that may contain multiple extensions (for example,
FITS or TIFF).
@end deftypefun
@deftypefun int gal_array_file_recognized (char @code{*filename})
Similar to @code{gal_array_name_recognized}, but for FITS files, it will also check the contents of the file if the recognized file name suffix is not found.
See the description of @code{gal_fits_file_recognized} for more (@ref{FITS macros errors filenames}).
@end deftypefun
@deftypefun {gal_data_t *} gal_array_read (char @code{*filename}, char @code{*extension}, gal_list_str_t @code{*lines}, size_t @code{minmapsize}, int @code{quietmmap}, char @code{*hdu_option_name})
Read the array within the given extension (@code{extension}) of @code{filename}, or the @code{lines} list (see below).
If the array is larger than @code{minmapsize} bytes, it will not be read into RAM, but into a memory-mapped file on the HDD/SSD (this makes no difference for the programmer).
Messages about the memory-mapped file can be disabled with @code{quietmmap}.
@code{extension} will be ignored for files that do not support them (for example JPEG or text).
For FITS files, @code{extension} can be a number or a string (name of the extension), but for TIFF files, it has to be a number.
In both cases, counting starts from zero.
For multi-channel formats (like RGB images in JPEG or TIFF), this function will return a @ref{List of gal_data_t}: one data structure per channel.
Thus if you just want a single array (and want to check if the user has not given a multi-channel input), you can check the @code{next} pointer of the returned @code{gal_data_t}.
@code{lines} is a list of strings with each node representing one line (including the new-line character), see @ref{List of strings}.
It will mostly be the output of @code{gal_txt_stdin_read}, which is used to read the program's input as separate lines from the standard input (see @ref{Text files}).
Note that @code{filename} and @code{lines} are mutually exclusive and one of them must be @code{NULL}.
@code{hdu_option_name} is used in error messages related to extensions (e.g., HDUs in FITS) and is expected to be the name of the command-line option with which users of your program can select the HDU of this particular input; for example, if the input is a kernel, the option may be @option{--khdu}.
If the given @code{extension} does not exist in @file{filename}, a descriptive error message is printed, instructing users how to find and fix the problem.
The error message will also mention this option name, so users know which option to use for specifying a different HDU.
If your program has no such user-configurable option, you can set this argument to @code{NONE} or @code{NULL}.
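For example, a minimal sketch that reads the second HDU (counting from zero) of a hypothetical @file{input.fits} (where @option{--myhdu} is a hypothetical option name of your program) and warns on multi-channel inputs:
@example
gal_data_t *image=gal_array_read("input.fits", "1", NULL,
                                 -1, 1, "--myhdu");
if(image->next!=NULL)
  printf("input.fits: more than one channel!\n");
@end example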
@end deftypefun
@deftypefun {gal_data_t *} gal_array_read_to_type (char @code{*filename}, char @code{*extension}, gal_list_str_t @code{*lines}, uint8_t @code{type}, size_t @code{minmapsize}, int @code{quietmmap}, char @code{*hdu_option_name})
Similar to @code{gal_array_read}, but the output data structure(s) will
have a numeric data type of @code{type}, see @ref{Numeric data types}.
@end deftypefun
@deftypefun {gal_data_t *} gal_array_read_one_ch (char @code{*filename}, char @code{*extension}, gal_list_str_t @code{*lines}, size_t @code{minmapsize}, int @code{quietmmap}, char @code{*hdu_option_name})
@cindex Channel
@cindex Color channel
Read the dataset within @code{filename} (extension/hdu/dir @code{extension}) and make sure it is only a single channel.
This is just a simple wrapper around @code{gal_array_read} that aborts with an informative error if the dataset contains more than one channel.
Formats like JPEG or TIFF support multiple channels per input, but it may happen that your program only works on a single dataset.
This function can be a convenient way to make sure that the data that comes into your program is only one channel.
Regarding @code{*hdu_option_name}, see the description for the same argument in the description of @code{gal_array_read}.
@end deftypefun
@deftypefun {gal_data_t *} gal_array_read_one_ch_to_type (char @code{*filename}, char @code{*extension}, gal_list_str_t @code{*lines}, uint8_t @code{type}, size_t @code{minmapsize}, int @code{quietmmap}, char @code{*hdu_option_name})
Similar to @code{gal_array_read_one_ch}, but the output data structure will
have a numeric data type of @code{type}, see @ref{Numeric data types}.
@end deftypefun
@node Table input output, FITS files, Array input output, Gnuastro library
@subsection Table input output (@file{table.h})
Tables are a collection of one dimensional datasets that are packed together into one file.
They are the single most common format to store high-level (processed) information, hence they play a very important role in Gnuastro.
For a more thorough introduction, please see @ref{Table}.
Gnuastro's Table program, and all the other programs that can read from and write into tables, use the functions of this section for reading and writing their input/output tables.
For a simple demonstration of using the constructs introduced here, see @ref{Library demo - reading and writing table columns}.
Currently only plain text (see @ref{Gnuastro text table format}) and FITS (ASCII and binary) tables are supported by Gnuastro.
However, the low-level table infrastructure is written such that accommodating other formats is also possible, and more formats will hopefully be supported in future releases.
Please do not hesitate to suggest your favorite format so it can be implemented when possible.
@deffn Macro GAL_TABLE_DEF_WIDTH_STR
@deffnx Macro GAL_TABLE_DEF_WIDTH_INT
@deffnx Macro GAL_TABLE_DEF_WIDTH_LINT
@deffnx Macro GAL_TABLE_DEF_WIDTH_FLT
@deffnx Macro GAL_TABLE_DEF_WIDTH_DBL
@deffnx Macro GAL_TABLE_DEF_PRECISION_INT
@deffnx Macro GAL_TABLE_DEF_PRECISION_FLT
@deffnx Macro GAL_TABLE_DEF_PRECISION_DBL
@cindex @code{printf}
The default width and precision for generic types to use in writing numeric types into a text file (plain text and FITS ASCII tables).
When the dataset does not have any pre-set width and precision (see @code{disp_width} and @code{disp_precision} in @ref{Generic data container}) these will be directly used in C's @code{printf} command to write the number as a string.
@end deffn
@deffn Macro GAL_TABLE_DISPLAY_FMT_STRING
@deffnx Macro GAL_TABLE_DISPLAY_FMT_DECIMAL
@deffnx Macro GAL_TABLE_DISPLAY_FMT_UDECIMAL
@deffnx Macro GAL_TABLE_DISPLAY_FMT_OCTAL
@deffnx Macro GAL_TABLE_DISPLAY_FMT_HEX
@deffnx Macro GAL_TABLE_DISPLAY_FMT_FIXED
@deffnx Macro GAL_TABLE_DISPLAY_FMT_EXP
@deffnx Macro GAL_TABLE_DISPLAY_FMT_GENERAL
The display format used in C's @code{printf} to display data of different types.
The @code{_STRING} and @code{_DECIMAL} formats are unique for printing strings and signed integers, respectively; they are mainly here for completeness.
However, unsigned integers and floating points can be displayed in multiple formats:
@table @asis
@item Unsigned integer
For unsigned integers, it is possible to choose from @code{_UDECIMAL}
(unsigned decimal), @code{_OCTAL} (octal notation, for example, @code{125}
in decimal will be displayed as @code{175}), and @code{_HEX} (hexadecimal
notation, for example, @code{125} in decimal will be displayed as
@code{7D}).
@item Floating point
For floating point, it is possible to display the number in @code{_FIXED}
(fixed-point notation, for example, @code{1500.345}), @code{_EXP}
(exponential, for example, @code{1.500345e+03}), or @code{_GENERAL},
which chooses the better of the two for the given number.
@end table
@end deffn
@deffn Macro GAL_TABLE_FORMAT_INVALID
@deffnx Macro GAL_TABLE_FORMAT_TXT
@deffnx Macro GAL_TABLE_FORMAT_AFITS
@deffnx Macro GAL_TABLE_FORMAT_BFITS
All the current acceptable table formats to Gnuastro.
The @code{AFITS} and @code{BFITS} represent FITS ASCII tables and FITS Binary tables.
You can use these anywhere you see the @code{tableformat} variable.
@end deffn
@deffn Macro GAL_TABLE_SEARCH_INVALID
@deffnx Macro GAL_TABLE_SEARCH_NAME
@deffnx Macro GAL_TABLE_SEARCH_UNIT
@deffnx Macro GAL_TABLE_SEARCH_COMMENT
When the desired column is not a number, these values determine whether the string to match, or regular expression to search for, should be in the @emph{name}, @emph{units} or @emph{comments} of the column metadata.
These values should be used for the @code{searchin} variables of the functions.
@end deffn
@deftypefun uint8_t gal_table_displayflt_from_str (char @code{*string})
Convert the input @code{string} into one of the @code{GAL_TABLE_DISPLAY_FMT_FIXED} (for fixed-point notation) or @code{GAL_TABLE_DISPLAY_FMT_EXP} (for exponential notation).
@end deftypefun
@deftypefun {char *} gal_table_displayflt_to_str (uint8_t @code{id})
Convert the input identifier (one of @code{GAL_TABLE_DISPLAY_FMT_FIXED}, for fixed-point notation, or @code{GAL_TABLE_DISPLAY_FMT_EXP}, for exponential notation) into the standard string that is used to identify it.
@end deftypefun
@deftypefun {gal_data_t *} gal_table_info (char @code{*filename}, char @code{*hdu}, gal_list_str_t @code{*lines}, size_t @code{*numcols}, size_t @code{*numrows}, int @code{*tableformat})
Store the information of each column of a table into an array of meta-data @code{gal_data_t}s.
In a metadata @code{gal_data_t}, the size elements are zero (@code{ndim=size=0} and @code{dsize=NULL}), but the other relevant elements are filled.
See the end of this description for the exact components of each @code{gal_data_t} that are filled.
The returned array of @code{gal_data_t}s has @code{numcols} datasets (one data structure for each column).
The number of rows in each dataset is stored in @code{numrows} (in a table, all the columns have the same number of rows).
The format of the table (e.g., ASCII text file, or FITS binary or ASCII table) will be put in @code{tableformat} (macros defined above).
If the @code{filename} is not a FITS file, then @code{hdu} will not be used (can be @code{NULL}).
The input must be either a file (specified by @code{filename}) or a list of strings (@code{lines}).
@code{lines} is a list of strings with each node representing one line (including the new-line character), see @ref{List of strings}.
It will mostly be the output of @code{gal_txt_stdin_read}, which is used to read the program's input as separate lines from the standard input (see @ref{Text files}).
Note that @code{filename} and @code{lines} are mutually exclusive and one of them must be @code{NULL}.
In the output datasets, only the meta-data strings (column name, units and comments), will be allocated and set as shown below.
This function is just for column information (meta-data), not column contents.
@example
*restrict array -> Blank value (if present, in col's own type).
type -> Type of column data.
ndim -> 0
*dsize -> NULL
size -> 0
quietmmap -> ------------
*mmapname -> ------------
minmapsize -> Repeat (length of vector; 1 if not vector).
nwcs -> ------------
*wcs -> ------------
flag -> 'GAL_TABLEINTERN_FLAG_*' macros.
status -> ------------
*name -> Column name.
*unit -> Column unit.
*comment -> Column comments.
disp_fmt -> 'GAL_TABLE_DISPLAY_FMT' macros.
disp_width -> Width of string columns.
disp_precision -> ------------
*next -> Pointer to next column's metadata
*block -> ------------
@end example
@end deftypefun
@deftypefun void gal_table_print_info (gal_data_t @code{*allcols}, size_t @code{numcols}, size_t @code{numrows}, char @code{*hdu_option_name})
Print the column information for all the columns (output of @code{gal_table_info}) to standard output.
The output is in the same format as this command with Gnuastro Table program (see @ref{Invoking asttable}):
@example
$ asttable --info table.fits
@end example
@end deftypefun
@deftypefun {gal_data_t *} gal_table_read (char @code{*filename}, char @code{*hdu}, gal_list_str_t @code{*lines}, gal_list_str_t @code{*cols}, int @code{searchin}, int @code{ignorecase}, size_t @code{numthreads}, size_t @code{minmapsize}, int @code{quietmmap}, size_t @code{*colmatch}, char @code{*hdu_option_name})
Read the specified columns in a file (named @code{filename}), or list of strings (@code{lines}) into a linked list of data structures.
If the file is FITS, then @code{hdu} will also be used, otherwise, @code{hdu} is ignored.
For more on @code{hdu_option_name} see the description of @code{gal_array_read} in @ref{Array input output}.
@code{lines} is a list of strings with each node representing one line (including the new-line character), see @ref{List of strings}.
It will mostly be the output of @code{gal_txt_stdin_read}, which is used to read the program's input as separate lines from the standard input (see @ref{Text files}).
Note that @code{filename} and @code{lines} are mutually exclusive and one of them must be @code{NULL}.
@cindex AWK
@cindex GNU AWK
The information to search for columns should be specified by the @code{cols} list of strings (see @ref{List of strings}).
The string in each node of the list may be a number, an exact match to a column name, or a regular expression (in GNU AWK format) enclosed in @code{/ /}.
The @code{searchin} value must be one of the macros defined above.
If @code{cols} is NULL, then this function will read the full table.
Also, the @code{ignorecase} value should be 1 if you want to ignore the case of alphabetic characters while matching/searching column meta-data (see @ref{Input output options}).
For FITS tables, each column will be read independently.
Therefore they will be read in @code{numthreads} CPU threads to greatly speed up the reading when there are many columns and rows.
However, this only happens if CFITSIO was configured with @option{--enable-reentrant}.
This test has been done at Gnuastro's configuration time; if so, @code{GAL_CONFIG_HAVE_FITS_IS_REENTRANT} will have a value of 1, otherwise, it will have a value of 0.
For more on this macro, see @ref{Configuration information}.
Multi-threaded table reading is not currently applicable to other table formats (only for FITS tables).
The output is an individually allocated list of datasets (see @ref{List of gal_data_t}) with the same order of the @code{cols} list.
Note that one node in the @code{cols} list may match multiple columns (for example, through a regular expression); in that case, the output columns that correspond to that one input node follow the order of the table (the column that was read first comes first).
So the first requested column is the first popped data structure, and so on.
If @code{colmatch!=NULL}, it is assumed to be an array with at least as many elements as there are nodes in the @code{cols} list.
The number of columns that matched each input column will be stored in each element.
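For example, the following minimal sketch (where the file name @file{catalog.fits} and the column identifiers are hypothetical) reads two columns from the first extension by name and by number:
@example
gal_data_t *columns;
gal_list_str_t *cols=NULL;

/* Request the column called 'RA' and column number 5. Lists are
   last-in-first-out, so reverse after adding to keep the order. */
gal_list_str_add(&cols, "RA", 1);
gal_list_str_add(&cols, "5", 1);
gal_list_str_reverse(&cols);

/* Read with one thread; a 'minmapsize' of -1 (the largest value
   of size_t) effectively disables memory-mapping. */
columns=gal_table_read("catalog.fits", "1", NULL, cols,
                       GAL_TABLE_SEARCH_NAME, 1, 1, -1, 1,
                       NULL, NULL);
@end example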
@end deftypefun
@deftypefun {gal_list_sizet_t *} gal_table_list_of_indexs (gal_list_str_t @code{*cols}, gal_data_t @code{*allcols}, size_t @code{numcols}, int @code{searchin}, int @code{ignorecase}, char @code{*filename}, char @code{*hdu}, size_t @code{*colmatch})
Returns a list of indices (starting from 0) of the input columns that match the names/numbers given to @code{cols}.
This is a low-level operation which is called by @code{gal_table_read} (described above), see there for more on each argument's description.
@code{allcols} is the returned array of @code{gal_table_info}.
@end deftypefun
@cindex Git
@deftypefun void gal_table_comments_add_intro (gal_list_str_t @code{**comments}, char @code{*program_string}, time_t @code{*rawtime})
Add some basic information to the list of @code{comments}.
This basic information includes the following:
@itemize
@item
If the program is run in a Git version controlled directory, Git's description is printed (see description under @code{COMMIT} in @ref{Output FITS files}).
@item
The calendar time that is stored in @code{rawtime} (@code{time_t} is C's calendar time format defined in @file{time.h}).
You can calculate the time in this format with the following expressions:
@example
time_t rawtime;
time(&rawtime);
@end example
@item
The name of your program in @code{program_string}.
If it is @code{NULL}, this line is ignored.
@end itemize
@end deftypefun
@deftypefun void gal_table_write (gal_data_t @code{*cols}, struct gal_fits_list_key_t @code{**keylist}, gal_list_str_t @code{*comments}, int @code{tableformat}, char @code{*filename}, char @code{*extname}, uint8_t @code{colinfoinstdout}, int @code{freekeys})
Write @code{cols} (a list of datasets, see @ref{List of gal_data_t}) into a table stored in @code{filename}.
The format of the table can be determined with @code{tableformat} that accepts the macros defined above.
When @code{filename==NULL}, the column information will be printed on the standard output (command-line).
If @code{comments!=NULL}, the list of comments (see @ref{List of strings}) will also be printed into the output table.
When the output table is a plain text file, every node of @code{comments} will be printed after a @code{#} (so it can be considered as a comment) and in FITS table they will follow a @code{COMMENT} keyword.
If a file named @code{filename} already exists, the operation depends on the type of output.
When @code{filename} is a FITS file, the table will be added as a new extension after all existing extensions.
If @code{filename} is a plain text file, this function will abort with an error.
If @code{filename} is a FITS file, the table extension will have the name @code{extname}.
When @code{colinfoinstdout!=0} and @code{filename==NULL} (columns are printed on the standard output), the dataset meta-data will also be printed on the standard output.
When printing to the standard output, the column values may be piped into another program for further processing; in that case, the meta-data (lines starting with a @code{#}) may need to be ignored.
In such cases, pass @code{0} to @code{colinfoinstdout} to only print the column values.
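For example, the minimal sketch below (assuming @code{col1} and @code{col2} are two already-filled datasets with the same number of rows, and @code{col2->next} is @code{NULL}) writes them as a two-column binary FITS table into a hypothetical file:
@example
/* Link the columns into a list and write them. */
col1->next=col2;
gal_table_write(col1, NULL, NULL, GAL_TABLE_FORMAT_BFITS,
                "out.fits", "MYTABLE", 0, 0);
@end example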
@end deftypefun
@deftypefun void gal_table_write_log (gal_data_t @code{*logll}, char @code{*program_string}, time_t @code{*rawtime}, gal_list_str_t @code{*comments}, char @code{*filename}, int @code{quiet}, int @code{format})
Write the @code{logll} list of datasets into a table in @code{filename} (see @ref{List of gal_data_t}).
This function is just a wrapper around @code{gal_table_comments_add_intro} and @code{gal_table_write} (see above).
The @code{format} should be one of the @code{GAL_TABLE_FORMAT_*} macros above.
If @code{quiet} is non-zero, this function will print a message saying that the @code{filename} has been created.
@end deftypefun
@deftypefun {gal_data_t *} gal_table_col_vector_extract (gal_data_t @code{*vector}, gal_list_sizet_t @code{*indexs})
Given the ``vector'' column @code{vector} (which is assumed to be a 2D dataset), extract the tokens that are identified in the @code{indexs} list into a list of one-dimensional datasets.
For more on vector columns in tables, see @ref{Vector columns}.
@end deftypefun
@deftypefun {gal_data_t *} gal_table_cols_to_vector (gal_data_t @code{*list})
Merge the one-dimensional datasets in the given list into one 2-dimensional dataset that can be treated as a vector column.
All the input datasets must have the same size and type.
For more on vector columns in tables, see @ref{Vector columns}.
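For example, assuming @code{col1} and @code{col2} are two already-read columns with the same size and type, a vector column can be built with the minimal sketch below:
@example
gal_data_t *vector;
col1->next=col2;    col2->next=NULL;
vector=gal_table_cols_to_vector(col1);
@end example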
@end deftypefun
@deftypefun void gal_table_sort (gal_data_t @code{*table}, gal_data_t @code{*sortcol}, uint8_t @code{descending})
Sort the given @code{table} by the @code{sortcol} column in ascending order (descending, if @code{descending} is non-zero).
To sort only a single column in place, see @code{gal_statistics_sort_*} of @ref{Statistical operations}.
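For example, the minimal sketch below (assuming @code{table} is a list of columns, for example from @code{gal_table_read}, and that sorting by the first column is desired) sorts all the columns together:
@example
gal_table_sort(table, table, 0);   /* '0': ascending order. */
@end example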
@end deftypefun
@node FITS files, File input output, Table input output, Gnuastro library
@subsection FITS files (@file{fits.h})
@cindex FITS
@cindex CFITSIO
The FITS format is the most common format to store data (images and tables) in astronomy.
The CFITSIO library already provides a very good low-level collection of functions for manipulating FITS data.
The low-level nature of CFITSIO is defined for versatility and portability.
As a result, even a simple and basic operation, like reading an image or table column into memory, will require a particular sequence of CFITSIO function calls, which can be inconvenient and error-prone to manage in separate locations.
To ease this process, Gnuastro's library provides wrappers for CFITSIO functions.
With these, it is much easier to read, write, or modify FITS file data, header keywords and extensions.
If you feel these functions do not do exactly what you want, we strongly recommend reading the CFITSIO manual and using its features directly (afterwards, send us your wrappers so we can include them here for others to benefit as well).
All the functions and macros introduced in this section are declared in @file{gnuastro/fits.h}.
When you include this header, you are also including CFITSIO's @file{fitsio.h} header.
So you do not need to explicitly include @file{fitsio.h} anymore and can freely use any of its macros or functions in your code along with those discussed here.
@menu
* FITS macros errors filenames:: General macros, errors and checking names.
* CFITSIO and Gnuastro types:: Conversion between FITS and Gnuastro types.
* FITS HDUs:: Opening and getting information about HDUs.
* FITS header keywords:: Reading and writing FITS header keywords.
* FITS arrays - images or cubes:: Reading and writing FITS images or cubes.
* FITS tables:: Reading and writing FITS tables.
@end menu
@node FITS macros errors filenames, CFITSIO and Gnuastro types, FITS files, FITS files
@subsubsection FITS Macros, errors and filenames
Some general constructs provided by Gnuastro's FITS handling functions are discussed here.
In particular, there are several useful functions for checking FITS file names.
@deffn Macro GAL_FITS_MAX_NDIM
The maximum number of dimensions a dataset can have in FITS format; according to the FITS standard, this is 999.
@end deffn
@deftypefun void gal_fits_io_error (int @code{status}, char @code{*message})
If @code{status} is non-zero, this function will print the CFITSIO error message corresponding to @code{status}, print @code{message} (if given) on the next line, and abort the program.
If @code{message==NULL}, it will print a default string after the CFITSIO error.
@end deftypefun
@deftypefun int gal_fits_name_is_fits (char @code{*name})
If the @code{name} is an acceptable CFITSIO FITS filename return @code{1} (one), otherwise return @code{0} (zero).
The currently acceptable FITS suffixes are @file{.fits}, @file{.fit}, @file{.fits.gz}, @file{.fits.Z}, @file{.imh}, @file{.fits.fz}.
IMH is the IRAF format which is acceptable to CFITSIO.
@end deftypefun
@deftypefun int gal_fits_suffix_is_fits (char @code{*suffix})
Similar to @code{gal_fits_name_is_fits}, but only for the suffix.
The suffix does not have to start with `@key{.}': this function will return @code{1} (one) for both @code{fits} and @code{.fits}.
@end deftypefun
@deftypefun int gal_fits_file_recognized (char @code{*name})
Return @code{1} if the given file name (possibly including its contents) is a FITS file.
This is necessary when the contents of a FITS file do follow the FITS standard, but the file does not have a Gnuastro-recognized FITS suffix.
Therefore, it will first call @code{gal_fits_name_is_fits}; if that returns 0, this function will attempt to open the file with CFITSIO and, if successful, will close it again and return 1.
In the process of opening the file, CFITSIO will just open the file, and no reading will take place, so it should have a minimal CPU footprint.
@end deftypefun
@deftypefun {char *} gal_fits_name_save_as_string (char @code{*filename}, char @code{*hdu})
If the name is a FITS name, then put a @code{(hdu: ...)} after it and return the string.
If it is not a FITS file, the name is returned as-is; if @code{filename==NULL}, the string @code{stdin} is returned.
Note that the output string's space is allocated.
This function is useful when you want to report an arbitrary file to the user which may be FITS or not (for a FITS file, the file name alone is not enough, the HDU is also necessary).
@end deftypefun
@node CFITSIO and Gnuastro types, FITS HDUs, FITS macros errors filenames, FITS files
@subsubsection CFITSIO and Gnuastro types
Both Gnuastro and CFITSIO have special and different identifiers for each type that they accept.
Gnuastro's type identifiers are fully described in @ref{Library data types} and are usable for all kinds of datasets (images, table columns, etc) as part of Gnuastro's @ref{Generic data container}.
However, following the FITS standard, CFITSIO has different identifiers for images and tables.
Following CFITSIO's own convention, we will use @code{bitpix} for image type identifiers and @code{datatype} for its internal identifiers (mainly used in tables).
The functions introduced in this section can be used to convert between CFITSIO and Gnuastro's type identifiers.
One important issue to consider is that CFITSIO's types are not fixed width (for example, @code{long} may be 32-bits or 64-bits on different systems).
However, Gnuastro's types are defined by their width.
These functions will use information on the host system to do the proper conversion.
To have portable code (usable on different systems), it is thus recommended to use these functions and not to assume a fixed correspondence between CFITSIO's and Gnuastro's types.
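For example, the minimal sketch below finds the Gnuastro type corresponding to a 32-bit floating point FITS image and converts it back:
@example
int bitpix=FLOAT_IMG; /* CFITSIO macro for 32-bit floating point. */
uint8_t type=gal_fits_bitpix_to_type(bitpix); /* GAL_TYPE_FLOAT32 */
int back=gal_fits_type_to_bitpix(type);       /* FLOAT_IMG again. */
@end example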
@deftypefun uint8_t gal_fits_bitpix_to_type (int @code{bitpix})
Return the Gnuastro type identifier that corresponds to CFITSIO's
@code{bitpix} on this system.
@end deftypefun
@deftypefun int gal_fits_type_to_bitpix (uint8_t @code{type})
Return the CFITSIO @code{bitpix} value that corresponds to Gnuastro's
@code{type}.
@end deftypefun
@deftypefun char gal_fits_type_to_bin_tform (uint8_t @code{type})
Return the FITS standard binary table @code{TFORM} character that
corresponds to Gnuastro's @code{type}.
@end deftypefun
@deftypefun int gal_fits_type_to_datatype (uint8_t @code{type})
Return the CFITSIO @code{datatype} that corresponds to Gnuastro's
@code{type} on this machine.
@end deftypefun
@deftypefun uint8_t gal_fits_datatype_to_type (int @code{datatype}, int @code{is_table_column})
Return Gnuastro's type identifier that corresponds to the CFITSIO @code{datatype}.
Note that when dealing with CFITSIO's @code{TLONG}, the fixed width type differs between tables and images.
So if the corresponding dataset is a table column, put a non-zero value into @code{is_table_column}.
@end deftypefun
@node FITS HDUs, FITS header keywords, CFITSIO and Gnuastro types, FITS files
@subsubsection FITS HDUs
A FITS file can contain multiple HDUs/extensions.
The functions in this section can be used to get basic information about the extensions or open them.
Note that @code{fitsfile} is defined in CFITSIO's @code{fitsio.h} which is automatically included by Gnuastro's @file{gnuastro/fits.h}.
@deftypefun {fitsfile *} gal_fits_open_to_write (char @code{*filename})
If @file{filename} exists, open it and return the @code{fitsfile} pointer that corresponds to it.
If @file{filename} does not exist, it will be created with a blank first extension, and the pointer to its next extension will be returned.
@end deftypefun
@deftypefun size_t gal_fits_hdu_num (char @code{*filename})
Return the number of HDUs/extensions in @file{filename}.
@end deftypefun
@deftypefun {unsigned long} gal_fits_hdu_datasum (char @code{*filename}, char @code{*hdu}, char @code{*hdu_option_name})
@cindex @code{DATASUM}: FITS keyword
Return the @code{DATASUM} of the given HDU in the given FITS file.
For more on @code{DATASUM} in the FITS standard, see @ref{Keyword inspection and manipulation} (under the @code{checksum} component of @option{--write}).
For more on @code{hdu_option_name} see the description of @code{gal_array_read} in @ref{Array input output}.
@end deftypefun
@deftypefun {unsigned long} gal_fits_hdu_datasum_encoded (char @code{*filename}, char @code{*hdu}, char @code{*hdu_option_name})
Similar to @code{gal_fits_hdu_datasum}, but the returned value is always a 16-character string following the encoding that is described in the FITS standard (primarily for the @code{CHECKSUM} keyword, but it can also be used for @code{DATASUM}).
@end deftypefun
@deftypefun {unsigned long} gal_fits_hdu_datasum_ptr (fitsfile @code{*fptr})
@cindex @code{DATASUM}: FITS keyword
Return the @code{DATASUM} of the already opened HDU in @code{fptr}.
For more on @code{DATASUM} in the FITS standard, see @ref{Keyword inspection and manipulation} (under the @code{checksum} component of @option{--write}).
@end deftypefun
@deftypefun int gal_fits_hdu_format (char @code{*filename}, char @code{*hdu}, char @code{*hdu_option_name})
Return the format of the HDU as one of CFITSIO's recognized macros:
@code{IMAGE_HDU}, @code{ASCII_TBL}, or @code{BINARY_TBL}.
For more on @code{hdu_option_name} see the description of @code{gal_array_read} in @ref{Array input output}.
@end deftypefun
@deftypefun int gal_fits_hdu_is_healpix (fitsfile @code{*fptr})
@cindex HEALPix
Return @code{1} if the dataset may be a HEALPix grid and @code{0} otherwise.
Technically, it is considered to be HEALPix if the HDU is not an ASCII table and has the @code{NSIDE}, @code{FIRSTPIX} and @code{LASTPIX} keywords.
@end deftypefun
@deftypefun {fitsfile *} gal_fits_hdu_open (char @code{*filename}, char @code{*hdu}, int @code{iomode}, int @code{exitonerror}, char @code{*hdu_option_name})
Open the HDU/extension @code{hdu} from @file{filename} and return a pointer to CFITSIO's @code{fitsfile}.
@code{iomode} determines how the FITS file will be opened using CFITSIO's macros: @code{READONLY} or @code{READWRITE}.
The string in @code{hdu} will be appended to @file{filename} in square brackets so CFITSIO only opens this extension.
You can use any formatting for the @code{hdu} that is acceptable to CFITSIO.
See the description under @option{--hdu} in @ref{Input output options} for more.
If @code{exitonerror!=0} and the given HDU cannot be opened for any reason, the function will print an informative message and abort the program.
Otherwise, when the HDU cannot be opened, it will just return a NULL pointer.
For more on @code{hdu_option_name} see the description of @code{gal_array_read} in @ref{Array input output}.
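For example, the minimal sketch below (with a hypothetical file name) opens the second HDU (counting from 0) in read-only mode, aborting with an informative message if it cannot be opened:
@example
fitsfile *fptr=gal_fits_hdu_open("image.fits", "1", READONLY,
                                 1, NULL);
@end example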
@end deftypefun
@deftypefun {fitsfile *} gal_fits_hdu_open_format (char @code{*filename}, char @code{*hdu}, int @code{img0_tab1}, char @code{*hdu_option_name})
Open (in read-only format) the @code{hdu} HDU/extension of @file{filename} as an image or table.
When @code{img0_tab1} is @code{0} (zero) but the HDU is a table, this function will abort with an error.
It will also abort with an error when @code{img0_tab1} is @code{1} (one), but the HDU is an image.
For more on @code{hdu_option_name} see the description of @code{gal_array_read} in @ref{Array input output}.
A FITS HDU may contain either a table or an image.
When your program expects only one of these formats, you can call this function so that if the user provided the wrong HDU/file, it will abort and inform the user that the file/HDU has the wrong format.
@end deftypefun
@node FITS header keywords, FITS arrays - images or cubes, FITS HDUs, FITS files
@subsubsection FITS header keywords
Each FITS extension/HDU contains a raw dataset which can either be a table or an image along with some header keywords.
The keywords can be used to store meta-data about the actual dataset.
The functions in this section describe Gnuastro's high-level functions for reading and writing FITS keywords.
Similar to all Gnuastro's FITS-related functions, these functions are all wrappers for CFITSIO's low-level functions.
The necessary meta-data (header keywords) for a particular dataset are commonly numerous; it is much more efficient to list them in one variable and call the reading/writing functions once.
Hence the functions in this section use linked lists, a thorough introduction to them is given in @ref{Linked lists}.
To read FITS keywords, these functions use a list of Gnuastro's generic dataset format that is discussed in @ref{List of gal_data_t}.
To write FITS keywords, they use the @code{gal_fits_list_key_t} node that is defined below.
@deftp {Type (C @code{struct})} gal_fits_list_key_t
@cindex Linked list
@cindex last-in-first-out
@cindex first-in-first-out
Structure for writing FITS keywords.
This structure is used for one keyword and you do not need to set all elements.
With the @code{next} element, you can link it to another keyword thus creating a linked list to add any number of keywords easily and at any step during your program (see @ref{Linked lists} for an introduction on lists).
See the functions below for adding elements to the list.
@example
typedef struct gal_fits_list_key_t
@{
  int                        tfree;  /* ==1, free title string.  */
  int                        kfree;  /* ==1, free keyword name.  */
  int                        vfree;  /* ==1, free keyword value. */
  int                        cfree;  /* ==1, free comment.       */
  int                        ufree;  /* ==1, free unit.          */
  uint8_t                     type;  /* Keyword value type.      */
  char                      *title;  /* !=NULL, only print title.*/
  char                    *keyname;  /* Keyword Name.            */
  void                      *value;  /* Keyword value.           */
  char                    *comment;  /* Keyword comment.         */
  char                       *unit;  /* Keyword unit.            */
  struct gal_fits_list_key_t *next;  /* Pointer to next keyword. */
@} gal_fits_list_key_t;
@end example
@end deftp
@deftypefun int gal_fits_key_exists_fptr (fitsfile @code{*fptr}, char @code{*keyname})
Return 1 (true) if the opened FITS file pointer contains the requested keyword and 0 (false) otherwise.
@end deftypefun
@deftypefun {void *} gal_fits_key_img_blank (uint8_t @code{type})
Returns a pointer to an allocated space containing the value of the FITS @code{BLANK} header keyword, when the input array has a type of @code{type}.
This is useful when you want to write the @code{BLANK} keyword using CFITSIO's @code{fits_write_key} function.
According to the FITS standard: ``If the @code{BSCALE} and @code{BZERO} keywords do not have the default values of 1.0 and 0.0, respectively, then the value of the @code{BLANK} keyword must equal the actual value in the FITS data array that is used to represent an undefined pixel and not the corresponding physical value''.
Therefore a special @code{BLANK} value is needed for datasets containing signed 8-bit, unsigned 16-bit, unsigned 32-bit, and unsigned 64-bit integers (types that are defined with @code{BSCALE} and @code{BZERO} in the FITS standard).
@cartouche
@noindent
@strong{Not usable when reading a dataset:} As quoted from the FITS standard above, the value returned by this function can only be generically used for the writing of the @code{BLANK} keyword header.
It @emph{must not} be used as the blank pointer when reading a FITS array using CFITSIO.
When reading an array with CFITSIO, you can use @code{gal_blank_alloc_write} to generate the necessary pointer.
@end cartouche
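For example, the minimal sketch below (assuming @code{fptr} is an already opened FITS image HDU containing 16-bit signed integers) writes the @code{BLANK} keyword with CFITSIO:
@example
int status=0;
void *blank=gal_fits_key_img_blank(GAL_TYPE_INT16);
fits_write_key(fptr, gal_fits_type_to_datatype(GAL_TYPE_INT16),
               "BLANK", blank, "Value of undefined pixels",
               &status);
gal_fits_io_error(status, NULL); /* Only aborts if status!=0. */
free(blank);
@end example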
@end deftypefun
@deftypefun void gal_fits_key_clean_str_value (char @code{*string})
Remove the single quotes and possible extra spaces around the keyword values that CFITSIO returns when reading a string keyword.
CFITSIO does not remove the two single quotes around the string value of a keyword.
Hence the strings it reads are like: @code{'value '}, or @code{'some_very_long_value'}.
To use the value during your processing, it is commonly necessary to remove the single quotes (and possible extra spaces).
This function will do this within the allocated space of the string.
@end deftypefun
@deftypefun {char *} gal_fits_key_date_to_struct_tm (char @code{*fitsdate}, struct tm @code{*tp})
@cindex Date: FITS format
Parse @code{fitsdate} as a FITS date format string (most generally: @code{YYYY-MM-DDThh:mm:ss.ddd...}) into the C library's broken-down time structure, or @code{struct tm} (declared in @file{time.h}) and return a pointer to a newly allocated array for the sub-second part of the format (@code{.ddd...}).
Therefore, it needs to be freed afterwards (if it is not @code{NULL}).
When there is no sub-second portion, this pointer will be @code{NULL}.
This is a relatively low-level function, an easier function to use is @code{gal_fits_key_date_to_seconds} which will return the sub-seconds as double precision floating point.
Note that the FITS date format mentioned above is the most complete representation.
The following two formats are also acceptable: @code{YYYY-MM-DDThh:mm:ss} and @code{YYYY-MM-DD}.
This function can also interpret the older FITS date format, where only two characters are given to the year and the date format is reversed (@code{DD/MM/YYThh:mm:ss.ddd...}).
In this case (following the GNU C Library), this function will make the following assumption: values 69 to 99 correspond to the years 1969 to 1999, and values 0 to 68 to the years 2000 to 2068.
@end deftypefun
@deftypefun size_t gal_fits_key_date_to_seconds (char @code{*fitsdate}, char @code{**subsecstr}, double @code{*subsec})
@cindex Unix epoch time
@cindex Epoch time, Unix
Return the Unix epoch time (number of seconds that have passed since 00:00:00 Thursday, January 1st, 1970) corresponding to the FITS date format string @code{fitsdate} (see description of @code{gal_fits_key_date_to_struct_tm} above).
This function will return @code{GAL_BLANK_SIZE_T} if the broken-down time could not be converted to seconds.
The Unix epoch time is in units of seconds, but the FITS date format allows sub-second accuracy.
The last two arguments are for the optional sub-second portion.
If you do not want sub-second information, just set the second argument to @code{NULL}.
If @code{fitsdate} contains sub-second accuracy and @code{subsecstr!=NULL}, then the starting of the sub-second part's string is stored in @code{subsecstr} (malloc'ed), and @code{subsec} will be the corresponding numerical value (between 0 and 1, in double precision floating point).
So to avoid leaking memory, if a sub-second string is requested, it must be freed after calling this function.
When a sub-second string does not exist (and it is requested), then a value of @code{NULL} and NaN will be written in @code{*subsecstr} and @code{*subsec} respectively.
This is a very useful function for operations on the FITS date values, for example, sorting FITS files by their dates, or finding the time difference between two FITS files.
The advantage of working with the Unix epoch time is that you do not have to worry about calendar details (such as the number of days in different months or leap years).
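For example, the minimal sketch below converts a (hypothetical) FITS date string into Unix epoch seconds and its sub-second portion:
@example
double subsec;
char *subsecstr;
size_t sec=gal_fits_key_date_to_seconds("2024-01-01T00:00:00.123",
                                        &subsecstr, &subsec);
if(sec!=GAL_BLANK_SIZE_T)
  printf("Epoch: %zu (sub-seconds: %g)\n", sec, subsec);
if(subsecstr) free(subsecstr);
@end example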
@end deftypefun
@deftypefun void gal_fits_key_read_from_ptr (fitsfile @code{*fptr}, gal_data_t @code{*keysll}, int @code{readcomment}, int @code{readunit})
Read the list of keyword values from a FITS pointer.
The input should be a linked list of Gnuastro's generic data container (@code{gal_data_t}).
Before calling this function, you just have to set the @code{name}, and optionally, the desired @code{type} of the value of each keyword.
The given @code{name} value will be directly passed to CFITSIO to read the desired keyword name.
This function will allocate space to keep the value.
If no pre-defined type is requested for a certain keyword's value, the smallest possible type to host the value will be found and used.
If @code{readcomment} and @code{readunit} are non-zero, this function will also try to read the possible comments and units of the keyword.
Here is one example of using this function:
@example
/* Allocate an array of datasets. */
gal_data_t *keysll=gal_data_array_calloc(N);

/* Make the array usable as a list too (by setting `next'). */
for(i=0;i<N-1;++i) keysll[i].next=&keysll[i+1];

/* Fill the datasets with a `name' and a `type'. */
keysll[0].name="NAME1";    keysll[0].type=GAL_TYPE_INT32;
keysll[1].name="NAME2";    keysll[1].type=GAL_TYPE_STRING;
...
...

/* Call this function. */
gal_fits_key_read_from_ptr(fptr, keysll, 0, 0);

/* Use the values as you like... */

/* Free all the allocated spaces. Note that `name' was not
   allocated in this example, so we should explicitly set it to
   NULL before calling `gal_data_array_free'. */
for(i=0;i<N;++i) keysll[i].name=NULL;
gal_data_array_free(keysll, N, 1);
@end example
If the @code{array} pointer of each keyword's dataset is not @code{NULL}, then it is assumed that the space to keep the value has already been allocated.
If it is @code{NULL}, space will be allocated for the value by this function.
Strings need special consideration: the reason is that generally, @code{gal_data_t} needs to also allow for array of strings (as it supports arrays of integers for example).
Hence when reading a string value, two allocations may be done by this function (one if @code{array!=NULL}).
Therefore, when using the values of strings after this function, @code{keysll[i].array} must be interpreted as @code{char **}: one allocation for the pointer, one for the actual characters.
If you use something like the example above, you do not have to worry about the freeing; @code{gal_data_array_free} will free both allocations.
So to read a string, one easy way would be the following:
@example
char *str, **strarray;
strarray = keysll[i].array;
str      = strarray[0];
@end example
If CFITSIO is unable to read a keyword for any reason the @code{status} element of the respective @code{gal_data_t} will be non-zero.
If it is zero, then the keyword was found and successfully read.
Otherwise, it is a CFITSIO status value.
You can use CFITSIO's error reporting tools or @code{gal_fits_io_error} (see @ref{FITS macros errors filenames}) for reporting the reason of the failure.
A tip: when the keyword does not exist, CFITSIO's status value will be @code{KEY_NO_EXIST}.
CFITSIO will start searching for each keyword from the place in the header where it found the previous keyword.
So it is much more efficient if the order in which you request the keywords follows the order in which they are stored in the header.
@end deftypefun
@deftypefun void gal_fits_key_read (char @code{*filename}, char @code{*hdu}, gal_data_t @code{*keysll}, int @code{readcomment}, int @code{readunit}, char @code{*hdu_option_name})
Same as @code{gal_fits_key_read_from_ptr} (see above), but accepts the
filename and HDU as input instead of an already opened CFITSIO
@code{fitsfile} pointer.
@end deftypefun
@deftypefun void gal_fits_key_list_add (gal_fits_list_key_t @code{**list}, uint8_t @code{type}, char @code{*keyname}, int @code{kfree}, void @code{*value}, int @code{vfree}, char @code{*comment}, int @code{cfree}, char @code{*unit}, int @code{ufree})
Add a keyword to the top of the list of header keywords that need to be written into a FITS file.
In the end, the keywords will have to be freed, so it is important to know beforehand if they were allocated or not (hence the presence of the arguments ending in @code{free}).
If the space for the respective element is not allocated, set these arguments to @code{0} (zero).
You can call this function multiple times on a single list to add several keys that will all be written in one call to @code{gal_fits_key_write} or @code{gal_fits_key_write_in_ptr}.
However, the resulting list will be a last-in-first-out list (for more on lists, see @ref{Linked lists}).
Hence, the written keys will have the inverse order of your calls to this function.
To avoid this problem, you can use @code{gal_fits_key_list_add_end} instead (which will add each key to the end of the list, not to the top like this function).
Alternatively, you can use @code{gal_fits_key_list_reverse} after adding all the keys with this function.
@strong{Important note for strings}: the value should be the pointer to the string itself (@code{char *}), not a pointer to a pointer (@code{char **}).
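For example, the minimal sketch below adds two non-allocated keywords (hence all the @code{*free} arguments are zero) and reverses the list so they will be written in the order they were added:
@example
int num=7;
char *str="A string value";
gal_fits_list_key_t *keylist=NULL;

gal_fits_key_list_add(&keylist, GAL_TYPE_INT32, "NUMKEY", 0,
                      &num, 0, "An integer keyword", 0, NULL, 0);
gal_fits_key_list_add(&keylist, GAL_TYPE_STRING, "STRKEY", 0,
                      str, 0, "A string keyword", 0, NULL, 0);
gal_fits_key_list_reverse(&keylist);
@end example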
@end deftypefun
@deftypefun void gal_fits_key_list_add_end (gal_fits_list_key_t @code{**list}, uint8_t @code{type}, char @code{*keyname}, int @code{kfree}, void @code{*value}, int @code{vfree}, char @code{*comment}, int @code{cfree}, char @code{*unit}, int @code{ufree})
Similar to @code{gal_fits_key_list_add}, but add the given keyword to the end of the list, see the description of @code{gal_fits_key_list_add} for more.
Use this function if you want the keywords to be written in the same order that you add nodes to the list of keywords.
@end deftypefun
@deftypefun void gal_fits_key_list_title_add (gal_fits_list_key_t @code{**list}, char @code{*title}, int @code{tfree})
Add a special ``title'' keyword (with the @code{title} string) to the top of the keywords list.
If @code{tfree} is non-zero, the space allocated for @code{title} will be freed immediately after writing the keyword (in another function).
@end deftypefun
@deftypefun void gal_fits_key_list_title_add_end (gal_fits_list_key_t @code{**list}, char @code{*title}, int @code{tfree})
Similar to @code{gal_fits_key_list_title_add}, but put the title at the end of the list.
@end deftypefun
@deftypefun void gal_fits_key_list_fullcomment_add (gal_fits_list_key_t @code{**list}, char @code{*comment}, int @code{fcfree})
Add a @code{COMMENT} keyword to the top of the keywords list.
If the comment is longer than 70 characters, CFITSIO will automatically break it into multiple @code{COMMENT} keywords.
If @code{fcfree} is non-zero, the space allocated for @code{comment} will be freed immediately after writing the keyword (in another function).
@end deftypefun
@deftypefun void gal_fits_key_list_fullcomment_add_end (gal_fits_list_key_t @code{**list}, char @code{*comment}, int @code{fcfree})
Similar to @code{gal_fits_key_list_fullcomment_add}, but put the comments at the end of the list.
@end deftypefun
@deftypefun void gal_fits_key_list_add_date (gal_fits_list_key_t @code{**keylist}, char @code{*comment})
Add a @code{DATE} keyword to the input list of keywords containing the date this function was activated in the format of @code{YYYY-MM-DDThh:mm:ss}.
This function will also add a @code{DATEUTC} keyword that specifies if the date is in UTC or local time (this depends on CFITSIO being able to detect UTC in the running operating system or not).
The comment of the keyword should also be specified as the second argument.
The comment is useful to inform users what this date refers to; for example, the program's starting or ending time.
For more, see the description under @code{DATE} in @ref{Output FITS files}.
@end deftypefun
@deftypefun void gal_fits_key_list_add_software_versions (gal_fits_list_key_t @code{**keylist})
Add the version of Gnuastro and its mandatory dependencies (see @ref{Mandatory dependencies}) to the list of keywords.
Each software's keyword has the same name as the software itself (for example, @code{GNUASTRO} or @code{GSL}).
For the full list of software, see @ref{Output FITS files}.
@end deftypefun
@deftypefun void gal_fits_key_list_add_git_commit (gal_fits_list_key_t @code{**keylist})
If the optional libgit2 dependency is installed and your program is being run in a directory that is under version control, a @code{COMMIT} keyword will be added on the top of the list of keywords.
For more, see the description of @code{COMMIT} in @ref{Output FITS files}.
@end deftypefun
@deftypefun void gal_fits_key_list_reverse (gal_fits_list_key_t @code{**list})
Reverse the input list of keywords.
@end deftypefun
@deftypefun void gal_fits_key_write_title_in_ptr (char @code{*title}, fitsfile @code{*fptr})
Add two lines of ``title'' keywords to the given CFITSIO @code{fptr} pointer.
The first line will be blank and the second will have the string in @code{title} roughly in the middle of the line (a fixed distance from the start of the keyword line).
A title in the list of keywords helps in classifying the keywords into groups and inspecting them by eye.
If @code{title==NULL}, this function will not do anything.
@end deftypefun
@deftypefun void gal_fits_key_write_filename (char @code{*keynamebase}, char @code{*filename}, gal_fits_list_key_t @code{**list}, int @code{top1end0}, int @code{quiet})
Put @file{filename} into the @code{gal_fits_list_key_t} list (possibly broken up into multiple keywords) to later write into an HDU header.
The @code{keynamebase} string will be appended with a @code{_N} (N>0) and used as the keyword name.
If @code{top1end0!=0}, then the keywords containing the filename will be added to the top of the list.
The FITS standard sets a maximum length of 69 characters for the string values of a keyword@footnote{The limit is actually 71 characters (which is the full 80 character length, subtracted by 8 for the keyword name and one character for the @key{=}).
However, for strings, FITS also requires two single quotes.}.
This creates problems with file names (which include directories) because file names/addresses can become longer than the maximum number of characters in a FITS keyword (around 70 characters).
Therefore, when @code{filename} is longer than the maximum length of a FITS keyword value, this function will break it into several keywords (breaking up the string on directory separators).
So the full file/directory address (including directories) can be longer than 69 characters.
However, if a single file or directory name (within a larger address) is longer than 69 characters, this function will truncate the name and print a warning.
If @code{quiet!=0}, then the warning will not be printed.
@end deftypefun
@deftypefun void gal_fits_key_write_wcsstr (fitsfile @code{*fptr}, struct wcsprm @code{*wcs}, char @code{*wcsstr}, int @code{nkeyrec})
Write the WCS header string (produced with WCSLIB's @code{wcshdo} function) into the CFITSIO @code{fitsfile} pointer.
@code{nkeyrec} is the number of FITS header keywords in @code{wcsstr}.
This function will put a few blank keyword lines along with a comment @code{WCS information} before writing each keyword record.
@end deftypefun
@deftypefun void gal_fits_key_write (gal_fits_list_key_t @code{*keylist}, char @code{*filename}, char @code{*hdu}, char @code{*hdu_option_name}, int @code{freekeys}, int @code{create_fits_not_exists})
Write the list of keywords in @code{keylist} into the @code{hdu} extension of the file called @code{filename}.
If the file may not exist when this function is called, set @code{create_fits_not_exists} to non-zero and set the HDU to @code{"0"}.
If the keywords should be freed after they are written, set the @code{freekeys} value to non-zero.
For more on @code{hdu_option_name} see the description of @code{gal_array_read} in @ref{Array input output}.
The list nodes are meant to be dynamically allocated (because they will be freed after being written).
We thus recommend using the @code{gal_fits_key_list_add} or @code{gal_fits_key_list_add_end} to create and fill the list.
Below is one fully working example of using this function to write a keyword into an existing FITS file.
@example
#include <stdio.h>
#include <stdlib.h>

#include <gnuastro/fits.h>

int main()
@{
  char *filename="test.fits";
  gal_fits_list_key_t *keylist=NULL;

  char *unit="unit";
  float value=123.456;
  char *keyname="MYKEY";
  char *comment="A good description of the key";

  gal_fits_key_list_add_end(&keylist, GAL_TYPE_FLOAT32, keyname, 0,
                            &value, 0, comment, 0, unit, 0);
  gal_fits_key_list_title_add(&keylist, "Matching metadata", 0);

  gal_fits_key_write(keylist, filename, "1", "NONE", 1, 0);

  return EXIT_SUCCESS;
@}
@end example
@end deftypefun
@deftypefun void gal_fits_key_write_in_ptr (gal_fits_list_key_t @code{*keylist}, fitsfile @code{*fptr}, int @code{freekeys})
Write the list of keywords in @code{keylist} into the given CFITSIO @code{fitsfile} pointer and free keylist.
For more on the input @code{keylist}, see the description and example for @code{gal_fits_key_write}, above.
@end deftypefun
@deftypefun {gal_list_str_t *} gal_fits_with_keyvalue (gal_list_str_t @code{*files}, char @code{*hdu}, char @code{*name}, gal_list_str_t @code{*values}, char @code{*hdu_option_name})
Given a list of FITS file names (@code{files}), a certain HDU (@code{hdu}), a certain keyword name (@code{name}), and a list of acceptable values (@code{values}), return the subset of file names where the requested keyword name has one of the acceptable values.
For more on @code{hdu_option_name} see the description of @code{gal_array_read} in @ref{Array input output}.
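For example, the minimal sketch below (with a hypothetical keyword name and values) selects the subset of @code{files} whose @code{FILTER} keyword in HDU 1 is either @code{r} or @code{g}:
@example
gal_list_str_t *matching, *values=NULL;
gal_list_str_add(&values, "r", 1);
gal_list_str_add(&values, "g", 1);
matching=gal_fits_with_keyvalue(files, "1", "FILTER", values,
                                NULL);
@end example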
@end deftypefun
@deftypefun {gal_list_str_t *} gal_fits_unique_keyvalues (gal_list_str_t @code{*files}, char @code{*hdu}, char @code{*name}, char @code{*hdu_option_name})
Given a list of FITS file names (@code{files}), a certain HDU (@code{hdu}), and a certain keyword name (@code{name}), return the list of unique values given to that keyword in all the files.
For more on @code{hdu_option_name} see the description of @code{gal_array_read} in @ref{Array input output}.
@end deftypefun
@node FITS arrays - images or cubes, FITS tables, FITS header keywords, FITS files
@subsubsection FITS arrays (images)
Images (or multi-dimensional arrays in general) are among the most common data formats stored in FITS files.
Only one image may be stored in each FITS HDU/extension.
The functions described here can be used to get the information of, read, or write images in FITS files.
@deftypefun void gal_fits_img_info (fitsfile @code{*fptr}, int @code{*type}, size_t @code{*ndim}, size_t @code{**dsize}, char @code{**name}, char @code{**unit})
Read the type (see @ref{Library data types}), number of dimensions, and size along each dimension of the CFITSIO @code{fitsfile} into the @code{type}, @code{ndim}, and @code{dsize} pointers respectively.
If @code{name} and @code{unit} are not @code{NULL} (point to a @code{char *}), then if the image has a name and units, the respective string will be put in these pointers.
@end deftypefun
@deftypefun {size_t *} gal_fits_img_info_dim (char @code{*filename}, char @code{*hdu}, size_t @code{*ndim}, char @code{*hdu_option_name})
Put the number of dimensions in the @code{hdu} extension of @file{filename}
in the space that @code{ndim} points to and return the size of the dataset
along each dimension as an allocated array with @code{*ndim} elements.
For more on @code{hdu_option_name} see the description of @code{gal_array_read} in @ref{Array input output}.
@end deftypefun
@deftypefun {gal_data_t *} gal_fits_img_read (char @code{*filename}, char @code{*hdu}, size_t @code{minmapsize}, int @code{quietmmap}, char @code{*hdu_option_name})
Read the contents of the @code{hdu} extension/HDU of @code{filename} into a Gnuastro generic data container (see @ref{Generic data container}) and return it.
If the necessary space is larger than @code{minmapsize}, then do not keep the data in RAM, but in a file on the HDD/SSD.
For more on @code{minmapsize} and @code{quietmmap} see the description under the same name in @ref{Generic data container}.
For more on @code{hdu_option_name} see the description of @code{gal_array_read} in @ref{Array input output}.
Note that this function only reads the main data within the requested FITS extension, the WCS will not be read into the returned dataset.
To read the WCS, you can use the @code{gal_wcs_read} function, as shown below.
Afterwards, the @code{gal_data_free} function will free both the dataset and any WCS structure (if there are any).
@example
data=gal_fits_img_read(filename, hdu, -1, 1, NULL);
data->wcs=gal_wcs_read(filename, hdu, 0, 0, 0, &data->nwcs,
                       NULL);
@end example
@end deftypefun
@deftypefun {gal_data_t *} gal_fits_img_read_to_type (char @code{*inputname}, char @code{*inhdu}, uint8_t @code{type}, size_t @code{minmapsize}, int @code{quietmmap}, char @code{*hdu_option_name})
Read the contents of the @code{hdu} extension/HDU of @code{filename} into a Gnuastro generic data container (see @ref{Generic data container}) of type @code{type} and return it.
This is just a wrapper around @code{gal_fits_img_read} (to read the image/array of any type) and @code{gal_data_copy_to_new_type_free} (to convert it to @code{type} and free the initially read dataset).
See the description there for more.
@end deftypefun
@cindex NaN
@cindex Convolution kernel
@cindex Kernel, convolution
@deftypefun {gal_data_t *} gal_fits_img_read_kernel (char @code{*filename}, char @code{*hdu}, size_t @code{minmapsize}, int @code{quietmmap}, char @code{*hdu_option_name})
Read the @code{hdu} of @code{filename} as a convolution kernel.
A convolution kernel must have an odd size along all dimensions, must not contain blank (NaN in floating point types) values, and must be flipped around the center to make the proper convolution (see @ref{Convolution process}).
If there are blank values, this function will change them to @code{0.0}.
If the input image does not satisfy the other two requirements, this function will abort with an error describing the condition to the user.
The returned dataset will have a @code{float32} type.
For more on @code{hdu_option_name} see the description of @code{gal_array_read} in @ref{Array input output}.
@end deftypefun
@deftypefun {fitsfile *} gal_fits_img_write_to_ptr (gal_data_t @code{*input}, char @code{*filename}, gal_fits_list_key_t @code{*keylist}, int @code{freekeys})
Write the @code{input} dataset into a FITS file named @file{filename} and return the corresponding CFITSIO @code{fitsfile} pointer.
This function will not close @code{fitsfile}, so you can still add other extensions to it after this function or make other modifications.
In case you want to add keywords into the HDU that will contain the data, you can use the last two arguments (see the description of @code{gal_fits_key_write}).
These keywords will be written into the HDU before writing the data: when there are more than roughly 5 keywords (assuming your dataset has WCS) and your dataset is large, this can significantly reduce the running time (because adding a keyword once the current block of 36 keyword slots is full will cause the whole data to shift to make room for another block of 36 keywords).
@end deftypefun
@deftypefun void gal_fits_img_write (gal_data_t @code{*input}, char @code{*filename}, gal_fits_list_key_t @code{*keylist}, int @code{freekeys})
Write the @code{input} dataset into the FITS file named @file{filename}.
Also add the list of header keywords (@code{keylist}) to the newly created HDU/extension.
If @code{freekeys} is non-zero, the list of keywords will be freed after being written into the HDU; if you need them later, keep a separate copy of the list before calling this function.
For the importance of why it is better to add your keywords in this function (before writing the data) or after it, see the description of @code{gal_fits_img_write_to_ptr}.
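For example, the minimal sketch below (assuming @code{input} is an already allocated and filled dataset, and @file{out.fits} is a hypothetical output name) writes it with no extra keywords:
@example
gal_fits_img_write(input, "out.fits", NULL, 0);
@end example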
@end deftypefun
@deftypefun void gal_fits_img_write_to_type (gal_data_t @code{*input}, char @code{*filename}, gal_fits_list_key_t @code{*keylist}, int @code{type}, int @code{freekeys})
Convert the @code{input} dataset into @code{type}, then write it into the FITS file named @file{filename}.
Also add the @code{keylist} keywords to the newly created HDU/extension.
After the FITS file is written, this function will free the copied dataset (with type @code{type}) from memory.
For the importance of why it is better to add your keywords in this function (before writing the data) or after it, see the description of @code{gal_fits_img_write_to_ptr}.
This is just a wrapper for the @code{gal_data_copy_to_new_type} and @code{gal_fits_img_write} functions.
@end deftypefun
@deftypefun void gal_fits_img_write_corr_wcs_str (gal_data_t @code{*input}, char @code{*filename}, char @code{*wcsstr}, int @code{nkeyrec}, double @code{*crpix}, gal_fits_list_key_t @code{*keylist}, int @code{freekeys})
Write the @code{input} dataset into @file{filename} using the @code{wcsstr} while correcting the @code{CRPIX} values.
For the importance of why it is better to add your keywords in this function (before writing the data) or after it, see the description of @code{gal_fits_img_write_to_ptr}.
This function is mainly useful when you want to make FITS files in parallel (from one main WCS structure, with just a differing CRPIX), for more on the arguments, see the description of @code{gal_fits_img_write}.
This can happen in the following cases for example:
@itemize
@item
When a large number of FITS images (with WCS) need to be created in parallel, it can be much more efficient to generate the header's WCS keywords once at the start, write them into each FITS file, then just correct the CRPIX values.
@item
WCSLIB's header writing function is not thread safe.
So when writing FITS images in parallel, we cannot write the header keywords in each thread.
@end itemize
@end deftypefun
@node FITS tables, , FITS arrays - images or cubes, FITS files
@subsubsection FITS tables
Tables are one of the common data formats stored in FITS files.
Only one table may be stored in each FITS HDU/extension, but each table column must be viewed as a different dataset (with its own name, units and numeric data type, for example).
The only constraint of the column datasets in a table is that they must be one-dimensional and have the same number of elements as the other columns.
The functions described here can be used to get the information of, read, or write columns into FITS tables.
@deftypefun void gal_fits_tab_size (fitsfile @code{*fitsptr}, size_t @code{*nrows}, size_t @code{*ncols})
Read the number of rows and columns in the table within CFITSIO's @code{fitsptr}.
@end deftypefun
@deftypefun int gal_fits_tab_format (fitsfile @code{*fitsptr})
Return the format of the FITS table contained in CFITSIO's @code{fitsptr}.
Recall that FITS tables can be in binary or ASCII formats.
This function will return @code{GAL_TABLE_FORMAT_AFITS} or @code{GAL_TABLE_FORMAT_BFITS} (defined in @ref{Table input output}).
If the @code{fitsptr} is not a table, this function will abort the program with an error message informing the user of the problem.
@end deftypefun
@deftypefun {gal_data_t *} gal_fits_tab_info (char @code{*filename}, char @code{*hdu}, size_t @code{*numcols}, size_t @code{*numrows}, int @code{*tableformat}, char @code{*hdu_option_name})
Store the information of each column in @code{hdu} of @code{filename} into an array of data structures with @code{numcols} elements (one data structure for each column); see @ref{Arrays of datasets}.
The total number of rows in the table is also put into the memory that @code{numrows} points to.
The format of the table (e.g., FITS binary or ASCII table) will be put in @code{tableformat} (macros defined in @ref{Table input output}).
For more on @code{hdu_option_name} see the description of @code{gal_array_read} in @ref{Array input output}.
This function is just for column information.
Therefore it only stores meta-data like column name, units and comments.
No actual data (contents of the columns for example, the @code{array} or @code{dsize} elements) will be allocated by this function.
This is a low-level function particular to reading tables in FITS format.
To be generic, it is recommended to use @code{gal_table_info} which will allow getting information from a variety of table formats based on the filename (see @ref{Table input output}).
@end deftypefun
@deftypefun {gal_data_t *} gal_fits_tab_read (char @code{*filename}, char @code{*hdu}, size_t @code{numrows}, gal_data_t @code{*colinfo}, gal_list_sizet_t @code{*indexll}, size_t @code{numthreads}, size_t @code{minmapsize}, int @code{quietmmap}, char @code{*hdu_option_name})
Read the columns given in the list @code{indexll} from a FITS table (in @file{filename} and HDU/extension @code{hdu}) into the returned linked list of data structures, see @ref{List of size_t} and @ref{List of gal_data_t}.
For more on @code{hdu_option_name} see the description of @code{gal_array_read} in @ref{Array input output}.
Each column will be read independently, therefore they will be read in @code{numthreads} CPU threads to greatly speed up the reading when there are many columns and rows.
However, this only happens if CFITSIO was configured with @option{--enable-reentrant}.
This test has been done at Gnuastro's configuration time; if so, @code{GAL_CONFIG_HAVE_FITS_IS_REENTRANT} will have a value of 1, otherwise, it will have a value of 0.
For more on this macro, see @ref{Configuration information}).
If the necessary space for each column is larger than @code{minmapsize}, do not keep it in the RAM, but in a file in the HDD/SSD.
For more on @code{minmapsize} and @code{quietmmap}, see the description under the same name in @ref{Generic data container}.
Each column will have @code{numrows} rows and @code{colinfo} contains any further information about the columns (returned by @code{gal_fits_tab_info}, described above).
Note that this is a low-level function, so the output data linked list is the inverse of the input indexes linked list.
It is recommended to use @code{gal_table_read} for generic reading of tables, see @ref{Table input output}.
@end deftypefun
@deftypefun void gal_fits_tab_write (gal_data_t @code{*cols}, gal_list_str_t @code{*comments}, int @code{tableformat}, char @code{*filename}, char @code{*extname}, gal_fits_list_key_t @code{*keywords}, int @code{freekeys})
Write the list of datasets in @code{cols} (see @ref{List of gal_data_t}) as separate columns in a FITS table in @code{filename}.
If @code{filename} already exists then this function will write the table as a new extension called @code{extname}, after all existing ones.
The format of the table (ASCII or binary) may be specified with the @code{tableformat} (see @ref{Table input output}).
If @code{comments!=NULL}, each node of the list of strings will be written as a @code{COMMENT} keyword in the output FITS file (see @ref{List of strings}).
In case your table needs meta-data keywords, you can use the @code{keywords} and @code{freekeys} arguments.
For more on these, see the description of @code{gal_fits_key_write_in_ptr}.
This is a low-level function for tables.
It is recommended to use @code{gal_table_write} for generic writing of tables in a variety of formats, see @ref{Table input output}.
@end deftypefun
@node File input output, World Coordinate System, FITS files, Gnuastro library
@subsection File input output
The most commonly used file format in astronomical data analysis is the FITS format (see @ref{Fits} for an introduction), therefore Gnuastro's library provides a large and separate collection of functions to read/write data from/to them (see @ref{FITS files}).
However, FITS is not well recognized outside the astronomical community and cannot directly be imported into documents or slides.
Therefore, in this section, we discuss the other different file formats that Gnuastro's library recognizes.
@menu
* Text files:: Reading and writing from/to plain text files.
* TIFF files:: Reading and writing from/to TIFF files.
* JPEG files:: Reading and writing from/to JPEG files.
* EPS files:: Writing to EPS files.
* PDF files:: Writing to PDF files.
@end menu
@node Text files, TIFF files, File input output, File input output
@subsubsection Text files (@file{txt.h})
Plain text files are the most universal and portable format for data storage.
They can be viewed and edited on any text editor or even on the command-line.
This section describes some functions that help in reading from and writing to plain text files.
@cindex CRLF line terminator
@cindex Line terminator, CRLF
Lines are one of the most basic building blocks (delimiters) of a text file.
Some operating systems, like Microsoft Windows, terminate their ASCII text lines with a carriage return character followed by a new-line character (two characters, also known as CRLF line terminators), while Unix-like operating systems just use a single new-line character.
The functions below that read an ASCII text file are able to identify lines with both kinds of line terminators.
Gnuastro defines a simple format for metadata of table columns in a plain text file that is discussed in @ref{Gnuastro text table format}.
The functions to get information from, read from and write to plain text files also follow those conventions.
@deffn Macro GAL_TXT_LINESTAT_INVALID
@deffnx Macro GAL_TXT_LINESTAT_BLANK
@deffnx Macro GAL_TXT_LINESTAT_COMMENT
@deffnx Macro GAL_TXT_LINESTAT_DATAROW
Status codes for lines in a plain text file that are returned by @code{gal_txt_line_stat}.
Lines which have a @key{#} character as their first non-white character are considered to be comments.
Lines with nothing but white space characters are considered blank.
The remaining lines are considered to contain data.
@end deffn
@deftypefun int gal_txt_line_stat (char @code{*line})
Check the contents of @code{line} and see if it is a blank, comment, or data line.
The returned values are the macros that start with @code{GAL_TXT_LINESTAT}.
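For example, a minimal sketch of parsing one line (already read from a plain text file) based on its status:
@example
switch( gal_txt_line_stat(line) )
  @{
  case GAL_TXT_LINESTAT_COMMENT: /* ...parse meta-data...   */ break;
  case GAL_TXT_LINESTAT_DATAROW: /* ...parse data values... */ break;
  default: break;           /* Blank/invalid lines: ignored. */
  @}
@end example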
@end deftypefun
@deftypefun {char *} gal_txt_trim_space (char @code{*str})
Trim the white space characters before and after the given string.
The operation is done within the allocated space of the string, so if you need the string untouched, please pass an allocated copy of the string to this function.
The returned pointer is within the input string.
If the input pointer is @code{NULL}, or the string only has white-space characters, the returned pointer will be @code{NULL}.
@end deftypefun
@deftypefun int gal_txt_contains_string (char @code{*full}, char @code{*match})
Return 1 if the string that @code{match} points to can be exactly found within the string that @code{full} points to (character by character).
The to-match string can be in any part of the full string.
If any of the two strings have zero length or are a @code{NULL} pointer, this function will return 0.
@end deftypefun
@deftypefun {gal_data_t *} gal_txt_table_info (char @code{*filename}, gal_list_str_t @code{*lines}, size_t @code{*numcols}, size_t @code{*numrows})
Store the information of each column in a text file @code{filename}, or list of strings (@code{lines}), into an array of data structures with @code{numcols} elements (one data structure for each column); see @ref{Arrays of datasets}.
The total number of rows in the table is also put into the memory that @code{numrows} points to.
@code{lines} is a list of strings with each node representing one line (including the new-line character), see @ref{List of strings}.
It will mostly be the output of @code{gal_txt_stdin_read}, which is used to read the program's input as separate lines from the standard input (see below).
Note that @code{filename} and @code{lines} are mutually exclusive and one of them must be @code{NULL}.
This function is just for column information.
Therefore it only stores meta-data like column name, units and comments.
No actual data (contents of the columns for example, the @code{array} or @code{dsize} elements) will be allocated by this function.
This is a low-level function particular to reading tables in plain text format.
To be generic, it is recommended to use @code{gal_table_info} which will allow getting information from a variety of table formats based on the filename (see @ref{Table input output}).
@end deftypefun
@deftypefun {gal_data_t *} gal_txt_table_read (char @code{*filename}, gal_list_str_t @code{*lines}, size_t @code{numrows}, gal_data_t @code{*colinfo}, gal_list_sizet_t @code{*indexll}, size_t @code{minmapsize}, int @code{quietmmap})
Read the columns given in the list @code{indexll} from a plain text file (@code{filename}) or list of strings (@code{lines}), into a linked list of data structures (see @ref{List of size_t} and @ref{List of gal_data_t}).
If the necessary space for each column is larger than @code{minmapsize}, do not keep it in the RAM, but in a file on the HDD/SSD.
For more on @code{minmapsize} and @code{quietmmap}, see the description under the same name in @ref{Generic data container}.
@code{lines} is a list of strings with each node representing one line (including the new-line character), see @ref{List of strings}.
It will mostly be the output of @code{gal_txt_stdin_read}, which is used to read the program's input as separate lines from the standard input (see below).
Note that @code{filename} and @code{lines} are mutually exclusive and one of them must be @code{NULL}.
Note that this is a low-level function, so the order of the output data list is the reverse of the order of the input indices list.
It is recommended to use @code{gal_table_read} for generic reading of tables in any format, see @ref{Table input output}.
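As a minimal sketch of how these two low-level functions fit together (assuming a hypothetical @file{table.txt} exists, and using @code{gal_list_sizet_add} of @ref{List of size_t} to select the columns):
@example
size_t numcols, numrows;
gal_list_sizet_t *indexll=NULL;
gal_data_t *colinfo, *cols;

colinfo=gal_txt_table_info("table.txt", NULL, &numcols, &numrows);
gal_list_sizet_add(&indexll, 0);     /* Only read the first column. */
cols=gal_txt_table_read("table.txt", NULL, numrows, colinfo,
                        indexll, -1, 1);
@end example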
@end deftypefun
@deftypefun {gal_data_t *} gal_txt_image_read (char @code{*filename}, gal_list_str_t @code{*lines}, size_t @code{minmapsize}, int @code{quietmmap})
Read the 2D plain text dataset in file (@code{filename}) or list of strings (@code{lines}) into a dataset and return the dataset.
If the necessary space for the image is larger than @code{minmapsize}, do not keep it in the RAM, but in a file on the HDD/SSD.
For more on @code{minmapsize} and @code{quietmmap}, see the description under the same name in @ref{Generic data container}.
@code{lines} is a list of strings with each node representing one line (including the new-line character), see @ref{List of strings}.
It will mostly be the output of @code{gal_txt_stdin_read}, which is used to read the program's input as separate lines from the standard input (see below).
Note that @code{filename} and @code{lines} are mutually exclusive and one of them must be @code{NULL}.
@end deftypefun
@deftypefun {gal_list_str_t *} gal_txt_stdin_read (long @code{timeout_microsec})
@cindex Standard input
Read the complete standard input and return a list of strings with each line (including the new-line character) as one node of that list.
If the standard input is already filled (for example, connected to another program's output with a pipe), then this function will parse the whole stream.
If the standard input is not pre-configured and the @emph{first line} is typed/written in the terminal before @code{timeout_microsec} microseconds, this function will continue parsing until it reaches an end-of-file character (@key{CTRL-D} after a new-line on the keyboard) with no time limit.
If nothing is entered before @code{timeout_microsec} microseconds, it will return @code{NULL}.
All the functions that can read plain text tables will accept a filename as well as a list of strings (intended to be the output of this function when using the standard input).
The reason for keeping the standard input in a list of strings is that once something is read from the standard input, it is hard to put it back.
We often need to parse a text table several times: once to count how many columns it has and which ones are requested, and another time to read the desired columns.
So it is easier to keep it all in allocated memory and pass it on from the start for each round.
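For example, the sketch below waits 0.1 seconds for the first line of standard input, then passes the whole stream to @code{gal_txt_table_info}:
@example
size_t numcols, numrows;
gal_data_t *colinfo=NULL;
gal_list_str_t *lines=gal_txt_stdin_read(100000);  /* 0.1 seconds. */
if(lines)
  colinfo=gal_txt_table_info(NULL, lines, &numcols, &numrows);
@end example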
@end deftypefun
@deftypefun {gal_list_str_t *} gal_txt_read_to_list (char @code{*filename})
Read the contents of the given plain-text file and put each word (separated by a SPACE character) into a new node of the output list.
The order of nodes in the output is the same as the input.
Any new-line character at the end of a word is removed in the output list.
@end deftypefun
@deftypefun void gal_txt_write (gal_data_t @code{*cols}, struct gal_fits_list_key_t @code{**keylist}, gal_list_str_t @code{*comment}, char @code{*filename}, uint8_t @code{colinfoinstdout}, int @code{tab0_img1}, int @code{freekeys})
Write @code{cols} in a plain text file @code{filename} (table when @code{tab0_img1==0} and image when @code{tab0_img1==1}).
@code{cols} may have one or two dimensions which determines the output:
@table @asis
@item 1D
@code{cols} is treated as a column and a list of datasets (see @ref{List of gal_data_t}): every node in the list is written as one column in a table.
@item 2D
@code{cols} is a two dimensional array, it cannot be treated as a list (only one 2D array can currently be written to a text file).
So if @code{cols->next!=NULL} the next nodes in the list are ignored and will not be written.
@end table
This is a low-level function for tables.
It is recommended to use @code{gal_table_write} for generic writing of tables in a variety of formats, see @ref{Table input output}.
It is possible to add two types of metadata to the printed table: comments and keywords.
Each string in the list given to @code{comment} will be printed into the file as a separate line, starting with @code{#}.
Keywords have a more specific and computer-parsable format and are passed through @code{keylist}.
Each keyword is also printed on one line, but in the format shown below.
Because of the various components in a keyword, it is necessary to use the @code{gal_fits_list_key_t} data structure.
For more, see @ref{FITS header keywords}.
@example
# [key] NAME: VALUE / [UNIT] KEYWORD COMMENT.
@end example
If @code{filename} already exists, this function will abort with an error and will not write over the existing file.
Before calling this function, check whether the file already exists (and delete it if you want it replaced).
If @code{comments!=NULL}, a @code{#} will be put at the start of each node of the list of strings and will be written in the file before the column meta-data in @code{filename} (see @ref{List of strings}).
When @code{filename==NULL}, the column information will be printed on the standard output (command-line).
When @code{colinfoinstdout!=0} and @code{filename==NULL} (columns are printed in the standard output), the dataset metadata will also be printed in the standard output.
When printing to the standard output, the column values can be piped into another program for further processing, in which case the meta-data (lines starting with a @code{#}) must be ignored.
In such cases, you can print only the column values by passing @code{0} to @code{colinfoinstdout}.
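For example, assuming @code{cols} is an already-filled list of 1D datasets and that the hypothetical @file{out.txt} does not exist, the single call below writes them as a plain text table (@code{tab0_img1==0}) with no keywords or extra comments:
@example
gal_txt_write(cols, NULL, NULL, "out.txt", 0, 0, 0);
@end example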
@end deftypefun
@node TIFF files, JPEG files, Text files, File input output
@subsubsection TIFF files (@file{tiff.h})
@cindex TIFF format
Outside of astronomy, the TIFF standard is arguably the most commonly used format to store high-precision data/images.
Unlike FITS however, the TIFF standard only supports images (not tables), but like FITS, it has support for all standard data types (see @ref{Numeric data types}) which is the primary reason other fields use it.
Another similarity of the TIFF and FITS standards is that TIFF supports multiple images in one file.
The TIFF standard calls each one of these images (and their accompanying meta-data) a `directory' (roughly equivalent to the FITS extensions).
Unlike FITS however, the directories can only be identified by their number (counting from zero); recall that in FITS you can also use the extension name to identify an extension.
The functions described here allow easy reading and writing of TIFF files within Gnuastro or for users of Gnuastro's libraries.
@deftypefun {int} gal_tiff_name_is_tiff (char @code{*name})
Return @code{1} if @code{name} has a TIFF suffix.
This can be used to make sure that a given input file is TIFF.
See @code{gal_tiff_suffix_is_tiff} for a list of recognized suffixes.
@end deftypefun
@deftypefun {int} gal_tiff_suffix_is_tiff (char @code{*name})
Return @code{1} if @code{name} is a recognized TIFF suffix.
The recognized suffixes are @file{tif}, @file{tiff}, @file{TIF} and @file{TIFF}.
@end deftypefun
@deftypefun {size_t} gal_tiff_dir_string_read (char @code{*string})
Return the number within @code{string} as a @code{size_t} number to identify a TIFF directory.
Note that the directories start counting from zero.
@end deftypefun
@deftypefun {gal_data_t *} gal_tiff_read (char @code{*filename}, size_t @code{dir}, size_t @code{minmapsize}, int @code{quietmmap})
Read the @code{dir} directory within the TIFF file @code{filename} and return the contents of that TIFF directory as @code{gal_data_t}.
If the directory's image contains multiple channels, the output will be a list (see @ref{List of gal_data_t}).
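For example, the call below (on a hypothetical @file{image.tif}) reads the first directory, keeping the data in RAM (a @code{minmapsize} of @code{-1}) and without memory-mapping messages (a @code{quietmmap} of @code{1}):
@example
gal_data_t *img=gal_tiff_read("image.tif", 0, -1, 1);
@end example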
@end deftypefun
@deftypefun {void} gal_tiff_write (gal_data_t @code{*in}, char @code{*filename}, int @code{widthinpix}, int @code{heightinpix}, int @code{bitspersample}, int @code{numimg})
Write the given dataset (@code{in}) into @file{filename} (a TIFF file) with the specified image width in pixels (@code{widthinpix}), height in pixels (@code{heightinpix}), bits per sample (@code{bitspersample}), and number of images (@code{numimg}).
@end deftypefun
@node JPEG files, EPS files, TIFF files, File input output
@subsubsection JPEG files (@file{jpeg.h})
@cindex JPEG format
The JPEG file format is one of the most common formats for storing and transferring images, recognized by almost all image rendering and processing programs.
In particular, because of its lossy compression algorithm, JPEG files can be very small, which has made the format heavily used on the internet.
For more on this file format, and a comparison with others, please see @ref{Recognized file formats}.
For scientific purposes, the lossy compression and very limited dynamic range (8-bit integers) make JPEG very unattractive for storing valuable data.
However, because of its commonality, it will inevitably be needed in some situations.
The functions here can be used to read and write JPEG images into Gnuastro's @ref{Generic data container}.
If the JPEG file has more than one color channel, each channel is treated as a separate node in a list of datasets (see @ref{List of gal_data_t}).
@deftypefun {int} gal_jpeg_name_is_jpeg (char @code{*name})
Return @code{1} if @code{name} has a JPEG suffix.
This can be used to make sure that a given input file is JPEG.
See @code{gal_jpeg_suffix_is_jpeg} for a list of recognized suffixes.
@end deftypefun
@deftypefun {int} gal_jpeg_suffix_is_jpeg (char @code{*name})
Return @code{1} if @code{name} is a recognized JPEG suffix.
The recognized suffixes are @code{.jpg}, @code{.JPG}, @code{.jpeg}, @code{.JPEG}, @code{.jpe}, @code{.jif}, @code{.jfif} and @code{.jfi}.
@end deftypefun
@deftypefun {gal_data_t *} gal_jpeg_read (char @code{*filename}, size_t @code{minmapsize}, int @code{quietmmap})
Read the JPEG file @code{filename} and return the contents as @code{gal_data_t}.
If the image contains multiple colors/channels, the output will be a list with one node per color/channel (see @ref{List of gal_data_t}).
@end deftypefun
@cindex JPEG compression quality
@deftypefun {void} gal_jpeg_write (gal_data_t @code{*in}, char @code{*filename}, uint8_t @code{quality}, float @code{widthincm})
Write the given dataset (@code{in}) into @file{filename} (a JPEG file).
If @code{in} is a list, then each node in the list will be a color channel, therefore there can only be 1, 3 or 4 nodes in the list.
If the number of nodes is different, then this function will abort the program with a message describing the cause.
The lossy JPEG compression level can be set through @code{quality} which is a value between 0 and 100 (inclusive, 100 being the best quality).
The display width of the JPEG file in units of centimeters (to suggest to viewers/users, only a meta-data) can be set through @code{widthincm}.
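For example, the sketch below (on a hypothetical @file{input.jpg}) reads all the channels of a JPEG file and writes them back with a quality of 90 and a suggested display width of 5 centimeters:
@example
gal_data_t *channels=gal_jpeg_read("input.jpg", -1, 1);
gal_jpeg_write(channels, "copy.jpg", 90, 5.0f);
@end example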
@end deftypefun
@node EPS files, PDF files, JPEG files, File input output
@subsubsection EPS files (@file{eps.h})
The Encapsulated PostScript (EPS) format is commonly used to store images (or individual/single-page parts of a document) within PostScript documents.
For a more complete introduction, please see @ref{Recognized file formats}.
To provide high quality graphics, the PostScript language is a vectorized format, therefore pixels (elements of a ``rasterized'' format) are not natively part of it.
To display rasterized images, PostScript does allow arrays of pixels.
However, since the over-all EPS file may contain many vectorized elements (for example, borders, text, or other lines over the text) and interpreting them is not trivial or necessary within Gnuastro's scope, Gnuastro only provides some functions to write a dataset (in the @code{gal_data_t} format, see @ref{Generic data container}) into EPS.
@deffn Macro GAL_EPS_MARK_COLNAME_TEXT
@deffnx Macro GAL_EPS_MARK_COLNAME_FONT
@deffnx Macro GAL_EPS_MARK_COLNAME_XPIX
@deffnx Macro GAL_EPS_MARK_COLNAME_YPIX
@deffnx Macro GAL_EPS_MARK_COLNAME_SHAPE
@deffnx Macro GAL_EPS_MARK_COLNAME_COLOR
@deffnx Macro GAL_EPS_MARK_COLNAME_SIZE1
@deffnx Macro GAL_EPS_MARK_COLNAME_SIZE2
@deffnx Macro GAL_EPS_MARK_COLNAME_ROTATE
@deffnx Macro GAL_EPS_MARK_COLNAME_FONTSIZE
@deffnx Macro GAL_EPS_MARK_COLNAME_LINEWIDTH
Names of the columns from which the respective mark property will be read.
@end deffn
@deffn Macro GAL_EPS_MARK_DEFAULT_SHAPE
@deffnx Macro GAL_EPS_MARK_DEFAULT_COLOR
@deffnx Macro GAL_EPS_MARK_DEFAULT_SIZE1
@deffnx Macro GAL_EPS_MARK_DEFAULT_SIZE2
@deffnx Macro GAL_EPS_MARK_DEFAULT_SIZE2_ELLIPSE
@deffnx Macro GAL_EPS_MARK_DEFAULT_ROTATE
@deffnx Macro GAL_EPS_MARK_DEFAULT_LINEWIDTH
@deffnx Macro GAL_EPS_MARK_DEFAULT_FONT
@deffnx Macro GAL_EPS_MARK_DEFAULT_FONTSIZE
Default values for the various mark properties.
These constants will be used if the caller has not provided the respective property.
@end deffn
@deftypefun {int} gal_eps_name_is_eps (char @code{*name})
Return @code{1} if @code{name} has an EPS suffix.
This can be used to make sure that a given input file is EPS.
See @code{gal_eps_suffix_is_eps} for a list of recognized suffixes.
@end deftypefun
@deftypefun {int} gal_eps_suffix_is_eps (char @code{*name})
Return @code{1} if @code{name} is a recognized EPS suffix.
The recognized suffixes are @code{.eps}, @code{.EPS}, @code{.epsf}, @code{.epsi}.
@end deftypefun
@deftypefun {void} gal_eps_to_pt (float @code{widthincm}, size_t @code{*dsize}, size_t @code{*w_h_in_pt})
Given a specific width in centimeters (@code{widthincm}) and the number of the dataset's pixels in each dimension (@code{dsize}), calculate the size of the output in PostScript points.
The output values are written in the @code{w_h_in_pt} array (which has to be allocated before calling this function).
The first element in @code{w_h_in_pt} is the width and the second is the height of the image.
@end deftypefun
@deftypefun uint8_t gal_eps_shape_name_to_id (char @code{*name})
Return the shape ID of a mark from its name (which is not case-sensitive).
@end deftypefun
@deftypefun {char *} gal_eps_shape_id_to_name (uint8_t @code{id})
Return the shape name from its ID.
@end deftypefun
@deftypefun {void} gal_eps_write (gal_data_t @code{*in}, char @code{*filename}, float @code{widthincm}, uint32_t @code{borderwidth}, uint8_t @code{bordercolor}, int @code{hex}, int @code{dontoptimize}, int @code{forps}, gal_data_t @code{*marks})
Write the @code{in} dataset into an EPS file called @code{filename}.
@code{in} has to be an unsigned 8-bit character type (@code{GAL_TYPE_UINT8}, see @ref{Numeric data types}).
The desired width of the image in human/non-pixel units can be set with the @code{widthincm} argument.
If @code{borderwidth} is non-zero, it is interpreted as the width (in points) of a solid black border around the image.
A border can be helpful when importing the EPS file into a document.
The color of the border can be set with @code{bordercolor}; use the macros in @ref{Color functions}.
If @code{forps} is not zero, the output can be directly imported into a PostScript file (not as an ``encapsulated'' PostScript, which is the default).
@cindex ASCII85 encoding
@cindex Hexadecimal encoding
EPS files are plain text (they can be opened/edited in a text editor), therefore there are different encodings to store the data (pixel values) within them.
Gnuastro supports the Hexadecimal and ASCII85 encoding.
ASCII85 is more efficient (producing small file sizes), so it is the default encoding.
To use Hexadecimal encoding, set @code{hex} to a non-zero value.
@cindex PDF
@cindex EPS
@cindex PostScript
By default, when the dataset only has two values, this function will use the PostScript optimization that allows setting the pixel values per bit, not byte (@ref{Recognized file formats}).
This can greatly help reduce the file size.
However, when @code{dontoptimize!=0}, this optimization is disabled; this is useful when the dataset only has two values, but the difference between them should not correspond to the full contrast of black and white.
If @code{marks!=NULL}, it is assumed to contain multiple columns of information to draw marks over the background image.
The multiple columns are a linked list of 1D @code{gal_data_t} of the same size (number of rows) that are connected to each other through the @code{next} element (this is the same format that Gnuastro's library uses for tables, see @ref{Table input output} or @ref{Library demo - reading and writing table columns}).
The macros defined above that have the format of @code{GAL_EPS_MARK_COLNAME_*} show all the possible columns that you can provide in this linked list.
Only the two coordinate columns are mandatory (@code{GAL_EPS_MARK_COLNAME_XPIX} and @code{GAL_EPS_MARK_COLNAME_YPIX}).
If any of the other properties is not in the linked list, the defaults of the @code{GAL_EPS_MARK_DEFAULT_*} macros (also defined above) will be used.
The columns are identified based on the @code{name} element of Gnuastro's generic data structure (see @ref{Generic data container}).
The names must have the pre-defined names of the @code{GAL_EPS_MARK_COLNAME_*} macros (case sensitive).
Therefore, the order of columns in the list is irrelevant!
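For example, assuming @code{img} is an already-allocated @code{GAL_TYPE_UINT8} dataset, the call below writes it as a 5 centimeter wide EPS file with no border, ASCII85 encoding, the default bi-level optimization and no marks:
@example
gal_eps_write(img, "out.eps", 5.0f, 0, 0, 0, 0, 0, NULL);
@end example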
@end deftypefun
@node PDF files, , EPS files, File input output
@subsubsection PDF files (@file{pdf.h})
The Portable Document Format (PDF) has arguably become the most common format for the distribution of documents.
In practice, a PDF file is just a compiled PostScript file.
For a more complete introduction, please see @ref{Recognized file formats}.
To provide high quality graphics, PDF is a vectorized format, therefore pixels (elements of a ``rasterized'' format) are not natively part of it.
As a result, similar to @ref{EPS files}, Gnuastro only writes datasets to a PDF file, not vice-versa.
@deftypefun {int} gal_pdf_name_is_pdf (char @code{*name})
Return @code{1} if @code{name} has a PDF suffix.
This can be used to make sure that a given input file is PDF.
See @code{gal_pdf_suffix_is_pdf} for a list of recognized suffixes.
@end deftypefun
@deftypefun {int} gal_pdf_suffix_is_pdf (char @code{*name})
Return @code{1} if @code{name} is a recognized PDF suffix.
The recognized suffixes are @code{.pdf} and @code{.PDF}.
@end deftypefun
@deftypefun {void} gal_pdf_write (gal_data_t @code{*in}, char @code{*filename}, float @code{widthincm}, uint32_t @code{borderwidth}, uint8_t @code{bordercolor}, int @code{dontoptimize}, gal_data_t @code{*marks})
Write the @code{in} dataset into a PDF file called @code{filename}.
@code{in} has to be an unsigned 8-bit character type (@code{GAL_TYPE_UINT8}, see @ref{Numeric data types}).
The desired width of the image in human/non-pixel units can be set with the @code{widthincm} argument.
If @code{borderwidth} is non-zero, it is interpreted as the width (in points) of a solid black border around the image.
A border can be helpful when importing the PDF file into a document.
The color of the border can be set with @code{bordercolor}, use the macros in @ref{Color functions}.
This function is just a wrapper for the @code{gal_eps_write} function in @ref{EPS files}.
After making the EPS file, Ghostscript (with a version of 9.10 or above, see @ref{Optional dependencies}) will be used to compile the EPS file to a PDF file.
Therefore, if Ghostscript does not exist, does not have the proper version, or fails for any other reason, the EPS file will remain.
You can use it to find the cause of the problem, or to convert it with another PostScript compiler.
@cindex PDF
@cindex EPS
@cindex PostScript
By default, when the dataset only has two values, this function will use the PostScript optimization that allows setting the pixel values per bit, not byte (@ref{Recognized file formats}).
This can greatly help reduce the file size.
However, when @code{dontoptimize!=0}, this optimization is disabled; this is useful when the dataset only has two values, but the difference between them should not correspond to the full contrast of black and white.
If @code{marks!=NULL}, it is assumed to contain information on how to draw marks over the image.
This is directly fed to the @code{gal_eps_write} function, so for more on how to provide the mark information, see the description of @code{gal_eps_write} in @ref{EPS files}.
@end deftypefun
@node World Coordinate System, Arithmetic on datasets, File input output, Gnuastro library
@subsection World Coordinate System (@file{wcs.h})
The FITS standard defines the world coordinate system (WCS) as a mechanism to associate physical values to positions within a dataset.
For example, it can be used to convert pixel coordinates in an image to celestial coordinates like the right ascension and declination.
The functions in this section are mainly just wrappers over CFITSIO, WCSLIB and GSL library functions to help in common applications.
@cindex Thread safety
@cindex WCSLIB thread safety
[@strong{Thread safety}] Since WCSLIB version 5.18 (released in January 2018), most WCSLIB functions are thread safe@footnote{@url{https://www.atnf.csiro.au/people/mcalabre/WCS/wcslib/threads.html}}.
Gnuastro has high-level functions to easily spin-off threads and speed up your programs.
For a fully working example see @ref{Library demo - multi-threaded operation}.
However, you still need to be cautious in the scenarios below.
@itemize
@item
Many users or operating systems may still use an older version.
@item
The @code{wcsprm} structure of WCSLIB is not thread-safe: you can't use the same pointer on multiple threads.
For example, if you use @code{gal_wcs_img_to_world} simultaneously on multiple threads, you shouldn't pass the same @code{wcsprm} structure pointer.
You can use @code{gal_wcs_copy} to keep and use separate copies of the main structure within each thread, and later free the copies with @code{gal_wcs_free}; see the sketch after this list.
@end itemize
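For example, a per-thread worker function could follow this minimal sketch (where @code{gwcs} is the shared @code{wcsprm} pointer and @code{coords} is the thread's own coordinate list; both hypothetical):
@example
struct wcsprm *twcs=gal_wcs_copy(gwcs); /* Private copy for thread. */
gal_wcs_img_to_world(coords, twcs, 1);  /* In-place conversion.     */
gal_wcs_free(twcs);                     /* Free the private copy.   */
@end example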
The full set of functions and global constants that are defined by Gnuastro's @file{gnuastro/wcs.h} are described below.
@deffn {Global integer} GAL_WCS_DISTORTION_TPD
@deffnx {Global integer} GAL_WCS_DISTORTION_SIP
@deffnx {Global integer} GAL_WCS_DISTORTION_TPV
@deffnx {Global integer} GAL_WCS_DISTORTION_DSS
@deffnx {Global integer} GAL_WCS_DISTORTION_WAT
@deffnx {Global integer} GAL_WCS_DISTORTION_INVALID
@cindex WCS distortion
@cindex Distortion, WCS
@cindex TPD WCS distortion
@cindex SIP WCS distortion
@cindex TPV WCS distortion
@cindex DSS WCS distortion
@cindex WAT WCS distortion
@cindex Prior WCS distortion
@cindex sequent WCS distortion
Gnuastro identifiers of the various WCS distortion conventions, for more, see Calabretta et al. (2004, preprint)@footnote{@url{https://www.atnf.csiro.au/people/mcalabre/WCS/dcs_20040422.pdf}}.
Among these, SIP is a prior distortion; the rest are sequent distortions.
TPD is a superset of all of these, hence it has both prior and sequent distortion coefficients.
More information is given in the documentation of @code{dis.h}, from the WCSLIB manual@footnote{@url{https://www.atnf.csiro.au/people/mcalabre/WCS/wcslib/dis_8h.html}}.
@end deffn
@deffn {Global integer} GAL_WCS_COORDSYS_EQB1950
@deffnx {Global integer} GAL_WCS_COORDSYS_EQJ2000
@deffnx {Global integer} GAL_WCS_COORDSYS_ECB1950
@deffnx {Global integer} GAL_WCS_COORDSYS_ECJ2000
@deffnx {Global integer} GAL_WCS_COORDSYS_GALACTIC
@deffnx {Global integer} GAL_WCS_COORDSYS_SUPERGALACTIC
@deffnx {Global integer} GAL_WCS_COORDSYS_INVALID
@cindex Galactic coordinate system
@cindex Ecliptic coordinate system
@cindex Equatorial coordinate system
@cindex Supergalactic coordinate system
@cindex Coordinate system: Galactic
@cindex Coordinate system: Ecliptic
@cindex Coordinate system: Equatorial
@cindex Coordinate system: Supergalactic
Recognized WCS coordinate systems in Gnuastro.
@code{EQ} and @code{EC} stand for the EQuatorial and ECliptic coordinate systems.
In the equatorial and ecliptic coordinates, @code{B1950} stands for the Besselian 1950 epoch and @code{J2000} stands for the Julian 2000 epoch.
@end deffn
@deffn {Global integer} GAL_WCS_LINEAR_MATRIX_PC
@deffnx {Global integer} GAL_WCS_LINEAR_MATRIX_CD
@deffnx {Global integer} GAL_WCS_LINEAR_MATRIX_INVALID
Identifiers of the linear transformation matrix: either in the @code{PCi_j} or the @code{CDi_j} formalism.
For more, see the description of @option{--wcslinearmatrix} in @ref{Input output options}.
@end deffn
@deffn {Global integer} GAL_WCS_PROJECTION_AZP
@deffnx {Global integer} GAL_WCS_PROJECTION_SZP
@deffnx {Global integer} GAL_WCS_PROJECTION_TAN
@deffnx {Global integer} GAL_WCS_PROJECTION_STG
@deffnx {Global integer} GAL_WCS_PROJECTION_SIN
@deffnx {Global integer} GAL_WCS_PROJECTION_ARC
@deffnx {Global integer} GAL_WCS_PROJECTION_ZPN
@deffnx {Global integer} GAL_WCS_PROJECTION_ZEA
@deffnx {Global integer} GAL_WCS_PROJECTION_AIR
@deffnx {Global integer} GAL_WCS_PROJECTION_CYP
@deffnx {Global integer} GAL_WCS_PROJECTION_CEA
@deffnx {Global integer} GAL_WCS_PROJECTION_CAR
@deffnx {Global integer} GAL_WCS_PROJECTION_MER
@deffnx {Global integer} GAL_WCS_PROJECTION_SFL
@deffnx {Global integer} GAL_WCS_PROJECTION_PAR
@deffnx {Global integer} GAL_WCS_PROJECTION_MOL
@deffnx {Global integer} GAL_WCS_PROJECTION_AIT
@deffnx {Global integer} GAL_WCS_PROJECTION_COP
@deffnx {Global integer} GAL_WCS_PROJECTION_COE
@deffnx {Global integer} GAL_WCS_PROJECTION_COD
@deffnx {Global integer} GAL_WCS_PROJECTION_COO
@deffnx {Global integer} GAL_WCS_PROJECTION_BON
@deffnx {Global integer} GAL_WCS_PROJECTION_PCO
@deffnx {Global integer} GAL_WCS_PROJECTION_TSC
@deffnx {Global integer} GAL_WCS_PROJECTION_CSC
@deffnx {Global integer} GAL_WCS_PROJECTION_QSC
@deffnx {Global integer} GAL_WCS_PROJECTION_HPX
@deffnx {Global integer} GAL_WCS_PROJECTION_XPH
The various types of recognized FITS WCS projections; for more details see @ref{Align pixels with WCS considering distortions}.
@end deffn
@deffn Macro GAL_WCS_FLTERROR
Limit of rounding for floating point errors.
@end deffn
@deftypefun int gal_wcs_distortion_name_to_id (char @code{*name})
Convert the given string (assumed to be a FITS-standard, string-based distortion identifier) to Gnuastro's integer-based distortion identifier (one of the @code{GAL_WCS_DISTORTION_*} macros defined above).
The string-based distortion identifiers have three characters and are all in capital letters.
@end deftypefun
@deftypefun {char *} gal_wcs_distortion_name_from_id (int @code{id})
Convert the given Gnuastro integer-based distortion identifier (one of the @code{GAL_WCS_DISTORTION_*} macros defined above) to the string-based distortion identifier of the FITS standard.
The string-based distortion identifiers have three characters and are all in capital letters.
@end deftypefun
@deftypefun int gal_wcs_coordsys_name_to_id (char @code{*name})
Convert the given string to Gnuastro's integer-based WCS coordinate system identifier (one of the @code{GAL_WCS_COORDSYS_*}, listed above).
The expected strings can be seen in the description of the @option{--wcscoordsys} option of the Fits program, see @ref{Keyword inspection and manipulation}.
@end deftypefun
@deftypefun {char *} gal_wcs_projection_name_from_id (int @code{id})
Convert the given Gnuastro integer-based projection identifier (one of the @code{GAL_WCS_PROJECTION_*} macros defined above) to the string-based projection identifier of the FITS standard.
The string-based projection identifiers have three characters and are all in capital letters.
For a description of the various projections, see @ref{Align pixels with WCS considering distortions}.
@end deftypefun
@deftypefun int gal_wcs_projection_name_to_id (char @code{*name})
Convert the given string (assumed to be a FITS-standard, string-based projection identifier) to Gnuastro's integer-based projection identifier (one of the @code{GAL_WCS_PROJECTION_*} macros defined above).
The string-based projection identifiers have three characters and are all in capital letters.
For a description of the various projections, see @ref{Align pixels with WCS considering distortions}.
@end deftypefun
@deftypefun {struct wcsprm *} gal_wcs_create (double @code{*crpix}, double @code{*crval}, double @code{*cdelt}, double @code{*pc}, char @code{**cunit}, char @code{**ctype}, size_t @code{ndim}, int @code{linearmatrix})
Given all the most common standard components of the WCS standard, construct a @code{struct wcsprm}, initialize and set it for future processing.
See the FITS WCS standard for more on these keywords.
All the arrays must have @code{ndim} elements, except for @code{pc} which should have @code{ndim*ndim} elements (a square matrix).
Also, @code{cunit} and @code{ctype} are arrays of strings.
If @code{GAL_WCS_LINEAR_MATRIX_CD} is passed to @code{linearmatrix}, then the output WCS structure will have a CD matrix (even though you gave a PC matrix and CDELT values as input to this function).
Otherwise, the output will have a PC and CDELT matrix (which is the recommended format by WCSLIB).
@example
@verbatim
#include <stdio.h>
#include <stdlib.h>

#include <gnuastro/wcs.h>

int
main(void)
{
  size_t ndim=2;
  struct wcsprm *wcs;
  double crpix[]={50, 50};
  double pc[]={-1, 0, 0, 1};
  double cdelt[]={0.4, 0.4};
  double crval[]={178.23, 36.98};
  char *cunit[]={"deg", "deg"};
  char *ctype[]={"RA---TAN", "DEC--TAN"};
  int linearmatrix = GAL_WCS_LINEAR_MATRIX_PC;

  /* Allocate and fill the 'wcsprm' structure. */
  wcs=gal_wcs_create(crpix, crval, cdelt, pc, cunit,
                     ctype, ndim, linearmatrix);
  printf("WCS structure created.\n");

  /* ... Add any operation with the WCS structure here ... */

  /* Free the WCS structure. */
  gal_wcs_free(wcs);
  printf("WCS structure freed.\n");

  /* Return successfully. */
  return EXIT_SUCCESS;
}
@end verbatim
@end example
@end deftypefun
@deftypefun {struct wcsprm *} gal_wcs_read_fitsptr (fitsfile @code{*fptr}, int @code{linearmatrix}, size_t @code{hstartwcs}, size_t @code{hendwcs}, int @code{*nwcs})
Return the WCSLIB @code{wcsprm} structure that is read from the CFITSIO @code{fptr} pointer to an opened FITS file.
With older WCSLIB versions (in particular below version 5.18) this function may not be thread-safe.
Also put the number of coordinate representations found into the space that @code{nwcs} points to.
To read the WCS structure directly from a filename, see @code{gal_wcs_read} below.
After processing has finished, you should free the WCS structure that this function returns with @code{gal_wcs_free}.
The @code{linearmatrix} argument takes one of three values: @code{0}, @code{GAL_WCS_LINEAR_MATRIX_PC} and @code{GAL_WCS_LINEAR_MATRIX_CD}.
It will determine the format of the WCS when it is later written to file with @code{gal_wcs_write} or @code{gal_wcs_write_in_fitsptr} (which is called by @code{gal_fits_img_write}).
So if you do not want to write the WCS into a file later, just give it a value of @code{0}.
For more on the difference between these modes, see the description of @option{--wcslinearmatrix} in @ref{Input output options}.
If you do not want to search the full FITS header for WCS-related FITS keywords (for example, due to conflicting keywords), but only a specific range of the header keywords you can use the @code{hstartwcs} and @code{hendwcs} arguments to specify the keyword number range (counting from zero).
If @code{hendwcs} is larger than @code{hstartwcs}, then only keywords in the given range will be checked.
Hence, to ignore this feature (and search the full FITS header), give both these arguments the same value.
If the WCS information could not be read from the FITS file, this function will return a @code{NULL} pointer and put a zero in @code{nwcs}.
A WCSLIB error message will also be printed in @code{stderr} if there was an error.
This function is just a wrapper over WCSLIB's @code{wcspih} function which is not thread-safe.
Therefore, be sure to not call this function simultaneously (over multiple threads).
@end deftypefun
@deftypefun {struct wcsprm *} gal_wcs_read (char @code{*filename}, char @code{*hdu}, int @code{linearmatrix}, size_t @code{hstartwcs}, size_t @code{hendwcs}, int @code{*nwcs}, char @code{*hdu_option_name})
[@strong{Not thread-safe}] Return the WCSLIB structure that is read from the HDU/extension @code{hdu} of the file @code{filename}.
Also put the number of coordinate representations found into the space that @code{nwcs} points to.
Please see @code{gal_wcs_read_fitsptr} for more.
For more on @code{hdu_option_name} see the description of @code{gal_array_read} in @ref{Array input output}.
After processing has finished, you should free the WCS structure that this function returns with @code{gal_wcs_free}.
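For example, the sketch below reads the WCS of a hypothetical @file{image.fits} (first extension), with no constraint on the keyword range:
@example
int nwcs;
struct wcsprm *wcs=gal_wcs_read("image.fits", "1", 0, 0, 0,
                                &nwcs, NULL);
if(wcs==NULL) exit(EXIT_FAILURE);    /* Reading failed. */

/* ... use the WCS ... */

gal_wcs_free(wcs);
@end example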
@end deftypefun
@deftypefun void gal_wcs_free (struct wcsprm @code{*wcs})
Free the contents @emph{and} the space that @code{wcs} points to.
WCSLIB's @code{wcsfree} function only frees the contents of the @code{wcsprm} structure, not the actual pointer.
However, Gnuastro's @code{wcsprm} creation and reading functions also allocate the structure itself.
This higher-level function therefore simplifies the processing.
A complete working example is given in the description of @code{gal_wcs_create}.
@end deftypefun
@deftypefun {char *} gal_wcs_dimension_name (struct wcsprm @code{*wcs}, size_t @code{dimension})
Return an allocated string (that should be freed later) containing the first part of the @code{CTYPEi} FITS keyword (which contains the dimension name in the FITS standard).
For example, if @code{CTYPE1} is @code{RA---TAN}, the string that this function returns will be @code{RA}.
Recall that the second component of @code{CTYPEi} contains the type of projection.
@end deftypefun
@deftypefun {char *} gal_wcs_write_wcsstr (struct wcsprm @code{*wcs}, int @code{*nkeyrec})
Return an allocated string containing the FITS keywords that correspond to the given WCS structure.
The number of keywords is written in the space pointed by @code{nkeyrec}.
Each FITS keyword is 80 characters wide (according to the FITS standard), and the next one is placed immediately after it, so the full string has @code{80*nkeyrec} bytes.
The output of this function can later be written into an opened FITS file using @code{gal_fits_key_write_wcsstr} (see @ref{FITS header keywords}).
@end deftypefun
@deftypefun void gal_wcs_write (struct wcsprm @code{*wcs}, char @code{*filename}, char @code{*extname}, gal_fits_list_key_t @code{*keylist}, int @code{freekeys})
Write the given WCS structure into the second extension of an empty FITS file called @code{filename}.
The first/primary extension will be empty like the default format of all Gnuastro outputs.
When @code{extname!=NULL} it will be used as the FITS extension name.
Any set of extra headers can also be written through the @code{keylist} list.
If @code{freekeys!=0} then the list of keywords will be freed after they are written.
@end deftypefun
@deftypefun void gal_wcs_write_in_fitsptr (fitsfile @code{*fptr}, struct wcsprm @code{*wcs})
Convert the input @code{wcs} structure (keeping the WCS programmatically) into FITS keywords and write them into the given FITS file pointer.
This is a relatively low-level function which assumes the FITS file has already been opened with CFITSIO.
If you just want to write the WCS into an empty file, you can use @code{gal_wcs_write} (which internally calls this function after creating the FITS file and later closes it safely).
@end deftypefun
@deftypefun {struct wcsprm *} gal_wcs_copy (struct wcsprm @code{*wcs})
Return a fully allocated (independent) copy of @code{wcs}.
@end deftypefun
@deftypefun {struct wcsprm *} gal_wcs_copy_new_crval (struct wcsprm @code{*wcs}, double @code{*crval})
Return a fully allocated (independent) copy of @code{wcs} with a new set of @code{CRVAL} values.
WCSLIB keeps a lot of extra information within @code{wcsprm}, and that extra information is used (for optimization) in its calculations.
Therefore, if you want to change parameters like the reference point's sky coordinate values (@code{CRVAL}), simply changing the values in @code{wcs->crval[0]} or @code{wcs->crval[1]} will not affect WCSLIB's calculations; you need to call this function.
@end deftypefun
@deftypefun void gal_wcs_remove_dimension (struct wcsprm @code{*wcs}, size_t @code{fitsdim})
Remove the given FITS dimension from the given @code{wcs} structure.
@end deftypefun
@deftypefun void gal_wcs_on_tile (gal_data_t @code{*tile})
Create a WCSLIB @code{wcsprm} structure for @code{tile} using WCS parameters of the tile's allocated block dataset, see @ref{Tessellation library} for the definition of tiles.
If @code{tile} already has a WCS structure, this function will not do anything.
In many cases, tiles are created for internal/low-level processing.
Hence for performance reasons, when creating the tiles they do not have any WCS structure.
When needed, this function can be used to add a WCS structure to each tile by copying the WCS structure of its block and correcting the reference point's coordinates within the tile.
@end deftypefun
@deftypefun {double *} gal_wcs_warp_matrix (struct wcsprm @code{*wcs})
Return the warping matrix of the given WCS structure as an array of double precision floating points.
This will be the final matrix, irrespective of the type of storage in the WCS structure.
Recall that the FITS standard has several methods to store the matrix.
The output is an allocated square matrix with each side equal to the number of dimensions.
@end deftypefun
@deftypefun void gal_wcs_clean_small_errors (struct wcsprm @code{*wcs})
Errors can make small differences between the pixel-scale elements (@code{CDELT}) and can also lead to extremely small values in the @code{PC} matrix.
With this function, such errors will be ``cleaned'' as follows: 1) If the maximum difference between the @code{CDELT} elements is smaller than the reference error, they will all be set to their mean value.
When the optional FITS keyword @code{CRDER} is defined, it will be used as the reference; if not, the default value is @code{GAL_WCS_FLTERROR}.
2) If any of the @code{PC} elements differ from 0, 1 or -1 by less than @code{GAL_WCS_FLTERROR}, they will be rounded to the respective value.
@end deftypefun
@deftypefun void gal_wcs_decompose_pc_cdelt (struct wcsprm @code{*wcs})
Decompose the @code{PCi_j} and @code{CDELTi} elements of @code{wcs}.
According to the FITS standard, in the @code{PCi_j} WCS formalism, the rotation matrix elements @mymath{m_{ij}} are encoded in the @code{PCi_j} keywords and the scale factors are encoded in the @code{CDELTi} keywords.
There is also another formalism (the @code{CDi_j} formalism) which merges the two into one matrix.
However, WCSLIB's internal operations are apparently done in the @code{PCi_j} formalism.
So its outputs are also all in that format by default.
When the input is a @code{CDi_j}, WCSLIB will still read the matrix directly into the @code{PCi_j} matrix and the @code{CDELTi} values are set to @code{1} (one).
This function is designed to correct such issues: after it is finished, the @code{CDELTi} values in @code{wcs} will correspond to the pixel scale, and the @code{PCi_j} will correctly show the rotation.
@end deftypefun
@deftypefun void gal_wcs_to_cd (struct wcsprm @code{*wcs})
Make sure that the WCS structure's @code{PCi_j} and @code{CDi_j} keywords have the same value and that the @code{CDELTi} keywords have a value of 1.0.
Also, set the @code{wcs->altlin=2} (for the @code{CDi_j} formalism).
With these changes @code{gal_wcs_write_in_fitsptr} (and thus @code{gal_wcs_write} and @code{gal_fits_img_write} and its derivatives) will have an output file in the format of @code{CDi_j}.
@end deftypefun
@deftypefun int gal_wcs_coordsys_identify (struct wcsprm @code{*wcs})
Read the given WCS structure and return its coordinate system as one of Gnuastro's WCS coordinate system identifiers (the macros @code{GAL_WCS_COORDSYS_*}, listed above).
@end deftypefun
@deftypefun {struct wcsprm *} gal_wcs_coordsys_convert (struct wcsprm @code{*inwcs}, int @code{coordsysid})
Return a newly allocated WCS structure with the @code{coordsysid} coordinate system identifier.
The Gnuastro WCS coordinate system identifiers are defined in the @code{GAL_WCS_COORDSYS_*} macros mentioned above.
Since the returned structure is newly allocated, if you do not need the input structure after this, use the WCSLIB library function @code{wcsfree} to free it, for example, @code{wcsfree(inwcs)}.
@end deftypefun
@deftypefun void gal_wcs_coordsys_convert_points (int @code{sys1}, double @code{*lng1_d}, double @code{*lat1_d}, int @code{sys2}, double @code{*lng2_d}, double @code{*lat2_d}, size_t @code{number})
Convert the input set of longitudes (@code{lng1_d}, in degrees) and latitudes (@code{lat1_d}, in degrees) within a recognized coordinate system (@code{sys1}; one of the @code{GAL_WCS_COORDSYS_*} macros above) into an output coordinate system (@code{sys2}).
The output values are written in @code{lng2_d} and @code{lat2_d}.
The total number of points should be given in @code{number}.
If you want the operation to be done in place (without allocating a new dataset), give the same pointers to the coordinate arguments.
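For example, the sketch below converts one (hypothetical) equatorial J2000 point to Galactic coordinates in place, by giving the same pointers for the input and output:
@example
double lng=45.0, lat=30.0;    /* Equatorial (J2000), in degrees. */
gal_wcs_coordsys_convert_points(GAL_WCS_COORDSYS_EQJ2000,
                                &lng, &lat,
                                GAL_WCS_COORDSYS_GALACTIC,
                                &lng, &lat, 1);
@end example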
@end deftypefun
@deftypefun void gal_wcs_coordsys_sys1_ref_in_sys2 (int @code{sys1}, int @code{sys2}, double @code{*lng2}, double @code{*lat2})
Return the longitude and latitude of the reference point (on the equator) of the first coordinate system (@code{sys1}) within the second system (@code{sys2}).
Coordinate systems are identified by the @code{GAL_WCS_COORDSYS_*} macros above.
@end deftypefun
@cindex WCS distortion
@cindex Distortion, WCS
@deftypefun {int} gal_wcs_distortion_identify (struct wcsprm @code{*wcs})
Return the Gnuastro identifier for the distortion of the input WCS structure.
The returned value is one of the @code{GAL_WCS_DISTORTION_*} macros defined above.
When the input pointer is @code{NULL}, or the structure does not contain a distortion, the returned value will be @code{GAL_WCS_DISTORTION_INVALID}.
@end deftypefun
@cindex SIP WCS distortion
@cindex TPV WCS distortion
@deftypefun {struct wcsprm *} gal_wcs_distortion_convert(struct wcsprm @code{*inwcs}, int @code{outdisptype}, size_t @code{*fitsize})
Return a newly allocated WCS structure, where the distortion is implemented in a different standard, identified by the identifier @code{outdisptype}.
The Gnuastro WCS distortion identifiers are defined in the @code{GAL_WCS_DISTORTION_*} macros mentioned above.
The available conversions in this function will grow.
Currently it only supports converting TPV to SIP and vice versa, following the recipe of Shupe et al. (2012)@footnote{Proc. of SPIE Vol. 8451 84511M-1. @url{https://doi.org/10.1117/12.925460}, also available at @url{http://web.ipac.caltech.edu/staff/shupe/reprints/SIP_to_PV_SPIE2012.pdf}.}.
Please get in touch with us if you need other types of conversions.
For some conversions, direct analytical conversions do not exist.
It is thus necessary to model and fit the two types.
In such cases, it is also necessary to specify the @code{fitsize} array that is the size of the array along each C-ordered dimension, so you can simply pass the @code{dsize} element of your @code{gal_data_t} dataset, see @ref{Generic data container}.
Currently this is only necessary when converting TPV to SIP.
For other conversions you may simply pass a @code{NULL} pointer.
For example, if you want to convert the TPV coefficients of your input @file{image.fits} to SIP coefficients, you can use the following functions (this conversion is also available as a command-line operation in @ref{Fits}).
@example
int nwcs;
struct wcsprm *inwcs;
gal_data_t *data=gal_fits_img_read("image.fits", "1", -1, 1, NULL);
inwcs=gal_wcs_read("image.fits", "1", 0, 0, 0, &nwcs, NULL);
data->wcs=gal_wcs_distortion_convert(inwcs, GAL_WCS_DISTORTION_SIP,
                                     data->dsize);
wcsfree(inwcs);
gal_fits_img_write(data, "sip.fits", NULL, 0);
@end example
@end deftypefun
@deftypefun double gal_wcs_angular_distance_deg (double @code{r1}, double @code{d1}, double @code{r2}, double @code{d2})
Return the angular distance (in degrees) between a point located at (@code{r1}, @code{d1}) and a point located at (@code{r2}, @code{d2}).
All input coordinates are in degrees.
The distance (along a great circle) on a sphere between two points is calculated with the equation below.
@dispmath {\cos(d)=\sin(d_1)\sin(d_2)+\cos(d_1)\cos(d_2)\cos(r_1-r_2)}
However, since the pixel scales are usually very small numbers, this function will not use that direct formula.
It will instead use the @url{https://en.wikipedia.org/wiki/Haversine_formula, Haversine formula}, which behaves better under floating point errors:
@dispmath{{\sin^2(d)\over 2}=\sin^2\left( {d_1-d_2\over 2} \right)+\cos(d_1)\cos(d_2)\sin^2\left( {r_1-r_2\over 2} \right)}
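For example, the call below returns the great-circle distance between two nearby (hypothetical) points:
@example
double d=gal_wcs_angular_distance_deg(10.0, -5.0, 10.2, -5.1);
printf("Angular distance: %g degrees\n", d);
@end example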
@end deftypefun
@deftypefun void gal_wcs_box_vertices_from_center (double @code{ra_center}, double @code{dec_center}, double @code{ra_delta}, double @code{dec_delta}, double @code{*out})
Calculate the vertices of a rectangular box given the central RA and Dec and delta of each.
The vertex coordinates are written in the space that @code{out} points to (assuming it has space for eight @code{double}s).
Given the spherical nature of the coordinate system, the vertex coordinates cannot always be calculated with a simple addition/subtraction.
For the declination, a simple addition/subtraction is enough.
Also, on the equator (where the RA is defined), a simple addition/subtraction along the RA is fine.
However, at other declinations, the new RA after a shift needs special treatment, such that close to the poles, a shift of 1 degree can correspond to a new RA that is much more distant than the original RA.
Assuming a point at Right Ascension (RA) and Declination of @mymath{\alpha} and @mymath{\delta}, a shift of @mymath{R} degrees along the positive RA direction corresponds to a right ascension of @mymath{\alpha+\frac{R}{\cos(\delta)}}.
For more, see the description of @code{box-vertices-on-sphere} in @ref{Coordinate and border operators}.
The 8 coordinates of the 4 vertices of the box are written in the order below, where ``bottom'' corresponds to a lower declination, ``top'' to a higher declination, ``left'' to a larger RA and ``right'' to a lower RA.
@example
out[0]: bottom-left RA
out[1]: bottom-left Dec
out[2]: bottom-right RA
out[3]: bottom-right Dec
out[4]: top-right RA
out[5]: top-right Dec
out[6]: top-left RA
out[7]: top-left Dec
@end example
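For example, the call below (with hypothetical values) computes the vertices of a 1 by 1 degree box centered on an (RA, Dec) of (180, 60) degrees:
@example
double out[8];
gal_wcs_box_vertices_from_center(180.0, 60.0, 1.0, 1.0, out);
printf("Bottom-left vertex: %g, %g\n", out[0], out[1]);
@end example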
@end deftypefun
@deftypefun {double *} gal_wcs_pixel_scale (struct wcsprm @code{*wcs})
Return the pixel scale for each dimension of @code{wcs} in degrees.
The output is an allocated array of double precision floating point type with one element for each dimension.
If it is not successful, this function will return @code{NULL}.
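For example, on a 2D image with the hypothetical WCS pointer @code{wcs}:
@example
double *pixscale=gal_wcs_pixel_scale(wcs);
if(pixscale)
  printf("Pixel scale: %g x %g deg/pixel\n",
         pixscale[0], pixscale[1]);
free(pixscale);               /* 'free(NULL)' is also safe. */
@end example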
@end deftypefun
@deftypefun double gal_wcs_pixel_area_arcsec2 (struct wcsprm @code{*wcs})
Return the pixel area of @code{wcs} in arc-second squared.
This only works when the input dataset has at least two dimensions and the units of the first two dimensions (@code{CUNIT} keywords) are @code{deg} (for degrees).
In other cases, this function will return a NaN.
@end deftypefun
@deftypefun int gal_wcs_coverage (char @code{*filename}, char @code{*hdu}, size_t @code{*ondim}, double @code{**ocenter}, double @code{**owidth}, double @code{**omin}, double @code{**omax}, char @code{*hdu_option_name})
Find the sky coverage of the image HDU (@code{hdu}) within @file{filename}.
The number of dimensions is written into @code{ondim}, and space for the various output arrays is internally allocated and filled with the respective values.
Therefore you need to free them afterwards.
For more on @code{hdu_option_name} see the description of @code{gal_array_read} in @ref{Array input output}.
Currently this function only supports images that are less than 180 degrees in width (which is usually the case!).
This requirement has been necessary to account for images that cross the RA=0 hour circle on the sky.
Please get in touch with us at @url{mailto:bug-gnuastro@@gnu.org} if you have an image that is larger than 180 degrees so we try to find a solution based on need.
@end deftypefun
@deftypefun {gal_data_t *} gal_wcs_world_to_img (gal_data_t @code{*coords}, struct wcsprm @code{*wcs}, int @code{inplace})
Convert the linked list of world coordinates in @code{coords} to a linked list of image coordinates given the input WCS structure.
@code{coords} must be a linked list of data structures of float64 (`double') type, see @ref{Linked lists} and @ref{List of gal_data_t}.
The top (first popped/read) node of the linked list must be the first WCS coordinate (usually RA in an image), and so on.
Similarly, the top node of the output will be the first image coordinate (in the FITS standard).
In case WCSLIB fails to convert any of the coordinates (for example, the RA of one coordinate is given as 400!), the respective element in the output will be written as NaN.
If @code{inplace} is zero, then the output will be a newly allocated list and the input list will be untouched.
However, if @code{inplace} is non-zero, the output values will be written into the input's already allocated array and the returned pointer will be the same pointer to @code{coords} (in other words, you can ignore the returned value).
Note that in the latter case, only the values will be changed, things like units or name (if present) will be untouched.
@end deftypefun
@deftypefun {gal_data_t *} gal_wcs_img_to_world (gal_data_t @code{*coords}, struct wcsprm @code{*wcs}, int @code{inplace})
Convert the linked list of image coordinates in @code{coords} to a linked list of world coordinates given the input WCS structure.
See the description of @code{gal_wcs_world_to_img} for more details.
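For example, the minimal sketch below converts the single pixel (50, 50) of an image with the hypothetical WCS pointer @code{wcs} to world coordinates (using Gnuastro's @code{gal_data_alloc} dataset allocation function to build the two single-element columns):
@example
size_t one=1;
gal_data_t *x, *y, *world;

/* Single-element 'float64' columns for the FITS X and Y axes. */
x=gal_data_alloc(NULL, GAL_TYPE_FLOAT64, 1, &one, NULL, 0, -1, 1,
                 NULL, NULL, NULL);
y=gal_data_alloc(NULL, GAL_TYPE_FLOAT64, 1, &one, NULL, 0, -1, 1,
                 NULL, NULL, NULL);
((double *)(x->array))[0]=50.0;
((double *)(y->array))[0]=50.0;
x->next=y;                /* Top node: first FITS coordinate. */

world=gal_wcs_img_to_world(x, wcs, 0);  /* Newly allocated list. */
@end example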
@end deftypefun
@node Arithmetic on datasets, Tessellation library, World Coordinate System, Gnuastro library
@subsection Arithmetic on datasets (@file{arithmetic.h})
When the dataset's type and other information are already known, any programming language (including C) provides some very good tools for various operations (including arithmetic operations like addition) on the dataset with a simple loop.
However, as an author of a program, making assumptions about the type of data, its dimensions and other basic characteristics will come with a large processing burden.
For example, if you always read your data as double precision floating points for a simple operation like addition with an integer constant, you will be wasting a lot of CPU and memory when the input dataset is, for example, of @code{int32} type (see @ref{Numeric data types}).
This overhead may be small for small images, but as you scale your process up and work with hundred/thousands of files that can be very large, this overhead will take a significant portion of the processing power.
The functions and macros in this section are designed precisely for this purpose: to allow you to do any of the defined operations on any dataset with no overhead (in the native type of the dataset).
Gnuastro's Arithmetic program uses the functions and macros of this section, so please also have a look at the @ref{Arithmetic} program and in particular @ref{Arithmetic operators} for a better description of the operators discussed here.
The main function of this library is @code{gal_arithmetic} that is described below.
It can take an arbitrary number of arguments as operands (depending on the operator, similar to @code{printf}).
Its first two arguments are integers specifying the flags and operator.
So first we will review the constants for the recognized flags and operators and discuss them, then introduce the actual function.
@deffn Macro GAL_ARITHMETIC_FLAG_INPLACE
@deffnx Macro GAL_ARITHMETIC_FLAG_FREE
@deffnx Macro GAL_ARITHMETIC_FLAG_NUMOK
@deffnx Macro GAL_ARITHMETIC_FLAG_ENVSEED
@deffnx Macro GAL_ARITHMETIC_FLAG_QUIET
@deffnx Macro GAL_ARITHMETIC_FLAGS_BASIC
@cindex Bitwise Or
Bitwise flags to pass onto @code{gal_arithmetic} (see below).
To pass multiple flags, use the bitwise OR operator.
For example, if you pass @code{GAL_ARITHMETIC_FLAG_INPLACE | GAL_ARITHMETIC_FLAG_NUMOK}, then the operation will be done in-place (without allocating a new array), and a single number will also be acceptable (that will be applied to all the pixels).
Each flag is described below:
@table @code
@item GAL_ARITHMETIC_FLAG_INPLACE
Do the operation in-place (in the input dataset, thus modifying it) to improve CPU and memory usage.
If this flag is used, after @code{gal_arithmetic} finishes, the input dataset will be modified.
It is thus useful if you have no more need for the input after the operation.
@item GAL_ARITHMETIC_FLAG_FREE
Free (all the) input dataset(s) after the operation is done.
Hence the inputs are no longer usable after @code{gal_arithmetic}.
@item GAL_ARITHMETIC_FLAG_NUMOK
It is acceptable to use a number and an array together.
For example, if you want to add a single number to all the pixels in an image, you can pass this flag to avoid having to allocate a constant array the size of the image (with all the pixels having the same value).
@item GAL_ARITHMETIC_FLAG_ENVSEED
Use the pre-defined environment variable for setting the random number generator seed when an operator needs it (for example, @code{mknoise-sigma}).
For more on random number generation in Gnuastro see @ref{Generating random numbers}.
@item GAL_ARITHMETIC_FLAG_QUIET
Do not print any warnings or messages for operators that may benefit from it.
For example, by default the @code{mknoise-sigma} operator prints the random number generator function and seed that it used (in case the user wants to reproduce this result later).
By activating this bit flag to the call, that extra information is not printed on the command-line.
@item GAL_ARITHMETIC_FLAGS_BASIC
A wrapper for activating the three ``basic'' operations that are commonly necessary together: @code{GAL_ARITHMETIC_FLAG_INPLACE}, @code{GAL_ARITHMETIC_FLAG_FREE} and @code{GAL_ARITHMETIC_FLAG_NUMOK}.
@end table
@end deffn
@deffn Macro GAL_ARITHMETIC_OP_PLUS
@deffnx Macro GAL_ARITHMETIC_OP_MINUS
@deffnx Macro GAL_ARITHMETIC_OP_MULTIPLY
@deffnx Macro GAL_ARITHMETIC_OP_DIVIDE
@deffnx Macro GAL_ARITHMETIC_OP_LT
@deffnx Macro GAL_ARITHMETIC_OP_LE
@deffnx Macro GAL_ARITHMETIC_OP_GT
@deffnx Macro GAL_ARITHMETIC_OP_GE
@deffnx Macro GAL_ARITHMETIC_OP_EQ
@deffnx Macro GAL_ARITHMETIC_OP_NE
@deffnx Macro GAL_ARITHMETIC_OP_AND
@deffnx Macro GAL_ARITHMETIC_OP_OR
Binary operators (requiring two operands) that accept datasets of any recognized type (see @ref{Numeric data types}).
When @code{gal_arithmetic} is called with any of these operators, it expects two datasets as arguments.
For a full description of these operators with the same name, see @ref{Arithmetic operators}.
The first dataset/operand will be put on the left of the operator and the second will be put on the right.
The output type of the first four is determined from the input types (largest type of the inputs).
The rest (which are all conditional operators) will output a binary @code{uint8_t} (or @code{unsigned char}) dataset with values of either @code{0} (zero) or @code{1} (one).
@end deffn
@deffn Macro GAL_ARITHMETIC_OP_NOT
The logical NOT operator.
When @code{gal_arithmetic} is called with this operator, it only expects one operand (dataset), since this is a unary operator.
The output is @code{uint8_t} (or @code{unsigned char}) dataset of the same size as the input.
Any non-zero element in the input will be @code{0} (zero) in the output and any @code{0} (zero) will have a value of @code{1} (one).
@end deffn
@deffn Macro GAL_ARITHMETIC_OP_ISBLANK
A unary operator with output that is @code{1} for any element in the input that is blank, and @code{0} for any non-blank element.
When @code{gal_arithmetic} is called with this operator, it will only expect one input dataset.
The output dataset will have @code{uint8_t} (or @code{unsigned char}) type.
@code{gal_arithmetic} with this operator is just a wrapper for the @code{gal_blank_flag} function of @ref{Library blank values}; the operator is only included for completeness in arithmetic operations.
So in your program, it might be easier to call @code{gal_blank_flag} directly.
@end deffn
@deffn Macro GAL_ARITHMETIC_OP_WHERE
The three-operand @emph{where} operator thoroughly discussed in @ref{Arithmetic operators}.
When @code{gal_arithmetic} is called with this operator, it expects exactly three input datasets: the first (which is the same as the returned dataset) is the array that will be modified.
The second is the condition dataset (that must have a @code{uint8_t} or @code{unsigned char} type), and the third is the value to be used if condition is non-zero.
As a result, note that the order of operands when calling @code{gal_arithmetic} with @code{GAL_ARITHMETIC_OP_WHERE} is the opposite of running Gnuastro's Arithmetic program with the @code{where} operator (see @ref{Arithmetic}).
This is because the latter uses the reverse-Polish notation which is not necessary when calling a function (see @ref{Reverse polish notation}).
@end deffn
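For example, a minimal sketch (assuming @code{in}, @code{cond} and @code{val} have already been prepared; @code{cond} has a @code{uint8_t} type and @code{val} is a single-element dataset, hence the @code{GAL_ARITHMETIC_FLAG_NUMOK} flag):

@example
/* `in', `cond' and `val' are assumed to be prepared already.
 * Note the operand order (the opposite of reverse-Polish
 * notation): modified array, condition, value. */
gal_data_t *out;

out=gal_arithmetic(GAL_ARITHMETIC_OP_WHERE, 1,
                   GAL_ARITHMETIC_FLAG_NUMOK,
                   in, cond, val);

/* `out' is the same pointer as the first operand (`in'). */
@end example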
@deffn Macro GAL_ARITHMETIC_OP_SQRT
@deffnx Macro GAL_ARITHMETIC_OP_LOG
@deffnx Macro GAL_ARITHMETIC_OP_LOG10
Unary operators for calculating the square root (@mymath{\sqrt{i}}), natural logarithm (@mymath{\ln(i)}) and base-10 logarithm (@mymath{\log_{10}(i)}) of each element of the input dataset.
The returned dataset will have a floating point type, but its precision is determined from the input: if the input is a 64-bit floating point, the output will also be 64-bit.
Otherwise, the returned dataset will be 32-bit floating point: you do not gain precision by using these operators, but you do gain speed by not using more precision than you need.
See @ref{Numeric data types} for more on the precision of floating point numbers to help in selecting your required floating point precision.
If you want your output to be 64-bit floating point but your input is a different type, you can convert the input to a 64-bit floating point type with @code{gal_data_copy_to_new_type} or @code{gal_data_copy_to_new_type_free} (see @ref{Copying datasets}).
Alternatively, you can use the @code{GAL_ARITHMETIC_OP_TO_FLOAT64} operator in the arithmetic library.
@end deffn
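For example, a short sketch (assuming @code{input} has already been read into memory) of forcing a 64-bit logarithm through an explicit conversion:

@example
int flags=GAL_ARITHMETIC_FLAGS_BASIC;
gal_data_t *f64, *logout;

/* Convert the input to float64, then take its logarithm.
 * With the freeing flag, `input' and the intermediate `f64'
 * are consumed internally by `gal_arithmetic'. */
f64=gal_arithmetic(GAL_ARITHMETIC_OP_TO_FLOAT64, 1, flags, input);
logout=gal_arithmetic(GAL_ARITHMETIC_OP_LOG, 1, flags, f64);
@end example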
@deffn Macro GAL_ARITHMETIC_OP_SIN
@deffnx Macro GAL_ARITHMETIC_OP_COS
@deffnx Macro GAL_ARITHMETIC_OP_TAN
@deffnx Macro GAL_ARITHMETIC_OP_ASIN
@deffnx Macro GAL_ARITHMETIC_OP_ACOS
@deffnx Macro GAL_ARITHMETIC_OP_ATAN
@deffnx Macro GAL_ARITHMETIC_OP_ATAN2
Trigonometric functions (and their inverse).
All the angles, either inputs or outputs, are in units of degrees.
@end deffn
@deffn Macro GAL_ARITHMETIC_OP_SINH
@deffnx Macro GAL_ARITHMETIC_OP_COSH
@deffnx Macro GAL_ARITHMETIC_OP_TANH
@deffnx Macro GAL_ARITHMETIC_OP_ASINH
@deffnx Macro GAL_ARITHMETIC_OP_ACOSH
@deffnx Macro GAL_ARITHMETIC_OP_ATANH
Hyperbolic functions (and their inverse).
@end deffn
@deffn Macro GAL_ARITHMETIC_OP_RA_TO_DEGREE
@deffnx Macro GAL_ARITHMETIC_OP_DEC_TO_DEGREE
@deffnx Macro GAL_ARITHMETIC_OP_DEGREE_TO_RA
@deffnx Macro GAL_ARITHMETIC_OP_DEGREE_TO_DEC
@cindex Sexagesimal
@cindex Declination
@cindex Right Ascension
Unary operators to convert between degrees (as a single floating point number) and the sexagesimal Right Ascension and Declination formats (as strings, respectively in the format of @code{_h_m_s} and @code{_d_m_s}).
The first two operators expect a string operand (in the sexagesimal formats mentioned above, but also the @code{_:_:_} format) and will return a double-precision floating point operand.
The latter two do the opposite.
@end deffn
@deffn Macro GAL_ARITHMETIC_OP_COUNTS_TO_MAG
@deffnx Macro GAL_ARITHMETIC_OP_MAG_TO_COUNTS
@deffnx Macro GAL_ARITHMETIC_OP_MAG_TO_SB
@deffnx Macro GAL_ARITHMETIC_OP_SB_TO_MAG
@deffnx Macro GAL_ARITHMETIC_OP_COUNTS_TO_JY
@deffnx Macro GAL_ARITHMETIC_OP_JY_TO_COUNTS
@deffnx Macro GAL_ARITHMETIC_OP_MAG_TO_JY
@deffnx Macro GAL_ARITHMETIC_OP_JY_TO_MAG
@deffnx Macro GAL_ARITHMETIC_OP_MAG_TO_NANOMAGGY
@deffnx Macro GAL_ARITHMETIC_OP_NANOMAGGY_TO_MAG
@cindex Surface Brightness
Binary operators for converting brightness and surface brightness units to and from each other.
The first operand to all of them is the value(s) in the input unit (left of the @code{-TO-}, for example counts in @code{COUNTS_TO_MAG}).
The second popped operand is the zero point (right of the @code{-TO-}, for example magnitudes in @code{COUNTS_TO_MAG}).
The exceptions are the operators that involve surface brightness (those with @code{SB}).
For the surface brightness related operators, the second popped operand is the area in units of arcsec@mymath{^2} and the third popped operand is the final unit.
@end deffn
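For example, a minimal sketch (assuming @code{counts} is an already-read image and @code{zp} is a single-element dataset holding a hypothetical zero point):

@example
gal_data_t *mag;   /* `counts' and `zp' already prepared. */

/* First operand: input values; second operand: zero point. */
mag=gal_arithmetic(GAL_ARITHMETIC_OP_COUNTS_TO_MAG, 1,
                   GAL_ARITHMETIC_FLAG_NUMOK, counts, zp);
@end example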
@deffn Macro GAL_ARITHMETIC_OP_COUNTS_TO_SB
@deffnx Macro GAL_ARITHMETIC_OP_SB_TO_COUNTS
Operators for converting counts to surface brightness and vice-versa.
These operators take three operands: 1) the input dataset in units of counts or surface brightness (depending on the operator), 2) the zero point, 3) the area in units of arcsec@mymath{^2}.
@end deffn
@deffn Macro GAL_ARITHMETIC_OP_AU_TO_PC
@deffnx Macro GAL_ARITHMETIC_OP_PC_TO_AU
@deffnx Macro GAL_ARITHMETIC_OP_LY_TO_PC
@deffnx Macro GAL_ARITHMETIC_OP_PC_TO_LY
@deffnx Macro GAL_ARITHMETIC_OP_LY_TO_AU
@deffnx Macro GAL_ARITHMETIC_OP_AU_TO_LY
Unary operators to convert various distance units to and from each other: Astronomical Units (AU), Parsecs (PC) and Light years (LY).
@end deffn
@deffn Macro GAL_ARITHMETIC_OP_MINVAL
@deffnx Macro GAL_ARITHMETIC_OP_MAXVAL
@deffnx Macro GAL_ARITHMETIC_OP_NUMBERVAL
@deffnx Macro GAL_ARITHMETIC_OP_SUMVAL
@deffnx Macro GAL_ARITHMETIC_OP_MEANVAL
@deffnx Macro GAL_ARITHMETIC_OP_STDVAL
@deffnx Macro GAL_ARITHMETIC_OP_MEDIANVAL
Unary statistical operators that will return a single value for datasets of any size.
These are just wrappers around similar functions in @ref{Statistical operations} and are included in @code{gal_arithmetic} only for completeness (to use easily in @ref{Arithmetic}).
In your programs, it will probably be easier if you use those @code{gal_statistics_} functions directly.
@end deffn
@deffn Macro GAL_ARITHMETIC_OP_UNIQUE
@deffnx Macro GAL_ARITHMETIC_OP_NOBLANK
Unary operators that remove some elements from the input dataset.
The first will return the unique elements, and the second will return the non-blank elements.
Due to the removal of elements, the dimensionality of the input will be lost in the output.
These are just wrappers over the @code{gal_statistics_unique} and @code{gal_blank_remove} functions and are included in @code{gal_arithmetic} only for completeness (to use easily in @ref{Arithmetic}).
In your programs, it will probably be easier to call those functions directly.
@end deffn
@deffn Macro GAL_ARITHMETIC_OP_LABEL_AREA
@deffnx Macro GAL_ARITHMETIC_OP_LABEL_MINIMUM
@deffnx Macro GAL_ARITHMETIC_OP_LABEL_MAXIMUM
Operators that will do the named measurement over the input labeled image (possibly using a values image as a second argument) and write the result in the label's pixels in the output.
The area operator does not need a values dataset, but the rest do.
For usage examples within the Arithmetic program, see the @code{label-*} operators of @ref{Statistical operators}.
@end deffn
@deffn Macro GAL_ARITHMETIC_OP_ABS
Unary absolute-value operator.
@end deffn
@deffn Macro GAL_ARITHMETIC_OP_MIN
@deffnx Macro GAL_ARITHMETIC_OP_MAX
@deffnx Macro GAL_ARITHMETIC_OP_NUMBER
@deffnx Macro GAL_ARITHMETIC_OP_SUM
@deffnx Macro GAL_ARITHMETIC_OP_MEAN
@deffnx Macro GAL_ARITHMETIC_OP_STD
@deffnx Macro GAL_ARITHMETIC_OP_MEDIAN
Multi-operand statistical operations.
When @code{gal_arithmetic} is called with any of these operators, it will expect only a single operand that will be interpreted as a list of datasets (see @ref{List of gal_data_t}).
These operators can work on multiple threads using the @code{numthreads} argument.
See the discussion under the @code{min} operator in @ref{Arithmetic operators}.
The output will be a single dataset with each of its elements replaced by the respective statistical operation on the whole list.
The type of the output is determined from the operator (irrespective of the input type): for @code{GAL_ARITHMETIC_OP_MIN} and @code{GAL_ARITHMETIC_OP_MAX}, it will be the same type as the input, for @code{GAL_ARITHMETIC_OP_NUMBER}, the output will be @code{GAL_TYPE_UINT32} and for the rest, it will be @code{GAL_TYPE_FLOAT32}.
@end deffn
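For example, a sketch of the per-pixel mean of three same-sized images (assuming they have already been read into @code{in1}, @code{in2} and @code{in3}), using 4 threads:

@example
gal_data_t *mean;

/* Connect the inputs into a single list. */
in1->next=in2;
in2->next=in3;
in3->next=NULL;

/* `mean' will be float32, with each element the mean of the
 * respective elements of the three inputs. */
mean=gal_arithmetic(GAL_ARITHMETIC_OP_MEAN, 4,
                    GAL_ARITHMETIC_FLAGS_BASIC, in1);
@end example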
@deffn Macro GAL_ARITHMETIC_OP_QUANTILE
Similar to the operators above (including @code{GAL_ARITHMETIC_OP_MIN}), except that when @code{gal_arithmetic} is called with this operator, it requires two arguments.
The first is the list of datasets like before, and the second is the 1-element dataset with the quantile value.
The output type is the same as the inputs.
@end deffn
@deffn Macro GAL_ARITHMETIC_OP_SIGCLIP_STD
@deffnx Macro GAL_ARITHMETIC_OP_SIGCLIP_MEAN
@deffnx Macro GAL_ARITHMETIC_OP_SIGCLIP_MEDIAN
@deffnx Macro GAL_ARITHMETIC_OP_SIGCLIP_NUMBER
Similar to the operators above (including @code{GAL_ARITHMETIC_OP_MIN}), except that when @code{gal_arithmetic} is called with these operators, it requires two arguments.
The first is the list of datasets like before, and the second is the 2-element list of @mymath{\sigma}-clipping parameters.
The first element in the parameters list is the multiple of sigma and the second is the termination criteria (see @ref{Sigma clipping}).
The output type of @code{GAL_ARITHMETIC_OP_SIGCLIP_NUMBER} will be @code{GAL_TYPE_UINT32} and for the rest it will be @code{GAL_TYPE_FLOAT32}.
@end deffn
@deffn Macro GAL_ARITHMETIC_OP_MKNOISE_SIGMA
@deffnx Macro GAL_ARITHMETIC_OP_MKNOISE_POISSON
@deffnx Macro GAL_ARITHMETIC_OP_MKNOISE_UNIFORM
Add noise to the input dataset.
These operators take two arguments: the first is the input dataset (it can have any dimensionality or number of elements).
The second argument is the noise specifier (a single element, of any type): for a fixed-sigma noise, it is the Gaussian standard deviation, for the Poisson noise, it is the background (see @ref{Photon counting noise}) and for the uniform distribution it is the width of the interval around each element of the input dataset.
By default, a separate random number generator seed will be used on each separate run of these operators.
Therefore two identical runs on the same input will produce different results.
You can get reproducible results by setting the @code{GAL_RNG_SEED} environment variable and activating the @code{GAL_ARITHMETIC_FLAG_ENVSEED} flag.
For more on random number generation in Gnuastro, see @ref{Generating random numbers}.
By default these operators will print the random number generator function and seed (in case the user wants to reproduce the result later), but this can be disabled by activating the bit-flag @code{GAL_ARITHMETIC_FLAG_QUIET} described above.
@end deffn
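For example, a minimal sketch of adding fixed-sigma noise (where @code{input} is an already-read dataset and @code{sigma} is a single-element dataset holding the standard deviation):

@example
gal_data_t *noisy;
int flags = GAL_ARITHMETIC_FLAGS_BASIC
            | GAL_ARITHMETIC_FLAG_ENVSEED  /* use GAL_RNG_SEED */
            | GAL_ARITHMETIC_FLAG_QUIET;   /* no RNG report    */

noisy=gal_arithmetic(GAL_ARITHMETIC_OP_MKNOISE_SIGMA, 1,
                     flags, input, sigma);
@end example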
@deffn Macro GAL_ARITHMETIC_OP_RANDOM_FROM_HIST
@deffnx Macro GAL_ARITHMETIC_OP_RANDOM_FROM_HIST_RAW
Select random values from a custom distribution (defined by a histogram).
For more, see the description of the respective operators in @ref{Generating random numbers}.
@end deffn
@deffn Macro GAL_ARITHMETIC_OP_STITCH
Stitch a list of input datasets along the requested dimension.
See the description of the @code{stitch} operator in Arithmetic (@ref{Dimensionality changing operators}).
@end deffn
@deffn Macro GAL_ARITHMETIC_OP_POW
Binary power operator.
When @code{gal_arithmetic} is called with this operator, it expects two operands: the first is raised to the power of the second (the output is floating point, but the inputs can be integers).
@end deffn
@deffn Macro GAL_ARITHMETIC_OP_BITAND
@deffnx Macro GAL_ARITHMETIC_OP_BITOR
@deffnx Macro GAL_ARITHMETIC_OP_BITXOR
@deffnx Macro GAL_ARITHMETIC_OP_BITLSH
@deffnx Macro GAL_ARITHMETIC_OP_BITRSH
@deffnx Macro GAL_ARITHMETIC_OP_MODULO
Binary integer-only operators.
These operators are only defined on integer data types.
When @code{gal_arithmetic} is called with any of these operators, it will expect two operands: the first is put on the left of the operator and the second on the right.
The ones starting with @code{BIT} are the respective bitwise operators in C and @code{MODULO} is the modulo/remainder operator.
For a discussion on these operators, please see
@ref{Arithmetic operators}.
The output type is determined from the input types and C's internal conversions: it is strongly recommended that both inputs have the same type (any integer type), otherwise the bitwise behavior will be determined by your compiler.
@end deffn
@deffn Macro GAL_ARITHMETIC_OP_BITNOT
The unary bitwise NOT operator.
When @code{gal_arithmetic} is called with this operator, it will expect one operand of an integer type and perform the bitwise NOT operation on it.
The output will have the same type as the input.
@end deffn
@deffn Macro GAL_ARITHMETIC_OP_TO_UINT8
@deffnx Macro GAL_ARITHMETIC_OP_TO_INT8
@deffnx Macro GAL_ARITHMETIC_OP_TO_UINT16
@deffnx Macro GAL_ARITHMETIC_OP_TO_INT16
@deffnx Macro GAL_ARITHMETIC_OP_TO_UINT32
@deffnx Macro GAL_ARITHMETIC_OP_TO_INT32
@deffnx Macro GAL_ARITHMETIC_OP_TO_UINT64
@deffnx Macro GAL_ARITHMETIC_OP_TO_INT64
@deffnx Macro GAL_ARITHMETIC_OP_TO_FLOAT32
@deffnx Macro GAL_ARITHMETIC_OP_TO_FLOAT64
Unary type-conversion operators.
When @code{gal_arithmetic} is called with any of these operators, it will expect one operand and convert it to the requested type.
Note that with these operators, @code{gal_arithmetic} is just a wrapper over the @code{gal_data_copy_to_new_type} or @code{gal_data_copy_to_new_type_free} functions that are discussed in @ref{Copying datasets}.
It accepts these operators only for completeness and easy usage in @ref{Arithmetic}.
So in your programs, it might be preferable to directly use those functions.
@end deffn
@deffn Macro GAL_ARITHMETIC_OP_E
@deffnx Macro GAL_ARITHMETIC_OP_C
@deffnx Macro GAL_ARITHMETIC_OP_G
@deffnx Macro GAL_ARITHMETIC_OP_H
@deffnx Macro GAL_ARITHMETIC_OP_AU
@deffnx Macro GAL_ARITHMETIC_OP_LY
@deffnx Macro GAL_ARITHMETIC_OP_PI
@deffnx Macro GAL_ARITHMETIC_OP_AVOGADRO
@deffnx Macro GAL_ARITHMETIC_OP_FINESTRUCTURE
Return the respective mathematical constant.
For their description please see @ref{Constants}.
The constant values are taken from the GNU Scientific Library's headers (defined in @file{gsl/gsl_math.h}).
@end deffn
@deffn Macro GAL_ARITHMETIC_OP_BOX_AROUND_ELLIPSE
Return the width (along horizontal) and height (along vertical) of a box that encompasses an ellipse with the same center point.
For more on the three input operands to this operator see the description of @code{box-around-ellipse}.
This function returns two datasets as a @code{gal_data_t} linked list.
The top element of the list is the height and its next element is the width.
@end deffn
@deffn Macro GAL_ARITHMETIC_OP_BOX_VERTICES_ON_SPHERE
Return the vertices of a (possibly rectangular) box on a sphere, given its center RA, Dec and the width of the box along the two dimensions.
It will take the spherical nature of the coordinate system into account (for more, see the description of @code{gal_wcs_box_vertices_from_center} in @ref{World Coordinate System}).
This function returns 8 datasets as a @code{gal_data_t} linked list in the following order: bottom-left RA, bottom-left Dec, bottom-right RA, bottom-right Dec, top-right RA, top-right Dec, top-left RA, top-left Dec.
@end deffn
@deffn Macro GAL_ARITHMETIC_OP_MAKENEW
Create a new, zero-valued dataset with an unsigned 8-bit data type.
The length along each dimension of the dataset should be given as a single list of @code{gal_data_t}s.
The number of dimensions is derived from the number of nodes in the list and the length along each dimension is the single-valued element within that list.
Just note that the list should be in the reverse order of the desired dimensions.
@end deffn
@deffn Macro GAL_ARITHMETIC_OP_CONSTANT
Given a dataset and a constant (a single-element operand), return a dataset with the same size as the first input, where all elements have the constant's value.
@end deffn
@deffn Macro GAL_ARITHMETIC_OPSTR_LOADCOL_HDU
@deffnx Macro GAL_ARITHMETIC_OPSTR_LOADCOL_FILE
@deffnx Macro GAL_ARITHMETIC_OPSTR_LOADCOL_PREFIX
@deffnx Macro GAL_ARITHMETIC_OPSTR_LOADCOL_HDU_LEN
@deffnx Macro GAL_ARITHMETIC_OPSTR_LOADCOL_FILE_LEN
@deffnx Macro GAL_ARITHMETIC_OPSTR_LOADCOL_PREFIX_LEN
Constant components of the @command{load-col-} operator (see @ref{Loading external columns}).
These are just fixed strings (and their lengths) that are placed in between the various components of that operator to allow choosing a certain column of a certain HDU of a certain file.
@end deffn
@deffn Macro GAL_ARITHMETIC_OP_INDEX
@deffnx Macro GAL_ARITHMETIC_OP_COUNTER
@deffnx Macro GAL_ARITHMETIC_OP_INDEXONLY
@deffnx Macro GAL_ARITHMETIC_OP_COUNTERONLY
Return a dataset with the same number of elements and dimensionality as the first (and only!) input dataset, but with each element's value replaced by its index (counting from 0) or counter (counting from 1).
Note that the @code{GAL_ARITHMETIC_OP_INDEX} and @code{GAL_ARITHMETIC_OP_INDEXONLY} operators are identical within the library (same for the counter operators).
They are given separate macros here to help the higher-level callers to manage their inputs separately (see @ref{Size and position operators}).
@end deffn
@deffn Macro GAL_ARITHMETIC_OP_SIZE
Size operator that will return a single value for datasets of any kind.
When @code{gal_arithmetic} is called with this operator, it requires two arguments: the first is the dataset, and the second is a single integer specifying the dimension along which the size is requested.
The output is a single integer.
@end deffn
@deffn Macro GAL_ARITHMETIC_OP_SWAP
Return the first dataset, but with the second dataset being placed in the @code{next} element of the first.
This is useful to swap the operands on the coadds of the higher-level programs that call the arithmetic library.
@end deffn
@deffn Macro GAL_ARITHMETIC_OP_EQB1950_TO_EQJ2000
@deffnx Macro GAL_ARITHMETIC_OP_EQB1950_TO_ECB1950
@deffnx Macro GAL_ARITHMETIC_OP_EQB1950_TO_ECJ2000
@deffnx Macro GAL_ARITHMETIC_OP_EQB1950_TO_GALACTIC
@deffnx Macro GAL_ARITHMETIC_OP_EQB1950_TO_SUPERGALACTIC
@deffnx Macro GAL_ARITHMETIC_OP_EQJ2000_TO_EQB1950
@deffnx Macro GAL_ARITHMETIC_OP_EQJ2000_TO_ECB1950
@deffnx Macro GAL_ARITHMETIC_OP_EQJ2000_TO_ECJ2000
@deffnx Macro GAL_ARITHMETIC_OP_EQJ2000_TO_GALACTIC
@deffnx Macro GAL_ARITHMETIC_OP_EQJ2000_TO_SUPERGALACTIC
@deffnx Macro GAL_ARITHMETIC_OP_ECB1950_TO_EQB1950
@deffnx Macro GAL_ARITHMETIC_OP_ECB1950_TO_EQJ2000
@deffnx Macro GAL_ARITHMETIC_OP_ECB1950_TO_ECJ2000
@deffnx Macro GAL_ARITHMETIC_OP_ECB1950_TO_GALACTIC
@deffnx Macro GAL_ARITHMETIC_OP_ECB1950_TO_SUPERGALACTIC
@deffnx Macro GAL_ARITHMETIC_OP_ECJ2000_TO_EQB1950
@deffnx Macro GAL_ARITHMETIC_OP_ECJ2000_TO_EQJ2000
@deffnx Macro GAL_ARITHMETIC_OP_ECJ2000_TO_ECB1950
@deffnx Macro GAL_ARITHMETIC_OP_ECJ2000_TO_GALACTIC
@deffnx Macro GAL_ARITHMETIC_OP_ECJ2000_TO_SUPERGALACTIC
@deffnx Macro GAL_ARITHMETIC_OP_GALACTIC_TO_EQB1950
@deffnx Macro GAL_ARITHMETIC_OP_GALACTIC_TO_EQJ2000
@deffnx Macro GAL_ARITHMETIC_OP_GALACTIC_TO_ECB1950
@deffnx Macro GAL_ARITHMETIC_OP_GALACTIC_TO_ECJ2000
@deffnx Macro GAL_ARITHMETIC_OP_GALACTIC_TO_SUPERGALACTIC
@deffnx Macro GAL_ARITHMETIC_OP_SUPERGALACTIC_TO_EQB1950
@deffnx Macro GAL_ARITHMETIC_OP_SUPERGALACTIC_TO_EQJ2000
@deffnx Macro GAL_ARITHMETIC_OP_SUPERGALACTIC_TO_ECB1950
@deffnx Macro GAL_ARITHMETIC_OP_SUPERGALACTIC_TO_ECJ2000
@deffnx Macro GAL_ARITHMETIC_OP_SUPERGALACTIC_TO_GALACTIC
Operators that convert recognized celestial coordinates to and from each other.
They all take two operands and return two @code{gal_data_t}s (as a list).
For more on celestial coordinate conversion, see @ref{Coordinate conversion operators}.
@end deffn
@deftypefun {gal_data_t *} gal_arithmetic (int @code{operator}, size_t @code{numthreads}, int @code{flags}, ...)
Apply the requested arithmetic operator on the operand(s).
The @emph{operator} is identified through the macros above (that start with @code{GAL_ARITHMETIC_OP_}).
The number of necessary operands (number of arguments to replace `@code{...}' in the declaration of this function, above) depends on the operator and is described under each operator, above.
Each operand has a type of `@code{gal_data_t *}' (see the example below).
If the operator can work on multiple threads, the number of threads can be specified with @code{numthreads}.
When the operator is single-threaded, @code{numthreads} will be ignored.
Special conditions can also be specified with the @code{flags} argument (a bit-flag with bits described above, for example, @code{GAL_ARITHMETIC_FLAG_INPLACE} or @code{GAL_ARITHMETIC_FLAG_FREE}).
@code{gal_arithmetic} is a multi-argument function (like C's @code{printf}).
In other words, the number of necessary arguments is not fixed and depends on the value to @code{operator}.
Below, you can see a minimal, fully working example, showing how different operators need different numbers of arguments.
@example
#include <stdio.h>
#include <stdlib.h>

#include <gnuastro/fits.h>
#include <gnuastro/arithmetic.h>

int
main(void)
@{
  /* Define the datasets and flag. */
  gal_data_t *in1, *in2, *out1, *out2;
  int flag=GAL_ARITHMETIC_FLAGS_BASIC;

  /* Read the input images. */
  in1=gal_fits_img_read("image1.fits", "1", -1, 1, NULL);
  in2=gal_fits_img_read("image2.fits", "1", -1, 1, NULL);

  /* Take the logarithm (base-e) of the first input. */
  out1=gal_arithmetic(GAL_ARITHMETIC_OP_LOG, 1, flag, in1);

  /* Add the second input with the logarithm of the first. */
  out2=gal_arithmetic(GAL_ARITHMETIC_OP_PLUS, 1, flag, in2, out1);

  /* Write the output into a file. */
  gal_fits_img_write(out2, "out.fits", NULL, 0);

  /* Clean up. Due to the in-place flag (in
   * 'GAL_ARITHMETIC_FLAGS_BASIC'), 'out1' and 'out2' point to the
   * same array in memory and due to the freeing flag, any input
   * dataset(s) that were not returned have been freed internally
   * by 'gal_arithmetic'. Therefore it is only necessary to free
   * 'out2': all other allocated spaces have been freed internally
   * before reaching this point. */
  gal_data_free(out2);

  /* Return control back to the OS (saying that we succeeded). */
  return EXIT_SUCCESS;
@}
@end example
As you see above, you can feed the returned dataset from one call of @code{gal_arithmetic} to another call.
The advantage of using @code{gal_arithmetic} (as opposed to manually writing a @code{for} or @code{while} loop and doing the operation with the @code{+} operator and @code{log()} function yourself) is that you do not have to worry about the type of the input data (for a list of acceptable data types in Gnuastro, see @ref{Library data types}).
Arithmetic will automatically deal with the data types internally and choose the best output type depending on the operator.
@end deftypefun
@deftypefun int gal_arithmetic_set_operator (char @code{*string}, size_t @code{*num_operands})
Return the operator macro/code that corresponds to @code{string}.
The number of operands that it needs are written into the space that @code{*num_operands} points to.
If the string could not be interpreted as an operator, this function will return @code{GAL_ARITHMETIC_OP_INVALID}.
This function compares @code{string} with the fixed human-readable names of the operators (using @code{strcmp}) to find the corresponding macro and number of operands.
Note that @code{string} must only contain the single operator name and nothing else (not even any extra white space).
@end deftypefun
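For example, a minimal sketch of parsing an operator name at run-time:

@example
size_t nop;
int op=gal_arithmetic_set_operator("mean", &nop);

/* `op' is now GAL_ARITHMETIC_OP_MEAN and `nop' is the number
 * of operands it needs; an unrecognized name would have
 * returned GAL_ARITHMETIC_OP_INVALID. */
@end example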
@deftypefun {char *} gal_arithmetic_operator_string (int @code{operator})
Return the human-readable standard string that corresponds to the given operator.
For example, when the input is @code{GAL_ARITHMETIC_OP_PLUS} or @code{GAL_ARITHMETIC_OP_MEAN}, the strings @code{+} or @code{mean} will be returned.
@end deftypefun
@deftypefun {gal_data_t *} gal_arithmetic_load_col (char @code{*str}, int @code{searchin}, int @code{ignorecase}, size_t @code{minmapsize}, int @code{quietmmap})
Return the column that corresponds to the identifier in the input string (@code{str}).
@code{str} is expected to be in the format of the @command{load-col-} operator (see @ref{Loading external columns}).
This function will extract the column identifier, the file name and the HDU (if necessary) from the string, read the requested column in memory and return it.
See @ref{Table input output} for the macros that can be given to @code{searchin} and @code{ignorecase} and @ref{Generic data container} for the definitions of @code{minmapsize} and @code{quietmmap}.
@end deftypefun
@node Tessellation library, Bounding box, Arithmetic on datasets, Gnuastro library
@subsection Tessellation library (@file{tile.h})
In many contexts, it is desirable to slice the dataset into subsets or tiles (overlapping or not), such that you can work on each tile independently.
One method would be to copy that region to a separately allocated space, but in many contexts this is not necessary and can in fact be a big burden on CPU/memory usage.
The @code{block} pointer in Gnuastro's @ref{Generic data container} is defined for such situations, where allocation is not necessary: you just want to read the data or write to it, independently of (or in coordination with) other regions of the dataset.
Combined with parallel processing, this can greatly reduce the time/memory consumption.
See the figure below for example: assume the @code{larger} dataset is a contiguous block of memory that you are interpreting as a 2D array.
But you only want to work on the smaller @code{tile} region.
@example
                 larger
   ---------------------------------
   |                               |
   |           tile                |
   |        ----------             |
   |        |        |             |
   |        |_       |             |
   |        |*|      |             |
   |        ----------             |
   |     tile->block = larger      |
   |_                              |
   |*|                             |
   ---------------------------------
@end example
To use @code{gal_data_t}'s @code{block} concept, you allocate a @code{gal_data_t *tile} which is initialized with the pointer to the first element in the sub-array (as its @code{array} argument).
Note that this is not necessarily the first element in the larger array.
You can set the size of the tile along with the initialization as you please.
Recall that, when given a non-@code{NULL} pointer as @code{array}, @code{gal_data_initialize} (and thus @code{gal_data_alloc}) will not allocate any space and will just use the given pointer for the new @code{array} element of the @code{gal_data_t}.
So your @code{tile} data structure will not be pointing to a separately allocated space.
After the allocation is done, you just point @code{tile->block} to the @code{larger} dataset which hosts the full block of memory.
Where relevant, Gnuastro's library functions will check the @code{block} pointer of their input dataset to see how to deal with dimensions and increments so they can always remain within the tile.
The tools introduced in this section are designed to help in defining and working with tiles that are created in this manner.
Since the block structure is defined as a pointer, arbitrary levels of tessellation/gridding are possible (@code{tile->block} may itself be a tile in an even larger allocated space).
Therefore, just like a linked-list (see @ref{Linked lists}), it is important to have the @code{block} pointer of the largest (allocated) dataset set to @code{NULL}.
Normally, you will not have to worry about this, because @code{gal_data_initialize} (and thus @code{gal_data_alloc}) will set the @code{block} element to @code{NULL} by default; just remember not to change it.
You can then only change the @code{block} element for the tiles you define over the allocated space.
Below, we will first review constructs for @ref{Independent tiles} and then define the current approach to fully tessellating a dataset (or covering every pixel/data-element with a non-overlapping tile grid) in @ref{Tile grid}.
This approach to dealing with parts of a larger block was inspired by a similarly named concept in the GNU Scientific Library (GSL); see its ``Vectors and Matrices'' chapter for their implementation.
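As a minimal sketch of the steps above (assuming @code{larger} is an allocated 2D @code{float32} dataset; the offset and sizes here are hypothetical):

@example
size_t start=1000;        /* Index of tile's first element. */
size_t tsize[2]=@{5, 5@};   /* Tile length along each dim.    */
gal_data_t *tile;

/* With a non-NULL `array', no new space is allocated. */
tile=gal_data_alloc( (float *)(larger->array)+start,
                     larger->type, 2, tsize, NULL, 0, -1, 1,
                     NULL, NULL, NULL );

/* Point the tile to its host block (`larger->block' stays
 * NULL, since it is the allocated dataset). */
tile->block=larger;
@end example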
@menu
* Independent tiles:: Work on or check independent tiles.
* Tile grid:: Cover a full dataset with non-overlapping tiles.
@end menu
@node Independent tiles, Tile grid, Tessellation library, Tessellation library
@subsubsection Independent tiles
The most general application of tiles is to treat each independently, for example they may overlap, or they may not cover the full image.
This section provides functions to help in checking/inspecting such tiles.
In @ref{Tile grid} we will discuss functions that define/work-with a tile grid (where the tiles do not overlap and fully cover the input dataset).
Therefore, the functions in this section are general and can be used for the tiles produced by that section also.
@deftypefun void gal_tile_start_coord (gal_data_t @code{*tile}, size_t @code{*start_coord})
Calculate the starting coordinates of a tile in its allocated block of memory and write them in the memory that @code{start_coord} points to (which must have @code{tile->ndim} elements).
@end deftypefun
@deftypefun void gal_tile_start_end_coord (gal_data_t @code{*tile}, size_t @code{*start_end}, int @code{rel_block})
Put the starting and ending (end point is not inclusive) coordinates of @code{tile} into the @code{start_end} array.
It is assumed that a space of @code{2*tile->ndim} has been already allocated (static or dynamic) for @code{start_end} before this function is called.
@code{rel_block} (or relative-to-block) is only relevant when @code{tile} has an intermediate tile between it and the allocated space (like a channel, see @code{gal_tile_full_two_layers}).
If it does not (@code{tile->block} points to the allocated dataset), then the value to @code{rel_block} is irrelevant.
When @code{tile->block} is itself a larger block and @code{rel_block} is
set to 0, then the starting and ending positions will be based on the
position within @code{tile->block}, not the allocated space.
@end deftypefun
@deftypefun {void *} gal_tile_start_end_ind_inclusive (gal_data_t @code{*tile}, gal_data_t @code{*work}, size_t @code{*start_end_inc})
Put the indices of the first/start and last/end pixels (inclusive) in a tile into the @code{start_end_inc} array (that must have two elements).
NOTE: this function stores the index of each point, not its coordinates.
It will then return the pointer to the start of the tile in the @code{work} data structure (which does not have to be equal to @code{tile->block}).
The outputs of this function are defined to make it easy to parse over an n-dimensional tile.
For example, this function is one of the most important parts of the internal processing of the @code{GAL_TILE_PARSE_OPERATE} function-like macro that is described below.
@end deftypefun
@deftypefun {gal_data_t *} gal_tile_series_from_minmax (gal_data_t @code{*block}, size_t @code{*minmax}, size_t @code{number})
Construct a list of tile(s) given coordinates of the minimum and maximum of each tile.
The minimum and maximums are assumed to be inclusive and in C order (slowest dimension first).
The returned pointer is an allocated @code{gal_data_t} array that can later be freed with @code{gal_data_array_free} (see @ref{Arrays of datasets}).
Internally, each element of the output array points to the next element, so the output may also be treated as a list of datasets (see @ref{List of gal_data_t}) and passed onto the other functions described in this section.
The array keeping the minimum and maximum coordinates for each tile must have the following format; in total, @code{minmax} must therefore have @code{2*ndim*number} elements.
@example
| min0_d0 | min0_d1 | max0_d0 | max0_d1 | ...
... | minN_d0 | minN_d1 | maxN_d0 | maxN_d1 |
@end example
@end deftypefun
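For example, a minimal sketch of two 2D tiles over an already-allocated @code{block} (the coordinates here are hypothetical):

@example
gal_data_t *tiles;

/* Tile 0: from (0,0) to (4,9); tile 1: from (10,5) to (14,9).
 * All coordinates are inclusive and in C order. */
size_t minmax[]=@{  0, 0,  4, 9,
                  10, 5, 14, 9 @};

tiles=gal_tile_series_from_minmax(block, minmax, 2);
@end example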
@deftypefun {gal_data_t *} gal_tile_per_label (gal_data_t @code{*labels}, size_t @code{*maxlabel}, uint8_t @code{inbetweenints}, gal_data_t @code{**lab_fromto_tile})
Create a series of tiles that enclose each label (a non-zero and positive integer identifier for a group of pixels) within @code{labels}.
The returned series of tiles is formatted as an array of datasets that are also a list (with @code{num_exist} nodes).
The array nature of the output allows fast/easy access to any label's tile, while the @code{next} pointer also lets you parse them as a list.
To free the returned pointer after your job is done, use @code{gal_data_array_free}.
If you already have the value of the largest label of the input, you can pass it using the @code{maxlabel} argument (to avoid one round of parsing the labels).
Otherwise, initialize @code{maxlabel} to @code{GAL_BLANK_SIZE_T}; in this case, this function will find the maximum label itself and write into the pointer (so you can use it later).
We have defined this argument because functions like @code{gal_binary_connected_components} that create labeled images also return the maximum label, so it may not be necessary for this function to find it again (avoiding a needless performance cost).
The last two arguments are related to cases where the label values in the image are non-contiguous.
For example when you have cropped a region within a larger labeled image.
In such cases, if you want an empty tile for a non-existent label (a value between 1 and the maximum label, but with no associated pixels), activate @code{inbetweenints} (by giving it a non-zero value).
When @code{inbetweenints==0} and the labels do not start from one or are not contiguous, a one-to-one relation between the tiles and labels does not exist.
In this case, the @code{lab_fromto_tile} argument will be set by this function to allow easy mapping between the labels and tiles (its usage is described below).
In other words, when @code{inbetweenints=0} you need to check if @code{lab_fromto_tile} is @code{NULL} or not and implement future steps based on that.
When the returned @code{lab_fromto_tile} is not @code{NULL}, it will be a 2-node list of @code{gal_data_t}s (both are one dimensional).
The first one is @code{GAL_TYPE_INT32}, and the length of the array is the number of tiles that are created (note that unlike labels that count from one, tiles count from zero).
For example, if you want to know which input label the @code{t}-th tile corresponds to, simply use the @code{t}-th element of this array.
The second node of @code{lab_fromto_tile} will be the inverse of the first mapping: knowing the label, you can retrieve the tile number that was constructed for it.
The second node has one element more than the maximum label in the image (so the zero-th element will be blank; because label counting starts from 1, while tile counting starts from zero) with a type of @code{GAL_TYPE_SIZE_T}.
For example, if you want to know which output tile the @code{l}-th label corresponds to, simply use the @code{l}-th element of this array.
Here is a minimal working example usage of this function:
@verbatim
int32_t *lab_from_tile=NULL;
gal_data_t *tiles, *lab_fromto_tile=NULL;
size_t *lab_to_tile=NULL, maxlab=GAL_BLANK_SIZE_T, ntiles;

tiles=gal_tile_per_label(labels, &maxlab, 0, &lab_fromto_tile);
if(lab_fromto_tile)
  {
    ntiles=lab_fromto_tile->size;
    lab_from_tile=lab_fromto_tile->array;
    lab_to_tile=lab_fromto_tile->next->array;
  }
else ntiles=maxlab;
@end verbatim
@noindent
Given the code above, let's assume you want to parse each tile and find pixels that are associated to that tile.
If the tile index is called @code{tind}, the line below will extract the label that the tile corresponds to:
@verbatim
lab = lab_from_tile ? lab_from_tile[tind] : tind+1;
@end verbatim
For a working example of this function, see the source of the @code{gal_label_measure} function (in the @file{lib/label.c} file) within the tarball or version-controlled source of Gnuastro (see @ref{Version controlled source}).
@end deftypefun
@deftypefun {gal_data_t *} gal_tile_block (gal_data_t @code{*tile})
Return the dataset that contains @code{tile}'s allocated block of memory.
If @code{tile} is immediately defined as part of the allocated block, then this is equivalent to @code{tile->block}.
However, it is possible to have multiple layers of tiles (where @code{tile->block} is itself a tile).
So this function is the most generic way to get to the actual allocated dataset.
@end deftypefun
@deftypefun size_t gal_tile_block_increment (gal_data_t @code{*block}, size_t @code{*tsize}, size_t @code{num_increment}, size_t @code{*coord})
Return the increment necessary to start at the next contiguous patch of memory associated with a tile.
@code{block} is the allocated block of memory and @code{tsize} is the size of the tile along every dimension.
If @code{coord} is @code{NULL}, it is ignored.
Otherwise, it will contain the coordinate of the start of the next contiguous patch of memory.
This function is intended to be used in a loop and @code{num_increment} is the main variable to this function.
For the first time you call this function, it should be @code{1}.
In subsequent calls (while you are parsing a tile), it should be increased by one.
@end deftypefun
@deftypefun {gal_data_t *} gal_tile_block_write_const_value (gal_data_t @code{*tilevalues}, gal_data_t @code{*tilesll}, int @code{withblank}, int @code{initialize})
Write a constant value for each tile over the area it covers in an allocated dataset that is the size of @code{tile}'s allocated block of memory (found through @code{gal_tile_block} described above).
The arguments to this function are:
@table @code
@item tilevalues
This must be an array that has the same number of elements as the nodes in @code{tilesll} and in the same order that the @code{tilesll} elements are parsed (from top to bottom, see @ref{Linked lists}).
As a result, the array's number of dimensions is irrelevant; it will be parsed contiguously.
@item tilesll
The list of input tiles (see @ref{List of gal_data_t}).
Internally, it might be stored as an array (for example, the output of @code{gal_tile_series_from_minmax} described above), but this function does not care, it will parse the @code{next} elements to go to the next tile.
This function will not pop from or free @code{tilesll}; it will only parse it from start to end.
@item withblank
If the block containing the tiles has blank elements, activate this so those blank elements will also be blank in the output of this function (in this case, the array will be initialized with blank values; see @code{initialize} below).
@item initialize
Initialize the allocated space with blank values before writing in the constant values.
This can be useful when the tiles do not cover the full allocated block.
@end table
@end deftypefun
@deftypefun {gal_data_t *} gal_tile_block_check_tiles (gal_data_t @code{*tilesll})
Make a copy of the memory block and fill it with the index of each tile in @code{tilesll} (counting from 0).
The non-filled areas will have blank values.
The output dataset will have a type of @code{GAL_TYPE_INT32} (see @ref{Library data types}).
This function can be used when you want to check the coverage of each tile over the allocated block of memory.
It is just a wrapper over the @code{gal_tile_block_write_const_value} (with @code{withblank} set to zero).
@end deftypefun
@deftypefun {void *} gal_tile_block_relative_to_other (gal_data_t @code{*tile}, gal_data_t @code{*other})
Return the pointer corresponding to the start of the region covered by @code{tile} over the @code{other} dataset.
See the examples in @code{GAL_TILE_PARSE_OPERATE} for some example applications of this function.
@end deftypefun
@deftypefun void gal_tile_block_blank_flag (gal_data_t @code{*tilell}, size_t @code{numthreads})
Check if each tile in the list has blank values and update its @code{flag} to mark this check and its result (see @ref{Generic data container}).
The operation will be done on @code{numthreads} threads.
@end deftypefun
@deffn {Function-like macro} GAL_TILE_PARSE_OPERATE (@code{IN}, @code{OTHER}, @code{PARSE_OTHER}, @code{CHECK_BLANK}, @code{OP})
Parse @code{IN} (which can be a tile or a fully allocated block of memory) and do the @code{OP} operation on it.
@code{OP} can be any combination of C expressions.
If @code{OTHER!=NULL}, @code{OTHER} will be interpreted as a dataset and this macro will allow access to its element(s) and it can optionally be parsed while parsing over @code{IN}.
If @code{OTHER} is a fully allocated block of memory (not a tile), then the same region that is covered by @code{IN} within its own block will be parsed (the same starting pixel with the same number of pixels in each dimension).
Hence, in this case, the blocks of @code{OTHER} and @code{IN} must have the same size.
When @code{OTHER} is a tile it must have the same size as @code{IN} and parsing will start from its starting element/pixel.
Also, the respective allocated blocks of @code{OTHER} and @code{IN} (if different) may have different sizes.
Using @code{OTHER} (along with @code{PARSE_OTHER}), this function-like macro will thus enable you to parse and define your own operation on two fixed size regions in one or two blocks of memory.
In the latter case, they may have different numeric data types (see @ref{Numeric data types}).
The input arguments to this macro are explained below; the expected type of each argument is also written following the argument name:
@table @code
@item IN (gal_data_t)
Input dataset, this can be a tile or an allocated block of memory.
@item OTHER (gal_data_t)
Dataset (@code{gal_data_t}) to parse along with @code{IN}. It can be @code{NULL}. In that case, @code{o} (see description of @code{OP} below) will be @code{NULL} and should not be used. If @code{PARSE_OTHER} is zero, only its first element will be used and the size of this dataset is irrelevant.
When @code{OTHER} is a block of memory, it has to have the same size as the allocated block of @code{IN}.
When it is a tile, it has to have the same size as @code{IN}.
@item PARSE_OTHER (int)
Parse the other dataset along with the input.
When this is non-zero and @code{OTHER!=NULL}, then the @code{o} pointer will be incremented to cover the @code{OTHER} tile at the same rate as @code{i}, see description of @code{OP} for @code{i} and @code{o}.
@item CHECK_BLANK (int)
If it is non-zero, then the input will be checked for blank values and @code{OP} will only be called when we are not on a blank element.
@item OP
Operator: this can be any number of C expressions.
This macro is going to define an @code{itype *i} variable which will increment over each element of the input array/tile.
@code{itype} will be replaced with the C type that corresponds to the type of @code{IN}.
As an example, if @code{IN}'s type is @code{GAL_TYPE_UINT16} or @code{GAL_TYPE_FLOAT32}, @code{i} will be defined as @code{uint16_t} or @code{float} respectively.
This function-like macro will also define an @code{otype *o} which you can use to access an element of the @code{OTHER} dataset (if @code{OTHER!=NULL}).
@code{o} will correspond to the type of @code{OTHER} (similar to @code{itype} and @code{IN} discussed above).
If @code{PARSE_OTHER} is non-zero, then @code{o} will also be incremented to the same index element but in the other array.
You can use these along with any other variable you define before this macro to process the input and/or the other.
All variables within this function-like macro begin with @code{tpo_} except for the three variables listed below.
Therefore, as long as you do not start the names of your variables with this prefix everything will be fine.
Note that @code{i} (and possibly @code{o}) will be incremented once by this function-like macro, so do not increment them within @code{OP}.
@table @code
@item i
Pointer to the element of @code{IN} that is being parsed with the proper type.
@item o
Pointer to the element of @code{OTHER} that is being parsed with the proper type.
@code{o} can only be used if @code{OTHER!=NULL} and it will be parsed/incremented if @code{PARSE_OTHER} is non-zero.
@item b
Blank value in the type of @code{IN}.
@end table
@end table
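For example, a minimal sketch that sums the non-blank elements of a tile of any numeric type (the fourth argument activates the blank check, so @code{OP} is only run on non-blank elements):

@example
double sum=0.0;
GAL_TILE_PARSE_OPERATE(tile, NULL, 0, 1, @{sum += *i;@});
@end example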
You can use a given tile (@code{tile}) on a dataset that it was not initialized with (but that has the same size; let's call it @code{new}) with the following steps:
@example
void *tarray;
gal_data_t *tblock;

/* `tile->block' must be corrected AFTER `tile->array'. */
tarray = tile->array;
tblock = tile->block;
tile->array = gal_tile_block_relative_to_other(tile, new);
tile->block = new;

/* Parse and operate over this region of the `new' dataset. */
GAL_TILE_PARSE_OPERATE(tile, NULL, 0, 0, @{
    YOUR_PROCESSING;
  @});

/* Reset `tile->block' and `tile->array'. */
tile->array=tarray;
tile->block=tblock;
@end example
You can work on the same region of another block in one run of this function-like macro.
To do that, you can make a fake tile and pass that as the @code{OTHER} argument.
Below is a demonstration, @code{tile} is the actual tile that you start with and @code{new} is the other block of allocated memory.
@example
size_t zero=0;
gal_data_t *faketile;

/* Allocate the fake tile, these can be done outside a loop
 * (over many tiles). */
faketile=gal_data_alloc(NULL, new->type, 1, &zero,
                        NULL, 0, -1, 1, NULL, NULL, NULL);
free(faketile->array);             /* To keep things clean. */
free(faketile->dsize);             /* To keep things clean. */
faketile->block = new;
faketile->ndim  = new->ndim;

/* These can be done in a loop (over many tiles). */
faketile->size  = tile->size;
faketile->dsize = tile->dsize;
faketile->array = gal_tile_block_relative_to_other(tile, new);

/* Do your processing.... in a loop (over many tiles). */
GAL_TILE_PARSE_OPERATE(tile, faketile, 1, 1, @{
    YOUR_PROCESSING_EXPRESSIONS;
  @});

/* Clean up (outside the loop). */
faketile->array=NULL;
faketile->dsize=NULL;
gal_data_free(faketile);
@end example
@end deffn
@node Tile grid, , Independent tiles, Tessellation library
@subsubsection Tile grid
One very useful application of tiles is to completely cover an input dataset with tiles, such that every pixel/data-element of the input is covered by exactly one tile.
The constructs in this section allow easy definition of such a tile structure.
They will create lists of tiles that are also usable by the general tools discussed in @ref{Independent tiles}.
As discussed in @ref{Tessellation}, (mainly raw) astronomical images will mostly require two layers of tessellation, one for amplifier channels which all have the same size and another (smaller tile-size) tessellation over each channel.
Hence, in this section we define a general structure to keep the main parameters of this two-layer tessellation and help in benefiting from it.
@deftp {Type (C @code{struct})} gal_tile_two_layer_params
The general structure to keep all the necessary parameters for a two-layer tessellation.
@example
struct gal_tile_two_layer_params
@{
  /* Inputs */
  size_t   *tilesize;        /*******************************/
  size_t   *numchannels;     /* These parameters have to be  */
  float     remainderfrac;   /* filled manually before       */
  uint8_t   workoverch;      /* calling the functions in     */
  uint8_t   checktiles;      /* this section.                */
  uint8_t   oneelempertile;  /*******************************/

  /* Internal parameters. */
  size_t    ndim;
  size_t    tottiles;
  size_t    tottilesinch;
  size_t    totchannels;
  size_t   *channelsize;
  size_t   *numtiles;
  size_t   *numtilesinch;
  char     *tilecheckname;
  size_t   *permutation;
  size_t   *firsttsize;

  /* Tile and channel arrays (which are also lists). */
  gal_data_t *tiles;
  gal_data_t *channels;
@};
@end example
@end deftp
@deftypefun {size_t *} gal_tile_full (gal_data_t @code{*input}, size_t @code{*regular}, float @code{remainderfrac}, gal_data_t @code{**out}, size_t @code{multiple}, size_t @code{**firsttsize})
Cover the full dataset with (mostly) identical tiles and return the number of tiles created along each dimension.
The regular tile size (along each dimension) is determined from the @code{regular} array.
If @code{input}'s size is not an exact multiple of @code{regular} for each dimension, then the tiles touching the edges in that dimension will have a different size to fully cover every element of the input (depending on @code{remainderfrac}).
The output is an allocated array (with one element for each dimension of @code{input}) that contains the number of tiles along each dimension.
See @ref{Tessellation} for a description of its application in Gnuastro's programs and of @code{remainderfrac}; just note that this function defines only one layer of tiles.
This is a low-level function (independent of the @code{gal_tile_two_layer_params} structure defined above).
If you want a two-layer tessellation, directly call @code{gal_tile_full_two_layers} that is described below.
The input arguments to this function are:
@table @code
@item input
The main dataset (allocated block) which you want to create a tessellation over (only used for its sizes).
So @code{input} may be a tile also.
@item regular
The size of the regular tiles along each of the input's dimensions.
So it must have the same number of elements as the dimensions of @code{input} (or @code{input->ndim}).
@item remainderfrac
The significant fraction of the remainder space to see if it should be split into two and put on both sides of a dimension or not.
This is thus only relevant when the @code{input}'s length along a dimension is not an exact multiple of the regular tile size along that dimension.
See @ref{Tessellation} for a more thorough discussion.
@item out
Pointer to the array of data structures that will keep all the tiles (see @ref{Arrays of datasets}).
If @code{*out==NULL}, then the necessary space to keep all the tiles will be allocated.
If not, then all the tile information will be filled from the dataset that @code{*out} points to, see
@code{multiple} for more.
@item multiple
When @code{*out==NULL} (and thus will be allocated by this function), allocate space for @code{multiple} times the number of tiles needed.
This can be very useful when you have several identically sized inputs, and you want all their tiles to be allocated (and thus indexed) together, even though they have different @code{block} datasets (that then link to one allocated space).
See the definition of channels in
@ref{Tessellation} and @code{gal_tile_full_two_layers} below.
@item firsttsize
The size of the first tile along every dimension.
This is only different from the regular tile size when the @code{input}'s length along a dimension is not an exact multiple of @code{regular} along it.
This array is allocated internally by this function.
@end table
@end deftypefun
@deftypefun void gal_tile_full_sanity_check (char @code{*filename}, char @code{*hdu}, gal_data_t @code{*input}, struct gal_tile_two_layer_params @code{*tl})
Make sure that the input parameters (in @code{tl}, short for two-layer) correspond to the input dataset.
@code{filename} and @code{hdu} are only required for error messages.
Also, allocate and fill the @code{tl->channelsize} array.
@end deftypefun
@deftypefun void gal_tile_full_two_layers (gal_data_t @code{*input}, struct gal_tile_two_layer_params @code{*tl})
Create the two layered tessellation in @code{tl}.
The general set of steps you need to take to define the two-layered tessellation over an image can be seen in the example code below.
@example
gal_data_t *input;
struct gal_tile_two_layer_params tl;
char *filename="input.fits", *hdu="1";

/* Set all the inputs shown in the structure definition. */
...

/* Read the input dataset. */
input=gal_fits_img_read(filename, hdu, -1, 1, NULL);

/* Do a sanity check and preparations. */
gal_tile_full_sanity_check(filename, hdu, input, &tl);

/* Build the two-layer tessellation. */
gal_tile_full_two_layers(input, &tl);

/* `tl.tiles' and `tl.channels' are now lists of tiles. */
@end example
@end deftypefun
@deftypefun void gal_tile_full_permutation (struct gal_tile_two_layer_params @code{*tl})
Make a permutation to allow the conversion of tile location in memory to its location in the full input dataset and put it in @code{tl->permutation}.
If a permutation has already been defined for the tessellation, this function will not do anything.
If permutation will not be necessary (there is only one channel or one dimension), then this function will not do anything (@code{tl->permutation} must have been initialized to @code{NULL}).
When there is only one channel OR one dimension, the tiles are allocated in memory in the same order that they represent the input data.
However, to make channel-independent processing possible in a generic way, the tiles of each channel are allocated contiguously.
So, when there is more than one channel AND more than one dimension, the index of the tile does not correspond to its position in the grid covering the input dataset.
The example below may help clarify: assume you have a 6x6 tessellation with two channels in the horizontal and one in the vertical.
On the left you can see how the tile IDs correspond to the input dataset.
NOTE how `03' is on the second row, not on the first after `02'.
On the right, you can see how the tiles are stored in memory (and shown if you simply write the array into a FITS file for example).
@example
Corresponding to input           In memory
----------------------         --------------
 15 16 17 33 34 35            30 31 32 33 34 35
 12 13 14 30 31 32            24 25 26 27 28 29
 09 10 11 27 28 29            18 19 20 21 22 23
 06 07 08 24 25 26   <--      12 13 14 15 16 17
 03 04 05 21 22 23            06 07 08 09 10 11
 00 01 02 18 19 20            00 01 02 03 04 05
@end example
As a result, if your values are stored in the same order as the tiles and you want them in the over-all order (for example, to save as a FITS file), you need to permute the values:
@example
gal_permutation_apply(values, tl->permutation);
@end example
If you have values over-all and you want them in tile-order, you can apply
the inverse permutation:
@example
gal_permutation_apply_inverse(values, tl->permutation);
@end example
Recall that this is the definition of permutation in this context:
@example
permute: IN_ALL[ i ] = IN_MEMORY[ perm[i] ]
inverse: IN_ALL[ perm[i] ] = IN_MEMORY[ i ]
@end example
@end deftypefun
@deftypefun void gal_permutation_apply_onlydim0 (gal_data_t @code{*input}, size_t @code{*permutation})
Similar to @code{gal_permutation_apply}, but when the dataset is 2-dimensional, permute each row (dimension 1 in C) as one element.
In other words, only permute along dimension 0.
The @code{permutation} array should therefore only have @code{input->dsize[0]} elements.
@end deftypefun
@deftypefun void gal_tile_full_values_write (gal_data_t @code{*tilevalues}, struct gal_tile_two_layer_params @code{*tl}, int @code{withblank}, char @code{*filename}, gal_fits_list_key_t @code{*keys}, int @code{freekeys})
Write one value for each tile into a file.
It is important to note that the values in @code{tilevalues} must be ordered in the same manner as the tiles, so @code{tilevalues->array[i]} is the value that should be given to @code{tl->tiles[i]}.
The @code{tl->permutation} array must have been initialized before calling this function with @code{gal_tile_full_permutation}.
If @code{withblank} is non-zero, then block structure of the tiles will be checked and all blank pixels in the block will be blank in the final output file also.
@end deftypefun
@deftypefun {gal_data_t *} gal_tile_full_values_smooth (gal_data_t @code{*tilevalues}, struct gal_tile_two_layer_params @code{*tl}, size_t @code{width}, size_t @code{numthreads})
Smooth the given values with a flat kernel of the given @code{width}.
This cannot be done manually because if @code{tl->workoverch==0}, tiles in different channels must not be mixed/smoothed.
Also the tiles are contiguous within the channel, not within the image, see the description under @code{gal_tile_full_permutation}.
@end deftypefun
@deftypefun size_t gal_tile_full_id_from_coord (struct gal_tile_two_layer_params @code{*tl}, size_t @code{*coord})
Return the ID of the tile that corresponds to the coordinates @code{coord}.
Having this ID, you can use the @code{tl->tiles} array to get to the proper tile or read/write a value into an array that has one value per tile.
@end deftypefun
@deftypefun void gal_tile_full_free_contents (struct gal_tile_two_layer_params @code{*tl})
Free all the allocated arrays within @code{tl}.
@end deftypefun
@node Bounding box, Polygons, Tessellation library, Gnuastro library
@subsection Bounding box (@file{box.h})
Functions related to reporting the bounding box of certain inputs are declared in @file{gnuastro/box.h}.
All coordinates in this header are in the FITS format (first axis is the horizontal and the second axis is vertical).
@deftypefun void gal_box_bound_ellipse_extent (double @code{a}, double @code{b}, double @code{theta_deg}, double @code{*extent})
Return the maximum extent along each dimension of the given ellipse from the center of the ellipse.
Therefore this is half the extent of the box in each dimension.
@code{a} is the ellipse semi-major axis, @code{b} is the semi-minor axis, @code{theta_deg} is the position angle in degrees.
The extent in each dimension is in floating point format and stored in @code{extent} which must already be allocated before this function.
@end deftypefun
@deftypefun void gal_box_bound_ellipse (double @code{a}, double @code{b}, double @code{theta_deg}, long @code{*width})
Any ellipse can be enclosed into a rectangular box.
This function will write the height and width of that box where @code{width} points to.
It assumes the center of the ellipse is located within the central pixel of the box.
@code{a} is the ellipse semi-major axis length, @code{b} is the semi-minor axis, @code{theta_deg} is the position angle in degrees.
The @code{width} array will contain the output size in long integer type.
@code{width[0]}, and @code{width[1]} are the number of pixels along the first and second FITS axis.
Since the ellipse center is assumed to be in the center of the box, all the values in @code{width} will be odd integers.
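For example, the minimal sketch below (error checking omitted for brevity) prints the bounding box of an ellipse with a semi-major axis of 10 pixels, a semi-minor axis of 4 pixels and a position angle of 45 degrees:
@example
#include <stdio.h>
#include <gnuastro/box.h>

int
main (void)
@{
  long width[2];

  /* Bounding box of an ellipse with a=10, b=4 and PA=45 deg. */
  gal_box_bound_ellipse(10.0, 4.0, 45.0, width);

  /* Print the width along the two FITS axes. */
  printf("Box width: %ld x %ld pixels\n", width[0], width[1]);
  return 0;
@}
@end example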
@end deftypefun
@deftypefun void gal_box_bound_ellipsoid_extent (double @code{*semiaxes}, double @code{*euler_deg}, double @code{*extent})
Return the maximum extent along each dimension of the given ellipsoid from its center.
Therefore this is half the extent of the box in each dimension.
The semi-axis lengths of the ellipsoid must be present in the 3-element @code{semiaxes} array.
The @code{euler_deg} array contains the three ellipsoid Euler angles in degrees.
For a description of the Euler angles, see description of @code{gal_box_bound_ellipsoid} below.
The extent in each dimension is in floating point format and stored in @code{extent}, which must have been allocated before calling this function.
@end deftypefun
@deftypefun void gal_box_bound_ellipsoid (double @code{*semiaxes}, double @code{*euler_deg}, long @code{*width})
Any ellipsoid can be enclosed into a rectangular volume/box.
The purpose of this function is to give the integer size/width of that box.
The semi-axis lengths of the ellipsoid must be in the @code{semiaxes} array (with three elements).
The major axis length must be the first element of @code{semiaxes}.
The only other condition is that the next two semi-axes must both be smaller than the first.
The orientation of the major axis is defined through three proper Euler angles (ZXZ order in degrees) that are given in the @code{euler_deg} array.
The @code{width} array will contain the output size in long integer type (in FITS axis order).
Since the ellipsoid center is assumed to be in the center of the box, all the values in @code{width} will be odd integers.
@cindex Euler angles
The proper Euler angles can be defined in many ways (which axes to rotate about).
For a full description of the Euler angles, please see @url{https://en.wikipedia.org/wiki/Euler_angles, Wikipedia}.
Here we adopt the ZXZ (or @mymath{Z_1X_2Z_3}) proper Euler angles, where the first rotation is done around the Z axis, the second one about the (rotated) X axis and the third about the (rotated) Z axis.
@end deftypefun
@deftypefun void gal_box_border_from_center (double @code{*center}, size_t @code{ndim}, long @code{*width}, long @code{*fpixel}, long @code{*lpixel})
Given the center coordinates in @code{center} and the @code{width} (along each dimension) of a box, return the coordinates of the first (@code{fpixel}) and last (@code{lpixel}) pixels.
All arrays must have @code{ndim} elements (one for each dimension).
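For example, the small sketch below finds the first and last pixels of a 5 by 3 box centered on pixel (50, 50) of a 2D image:
@example
#include <stdio.h>
#include <gnuastro/box.h>

int
main (void)
@{
  size_t ndim=2;
  long width[2]=@{5, 3@};
  long fpixel[2], lpixel[2];
  double center[2]=@{50.0, 50.0@};

  gal_box_border_from_center(center, ndim, width, fpixel, lpixel);
  printf("First pixel: (%ld, %ld), last pixel: (%ld, %ld)\n",
         fpixel[0], fpixel[1], lpixel[0], lpixel[1]);
  return 0;
@}
@end example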
@end deftypefun
@deftypefun void gal_box_border_rotate_around_center (long @code{*fpixel}, long @code{*lpixel}, size_t @code{ndim}, float @code{rotate_deg})
Modify the input first and last pixels (@code{fpixel} and @code{lpixel}, that you can estimate with @code{gal_box_border_from_center}) to account for the given rotation (in units of degrees) in 2D (currently @code{ndim} can only have a value of @code{2}).
@end deftypefun
@deftypefun int gal_box_overlap (long @code{*naxes}, long @code{*fpixel_i}, long @code{*lpixel_i}, long @code{*fpixel_o}, long @code{*lpixel_o}, size_t @code{ndim})
An @code{ndim}-dimensional dataset of size @code{naxes} (along each dimension, in FITS order) is given, along with a box whose first and last (inclusive) coordinates are @code{fpixel_i} and @code{lpixel_i}.
This box does not necessarily have to lie within the dataset; it can be outside of it, or only partially overlapping.
This function will change the values of @code{fpixel_i} and @code{lpixel_i} to exactly cover the overlap in the input dataset's coordinates.
This function will return 1 if there is an overlap and 0 if there is not.
When there is an overlap, the coordinates of the first and last pixels of the overlap will be put in @code{fpixel_o} and @code{lpixel_o}.
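For example, the sketch below checks a box against a 100 by 100 dataset; on return, @code{fpixel_i} and @code{lpixel_i} have been clipped to the overlapping region:
@example
#include <stdio.h>
#include <gnuastro/box.h>

int
main (void)
@{
  size_t ndim=2;
  long naxes[2]=@{100, 100@};      /* Size of the dataset.    */
  long fpixel_i[2]=@{95, 90@};     /* Box: partially outside. */
  long lpixel_i[2]=@{110, 120@};
  long fpixel_o[2], lpixel_o[2];

  if( gal_box_overlap(naxes, fpixel_i, lpixel_i,
                      fpixel_o, lpixel_o, ndim) )
    printf("Overlap (in input coordinates): "
           "(%ld, %ld) to (%ld, %ld)\n",
           fpixel_i[0], fpixel_i[1], lpixel_i[0], lpixel_i[1]);
  else
    printf("No overlap\n");
  return 0;
@}
@end example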
@end deftypefun
@node Polygons, Qsort functions, Bounding box, Gnuastro library
@subsection Polygons (@file{polygon.h})
Polygons are commonly necessary in image processing.
For example, in Crop they are used for cutting out non-rectangular regions of a image (see @ref{Crop}), and in Warp, for mapping different pixel grids over each other (see @ref{Warp}).
@cindex Convex polygons
@cindex Concave polygons
@cindex Polygons, Convex
@cindex Polygons, Concave
Polygons come in two classes: convex and concave (or generally, non-convex!), see below for a demonstration.
Convex polygons are those where all inner angles are less than 180 degrees.
By contrast, a concave polygon is one where an inner angle may be more than 180 degrees.
@example
   Concave Polygon           Convex Polygon

    D --------C              D------------- C
     \        |            E /              |
      \E      |              \              |
      /       |               \             |
     A--------B                A ----------B
@end example
In all the functions here the vertices (and points) are defined as an array.
So a polygon with 4 vertices will be identified with an array of 8 elements, the first two elements keeping the 2D coordinates of the first vertex, and so on.
@deffn Macro GAL_POLYGON_MAX_CORNERS
The largest number of vertices a polygon can have in this library.
@end deffn
@deffn Macro GAL_POLYGON_ROUND_ERR
@cindex Round-off error
We have to consider floating point round-off errors when dealing with polygons.
For example, we will take @code{A} as the maximum of @code{A} and @code{B} when @code{A>B-GAL_POLYGON_ROUND_ERR}.
@end deffn
@deftypefun void gal_polygon_vertices_sort_convex (double @code{*in}, size_t @code{n}, size_t @code{*ordinds})
We have a simple polygon (as can result from a projection: its edges do not intersect and it has no holes) and we want to order its corners in an anticlockwise fashion.
This is necessary for clipping it and finding its area later.
The input vertices can have practically any order.
The input (@code{in}) is an array containing the coordinates (two values) of each vertex.
@code{n} is the number of corners.
So @code{in} should have @code{2*n} elements.
The output (@code{ordinds}) is an array with @code{n} elements specifying the indices in order.
This array must have been allocated before calling this function.
The indices are output for more generic usage; for example, in a homographic transform (necessary in warping an image, see @ref{Linear warping basics}), the necessary order of vertices is the same for all the pixels.
In other words, only the positions of the vertices change, not the way they need to be ordered.
Therefore, this function would only be necessary once.
As a summary, the input is unchanged, only @code{n} values will be put in the @code{ordinds} array, such that reading the input coordinates in the following fashion will give an anti-clockwise order when there are 4 vertices:
@example
1st vertex: in[ordinds[0]*2], in[ordinds[0]*2+1]
2nd vertex: in[ordinds[1]*2], in[ordinds[1]*2+1]
3rd vertex: in[ordinds[2]*2], in[ordinds[2]*2+1]
4th vertex: in[ordinds[3]*2], in[ordinds[3]*2+1]
@end example
@cindex Convex Hull
@noindent
The implementation of this is very similar to the Graham scan in finding the Convex Hull.
However, in projection we will never have a concave polygon (the left case in the demonstration at the start of this section, where this algorithm would get to E before D); we will always have a convex polygon (the right case), or E will not exist!
This is because we are always going to be calculating the area of the overlap between a quadrilateral and the pixel grid or the quadrilateral itself.
The @code{GAL_POLYGON_MAX_CORNERS} macro is defined so there will be no need to allocate these temporary arrays separately.
Since we are dealing with pixels, the polygon cannot really have too many vertices.
@end deftypefun
@deftypefun int gal_polygon_is_convex (double @code{*v}, size_t @code{n})
Returns @code{1} if the polygon is convex with vertices defined by @code{v} and @code{0} if it is a concave polygon.
Note that the vertices of the polygon should be sorted in an anti-clockwise manner.
@end deftypefun
@deftypefun double gal_polygon_area_flat (double @code{*v}, size_t @code{n})
Find the area of a polygon with vertices defined in @code{v} on a Euclidean (flat) coordinate system.
@code{v} points to an array of doubles which keep the positions of the vertices, such that @code{v[0]} and @code{v[1]} are the positions of the first vertex to be considered.
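For example, the sketch below prints the area of a unit square (so the printed area should be 1):
@example
#include <stdio.h>
#include <gnuastro/polygon.h>

int
main (void)
@{
  /* Unit square, vertices ordered anti-clockwise. */
  double v[8]=@{ 0,0,   1,0,   1,1,   0,1 @};

  printf("Area: %g\n", gal_polygon_area_flat(v, 4));
  return 0;
@}
@end example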
@end deftypefun
@deftypefun double gal_polygon_area_sky (double @code{*v}, size_t @code{n})
Find the area of a polygon with vertices defined in @code{v} on a celestial coordinate system.
This is a coordinate system where the first coordinate goes from 0 to 360 (increasing to the right), while the second coordinate ranges from -90 to +90 (at the poles).
@code{v} points to an array of doubles which keep the positions of the vertices, such that @code{v[0]} and @code{v[1]} are the positions of the first vertex to be considered.
This function uses an approximation to account for the curvature of the sky and the different nature of spherical coordinates with respect to the flat coordinate system.
@url{https://savannah.gnu.org/bugs/index.php?64617, Bug 64617} has been defined in Gnuastro to address this problem.
Please check that bug in case it has been fixed.
Until this bug is fixed, here are some tips:
@itemize
@item
Shift the RA and Dec of all the vertex coordinates by a constant so the center of the polygon falls on an (RA, Dec) of (180, 0).
The sphere has a similar nature everywhere on it, so shifting the polygon vertices will not change its area; this also removes issues with the RA=0 or RA=360 coordinate and decreases issues caused by RA depending on declination.
@item
These approximations should not cause any statistically significant error on normal (less than a few degrees) scales.
But it won't hurt to do a small sanity check for your particular usage scenario.
@item
Any help (even in the mathematics of the problem; not necessary programming) would be appreciated (we didn't have time to derive the necessary equations), so if you have some background in this and can prepare the mathematical description of the problem, please get in touch.
@end itemize
@end deftypefun
@deftypefun int gal_polygon_is_inside (double @code{*v}, double @code{*p}, size_t @code{n})
Returns @code{1} if the point @code{p} is inside the polygon (convex or concave) whose vertices are defined by @code{v}, and @code{0} otherwise.
The vertices of the polygon have to be ordered in an anti-clockwise manner.
This function uses the @url{https://en.wikipedia.org/wiki/Point_in_polygon#Winding_number_algorithm, winding number algorithm} to check the points.
Note that this is a generic function (working on both concave and convex polygons), so if you know beforehand that your polygon is convex, it is much more efficient to use @code{gal_polygon_is_inside_convex}.
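For example, the sketch below tests one point inside and one point outside a unit square; it should print @code{1 0}:
@example
#include <stdio.h>
#include <gnuastro/polygon.h>

int
main (void)
@{
  double in[2]=@{0.5, 0.5@};       /* Inside the square.  */
  double out[2]=@{2.0, 2.0@};      /* Outside the square. */
  double v[8]=@{ 0,0,   1,0,   1,1,   0,1 @};  /* Anti-clockwise. */

  printf("%d %d\n", gal_polygon_is_inside(v, in,  4),
                    gal_polygon_is_inside(v, out, 4));
  return 0;
@}
@end example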
@end deftypefun
@deftypefun int gal_polygon_is_inside_convex (double @code{*v}, double @code{*p}, size_t @code{n})
Return @code{1} if the point @code{p} is within the polygon whose vertices are defined by @code{v}.
The polygon is assumed to be convex, for a more generic function that deals with concave and convex polygons, see @code{gal_polygon_is_inside}.
Note that the vertices of the polygon have to be sorted in an anti-clock-wise manner.
@end deftypefun
@deftypefun int gal_polygon_ppropin (double @code{*v}, double @code{*p}, size_t @code{n})
Similar to @code{gal_polygon_is_inside_convex}, except that if the point @code{p} is on
one of the edges of a polygon, this will return @code{0}.
@end deftypefun
@deftypefun int gal_polygon_is_counterclockwise (double @code{*v}, size_t @code{n})
Returns @code{1} if the sorted polygon has a counter-clockwise orientation and @code{0} otherwise.
This function uses the concept of ``winding'', which defines the relative order in which the vertices of a polygon are listed to determine the orientation of vertices.
For complex polygons (where edges, or sides, intersect), the most significant orientation is returned.
In a complex polygon, when the alternative windings are equal (for example, an @code{8}-shape) it will return @code{1} (as if it was counter-clockwise).
Note that the polygon vertices have to be sorted before calling this function.
@end deftypefun
@deftypefun int gal_polygon_to_counterclockwise (double @code{*v}, size_t @code{n})
Arrange the vertices of the sorted polygon in place, to be in a counter-clockwise direction.
If the input polygon already has a counter-clockwise direction it will not touch the input.
The return value is @code{1} on successful execution.
This function is just a wrapper over @code{gal_polygon_is_counterclockwise}, and will reverse the order of the vertices when necessary.
@end deftypefun
@deftypefun void gal_polygon_clip (double @code{*s}, size_t @code{n}, double @code{*c}, size_t @code{m}, double @code{*o}, size_t @code{*numcrn})
Clip (find the overlap of) two polygons.
This function uses the @url{https://en.wikipedia.org/wiki/Sutherland%E2%80%93Hodgman_algorithm, Sutherland-Hodgman} polygon clipping algorithm.
Note that the vertices of both polygons have to be sorted in an anti-clock-wise manner.
The Pseudocode from Wikipedia:
@verbatim
List outputList = subjectPolygon;
for (Edge clipEdge in clipPolygon) do
   List inputList = outputList;
   outputList.clear();
   Point S = inputList.last;
   for (Point E in inputList) do
      if (E inside clipEdge) then
         if (S not inside clipEdge) then
            outputList.add(ComputeIntersection(S,E,clipEdge));
         end if
         outputList.add(E);
      else if (S inside clipEdge) then
         outputList.add(ComputeIntersection(S,E,clipEdge));
      end if
      S = E;
   done
done
@end verbatim
The difference is that we are not using lists, but arrays, to keep polygon vertices.
The two polygons are called Subject (@code{s}) and Clip (@code{c}), with @code{n} and @code{m} vertices respectively.
The output is stored in @code{o} and the number of elements in the output is stored in the space that @code{numcrn} (for ``number of corners'') points to.
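For example, the sketch below clips two overlapping squares (both sorted anti-clockwise); the printed corners should trace the square between (1,1) and (2,2):
@example
#include <stdio.h>
#include <gnuastro/polygon.h>

int
main (void)
@{
  size_t i, numcrn=0;
  double o[2*GAL_POLYGON_MAX_CORNERS];
  double s[8]=@{ 0,0,   2,0,   2,2,   0,2 @};  /* Subject polygon. */
  double c[8]=@{ 1,1,   3,1,   3,3,   1,3 @};  /* Clip polygon.    */

  gal_polygon_clip(s, 4, c, 4, o, &numcrn);
  for(i=0; i<numcrn; ++i)
    printf("Corner %zu: (%g, %g)\n", i, o[i*2], o[i*2+1]);
  return 0;
@}
@end example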
@end deftypefun
@deftypefun void gal_polygon_vertices_sort (double @code{*vertices}, size_t @code{n}, size_t @code{*ordinds})
Sort the indices of the un-ordered @code{vertices} array to a counter-clockwise polygon in the already allocated space of @code{ordinds}.
It is assumed that there are @code{n} vertices, and thus that @code{vertices} contains @code{2*n} elements, where the two coordinates of the first vertex occupy the first two elements of the array, and so on.
The polygon can be both concave and convex (see the start of this section).
However, note that for concave polygons there is no unique sort from an un-ordered set of vertices.
So after this function you may want to use @code{gal_polygon_is_convex} and print a warning to check the output if the polygon was concave.
Note that the contents of the @code{vertices} array are left untouched by this function.
If you want to write the ordered vertex coordinates in another array with the same size, you can use a loop like this:
@example
for(i=0;i<n;++i)
  @{
    ordered[i*2  ] = vertices[ ordinds[i]*2     ];
    ordered[i*2+1] = vertices[ ordinds[i]*2 + 1 ];
  @}
@end example
In this algorithm, we find the rightmost and leftmost points (based on their x-coordinate) and use the diagonal vector between those points to group the points in arrays based on their position with respect to this vector.
For anticlockwise sorting, all the points below the vector are sorted by their ascending x-coordinates and points above the vector are sorted in decreasing order using @code{qsort}.
Finally, both these arrays are merged together to get the final sorted array of points, from which the points are indexed into the @code{ordinds} using linear search.
@end deftypefun
@node Qsort functions, K-d tree, Polygons, Gnuastro library
@subsection Qsort functions (@file{qsort.h})
@cindex @code{qsort}
When sorting a dataset is necessary, the C programming language provides the @code{qsort} (Quick sort) function.
@code{qsort} is a generic function which allows you to sort any kind of data structure (not just a single array of numbers).
To define ``greater'' and ``smaller'' (for sorting), @code{qsort} needs another function, even for simple numerical types.
The functions introduced in this section are to be passed to @code{qsort}.
@cindex NaN
Note that larger and smaller operators are not defined on NaN elements.
Therefore, if the input array is a floating point type, and contains NaN values, the relevant functions of this section are going to put the NaN elements at the end of the list (after the sorted non-NaN elements), irrespective of the requested sorting order (increasing or decreasing).
The first class of functions below (with @code{TYPE} in their names) can be used for sorting a simple numeric array.
Just replace @code{TYPE} with the dataset's numeric datatype.
The second set of functions can be used to sort indices (leave the actual numbers untouched).
To use the second set of functions, a global variable or structure is also necessary, as described below.
@deffn {Global variable} {gal_qsort_index_single}
@cindex Thread safety
@cindex Multi-threaded operation
Pointer to an array (for example, @code{float *} or @code{int *}) to use as a reference in @code{gal_qsort_index_single_TYPE_d} or @code{gal_qsort_index_single_TYPE_i}, see the explanation of these functions for more.
Note that if @emph{more than one} array is to be sorted in a multi-threaded operation, these functions will not work as expected.
However, when all the threads just sort the indices based on a @emph{single array}, this global variable can safely be used in a multi-threaded scenario.
@end deffn
@deftp {Type (C @code{struct})} gal_qsort_index_multi
Structure to get the sorted indices of multiple datasets on multiple threads with @code{gal_qsort_index_multi_d} or @code{gal_qsort_index_multi_i}.
Note that the @code{values} array will not be changed by these functions, it is only read.
Therefore all the @code{values} elements in the (to be sorted) array of @code{gal_qsort_index_multi} must point to the same place.
@example
struct gal_qsort_index_multi
@{
  float *values;   /* Array of values (same in all).      */
  size_t index;    /* Index of each element to be sorted. */
@};
@end example
@end deftp
@deftypefun int gal_qsort_TYPE_d (const void @code{*a}, const void @code{*b})
When passed to @code{qsort}, this function will sort a @code{TYPE} array in decreasing order (first element will be the largest).
Please replace @code{TYPE} (in the function name) with one of the @ref{Numeric data types}, for example, @code{gal_qsort_int32_d}, or @code{gal_qsort_float64_d}.
@end deftypefun
@deftypefun int gal_qsort_TYPE_i (const void @code{*a}, const void @code{*b})
When passed to @code{qsort}, this function will sort a @code{TYPE} array in increasing order (first element will be the smallest).
Please replace @code{TYPE} (in the function name) with one of the @ref{Numeric data types}, for example, @code{gal_qsort_int32_i}, or @code{gal_qsort_float64_i}.
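For example, this minimal sketch sorts a @code{float} (@code{float32}) array in increasing order; it should print @code{0.1, 0.2, 1.3, 1.8}:
@example
#include <stdio.h>
#include <stdlib.h>
#include <gnuastro/qsort.h>

int
main (void)
@{
  float f[4]=@{1.3, 0.2, 1.8, 0.1@};

  qsort(f, 4, sizeof(float), gal_qsort_float32_i);
  printf("%g, %g, %g, %g\n", f[0], f[1], f[2], f[3]);
  return EXIT_SUCCESS;
@}
@end example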
@end deftypefun
@deftypefun int gal_qsort_index_single_TYPE_d (const void @code{*a}, const void @code{*b})
When passed to @code{qsort}, this function will sort a @code{size_t} array based on decreasing values in the @code{gal_qsort_index_single}.
The global @code{gal_qsort_index_single} pointer is a @code{void *} pointer which will be cast to the proper type based on this function: for example, @code{gal_qsort_index_single_uint16_d} will cast the array to an unsigned 16-bit integer type.
The array that @code{gal_qsort_index_single} points to will not be changed, it is only read.
For example, see this demo program:
@example
#include <stdio.h>
#include <stdlib.h>            /* qsort is defined in stdlib.h. */
#include <gnuastro/qsort.h>

int
main (void)
@{
  size_t s[4]=@{0, 1, 2, 3@};
  float f[4]=@{1.3, 0.2, 1.8, 0.1@};
  gal_qsort_index_single=f;
  qsort(s, 4, sizeof(size_t), gal_qsort_index_single_float32_d);
  printf("%zu, %zu, %zu, %zu\n", s[0], s[1], s[2], s[3]);
  return EXIT_SUCCESS;
@}
@end example
@noindent
The output will be: @code{2, 0, 1, 3}.
@end deftypefun
@deftypefun int gal_qsort_index_single_TYPE_i (const void @code{*a}, const void @code{*b})
Similar to @code{gal_qsort_index_single_TYPE_d}, but will sort the indexes
such that the values of @code{gal_qsort_index_single} can be parsed in
increasing order.
@end deftypefun
@deftypefun int gal_qsort_index_multi_d (const void @code{*a}, const void @code{*b})
When passed to @code{qsort} with an array of @code{gal_qsort_index_multi}, this function will sort the array based on the values of the given indices.
The sorting will be ordered according to the @code{values} pointer of @code{gal_qsort_index_multi}.
Note that @code{values} must point to the same place in all the structures of the @code{gal_qsort_index_multi} array.
This function is only useful when the indices of multiple arrays on multiple threads are to be sorted.
If your program is single threaded, or all the indices belong to a single array (sorting different sub-sets of indices in a single array on multiple threads), it is recommended to use @code{gal_qsort_index_single_TYPE_d}.
@end deftypefun
@deftypefun int gal_qsort_index_multi_i (const void @code{*a}, const void @code{*b})
Similar to @code{gal_qsort_index_multi_d}, but the result will be sorted in
increasing order (first element will have the smallest value).
@end deftypefun
@node K-d tree, Permutations, Qsort functions, Gnuastro library
@subsection K-d tree (@file{kdtree.h})
@cindex K-d tree
A k-d tree is a space-partitioning binary search tree for organizing points in a k-dimensional space.
K-d trees are a very useful data structure for multidimensional searches like range searches and nearest neighbor searches.
For a more formal and complete introduction see @url{https://en.wikipedia.org/wiki/K-d_tree, the Wikipedia page}.
Each non-leaf node in a k-d tree divides the space into two parts, known as half-spaces.
To select the top/root node for partitioning, we find the median of the points and make a hyperplane normal to the first dimension.
The points to the left of this space are represented by the left subtree of that node and points to the right of the space are represented by the right subtree.
This is then repeated for all the points in the input, thus associating a ``left'' and ``right'' branch for each input point.
Gnuastro uses the standard algorithms of the k-d tree with one small difference that makes it much more memory and CPU optimized.
The set of input points that define the tree nodes are given as a list of Gnuastro's data container type, see @ref{List of gal_data_t}.
Each @code{gal_data_t} in the list represents the point's coordinate in one dimension, and the first element in the list is the first dimension.
Hence the number of data values in each @code{gal_data_t} (which must be equal in all of them) represents the number of points.
This is the same format that Gnuastro's Table reading/writing functions read/write columns in tables, see @ref{Table input output}.
The output k-d tree is a list of two @code{gal_data_t}s, representing the input's row-number (or index, counting from 0) of the left and right subtrees of each row.
Each @code{gal_data_t} thus has the same number of rows (or points) as the input, but only containing integers with a type of @code{uint32_t} (unsigned 32-bit integer).
If a node has no left, or right subtree, then @code{GAL_BLANK_UINT32} will be used.
Below you can see the simple tree for 2D points from Wikipedia.
The input point coordinates are represented as two input @code{gal_data_t}s (@code{X} and @code{Y}, where @code{X->next=Y} and @code{Y->next=NULL}).
If you had three dimensional points, you could define an extra @code{gal_data_t} such that @code{Y->next=Z} and @code{Z->next=NULL}.
The output is always a list of two @code{gal_data_t}s, where the first one contains the index of the left sub-tree in the input, and the second one, the index of the right subtree.
The index of the root node (@code{0} in the case below@footnote{This example input table is the same as the example in Wikipedia (as of December 2020).
However, on the Wikipedia output, the root node is (7,2), not (5,4).
The difference is primarily because there are 6 rows and the median element of an even number of elements can vary by integer calculation strategies.
Here we use 0-based indexes for finding median and round to the smaller integer.}) is also returned as a single number.
@example
INDEX        INPUT      OUTPUT            K-D Tree
(as guide)   X --> Y    LEFT --> RIGHT    (visualized)
----------   -------    --------------    ------------------
0            5     4    1        2             (5,4)
1            2     3    BLANK    4             /   \
2            7     2    5        3         (2,3)    \
3            9     6    BLANK    BLANK         \    (7,2)
4            4     7    BLANK    BLANK       (4,7)   /  \
5            8     1    BLANK    BLANK       (8,1)    (9,6)
@end example
This format is therefore scalable to any number of dimensions: the number of dimensions are determined from the number of nodes in the input list of @code{gal_data_t}s (for example, using @code{gal_list_data_number}).
In Gnuastro's k-d tree implementation, there are thus no special structures to keep every tree node (which would take extra memory and would need to be moved around as the tree is being created).
Everything is done internally on the index of each point in the input dataset: the only thing that is flipped/sorted during tree creation is the index to the input row for any number of dimensions.
As a result, Gnuastro's k-d tree implementation is very memory and CPU efficient and its two output columns can directly be written into a standard table (without having to define any special binary format).
@deftypefun {gal_data_t *} gal_kdtree_create (gal_data_t @code{*coords_raw}, size_t @code{*root})
Create a k-d tree in a bottom-up manner (from leaves to the root).
This function returns two @code{gal_data_t}s connected as a list, see description above.
The first dataset contains the indexes of left and right nodes of the subtrees for each input node.
The index of the root node is written into the memory that @code{root} points to.
@code{coords_raw} is the list of the input points (one @code{gal_data_t} per dimension, see above).
If the input dataset has no data (@code{coords_raw->size==0}), this function will return a @code{NULL} pointer.
For example, assume you have the simple set of points below (from the visualized example at the start of this section) in a plain-text file called @file{coordinates.txt}:
@example
$ cat coordinates.txt
5 4
2 3
7 2
9 6
4 7
8 1
@end example
With the program below, you can calculate the kd-tree, and write it in a FITS file (while keeping the root index as a FITS keyword inside of it).
@example
#include <stdio.h>
#include <stdlib.h>
#include <gnuastro/fits.h>
#include <gnuastro/table.h>
#include <gnuastro/kdtree.h>

int
main (void)
@{
  gal_data_t *input, *kdtree;
  char kdtreefile[]="kd-tree.fits";
  char inputfile[]="coordinates.txt";

  /* To write the root within the saved file. */
  size_t root;
  char *unit="index";
  char *keyname="KDTROOT";
  gal_fits_list_key_t *keylist=NULL;
  char *comment="k-d tree root index (counting from 0).";

  /* Read the input table. Note: this assumes the table only
   * contains your input point coordinates (one column for each
   * dimension). If it contains more columns with other properties
   * for each point, you can specify which columns to read by
   * name or number, see the documentation of 'gal_table_read'. */
  input=gal_table_read(inputfile, "1", NULL, NULL,
                       GAL_TABLE_SEARCH_NAME, 0, -1, 0, NULL);

  /* Construct a k-d tree. The index of root is stored in 'root'. */
  kdtree=gal_kdtree_create(input, &root);

  /* Write the k-d tree to a file and write root index and input
   * name as FITS keywords ('gal_table_write' frees 'keylist'). */
  gal_fits_key_list_title_add(&keylist, "k-d tree parameters", 0);
  gal_fits_key_write_filename("KDTIN", inputfile, &keylist, 0, 1);
  gal_fits_key_list_add_end(&keylist, GAL_TYPE_SIZE_T, keyname, 0,
                            &root, 0, comment, 0, unit, 0);
  gal_table_write(kdtree, &keylist, NULL, GAL_TABLE_FORMAT_BFITS,
                  kdtreefile, "kdtree", 0, 1);

  /* Clean up and return. */
  gal_list_data_free(input);
  gal_list_data_free(kdtree);
  return EXIT_SUCCESS;
@}
@end example
You can inspect the saved k-d tree FITS table with Gnuastro's @ref{Table} (first command below), and you can see the keywords containing the root index with @ref{Fits} (second command below):
@example
asttable kd-tree.fits
astfits kd-tree.fits -h1
@end example
@end deftypefun
@deftypefun size_t gal_kdtree_nearest_neighbor (gal_data_t @code{*coords_raw}, gal_data_t @code{*kdtree}, size_t @code{root}, double @code{*point}, double @code{*least_dist}, uint8_t @code{nosamenode})
Returns the index of the nearest input point to the query point (@code{point}, assumed to be an array with the same number of elements as there are @code{gal_data_t}s in @code{coords_raw}).
If @code{nosamenode} is not zero, exact matches are discarded.
This is useful for instance when we want to search the nearest neighbor within the same dataset.
The distance between the query point and its nearest neighbor is stored in the space that @code{least_dist} points to.
This search is efficient due to the constant checking for the presence of possible best points in other branches.
If it is not possible for the other branch to have a better nearest neighbor, that branch is not searched.
As an example, let's use the k-d tree that was created in the example of @code{gal_kdtree_create} (above) and find the nearest row to a given coordinate (@code{point}).
This will be a very common scenario, especially in large and multi-dimensional datasets where the k-d tree creation can take long and you do not want to re-create the k-d tree every time.
In the @code{gal_kdtree_create} example output, we also wrote the k-d tree root index as a FITS keyword (@code{KDTROOT}), so after loading the two table data (input coordinates and k-d tree), we will read the root from the FITS keyword.
This is a very simple example, but the scalability is clear: for example, it is trivial to parallelize (see @ref{Library demo - multi-threaded operation}).
@example
#include <stdio.h>
#include <stdlib.h>
#include <gnuastro/data.h>
#include <gnuastro/fits.h>
#include <gnuastro/table.h>
#include <gnuastro/kdtree.h>

int
main (void)
@{
  /* INPUT: desired point. */
  double point[2]=@{8.9, 5.9@};

  /* Same as example in description of 'gal_kdtree_create'. */
  gal_data_t *input, *kdtree;
  char kdtreefile[]="kd-tree.fits";
  char inputfile[]="coordinates.txt";

  /* Processing variables of this function. */
  char kdtreehdu[]="1";
  double *in_x, *in_y, least_dist;
  size_t root, nkeys=1, nearest_index;
  gal_data_t *rkey, *keysll=gal_data_array_calloc(nkeys);

  /* Read the input coordinates, see comments in example of
   * 'gal_kdtree_create' for more. */
  input=gal_table_read(inputfile, "1", NULL, NULL,
                       GAL_TABLE_SEARCH_NAME, 0, -1, 0, NULL);

  /* Read the k-d tree contents (created before). */
  kdtree=gal_table_read(kdtreefile, "1", NULL, NULL,
                        GAL_TABLE_SEARCH_NAME, 0, -1, 0, NULL);

  /* Read the k-d tree root index from the header keyword.
   * See example in description of 'gal_fits_key_read_from_ptr'. */
  keysll[0].name="KDTROOT";
  keysll[0].type=GAL_TYPE_SIZE_T;
  gal_fits_key_read(kdtreefile, kdtreehdu, keysll, 0, 0, NULL);
  keysll[0].name=NULL; /* Since we did not allocate it. */
  rkey=gal_data_copy_to_new_type(&keysll[0], GAL_TYPE_SIZE_T);
  root=((size_t *)(rkey->array))[0];

  /* Find the nearest neighbor of the point (without discarding
   * exact matches, so 'nosamenode' is 0). */
  nearest_index=gal_kdtree_nearest_neighbor(input, kdtree, root,
                                            point, &least_dist, 0);

  /* Print the results. */
  in_x=input->array;
  in_y=input->next->array;
  printf("(%g, %g): nearest is (%g, %g), with a distance of %g\n",
         point[0], point[1], in_x[nearest_index],
         in_y[nearest_index], least_dist);

  /* Clean up and return. */
  gal_data_free(rkey);
  gal_list_data_free(input);
  gal_list_data_free(kdtree);
  gal_data_array_free(keysll, nkeys, 1);
  return EXIT_SUCCESS;
@}
@end example
@end deftypefun
@deftypefun {gal_list_sizetf64_t *} gal_kdtree_range (gal_data_t @code{*coords_raw}, gal_data_t @code{*kdtree}, size_t @code{root}, double @code{*point}, double @code{radius})
Return the list of all input points that are within the given @code{radius} of the desired point.
Each element of the list stores the index and the distance to the desired point.
See @ref{List of size_t and double} for more on this type of list.
The rest of the input arguments are similar to the similarly named arguments of @code{gal_kdtree_nearest_neighbor}.
@end deftypefun
@node Permutations, Matching, K-d tree, Gnuastro library
@subsection Permutations (@file{permutation.h})
@cindex permutation
Permutation is the technical name for re-ordering of values.
The need for permutations occurs a lot during (mainly low-level) processing.
To do permutation, you must provide two inputs: an array of values (that you want to re-order in place) and a permutation array which contains the new index of each element (let's call it @code{perm}).
The diagram below shows the input array before and after the re-ordering.
@example
permute: AFTER[ i ] = BEFORE[ perm[i] ] i = 0 .. N-1
inverse: AFTER[ perm[i] ] = BEFORE[ i ] i = 0 .. N-1
@end example
@cindex GNU Scientific Library
The functions here are a re-implementation of the GNU Scientific Library's @code{gsl_permute} function.
The reason we did not use that function was that it uses system-specific types (like @code{long} and @code{int}) which can have different widths on different systems, hence are not easily convertible to Gnuastro's fixed width types (see @ref{Numeric data types}).
GSL also has a separate function for each type, heavily using macros to allow a @code{base} function to work on all the types, which makes the code hard to read/understand.
Hence, Gnuastro contains a re-write of their steps in a new type-agnostic method which is a single function that can work on any type.
As described in GSL's source code and manual, this implementation comes from Donald Knuth's @emph{Art of computer programming} book, in the "Sorting and Searching" chapter of Volume 3 (3rd ed).
Exercise 10 of Section 5.2 defines the problem and in the answers, Knuth describes the solution.
So if you are interested, please have a look there for more.
We are in contact with the GSL developers and in the future@footnote{Gnuastro's @url{http://savannah.gnu.org/task/?14497, Task 14497}.
If this task is still ``postponed'' when you are reading this and you are interested to help, your contributions would be very welcome.
Both Gnuastro and GSL developers are very busy, hence both would appreciate your help.} we will submit these implementations to GSL.
If they are finally incorporated there, we will delete this section in future versions.
@deftypefun void gal_permutation_check (size_t @code{*permutation}, size_t @code{size})
Print how @code{permutation} will re-order an array that has @code{size} elements, with one line for each element.
@end deftypefun
@deftypefun void gal_permutation_apply (gal_data_t @code{*input}, size_t @code{*permutation})
Apply @code{permutation} on the @code{input} dataset (can have any type), see above for the definition of permutation.
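For example, the minimal sketch below (building the dataset with @code{gal_data_alloc} and @code{gal_data_free} of @ref{Generic data container}; error checking omitted) re-orders a four-element array with the permutation @{2, 3, 0, 1@}; following the definition above, it should print @code{30}, @code{40}, @code{10} and @code{20}:
@example
#include <stdio.h>
#include <stdlib.h>
#include <gnuastro/data.h>
#include <gnuastro/permutation.h>

int
main (void)
@{
  size_t i;
  double *d;
  size_t dsize=4;
  gal_data_t *values;
  size_t perm[4]=@{2, 3, 0, 1@};

  /* Allocate a 1D, 4-element dataset and fill it. */
  values=gal_data_alloc(NULL, GAL_TYPE_FLOAT64, 1, &dsize, NULL, 0,
                        -1, 1, NULL, NULL, NULL);
  d=values->array;
  d[0]=10; d[1]=20; d[2]=30; d[3]=40;

  /* Apply the permutation: AFTER[i] = BEFORE[ perm[i] ]. */
  gal_permutation_apply(values, perm);
  for(i=0; i<dsize; ++i) printf("%g\n", d[i]);

  /* Clean up and return. */
  gal_data_free(values);
  return EXIT_SUCCESS;
@}
@end example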
@end deftypefun
@deftypefun void gal_permutation_apply_inverse (gal_data_t @code{*input}, size_t @code{*permutation})
Apply the inverse of @code{permutation} on the @code{input} dataset (can have any type), see above for the definition of permutation.
@end deftypefun
@deftypefun void gal_permutation_transpose_2d (gal_data_t @code{*input})
Transpose the input 2D matrix into a new dataset.
If the input is not square, this function will change the @code{input->array} element to a newly allocated array (the old one will be freed internally).
Therefore, if you have already stored @code{input->array} for other usage @emph{before} calling this function, and the input is not square, be sure to update the previously stored pointer afterwards.
@end deftypefun
@node Matching, Statistical operations, Permutations, Gnuastro library
@subsection Matching (@file{match.h})
@cindex Matching
@cindex Coordinate matching
Matching is often necessary when two measurements of the same points have been done using different instruments (or hardware), different software or different configurations of the same software.
In other words, you have two catalogs or tables, and each has N columns containing the N-dimensional ``coordinate'' values of each point.
Each table can have other columns too, for example, one can have magnitudes in one filter, and another can have morphology measurements.
The matching functions here will use the coordinate columns of the two tables to find a permutation for each, and the total number of matched rows (@mymath{N_{match}}).
This will enable you to match the two tables by position.
At a higher level, you can apply the permutation to the magnitude or morphology columns to merge the catalogs over the @mymath{N_{match}} rows.
The input and output data formats of the functions are the same and are described below, before the actual functions.
Each function also has extra arguments due to the particular algorithm it uses for the matching.
The two inputs of the functions (@code{coord1} and @code{coord2}) must be @ref{List of gal_data_t}.
Each @code{gal_data_t} node in @code{coord1} or @code{coord2} should be a single dimensional dataset (column in a table) and all the nodes (in each) must have the same number of elements (rows).
In other words, each column can be visualized as having the coordinates of each point in its respective dimension.
The dimensions of the coordinates is determined by the number of @code{gal_data_t} nodes in the two input lists (which must be equal).
The number of rows (or the number of elements in each @code{gal_data_t}) in the columns of @code{coord1} and @code{coord2} can (and, usually will!) be different.
In summary, these functions will be happy if you use @code{gal_table_read} to read the two coordinate columns from a file, see @ref{Table input output}.
@cindex Permutation
The functions below return a simply-linked list of three 1D datasets (see @ref{List of gal_data_t}), let's call the returned dataset @code{ret}.
The first two (@code{ret} and @code{ret->next}) are permutations.
In other words, the @code{array} elements of both have a type of @code{size_t}, see @ref{Permutations}.
The third node (@code{ret->next->next}) is the calculated distance for that match and its array has a type of @code{double}.
The number of matches will be put in the space pointed to by the @code{nummatched} argument.
If there is no match, these functions will return @code{NULL}.
The two permutations can be applied to the rows of the two inputs: the first one (@code{ret}) should be applied to the rows of the table containing @code{coord1} and the second one (@code{ret->next}) to the table containing @code{coord2}.
After applying the returned permutations to the inputs, the top @code{nummatched} elements of both will match with each other.
The ordering of the rest of the elements is undefined (depends on the matching function used).
The third node is the distances between the respective match (which may be elliptical distance, see discussion of ``aperture'' below).
The functions will not simply return the nearest neighbor as a match.
This is because the nearest neighbor may be too far away to be a meaningful match (given the errors in position measurements)!
They will check the distance between the nearest neighbor of each point and only return a match if it is within an acceptable N-dimensional distance (or ``aperture'') and is unambiguous (see @ref{Unambiguous matching}).
The matching aperture is defined by the @code{aperture} array that is an input argument to the functions.
In a 2D situation (where the input lists have two nodes), for the most generic case, @code{aperture} must have three elements: the major axis length, axis ratio and position angle (see @ref{Defining an ellipse and ellipsoid}).
Therefore, if @code{aperture[1]==1}, the aperture will be a circle of radius @code{aperture[0]} and the third value will not be used.
When the aperture is an ellipse, distances between the points are also calculated in the respective elliptical distances (@mymath{r_{el}} in @ref{Defining an ellipse and ellipsoid}).
@strong{Output permutations ignore internal sorting}: the output permutations will correspond to the initial inputs.
Therefore, even when @code{inplace!=0} (and this function re-arranges the inputs in place), the output permutation will correspond to original (possibly non-sorted) inputs.
The reason for this is that you rarely want to permute the actual positional columns after the match.
Usually, you also have other columns (such as the magnitude and morphology) and you want to find how they differ between the objects that match.
Once you have the permutations, they can be applied to those other columns (see @ref{Permutations}) and the higher-level processing can continue.
So if you do not need the coordinate columns for the rest of your analysis, it is better to set @code{inplace=1}.
@deffn Macro GAL_MATCH_ARRANGE_FULL
@deffnx Macro GAL_MATCH_ARRANGE_INNER
@deffnx Macro GAL_MATCH_ARRANGE_OUTER
@deffnx Macro GAL_MATCH_ARRANGE_INVALID
@deffnx Macro GAL_MATCH_ARRANGE_OUTERWITHINAPERTURE
The arrangement of the match output; for a description of the various match arrangements, see @ref{Arranging match output}.
The invalid macro is useful when you need to initialize values (and later make sure that your users have actually given a value).
@end deffn
@deftypefun {gal_data_t *} gal_match_sort_based (gal_data_t @code{*coord1}, gal_data_t @code{*coord2}, double @code{*aperture}, int @code{sorted_by_first}, int @code{inplace}, size_t @code{minmapsize}, int @code{quietmmap}, uint8_t @code{**flag}, size_t @code{*nummatched})
Use a basic sort-based match to find the matching points of two input coordinates.
See the descriptions above on the format of the inputs and outputs.
For inner and full arrangements, this function will allocate and return a @code{flag} array that has a value of 1 for ambiguous elements of the second input and 0 for good or non-matching elements (see @ref{Unambiguous matching}).
The final number of matches is returned in @code{nummatched} and the format of the returned dataset (three columns) is described above.
To speed up the search, this function will sort the input coordinates by their first column (first axis).
If @emph{both} are already sorted by their first column, you can avoid the sorting step by giving a non-zero value to @code{sorted_by_first}.
When sorting is necessary and @code{inplace} is non-zero, the actual input columns will be sorted.
Otherwise, an internal copy of the inputs will be made, used (sorted) and later freed before returning.
Therefore, when @code{inplace==0}, inputs will remain untouched, but this function will take more time and memory.
If internal allocation is necessary and the space is larger than @code{minmapsize}, the space will not be allocated in the RAM, but in a file; see the description of @option{--minmapsize} and @option{--quietmmap} in @ref{Processing options}.
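As a demonstration, the sketch below (error checking omitted; the columns are built with @code{gal_data_alloc} of @ref{Generic data container}) matches two small 2D catalogs with a circular aperture of radius 0.5; only the point (5, 5) of the first catalog and (5.1, 4.9) of the second are close enough, so it should report one matched row:
@example
#include <stdio.h>
#include <stdlib.h>
#include <stdint.h>
#include <gnuastro/data.h>
#include <gnuastro/list.h>
#include <gnuastro/match.h>

int
main (void)
@{
  double *x, *y;
  uint8_t *flag=NULL;
  gal_data_t *ax, *ay, *bx, *by, *match;
  size_t nummatched, adsize=3, bdsize=2;
  double aperture[3]=@{0.5, 1, 0@};    /* Circle of radius 0.5. */

  /* First catalog: three 2D points (X and Y columns). */
  ax=gal_data_alloc(NULL, GAL_TYPE_FLOAT64, 1, &adsize, NULL, 0,
                    -1, 1, NULL, NULL, NULL);
  ay=gal_data_alloc(NULL, GAL_TYPE_FLOAT64, 1, &adsize, NULL, 0,
                    -1, 1, NULL, NULL, NULL);
  x=ax->array;  y=ay->array;
  x[0]=1;  y[0]=1;    x[1]=5;  y[1]=5;    x[2]=9;  y[2]=2;
  ax->next=ay;

  /* Second catalog: two 2D points (only one matches). */
  bx=gal_data_alloc(NULL, GAL_TYPE_FLOAT64, 1, &bdsize, NULL, 0,
                    -1, 1, NULL, NULL, NULL);
  by=gal_data_alloc(NULL, GAL_TYPE_FLOAT64, 1, &bdsize, NULL, 0,
                    -1, 1, NULL, NULL, NULL);
  x=bx->array;  y=by->array;
  x[0]=5.1;  y[0]=4.9;    x[1]=20;  y[1]=20;
  bx->next=by;

  /* Do the match (inputs are not pre-sorted; sort in place). */
  match=gal_match_sort_based(ax, bx, aperture, 0, 1, -1, 1,
                             &flag, &nummatched);
  printf("Number of matched rows: %zu\n", nummatched);

  /* Clean up and return. */
  if(match) gal_list_data_free(match);
  if(flag) free(flag);
  gal_list_data_free(ax);
  gal_list_data_free(bx);
  return EXIT_SUCCESS;
@}
@end example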
@end deftypefun
@deftypefun {gal_data_t *} gal_match_kdtree (gal_data_t @code{*coord1}, gal_data_t @code{*coord2}, gal_data_t @code{*coord1_kdtree}, size_t @code{kdtree_root}, uint8_t @code{arrange}, double @code{*aperture}, size_t @code{numthreads}, size_t @code{minmapsize}, int @code{quietmmap}, size_t @code{*nummatched}, uint8_t @code{**flag}, uint8_t @code{nosamenode})
@cindex Matching by k-d tree
@cindex k-d tree matching
Use the k-d tree algorithm for finding matches between two catalogs.
The @code{arrange}-ment of the match output should be set with one of the @code{GAL_MATCH_ARRANGE_*} macros described above.
The k-d tree of the first input (@code{coord1_kdtree}), and its root index (@code{kdtree_root}), should be constructed and found before calling this function; to do this, you can use @code{gal_kdtree_create} of @ref{K-d tree}.
The desired @code{aperture} array is the same as in @code{gal_match_sort_based}, described at the top of this section; however, the aperture array can be @code{NULL} for an outer arrangement (it will not be used).
If @code{coord1_kdtree==NULL}, this function will return a @code{NULL} pointer and write a value of @code{0} in the space that @code{nummatched} points to.
If @code{nosamenode} is not zero, exact matches are discarded. This is useful for instance when we want to search the nearest neighbor within the same dataset.
If @code{numthreads} is larger than one, the matching will be done in parallel.
For inner and full arrangements, this function will allocate and return a @code{flag} array that has a value of 1 for ambiguous elements of the second input and 0 for good or non-matching elements (see @ref{Unambiguous matching}).
The final number of matches is returned in @code{nummatched} and the format of the returned dataset (three columns) is described above.
If internal allocation is necessary and the space is larger than @code{minmapsize}, the space will not be allocated in the RAM, but in a file; see the description of @option{--minmapsize} and @option{--quietmmap} in @ref{Processing options}.
When @code{arrange} is outer or outer-within-aperture, the output is only a two-column table (a list of two 1D @code{gal_data_t}s).
The first one is the permutation that should be applied to the first input, and the second one is the distance of each match.
This is because in these types of arrangements, the second input is not changed or permuted.
@end deftypefun
@node Statistical operations, Fitting functions, Matching, Gnuastro library
@subsection Statistical operations (@file{statistics.h})
After reading a dataset into memory from a file or fully simulating it with another process, the most common processes that will be done on it are statistical operations to let you quantify different aspects of the data.
The functions in this section describe Gnuastro's current set of tools for this job.
All these functions can work on any numeric data type natively (see @ref{Numeric data types}) and can also work on tiles over a dataset.
Hence the inputs and outputs are in Gnuastro's @ref{Generic data container}.
@deffn Macro GAL_STATISTICS_SIG_CLIP_MAX_CONVERGE
The maximum number of clips, when @mymath{\sigma}-clipping should be done by convergence.
If the clipping does not converge before making this many clips, all @mymath{\sigma}-clipping outputs will be NaN.
@end deffn
@deffn Macro GAL_STATISTICS_MODE_GOOD_SYM
The minimum acceptable symmetricity of the mode calculation.
If the symmetricity of the derived mode is less than this value, all the returned values by @code{gal_statistics_mode} will have a value of NaN.
@end deffn
@deffn Macro GAL_STATISTICS_BINS_INVALID
@deffnx Macro GAL_STATISTICS_BINS_REGULAR
@deffnx Macro GAL_STATISTICS_BINS_IRREGULAR
Macros used to identify the regularity of the bins when defining bins.
@end deffn
@deffn Macro GAL_STATISTICS_CLIP_OUTCOL_STD
@deffnx Macro GAL_STATISTICS_CLIP_OUTCOL_MAD
@deffnx Macro GAL_STATISTICS_CLIP_OUTCOL_MEAN
@deffnx Macro GAL_STATISTICS_CLIP_OUTCOL_MEDIAN
@deffnx Macro GAL_STATISTICS_CLIP_OUTCOL_NUMBER_USED
@deffnx Macro GAL_STATISTICS_CLIP_OUTCOL_NUMBER_CLIPS
Macros containing the index of the clipping outputs, see the descriptions of @code{gal_statistics_clip_sigma} below.
@end deffn
@deffn Macro GAL_STATISTICS_CLIP_OUTCOL_OPTIONAL_STD
@deffnx Macro GAL_STATISTICS_CLIP_OUTCOL_OPTIONAL_MAD
@deffnx Macro GAL_STATISTICS_CLIP_OUTCOL_OPTIONAL_MEAN
Macros containing bit flags for optional clipping outputs, see the descriptions of @code{gal_statistics_clip_sigma} below.
@end deffn
@cindex Number
@deftypefun {gal_data_t *} gal_statistics_number (gal_data_t @code{*input})
Return a single-element dataset with type @code{size_t} which contains the number of non-blank elements in @code{input}.
@end deftypefun
@cindex Minimum
@deftypefun {gal_data_t *} gal_statistics_minimum (gal_data_t @code{*input})
Return a single-element dataset containing the minimum non-blank value in @code{input}.
The numerical datatype of the output is the same as @code{input}.
@end deftypefun
@cindex Maximum
@deftypefun {gal_data_t *} gal_statistics_maximum (gal_data_t @code{*input})
Return a single-element dataset containing the maximum non-blank value in @code{input}.
The numerical datatype of the output is the same as @code{input}.
@end deftypefun
@deftypefun {gal_data_t *} gal_statistics_range_double (gal_data_t @code{*input})
Return a single-element (@code{double} or @code{float64}) dataset containing the difference between the minimum and maximum non-blank values of the input dataset (which can have any type).
@end deftypefun
@cindex Sum
@deftypefun {gal_data_t *} gal_statistics_sum (gal_data_t @code{*input})
Return a single-element (@code{double} or @code{float64}) dataset
containing the sum of the non-blank values in @code{input}.
@end deftypefun
@cindex Mean
@cindex Average
@deftypefun {gal_data_t *} gal_statistics_mean (gal_data_t @code{*input})
Return a single-element (@code{double} or @code{float64}) dataset
containing the mean of the non-blank values in @code{input}.
@end deftypefun
@cindex Standard deviation
@deftypefun {gal_data_t *} gal_statistics_std (gal_data_t @code{*input})
Return a single-element (@code{double} or @code{float64}) dataset
containing the standard deviation of the non-blank values in @code{input}.
@end deftypefun
@deftypefun {gal_data_t *} gal_statistics_mean_std (gal_data_t @code{*input})
Return a two-element (@code{double} or @code{float64}) dataset containing the mean and standard deviation of the non-blank values in @code{input}.
The first element of the returned dataset is the mean and the second is the standard deviation.
This function will calculate both values in one pass over the dataset.
Hence when both the mean and standard deviation of a dataset are necessary, this function is much more efficient than calling @code{gal_statistics_mean} and @code{gal_statistics_std} separately.
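For example, the minimal sketch below (building the input with @code{gal_data_alloc} of @ref{Generic data container}; error checking omitted) prints the mean and standard deviation of the values 1 to 5:
@example
#include <stdio.h>
#include <gnuastro/data.h>
#include <gnuastro/statistics.h>

int
main (void)
@{
  float *f;
  double *o;
  size_t i, dsize=5;
  gal_data_t *input, *out;

  /* Allocate the dataset and fill it with 1 to 5. */
  input=gal_data_alloc(NULL, GAL_TYPE_FLOAT32, 1, &dsize, NULL, 0,
                       -1, 1, NULL, NULL, NULL);
  f=input->array;
  for(i=0; i<dsize; ++i) f[i]=i+1;

  /* Calculate both values in a single pass. */
  out=gal_statistics_mean_std(input);
  o=out->array;
  printf("mean: %g,  std: %g\n", o[0], o[1]);

  /* Clean up and return. */
  gal_data_free(input);
  gal_data_free(out);
  return 0;
@}
@end example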
@end deftypefun
@deftypefun double gal_statistics_std_from_sums (double @code{sum}, double @code{sump2}, size_t @code{num})
Return the standard deviation from the values that can be obtained in a single pass through the distribution: @code{sum}: the sum of the elements, @code{sump2}: the sum of the power-of-2 of each element, and @code{num}: the number of elements.
This is a low-level function that is only useful after the distribution of values has been parsed (and the three input arguments are calculated).
It is the lower-level function that is used in functions like @code{gal_statistics_std}, or other components of Gnuastro that measure the standard deviation (for example, MakeCatalog's @option{--std} column).
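For reference, assuming @mymath{s} is @code{sum}, @mymath{s_2} is @code{sump2} and @mymath{N} is @code{num}, the returned value is the usual single-pass standard deviation: @mymath{\sqrt{s_2/N-(s/N)^2}}.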
@end deftypefun
@cindex Median
@deftypefun {gal_data_t *} gal_statistics_median (gal_data_t @code{*input}, int @code{inplace})
Return a single-element dataset containing the median of the non-blank values in @code{input}.
The numerical datatype of the output is the same as @code{input}.
Calculating the median involves sorting the dataset and removing blank values; for better performance (and less memory usage), you can give a non-zero value to the @code{inplace} argument.
In this case, the sorting and removal of blank elements will be done directly on the input dataset.
However, after this function the original dataset may have changed (if it was not sorted or had blank values).
@end deftypefun
@cindex Median absolute deviation (MAD)
@cindex MAD (Median absolute deviation)
@deftypefun {gal_data_t *} gal_statistics_mad (gal_data_t @code{*input}, int @code{inplace})
Return a single-element dataset with the same type as the input, containing the median absolute deviation (MAD) of the non-blank values in @code{input}.
If @code{inplace==0}, the input dataset will remain untouched.
Otherwise, the MAD calculation will be done on the input dataset without allocating a new one (its values will be changed after this function).
This is good when you do not need the input after this function, to avoid taking extra RAM and CPU.
@end deftypefun
@deftypefun {gal_data_t *} gal_statistics_median_mad (gal_data_t @code{*input}, int @code{inplace})
Return a two-element dataset with the same type as the input, containing the median and the median absolute deviation (MAD) of the non-blank values in @code{input}.
If @code{inplace==0}, the input dataset will remain untouched.
Otherwise, the calculation will be done on the input dataset without allocating a new one (its values will be changed after this function).
This is good when you do not need the input after this function, to avoid taking extra RAM and CPU.
@end deftypefun
@cindex Quantile
@deftypefun size_t gal_statistics_quantile_index (size_t @code{size}, double @code{quantile})
Return the index of the element that has a quantile of @code{quantile}
assuming the dataset has @code{size} elements.
@end deftypefun
@deftypefun {gal_data_t *} gal_statistics_quantile (gal_data_t @code{*input}, double @code{quantile}, int @code{inplace})
Return a single-element dataset containing the value at the given @code{quantile} of the non-blank values in @code{input}.
The numerical datatype of the output is the same as @code{input}.
See @code{gal_statistics_median} for a description of @code{inplace}.
@end deftypefun
@deftypefun size_t gal_statistics_quantile_function_index (gal_data_t @code{*input}, gal_data_t @code{*value}, int @code{inplace})
Return the index of the quantile function (inverse quantile) of @code{input} at @code{value}.
In other words, this function will return the index of the nearest element (of a sorted and non-blank) @code{input} to @code{value}.
If the value is outside the range of the input, then this function will return @code{GAL_BLANK_SIZE_T}.
@end deftypefun
@deftypefun {gal_data_t *} gal_statistics_quantile_function (gal_data_t @code{*input}, gal_data_t @code{*value}, int @code{inplace})
Return a single-element dataset containing the quantile function of the non-blank values in @code{input} at @code{value} (a single-element dataset).
The numerical data type of the returned dataset is @code{float64} (or @code{double}).
In other words, this function will return the quantile of @code{value} in @code{input}.
@code{value} has to have the same type as @code{input}.
See @code{gal_statistics_median} for a description of @code{inplace}.
When all elements are blank, the returned value will be NaN.
If the value is smaller than the input's smallest element, the returned value will be negative infinity.
If the value is larger than the input's largest element, then the returned value will be positive infinity.
@end deftypefun
@deftypefun {gal_data_t *} gal_statistics_unique (gal_data_t @code{*input}, int @code{inplace})
Return a 1D dataset with the same numeric data type as the input, but only containing its unique elements and without any (possible) blank/NaN elements.
Note that the input's number of dimensions is irrelevant for this function.
If @code{inplace} is not zero, then the unique values will over-write the allocated space of the input, otherwise a new space will be allocated and the input will not be touched.
@end deftypefun
@deftypefun int gal_statistics_has_negative (gal_data_t @code{*input})
Return @code{1} if the input dataset contains a negative number and @code{0} otherwise.
If the dataset does not have a numeric type (for example, if it is a string), this function will abort with an error, saying that it does not recognize the input's type.
@end deftypefun
@deftypefun {gal_data_t *} gal_statistics_mode (gal_data_t @code{*input}, float @code{mirrordist}, int @code{inplace})
Return a four-element (@code{double} or @code{float64}) dataset that contains the mode of the @code{input} distribution.
This function implements the non-parametric algorithm to find the mode that is described in Appendix C of Akhlaghi and Ichikawa @url{https://arxiv.org/abs/1505.01664,2015}.
In short it compares the actual distribution and its ``mirror distribution'' to find the mode.
In order to be efficient, you can determine how far the comparison goes away from the mirror through the @code{mirrordist} parameter (think of it as a multiple of sigma/error).
See @code{gal_statistics_median} for a description of @code{inplace}.
The output array has the following elements (in the given order, note that
counting in C starts from 0).
@example
array[0]: mode
array[1]: mode quantile.
array[2]: symmetricity.
array[3]: value at the end of symmetricity.
@end example
@end deftypefun
@deftypefun {gal_data_t *} gal_statistics_mode_mirror_plots (gal_data_t @code{*input}, gal_data_t @code{*value}, size_t @code{numbins}, int @code{inplace}, double @code{*mirror_val})
Make a mirrored histogram and cumulative frequency plot (with @code{numbins}) with the mirror distribution of the @code{input} having a value in @code{value}.
If all the input elements are blank, or the mirror value is outside the range of the input, this function will return a @code{NULL} pointer.
The output is a list of data structures (see @ref{List of gal_data_t}): the
first is the bins with one bin at the mirror point, the second is the
histogram with a maximum of one and the third is the cumulative frequency
plot (with a maximum of one).
@end deftypefun
@deftypefun int gal_statistics_is_sorted (gal_data_t @code{*input}, int @code{updateflags})
Return @code{0} if the input is not sorted; if it is sorted, this function will return @code{1} if it is increasing and @code{2} if it is decreasing.
This function will abort with an error if @code{input} has zero elements and will return @code{1} (sorted, increasing) when there is only one element.
This function will only look into the dataset if the @code{GAL_DATA_FLAG_SORT_CH} bit of @code{input->flag} is @code{0}, see @ref{Generic data container}.
When the flags do not indicate a previous check @emph{and} @code{updateflags} is non-zero, this function will set the flags appropriately to avoid having to re-check the dataset in future calls (this can be very useful when repeated checks are necessary).
When @code{updateflags==0}, this function has no side-effects on the dataset: it will not toggle the flags.
If you want to re-check a dataset whose sort-check flag has already been set (for example, if you have made changes to it), then explicitly set the @code{GAL_DATA_FLAG_SORT_CH} bit to zero before calling this function.
When there are no other flags, you can simply set the flags to zero (with @code{input->flag=0}), otherwise you can use this expression:
@example
input->flag &= ~GAL_DATA_FLAG_SORT_CH;
@end example
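As a minimal usage sketch (@code{input} is a hypothetical dataset), the returned value can be checked like this:

@example
int s=gal_statistics_is_sorted(input, 1);
if(s==0)      printf("Not sorted.\n");
else if(s==1) printf("Sorted, increasing.\n");
else          printf("Sorted, decreasing.\n");
@end example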
@end deftypefun
@deftypefun void gal_statistics_sort_increasing (gal_data_t @code{*input})
Sort the input dataset (in place) in an increasing order and toggle the sort-related bit flags accordingly.
To sort a table (many columns) based on one column, see @code{gal_table_sort} in @ref{Table input output}.
@end deftypefun
@deftypefun void gal_statistics_sort_decreasing (gal_data_t @code{*input})
Sort the input dataset (in place) in a decreasing order and toggle the sort-related bit flags accordingly.
To sort a table (many columns) based on one column, see @code{gal_table_sort} in @ref{Table input output}.
@end deftypefun
@deftypefun {gal_data_t *} gal_statistics_no_blank_sorted (gal_data_t @code{*input}, int @code{inplace})
Remove all the blanks and sort the input dataset.
If @code{inplace} is non-zero this will happen on the input dataset (in the allocated space of the input dataset).
However, if @code{inplace} is zero, this function will allocate a new copy of the dataset and work on that.
Therefore when @code{inplace==0}, the input dataset will not be modified.
This function uses the bit flags of the input, so if you have modified the dataset, set @code{input->flag=0} before calling this function.
Also note that @code{inplace} is only for the dataset elements.
Therefore even when @code{inplace==0}, if the input is already sorted @emph{and} has no blank values, then the flags will be updated to show this.
If all the elements were blank, then the returned dataset's @code{size} will be zero.
Checking the returned dataset's @code{size} is thus a good way to see if there actually were any non-blank elements in the input, and to take the appropriate measures; this can help avoid strange bugs in later steps.
The flags of a zero-sized returned dataset will indicate that it has no blanks and is sorted in an increasing order.
Although having blank values or being sorted is not defined for a zero-element dataset, the flags have to be set to something after this function, and it is up to the caller to choose what to do with a zero-element dataset.
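For example, the hedged sketch below (with a hypothetical @code{input}) checks for an all-blank input before using the result:

@example
gal_data_t *nbs=gal_statistics_no_blank_sorted(input, 0);
if(nbs->size==0)
  printf("Input had no non-blank elements!\n");
@end example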
@end deftypefun
@deftypefun {gal_data_t *} gal_statistics_regular_bins (gal_data_t @code{*input}, gal_data_t @code{*inrange}, size_t @code{numbins}, double @code{onebinstart})
Generate an array of regularly spaced elements as a 1D array (column) of type @code{double} (i.e., @code{float64}, it has to be double to account for small differences on the bin edges).
The input arguments are described below:
@table @code
@item input
The dataset you want to apply the bins to.
This is only necessary if the range argument is not complete, see below.
If @code{inrange} has all the necessary information, you can pass a @code{NULL} pointer for this.
@item inrange
This dataset keeps the desired range along each dimension of the input data structure; it has to have a @code{float} (i.e., @code{float32}) type.
@itemize
@item
If you want the full range of the dataset (in any dimension), then just set @code{inrange} to @code{NULL} and the range will be determined from the minimum and maximum values of the dataset (@code{input} cannot be @code{NULL} in this case).
@item
If there is one element for each dimension in range, then it is viewed as a quantile (Q), and the range will be: `Q to 1-Q'.
@item
If there are two elements for each dimension in range, then they are assumed to be your desired minimum and maximum values.
When either of the two are NaN, the minimum and maximum will be calculated for it.
@end itemize
@item numbins
The number of bins: must be larger than 0.
@item onebinstart
A desired value to start one bin.
Note that with this option, the bins will not start and end exactly on the given range values; they will be slightly shifted to accommodate this request (just enough for the bin containing the value to start at it).
If you do not have any preference on where to start a bin, set this to NAN.
@end table
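For example, the minimal sketch below (with a hypothetical @code{input}; @code{NAN} is defined in @code{math.h}) builds 100 bins over the input's full range, with no preference on where a bin starts:

@example
gal_data_t *bins=gal_statistics_regular_bins(input, NULL,
                                             100, NAN);
@end example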
@end deftypefun
@deftypefun {gal_data_t *} gal_statistics_histogram (gal_data_t @code{*input}, gal_data_t @code{*bins}, int @code{normalize}, int @code{maxone})
@cindex Histogram
Make a histogram of all the elements in the given dataset with bin values that are defined in the @code{bins} structure (see @code{gal_statistics_regular_bins}, they currently have to be equally spaced).
The returned histogram is a 1-D @code{gal_data_t} of type @code{GAL_TYPE_FLOAT32}, with the same number of elements as @code{bins}.
For each bin, it will contain the number of input elements that fell inside of that bin.
Let's write the center of the @mymath{i}th element of the bin array as @mymath{b_i}, and the fixed half-bin width as @mymath{h}.
Then element @mymath{j} of the input array (@mymath{in_j}) will be counted in @mymath{b_i} if @mymath{(b_i-h) \le in_j < (b_i+h)}.
However, if @mymath{in_j} is somewhere in the last bin, the condition changes to @mymath{(b_i-h) \le in_j \le (b_i+h)}.
If @code{normalize!=0}, the histogram will be ``normalized'' such that the sum of the counts column will be one.
In other words, all the counts in every bin will be divided by the total number of counts.
If @code{maxone!=0}, the histogram's maximum count will be 1.
In other words, the counts in every bin will be divided by the value of the maximum.
In both of these cases, the output dataset will have a @code{GAL_TYPE_FLOAT32} data type.
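Building on the bins sketch under @code{gal_statistics_regular_bins} above, a raw (not normalized) histogram can be made like this (with a hypothetical @code{input}):

@example
gal_data_t *bins=gal_statistics_regular_bins(input, NULL,
                                             100, NAN);
gal_data_t *hist=gal_statistics_histogram(input, bins, 0, 0);
@end example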
@end deftypefun
@deftypefun {gal_data_t *} gal_statistics_histogram2d (gal_data_t @code{*input}, gal_data_t @code{*bins})
@cindex Histogram, 2D
@cindex 2D histogram
This function is very similar to @code{gal_statistics_histogram}, but will build a 2D histogram (counting how many of the elements of @code{input} are within a 2D box).
The bins comprising the first dimension of the 2D box are defined by @code{bins}.
The bins of the second dimension are defined by @code{bins->next} (@code{bins} is a @ref{List of gal_data_t}).
Both @code{bins} and @code{bins->next} can be created with @code{gal_statistics_regular_bins}.
This function returns a list of @code{gal_data_t} with three nodes/columns, so you can directly write them into a table (see @ref{Table input output}).
Assuming @code{bins} has @mymath{N1} bins and @code{bins->next} has @mymath{N2} bins, each node/column of the returned output is a 1D array with @mymath{N1\times N2} elements.
The first and second columns are the center of the 2D bin along the first and second dimensions and have a @code{double} data type.
The third column is the 2D histogram (the number of input elements that have a value within that 2D bin) and has a @code{uint32} data type (see @ref{Numeric data types}).
@end deftypefun
@deftypefun {gal_data_t *} gal_statistics_cfp (gal_data_t @code{*input}, gal_data_t @code{*bins}, int @code{normalize})
Make a cumulative frequency plot (CFP) of all the elements in @code{input}
with bin values that are defined in the @code{bins} structure (see
@code{gal_statistics_regular_bins}).
The CFP is built from the histogram: in each bin, the value is the sum of all previous bins in the histogram.
Thus, if you have already calculated the histogram before calling this function, you can pass it onto this function as the data structure in @code{bins->next} (see @code{List of gal_data_t}).
If @code{bins->next!=NULL}, then it is assumed to be the histogram.
If it is @code{NULL}, then the histogram will be calculated internally and freed after the job is finished.
When a histogram is given and it is normalized, the CFP will also be normalized (even if the @code{normalize} argument is not set here); note that a normalized CFP's maximum value is 1.
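For example, if you have already made the histogram (as in the sketch under @code{gal_statistics_histogram} above), you can avoid an internal re-calculation like this:

@example
bins->next=hist;  /* Reuse the already-calculated histogram. */
gal_data_t *cfp=gal_statistics_cfp(input, bins, 0);
@end example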
@end deftypefun
@deftypefun {gal_data_t *} gal_statistics_concentration (gal_data_t @code{*input}, double @code{width}, int @code{inplace})
Return the concentration around the median for the input distribution.
For more on the algorithm and @code{width}, see the description of @option{--concentration} in @ref{Single value measurements}.
If @code{inplace!=0}, then this function will use the actual allocated space of the input data and will not internally allocate a new dataset (which can have memory and CPU benefits); but will alter (sort and remove blank elements from) your input dataset.
@end deftypefun
@deftypefun {gal_data_t *} gal_statistics_clip_sigma (gal_data_t @code{*input}, float @code{multip}, float @code{param}, float @code{extrastats}, int @code{inplace}, int @code{quiet})
Apply @mymath{\sigma}-clipping on a given dataset and return a dataset that contains the results.
For a description of @mymath{\sigma}-clipping see @ref{Sigma clipping}.
@code{multip} is the multiple of the standard deviation (or @mymath{\sigma}) that is used to define outliers in each round of clipping.
The role of @code{param} is determined based on its value.
If @code{param} is larger than @code{1} (one), it must be an integer and will be interpreted as the number of clips to do.
If it is less than @code{1} (one), it is interpreted as the tolerance level to stop the iteration.
The returned dataset (let's call it @code{out}) contains a 6-element array with type @code{GAL_TYPE_FLOAT32}.
Through the @code{GAL_STATISTICS_CLIP_OUTCOL_*} macros below, you can access any particular measurement.
@example
out=gal_statistics_clip_sigma(input, ....);
float *array=out->array;
array[ GAL_STATISTICS_CLIP_OUTCOL_NUMBER_USED ]
array[ GAL_STATISTICS_CLIP_OUTCOL_MEAN ]
array[ GAL_STATISTICS_CLIP_OUTCOL_STD ]
array[ GAL_STATISTICS_CLIP_OUTCOL_MEDIAN ]
array[ GAL_STATISTICS_CLIP_OUTCOL_MAD ]
array[ GAL_STATISTICS_CLIP_OUTCOL_NUMBER_CLIPS ]
@end example
However, note that not all of them are measured by default!
Since the mean and MAD are not necessary during @mymath{\sigma}-clipping, if you want them, you have to set the following two bit flags in the @code{extrastats} argument, as below.
@example
int extrastats=0; /* To initialize all bits */
/* If you want the sigma-clipped MAD. */
extrastats |= GAL_STATISTICS_CLIP_OUTCOL_OPTIONAL_MAD;
/* If you want the sigma-clipped mean. */
extrastats |= GAL_STATISTICS_CLIP_OUTCOL_OPTIONAL_MEAN;
@end example
If the @mymath{\sigma}-clipping does not converge or all input elements are blank, then this function will return NaN values for all the elements above.
@end deftypefun
@deftypefun {gal_data_t *} gal_statistics_clip_mad (gal_data_t @code{*input}, float @code{multip}, float @code{param}, uint8_t @code{extrastats}, int @code{inplace}, int @code{quiet})
Similar to @code{gal_statistics_clip_sigma}, but will do median absolute deviation (MAD) based clipping, see @ref{MAD clipping}.
The only difference is that for this function the MAD is automatically calculated during clipping.
It is the mean and standard deviation that will not be calculated unless requested with the @code{GAL_STATISTICS_CLIP_OUTCOL_OPTIONAL_MEAN} and @code{GAL_STATISTICS_CLIP_OUTCOL_OPTIONAL_STD} bit flags, respectively.
@end deftypefun
@deftypefun {gal_data_t *} gal_statistics_outlier_bydistance (int @code{pos1_neg0}, gal_data_t @code{*input}, size_t @code{window_size}, float @code{sigma}, float @code{sigclip_multip}, float @code{sigclip_param}, int @code{inplace}, int @code{quiet})
Find the first positive outlier (if @code{pos1_neg0!=0}) in the @code{input} distribution.
When @code{pos1_neg0==0}, the same algorithm goes to the start of the dataset.
The returned dataset contains a single element: the first positive outlier.
It is one of the dataset's elements, in the same type as the input.
If the process fails for any reason (for example, no outlier was found), a @code{NULL} pointer will be returned.
All (possibly existing) blank elements are first removed from the input dataset, then it is sorted.
A sliding window of @code{window_size} elements is then parsed over the sorted dataset, starting from its @code{window_size}-th element and moving in the direction of increasing values.
This window is used as a reference.
The first element where the distance to the previous (sorted) element is @code{sigma} units away from the distribution of distances in its window is considered an outlier and returned by this function.
Formally, assume there are @mymath{N} non-blank elements; they are first sorted, and the search for the outlier starts on element @mymath{W} (the value given to @code{window_size}).
Let's take @mymath{v_i} to be the @mymath{i}-th element of the sorted input (with no blank values) and @mymath{m} and @mymath{\sigma} as the @mymath{\sigma}-clipped median and standard deviation from the distances of the previous @mymath{W} elements (not including @mymath{v_i}).
Denoting the value given to @code{sigma} by @mymath{s}, the @mymath{i}-th element is considered an outlier when the condition below is true.
@dispmath{{(v_i-v_{i-1})-m\over \sigma}>s}
The @code{sigclip_multip} and @code{sigclip_param} arguments specify the properties of the @mymath{\sigma}-clipping (see @ref{Sigma clipping} for more).
You see that by this definition, the outlier cannot be any of the lower half elements.
The advantage of this algorithm compared to @mymath{\sigma}-clipping is that it only looks backwards (in the sorted array) and parses it in one direction.
If @code{inplace!=0}, the removing of blank elements and sorting will be done within the input dataset's allocated space.
Otherwise, this function will internally allocate (and later free) the necessary intermediate space for this process.
If @code{quiet==0}, this function will report the parameters every time it moves the window, as a separate line with several columns.
The first column is the value, the second (in square brackets) is the sorted index, the third is the distance of this element from the previous one.
The fourth and fifth (in parentheses) are the median and standard deviation of the @mymath{\sigma}-clipped distribution within the window, and the last column is the difference between the third and fourth, divided by the fifth.
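As a minimal sketch (the numbers are only illustrative, not recommended defaults): the call below looks for the first positive outlier using a window of 100 elements, a distance significance of 5, and @mymath{3\sigma}-clipping with a tolerance of 0.1, without modifying the input and without printing the window parameters:

@example
gal_data_t *out;
out=gal_statistics_outlier_bydistance(1, input, 100, 5.0f,
                                      3.0f, 0.1f, 0, 1);
if(out==NULL) printf("No outlier was found.\n");
@end example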
@end deftypefun
@deftypefun {gal_data_t *} gal_statistics_outlier_flat_cfp (gal_data_t @code{*input}, size_t @code{numprev}, float @code{sigclip_multip}, float @code{sigclip_param}, float @code{thresh}, size_t @code{numcontig}, int @code{inplace}, int @code{quiet}, size_t @code{*index})
Return the first element in the given dataset where the cumulative frequency plot first becomes significantly flat for a sufficient number of elements.
The returned dataset only has one element (with the same type as the input).
If @code{index!=NULL}, the index (counting from zero, after sorting the dataset and removing any blanks) is written in the space that @code{index} points to.
If no sufficiently flat portion is found, the returned pointer will be @code{NULL}.
@cindex Sigma-clipping
The flatness on the cumulative frequency plot is defined like this (see @ref{Histogram and Cumulative Frequency Plot}): on the sorted dataset, for every point (@mymath{a_i}), we calculate @mymath{d_i=a_{i+2}-a_{i-2}}.
This is done on the first @mymath{N} elements (the value given to @code{numprev}).
After element @mymath{a_{N+2}}, we start estimating the flatness as follows: for every element, we use the @mymath{N} @mymath{d_i} measurements before it as the reference.
Let's call this set @mymath{D_i} for element @mymath{i}.
The @mymath{\sigma}-clipped median (@mymath{m}) and standard deviation (@mymath{s}) of @mymath{D_i} are then calculated.
The @mymath{\sigma}-clipping can be configured with the two @code{sigclip_param} and @code{sigclip_multip} arguments.
Taking @mymath{t} as the significance threshold (the value given to @code{thresh}), a point @mymath{a_i} is considered flat when @mymath{d_i>m+ts}.
But a single point satisfying this condition will probably just be due to noise.
To make a more robust estimate, this significance/condition has to hold for @code{numcontig} contiguous elements after @mymath{a_i}.
When this is satisfied, @mymath{a_i} is returned as the point where the distribution's cumulative frequency plot becomes flat.
To get a good estimate of @mymath{m} and @mymath{s}, it is thus recommended to set @code{numprev} as large as possible.
However, be careful not to set it too high: the checks in the paragraph above are not done on the first @code{numprev} elements and this function assumes the flatness occurs after them.
Also, be sure that the value to @code{numcontig} is much less than @code{numprev}, otherwise @mymath{\sigma}-clipping may not be able to remove the immediate outliers in @mymath{D_i} near the boundary of the flat region.
When @code{quiet==0}, the basic measurements done on each element are printed on the command-line (good for finding the best parameters).
When @code{inplace!=0}, the sorting and removal of blank elements is done on the input dataset, so the input may be altered after this function.
@end deftypefun
@node Fitting functions, Binary datasets, Statistical operations, Gnuastro library
@subsection Fitting functions (@file{fit.h})
@cindex Fitting
@cindex Least squares fitting
After doing a measurement, it is usually necessary to parameterize the relation that has been found.
The functions in this section are wrappers over the GNU Scientific Library (GSL) @url{https://www.gnu.org/software/gsl/doc/html/lls.html, Linear Least-Squares Fitting}, to make them easily accessible using Gnuastro's @ref{Generic data container}.
The respective GSL function is mentioned under each function.
@deffn {Global integer} GAL_FIT_INVALID
@deffnx {Global integer} GAL_FIT_LINEAR
@deffnx {Global integer} GAL_FIT_LINEAR_WEIGHTED
@deffnx {Global integer} GAL_FIT_LINEAR_NO_CONSTANT
@deffnx {Global integer} GAL_FIT_LINEAR_NO_CONSTANT_WEIGHTED
@deffnx {Global integer} GAL_FIT_POLYNOMIAL
@deffnx {Global integer} GAL_FIT_POLYNOMIAL_ROBUST
@deffnx {Global integer} GAL_FIT_POLYNOMIAL_WEIGHTED
@deffnx {Global integer} GAL_FIT_POLYNOMIAL_TIKHONOV
@deffnx {Global integer} GAL_FIT_NUMBER
Identifiers for the various types of fitting functions.
These can be used by the callers of these functions to select between various fitting types.
They can easily be converted to, and from, fixed human-readable strings using the @code{gal_fit_name_*} functions below.
The last one @code{GAL_FIT_NUMBER} is the total number of available fitting methods (can be used to add more macros in the calling program and to avoid overlaps with existing codes).
@end deffn
@deffn {Global integer} GAL_FIT_ROBUST_INVALID
@deffnx {Global integer} GAL_FIT_ROBUST_BISQUARE
@deffnx {Global integer} GAL_FIT_ROBUST_CAUCHY
@deffnx {Global integer} GAL_FIT_ROBUST_FAIR
@deffnx {Global integer} GAL_FIT_ROBUST_HUBER
@deffnx {Global integer} GAL_FIT_ROBUST_OLS
@deffnx {Global integer} GAL_FIT_ROBUST_WELSCH
@deffnx {Global integer} GAL_FIT_ROBUST_NUMBER
Identifiers for the various types of robust polynomial fitting functions.
For a description of each, see @url{https://www.gnu.org/s/gsl/doc/html/lls.html#c.gsl_multifit_robust_alloc}.
The last one @code{GAL_FIT_ROBUST_NUMBER} is the total number of available functions (can be used to add more macros in the calling program and to avoid overlaps with existing codes).
@end deffn
@deffn {Global integer} GAL_FIT_MATRIX_INVALID
@deffnx {Global integer} GAL_FIT_MATRIX_POLYNOMIAL_1D
@deffnx {Global integer} GAL_FIT_MATRIX_NUMBER_1D
@deffnx {Global integer} GAL_FIT_MATRIX_POLYNOMIAL_2D
@deffnx {Global integer} GAL_FIT_MATRIX_POLYNOMIAL_2D_TPV
@deffnx {Global integer} GAL_FIT_MATRIX_NUMBER_ALL
Identifiers for the available fitting matrices (design matrices).
The last one, @code{GAL_FIT_MATRIX_NUMBER_ALL}, is the total number of available matrices (it can be used to add more macros in the calling program and to avoid overlaps with existing codes), while @code{GAL_FIT_MATRIX_NUMBER_1D} shows how many of them are 1D.
@end deffn
@deftypefun uint8_t gal_fit_name_to_id (char @code{*name})
Return the internal code of a standard human-readable name for the various fitting functions.
If the name is not recognized, the returned value will be @code{GAL_FIT_INVALID}.
@end deftypefun
@deftypefun {char *} gal_fit_name_from_id (uint8_t @code{fitid})
Return a standard human-readable name for the fitting function identified with the @code{fitid} (read as ``fitting ID'').
If the fitting ID couldn't be recognized, a NULL pointer is returned.
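For example (a minimal sketch using one of the identifiers above):

@example
char *name=gal_fit_name_from_id(GAL_FIT_LINEAR);
if(name) printf("Fitting type: %s\n", name);
@end example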
@end deftypefun
@deftypefun uint8_t gal_fit_name_robust_to_id (char @code{*name})
Return the internal code of a standard human-readable name for the various robust fitting types.
If the name is not recognized, the returned value will be @code{GAL_FIT_ROBUST_INVALID}.
@end deftypefun
@deftypefun {char *} gal_fit_name_robust_from_id (uint8_t @code{robustid})
Return a standard human-readable name for the input robust fitting type.
If the fitting ID couldn't be recognized, a NULL pointer is returned.
@end deftypefun
@deftypefun {gal_data_t *} gal_fit_linear_1d (gal_data_t @code{*xin}, gal_data_t @code{*yin}, gal_data_t @code{*ywht})
@cindex Weight (in fitting)
Perform a 1D linear regression fit with a constant term@footnote{@url{https://www.gnu.org/s/gsl/doc/html/lls.html#linear-regression-with-a-constant-term}} in the form of @mymath{Y=c_0+c_1X}.
The input @code{xin} contains the independent variable values and @code{yin} contains the measured variable values for each independent variable.
When @code{ywht!=NULL}, it is assumed to contain the ``weight'' of each Y measurement (if you don't have weights on your measured values, simply set this to @code{NULL}).
The weight of each measurement is the inverse of its variance.
For a Gaussian error distribution with standard deviation @mymath{\sigma}, the weight is therefore @mymath{1/\sigma^2}.
If any of the values in any of the inputs is blank (NaN in floating point), the final fitted parameters will all be NaN.
To remove rows with a NaN/blank, you can use @code{gal_blank_remove_rows} (which will remove all rows with a blank values in any of the columns with a single call).
@cindex Chi-squared
@cindex Covariance matrix
@cindex Matrix (covariance)
@cindex Variance-covariance matrix
The output is a single dataset with a @code{GAL_TYPE_FLOAT64} type with 6 elements:
@enumerate
@item
@mymath{c_0}: the constant in @mymath{Y=c_0+c_1X}.
@item
@mymath{c_1}: the multiple in @mymath{Y=c_0+c_1X}.
@item
First element of variance-covariance matrix.
@item
Second and third (which are equal) elements of the variance-covariance matrix.
@item
Fourth element of the variance-covariance matrix.
@item
The reduced @mymath{\chi^2} of the fit.
@end enumerate
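For example, a minimal sketch (where @code{x} and @code{y} are hypothetical, already-filled @code{float64} datasets of the same size):

@example
double *a;
gal_data_t *fit=gal_fit_linear_1d(x, y, NULL);
a=fit->array;
printf("Y = %f + %f X (reduced chi^2: %f)\n",
       a[0], a[1], a[5]);
@end example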
@end deftypefun
@deftypefun {gal_data_t *} gal_fit_linear_1d_no_constant (gal_data_t @code{*xin}, gal_data_t @code{*yin}, gal_data_t @code{*ywht})
@cindex Weight (in fitting)
Perform a 1D linear regression fit @emph{without} a constant term@footnote{@url{https://www.gnu.org/s/gsl/doc/html/lls.html#linear-regression-without-a-constant-term}}, formally: @mymath{Y=c_1X}.
The input @code{xin} contains the independent variable values and @code{yin} contains the measured variable values for each independent variable.
When @code{ywht!=NULL}, it is assumed to contain the ``weight'' of each Y measurement (if you don't have weights on your measured values, simply set this to @code{NULL}).
The weight of each measurement is the inverse of its variance.
For a Gaussian error distribution with standard deviation @mymath{\sigma}, the weight is therefore @mymath{1/\sigma^2}.
If any of the values in any of the inputs is blank (NaN in floating point), the final fitted parameters will all be NaN.
To remove rows with a NaN/blank, you can use @code{gal_blank_remove_rows} (which will remove all rows with a blank values in any of the columns with a single call).
The output is a single dataset with a @code{GAL_TYPE_FLOAT64} type with 3 elements:
@enumerate
@item
@mymath{c_1}: the multiple in @mymath{Y=c_1X}.
@item
Variance of @mymath{c_1}.
@item
The reduced @mymath{\chi^2} of the fit.
@end enumerate
@end deftypefun
@deftypefun {gal_data_t *} gal_fit_linear_1d_estimate (gal_data_t @code{*fit}, gal_data_t @code{*xin})
Given a linear least squares fit output (@code{fit}), estimate the fit on an arbitrary number of independent variable values (horizontal axis, or X, in an X-Y plot) within @code{xin}.
@code{fit} is assumed to be the output of either @code{gal_fit_linear_1d} or @code{gal_fit_linear_1d_no_constant}.
In case you haven't used those functions to obtain the constants and covariance matrix elements, see the description of those functions for the expected format of @code{fit}.
This function returns two columns (as a @ref{List of gal_data_t}): The top node of the list is the estimated values at the input X-axis positions, and the next node is the errors in the estimation.
Naturally, both have the same number of elements as @code{xin}.
Being a list helps in easily printing the output columns to a table (see @ref{Table input output}).
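A hedged sketch (where @code{fit} is the output of @code{gal_fit_linear_1d} and @code{xnew} is a hypothetical dataset of X values to estimate on):

@example
gal_data_t *est=gal_fit_linear_1d_estimate(fit, xnew);
gal_data_t *err=est->next;  /* Errors of the estimations. */
@end example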
@end deftypefun
@deftypefun {gal_data_t *} gal_fit_polynomial (gal_data_t @code{*xin}, gal_data_t @code{*yin}, gal_data_t @code{*ywht}, size_t @code{maxpower}, double @code{*redchisq}, uint8_t @code{matrixid})
@cindex Polynomial fit
@cindex Fitting (polynomial)
Perform a polynomial fit on the independent variable(s), @code{xin} to best fit the values in @code{yin}.
Currently only ordinary 1D and 2D polynomial models are supported using GSL's multi-parameter regression@footnote{@url{https://www.gnu.org/s/gsl/doc/html/lls.html#multi-parameter-regression}}.
The degree of the polynomial is determined with the @code{maxpower} argument (which is @mymath{d} in the equations below).
The reduced @mymath{\chi^2} of the fit is written in the space that @code{*redchisq} points to.
The available polynomial model/matrix is determined with the @code{matrixid} argument which can accept one of the following models:
@table @code
@item GAL_FIT_MATRIX_POLYNOMIAL_1D
@mymath{Y=c_0+c_1X+c_2X^2+\cdots+c_dX^d}
@item GAL_FIT_MATRIX_POLYNOMIAL_2D
@mymath{Y = c_0 + c_1X_1 + c_2X_2 + c_3X_1^2 + c_4X_1X_2 + c_5X_2^2 + c_6X_1^3 + c_7X_1^2X_2 + c_8X_1X_2^2 + c_9X_2^3 + \cdots c_nX_1^jX_2^{d-j}}
@item GAL_FIT_MATRIX_POLYNOMIAL_2D_TPV
Same as the 2D above, but with extra @mymath{r^m=\sqrt{X_1^2+X_2^2}^m} terms for the odd orders according to the FITS TPV standard@footnote{@url{https://fits.gsfc.nasa.gov/registry/tpvwcs/tpv.html}}.
These terms are added at the end of the terms in that order:
@mymath{Y = c_0 + c_1X_1 + c_2X_2 + c_3r + c_4X_1^2 + \cdots}
@end table
The input @code{xin} contains a list of the independent variable values.
The function detects the dimensions of the independent variable by checking the number of nodes in @code{xin}.
Therefore, if your input is one-dimensional it is important to set @code{xin->next=NULL}.
Similarly, for a two-dimensional input, set @code{xin->next->next=NULL}.
The input @code{yin} contains the measured variable values for each independent variable.
When @code{ywht!=NULL}, it is assumed to contain the ``weight'' of each Y measurement (if you don't have weights on your measured values, simply set this to @code{NULL}).
The weight of each measurement is the inverse of its variance.
For a Gaussian error distribution with standard deviation @mymath{\sigma}, the weight is therefore @mymath{1/\sigma^2}.
If any of the values in any of the inputs is blank (NaN in floating point), the final fitted parameters will all be NaN.
To remove rows with a NaN/blank, you can use @code{gal_blank_remove_rows} (which will remove all rows with a blank values in any of the columns with a single call).
The output of this function is a list of two datasets, linked as a list (as a @ref{List of gal_data_t}).
Both have a @code{GAL_TYPE_FLOAT64} type, and are described below (in order).
@enumerate
@item
A one-dimensional dataset containing the @mymath{n+1} constants that have been found: @mymath{(c_0, c_1, c_2, \cdots, c_n)}.
@item
A two dimensional variance-covariance matrix with @mymath{(n+1)\times(n+1)} elements.
@end enumerate
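For example, a minimal 1D sketch (with hypothetical @code{x} and @code{y} datasets; note the @code{x->next=NULL} to mark the independent variable as one-dimensional):

@example
double redchisq;
x->next=NULL;               /* The input is 1-dimensional. */
gal_data_t *fit=gal_fit_polynomial(x, y, NULL, 3, &redchisq,
                              GAL_FIT_MATRIX_POLYNOMIAL_1D);
gal_data_t *covariance=fit->next;
@end example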
@end deftypefun
@deftypefun {gal_data_t *} gal_fit_polynomial_robust (gal_data_t @code{*xin}, gal_data_t @code{*yin}, size_t @code{maxpower}, uint8_t @code{robustid}, double @code{*redchisq}, uint8_t @code{matrixid})
@cindex Robust Polynomial fit
Perform a 1D (or 2D) robust polynomial fit, as already described in @code{gal_fit_polynomial}, using GSL's robust linear regression@footnote{@url{https://www.gnu.org/software/gsl/doc/html/lls.html#robust-linear-regression}}.
See the description there for the details.
The inputs and outputs of this function are almost identical to @code{gal_fit_polynomial}, with the difference that you need to specify the function to reject outliers through the @code{robustid} input argument.
You can pass any of the @code{GAL_FIT_ROBUST_*} codes defined at the top of this section to this (the names are identical to the names in GSL).
@end deftypefun
@deftypefun {gal_data_t *} gal_fit_polynomial_tikhonov (gal_data_t @code{*xin}, gal_data_t @code{*yin}, size_t @code{maxpower}, double @code{*redchisq}, uint8_t @code{matrixid}, double @code{tikhonovlambda})
@cindex Tikhonov regularization
@cindex Regularized Polynomial fit
Perform a 1D (or 2D) regularized polynomial fit, as already described in @code{gal_fit_polynomial}, but using GSL's Tikhonov regularized regression@footnote{@url{https://www.gnu.org/software/gsl/doc/html/lls.html#regularized-regression}}. See the description there for the details.
The inputs and outputs of this function are almost identical to @code{gal_fit_polynomial}, with the difference that you need to specify the regularization parameter @mymath{\lambda} through the @code{tikhonovlambda} input argument.
The current implementation is simpler than what is described in the GSL manual, as it is required to provide the system already in the standard form: @mymath{\chi^2 = \| \tilde{y} - \tilde{X}\tilde{c} \|^2 + \lambda^2 \| \tilde{c} \|^2}, where @mymath{\tilde{y}} is the observation vector stored in @code{*yin}, @mymath{\tilde{X}} is the design matrix built using @code{*xin} (and @code{xin->next}) according to the rules corresponding to @code{matrixid} and @mymath{\tilde{c}} is the output of this function.
This means that this function does not yet support weights on the @code{*yin} due to time constraints on the developers (it is planned for the future).
If you are interested to contribute to Gnuastro by adding the weights to this function, please get in touch with us at @code{bug-gnuastro@@gnu.org}.
@end deftypefun
@deftypefun {gal_data_t *} gal_fit_polynomial_1d_estimate (gal_data_t @code{*fit}, gal_data_t @code{*xin})
Given a 1D polynomial fit output (@code{fit}), estimate the fit on an arbitrary number of independent variable values (horizontal axis, or X, in an X-Y plot) within @code{xin}.
@code{fit} is assumed to be the output of @code{gal_fit_polynomial}.
In case you haven't used this function to obtain the constants and covariance matrix, see the description of that function for the expected format of @code{fit}.
This function returns two columns (as a @ref{List of gal_data_t}): The top node of the list is the estimated values at the input X-axis positions, and the next node is the errors in the estimation.
Naturally, both have the same number of elements as @code{xin}.
Being a list helps in easily printing the output columns to a table (see @ref{Table input output}).
@end deftypefun
@node Binary datasets, Labeled datasets, Fitting functions, Gnuastro library
@subsection Binary datasets (@file{binary.h})
@cindex Thresholding
@cindex Binary datasets
@cindex Dataset: binary
Binary datasets only have two (usable) values: 0 (also known as background) or 1 (also known as foreground).
They are created after some binary classification is applied to the dataset.
The most common is thresholding: for example, in an image, pixels with a value above the threshold are given a value of 1 and those with a value less than the threshold are assigned a value of 0.
@cindex Connectivity
@cindex Immediate neighbors
@cindex Neighbors, immediate
Since there are only two values, in the processing of binary images you are usually concerned with the position of an element and its vicinity (neighbors).
When a dataset has more than one dimension, multiple classes of immediate neighbors (that are touching the element) can be defined for each data-element.
To separate these different classes of immediate neighbors, we define @emph{connectivity}.
The classification is based on the distance from the element's center to the neighbor's center.
The nearest immediate neighbors have a connectivity of 1, the second nearest class of neighbors have a connectivity of 2 and so on.
In total, the largest possible connectivity for data with @code{ndim} dimensions is @code{ndim}.
For example, in a 2D dataset, 4-connected neighbors (that share an edge and have a distance of 1 pixel) have a connectivity of 1.
The other 4 neighbors that only share a vertex (with a distance of @mymath{\sqrt{2}} pixels) have a connectivity of 2.
Conventionally, the class of connectivity-2 neighbors also includes the connectivity 1 neighbors, so for example, we call them 8-connected neighbors in 2D datasets.
Ideally, one bit is sufficient for each element of a binary dataset.
However, CPUs are not designed to work on individual bits; the smallest addressable unit of memory is a byte (containing 8 bits on modern CPUs).
Therefore, in Gnuastro, the type used for binary dataset is @code{uint8_t} (see @ref{Numeric data types}).
Although it does take 8 times more memory, this choice offers much better performance and some extra (useful) features.
The advantage of using a full byte for each element of a binary dataset is that you can also have other values (that will be ignored in the processing).
One such common ``other'' value in real datasets is a blank value (to mark regions that should not be processed because there is no data).
The constant @code{GAL_BLANK_UINT8} value must be used in these cases (see @ref{Library blank values}).
Another is some temporary value(s) that can be given to a processed pixel to avoid having another copy of the dataset as in @code{GAL_BINARY_TMP_VALUE} that is described below.
@deffn Macro GAL_BINARY_TMP_VALUE
The functions described below work on a @code{uint8_t} type dataset with values of 1 or 0 (no other pixel will be touched).
However, in some cases, it is necessary to put temporary values in each element during the processing of the functions.
This temporary value has a special meaning for the operation and will be operated on.
So if your input datasets have values other than 0 and 1 that you do not want these functions to work on, be sure they are not equal to this macro's value.
Note that this value is also different from @code{GAL_BLANK_UINT8}, so your input datasets may also contain blank elements.
@end deffn
@deftypefun {gal_data_t *} gal_binary_erode (gal_data_t @code{*input}, size_t @code{num}, int @code{connectivity}, int @code{inplace})
Do @code{num} erosions on the @code{connectivity}-connected neighbors of
@code{input} (see above for the definition of connectivity).
If @code{inplace} is non-zero @emph{and} the input's type is @code{GAL_TYPE_UINT8}, then the erosion will be done within the input dataset and the returned pointer will be @code{input}.
Otherwise, @code{input} is copied (and converted if necessary) to @code{GAL_TYPE_UINT8} and erosion will be done on this new dataset which will also be returned.
This function will only work on the elements with a value of 1 or 0.
It will leave all the rest unchanged.
@cindex Erosion
@cindex Mathematical morphology
Erosion (inverse of dilation) is an operation in mathematical morphology where each foreground pixel that is touching a background pixel is flipped (changed to background).
The @code{connectivity} value determines the definition of ``touching''.
Erosion will thus decrease the area of the foreground regions by one layer of pixels.
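For example, this minimal sketch (with a hypothetical binary @code{input}) applies two erosions with a connectivity of 1 (4-connected in 2D), writing into the input's own allocated space:

@example
gal_data_t *eroded=gal_binary_erode(input, 2, 1, 1);
@end example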
@end deftypefun
@deftypefun {gal_data_t *} gal_binary_dilate (gal_data_t @code{*input}, size_t @code{num}, int @code{connectivity}, int @code{inplace})
Do @code{num} dilations on the @code{connectivity}-connected neighbors of @code{input} (see above for the definition of connectivity).
For more on @code{inplace} and the output, see @code{gal_binary_erode}.
@cindex Dilation
Dilation (inverse of erosion) is an operation in mathematical morphology where each background pixel that is touching a foreground pixel is flipped (changed to foreground).
The @code{connectivity} value determines the definition of ``touching''.
Dilation will thus increase the area of the foreground regions by one layer of pixels.
@end deftypefun
@deftypefun {gal_data_t *} gal_binary_open (gal_data_t @code{*input}, size_t @code{num}, int @code{connectivity}, int @code{inplace})
Do @code{num} openings on the @code{connectivity}-connected neighbors of @code{input} (see above for the definition of connectivity).
For more on @code{inplace} and the output, see @code{gal_binary_erode}.
@cindex Opening (Mathematical morphology)
Opening is an operation in mathematical morphology which is defined as erosion followed by dilation (see above for the definitions of erosion and dilation).
Opening will thus remove the outer structure of the foreground.
In this implementation, @code{num} erosions are going to be applied on the dataset, then @code{num} dilations.
@end deftypefun
@deftypefun {gal_data_t *} gal_binary_number_neighbors (gal_data_t @code{*input}, int @code{connectivity}, int @code{inplace})
Return an image of the same size as the input, but where each non-zero and non-blank input pixel is replaced with the number of its non-zero and non-blank neighbors.
The input dataset is assumed to be binary (an unsigned 8-bit type).
The neighbors are defined through the @code{connectivity} argument (see above) and if @code{inplace!=0}, then the output will be written into the input.
@end deftypefun
@deftypefun size_t gal_binary_connected_components (gal_data_t @code{*binary}, gal_data_t @code{**out}, int @code{connectivity})
@cindex Breadth first search
@cindex Connected component labeling
Return the number of connected components in @code{binary} through the breadth first search algorithm (finding all pixels belonging to one component before going on to the next).
Connection between two pixels is defined based on the value to @code{connectivity}.
@code{out} is a dataset with the same size as @code{binary} with @code{GAL_TYPE_INT32} type.
Every pixel in @code{out} will have the label of the connected component it belongs to.
The labeling of connected components starts from 1, so a label of zero is given to the input's background pixels.
When @code{*out!=NULL} (its space is already allocated), it will be cleared (to zero) at the start of this function.
Otherwise, when @code{*out==NULL}, the necessary dataset to keep the output will be allocated by this function.
@code{binary} must have a type of @code{GAL_TYPE_UINT8}, otherwise this function will abort with an error.
Other than blank pixels (with a value of @code{GAL_BLANK_UINT8} defined in @ref{Library blank values}), all other non-zero pixels in @code{binary} will be considered as foreground (and will be labeled).
Blank pixels in the input will also be blank in the output.
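For example, a minimal sketch (with a hypothetical @code{binary} dataset), letting this function allocate the output labeled dataset:

@example
gal_data_t *labels=NULL;
size_t num=gal_binary_connected_components(binary, &labels, 1);
printf("Number of connected components: %zu\n", num);
@end example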
@end deftypefun
@deftypefun {gal_data_t *} gal_binary_connected_indexs (gal_data_t @code{*binary}, int @code{connectivity})
Build a @code{gal_data_t} linked list, where each node of the list contains an array with the indices of one connected region.
Therefore the arrays of each node can have a different size.
Note that the indices will only be calculated on the pixels with a value of 1 and internally, it will temporarily change the values to 2 (and return them back to 1 in the end).
@end deftypefun
@deftypefun {gal_data_t *} gal_binary_connected_adjacency_matrix (gal_data_t @code{*adjacency}, size_t @code{*numconnected})
@cindex Adjacency matrix
@cindex Matrix, adjacency
Find the number of connected labels and new labels based on an adjacency matrix, which must be a square binary array (type @code{GAL_TYPE_UINT8}).
The returned dataset is a list of new labels for each old label.
In other words, this function will find the objects that are connected (possibly through a third object) and, in the output array, the respective elements for all connected input labels will have the same value.
The total number of connected labels is put into the space that @code{numconnected} points to.
An adjacency matrix defines connection between two labels.
For example, let's assume we have 5 labels and we know that labels 1 and 5 are connected to label 3, but are not connected with each other.
Also, labels 2 and 4 are not touching any other label.
So in total we have 3 final labels: one combined object (merged from labels 1, 3, and 5) and the initial labels 2 and 4.
The input adjacency matrix would look like this (note the extra row and column for a label 0 which is ignored):
@example
INPUT OUTPUT
===== ======
in_lab 1 2 3 4 5 |
| numconnected = 3
0 0 0 0 0 0 |
in_lab 1 --> 0 0 0 1 0 0 |
in_lab 2 --> 0 0 0 0 0 0 | Returned: new labels for the
in_lab 3 --> 0 1 0 0 0 1 | 5 initial objects
in_lab 4 --> 0 0 0 0 0 0 | | 0 | 1 | 2 | 1 | 3 | 1 |
in_lab 5 --> 0 0 0 1 0 0 |
@end example
Although the adjacency matrix as used here is symmetric, currently this function assumes that it is filled on both sides of the diagonal.
@end deftypefun
@deftypefun {gal_data_t *} gal_binary_connected_adjacency_list (gal_list_sizet_t @code{**listarr}, size_t @code{number}, size_t @code{minmapsize}, int @code{quietmmap}, size_t @code{*numconnected})
@cindex RAM
Find the number of connected labels and new labels based on an adjacency list.
The output of this function is identical to that of @code{gal_binary_connected_adjacency_matrix}.
But the major difference is that it uses a list of connected labels to each label instead of a square adjacency matrix.
This is done because when the number of labels becomes very large (for example, on the scale of 100,000), the adjacency matrix can consume more than 10GB of RAM!
The input list has the following format: it is an array of pointers to @code{gal_list_sizet_t *} (or @code{gal_list_sizet_t **}).
The array has @code{number} elements and each @code{listarr[i]} is a linked list of @code{gal_list_sizet_t *}.
As a demonstration, the input of the same example in @code{gal_binary_connected_adjacency_matrix} would look like below and the output of this function will be identical to there.
@example
listarr[0] = NULL
listarr[1] = 3
listarr[2] = NULL
listarr[3] = 1 -> 5
listarr[4] = NULL
listarr[5] = 3
@end example
From this example, it is already clear that this method will consume far less memory.
But because it needs to parse lists (and not easily jump between array elements), it can be slower.
But in scenarios where there are too many objects (that may exceed the whole system's RAM+SWAP), this option is a good alternative and the drop in processing speed is worth getting the job done.
Similar to @code{gal_binary_connected_adjacency_matrix}, this function will write the final number of connected labels in @code{numconnected}.
But since it takes no @code{gal_data_t *} argument (where it can inherit the @code{minmapsize} and @code{quietmmap} parameters), it also needs these as input.
For more on @code{minmapsize} and @code{quietmmap}, see @ref{Memory management}.
@end deftypefun
@deftypefun {gal_data_t *} gal_binary_holes_label (gal_data_t @code{*input}, int @code{connectivity}, size_t @code{*numholes})
Label all the holes in the foreground (non-zero elements in input) as independent regions.
Holes are background regions (zero-valued in input) that are fully surrounded by the foreground, as defined by @code{connectivity}.
The returned dataset has a 32-bit signed integer type with the size of the input.
All holes in the input will have labels/counters greater or equal to @code{1}.
The rest of the background regions will still have a value of @code{0} and the initial foreground pixels will have a value of @code{-1}.
The total number of holes will be written where @code{numholes} points to.
@end deftypefun
@deftypefun void gal_binary_holes_fill (gal_data_t @code{*input}, int @code{connectivity}, size_t @code{maxsize})
Fill all the holes (0 valued pixels surrounded by 1 valued pixels) of the binary @code{input} dataset.
The connectivity of the holes can be set with @code{connectivity}.
Holes larger than @code{maxsize} are not filled.
This function currently only works on a 2D dataset.
@end deftypefun
@node Labeled datasets, Convolution functions, Binary datasets, Gnuastro library
@subsection Labeled datasets (@file{label.h})
A labeled dataset is one where each element/pixel has an integer label (or counter).
The label identifies the group/class that the element belongs to.
This form of labeling allows the higher-level study of all pixels within a certain class.
For example, to detect objects/targets in an image/dataset, you can apply a threshold to separate the noise from the signal (to detect diffuse signal, a threshold is useless and more advanced methods are necessary, for example @ref{NoiseChisel}).
But the output of detection is a binary dataset (which is just a very low-level labeling of @code{0} for noise and @code{1} for signal).
The raw detection map is therefore hardly useful for any kind of analysis on objects/targets in the image.
One solution is to use a connected-components algorithm (see @code{gal_binary_connected_components} in @ref{Binary datasets}).
It is a simple and useful way to separate/label connected patches in the foreground.
This higher-level (but still elementary) labeling therefore allows you to count how many connected patches of signal there are in the dataset and is a major improvement compared to the raw detection.
However, when your objects/targets are touching, the simple connected components algorithm is not enough and a still higher-level labeling mechanism is necessary.
This brings us to the necessity of the functions in this part of Gnuastro's library.
The main inputs to the functions in this section are already labeled datasets (for example, with the connected components algorithm above).
Each of the labeled regions are independent of each other (the labels specify different classes of targets).
Therefore, especially in large datasets, it is often useful to process each label on independent CPU threads in parallel rather than in series.
For this reason, the functions of this section use an array of pixel/element indices (belonging to each label/class) as the main identifier of a region.
Using indices will also allow processing of overlapping labels (for example, in deblending problems).
Just note that overlapping labels are not yet implemented, but planned.
You can use @code{gal_label_indexs} to generate lists of indices belonging to separate classes from the labeled input.
@deffn Macro GAL_LABEL_INIT
@deffnx Macro GAL_LABEL_RIVER
@deffnx Macro GAL_LABEL_TMPCHECK
Special negative integer values used internally by some of the functions in this section.
Recall that meaningful labels are considered to be positive integers (@mymath{\geq1}).
Zero is conventionally kept for regions with no labels, therefore negative integers can be used for any extra classification in the labeled datasets.
@end deffn
@deftypefun {gal_data_t *} gal_label_indexs (gal_data_t @code{*labels}, size_t @code{numlabs}, size_t @code{minmapsize}, int @code{quietmmap})
Return an array of @code{gal_data_t} containers, each containing the pixel indices of the respective label (see @ref{Generic data container}).
@code{labels} contains the label of each element and has to have an @code{GAL_TYPE_INT32} type (see @ref{Library data types}).
Only positive (greater than zero) values in @code{labels} will be used/indexed, other elements will be ignored.
Meaningful labels start from @code{1} and not @code{0}, therefore the output array of @code{gal_data_t} will contain @code{numlabs+1} elements.
The first (zero-th) element of the output (@code{indexs[0]} in the example below) will be initialized to a dataset with zero elements.
This will allow easy (non-confusing) access to the indices of each (meaningful) label.
@code{numlabs} is the number of labels in the dataset.
If it is given a value of zero, then the maximum value in the input (largest label) will be found and used.
Therefore if it is given, but smaller than the actual number of labels, this function may/will crash (it will write in un-allocated space).
@code{numlabs} is therefore useful in a highly optimized/checked environment.
For example, if the returned array is called @code{indexs}, then
@code{indexs[10].size} contains the number of elements that have a label of
@code{10} in @code{labels} and @code{indexs[10].array} is an array (after
casting to @code{size_t *}) containing the indices of each one of those
elements/pixels.
By @emph{index} we mean the 1D position: the input number of dimensions is irrelevant (any dimensionality is supported).
In other words, each element's index is the number of elements/pixels between it and the dataset's first element/pixel.
Therefore it is always greater or equal to zero and stored in @code{size_t} type.
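As a minimal sketch (with a hypothetical @code{labels} dataset that is assumed to contain at least 10 labels; a @code{numlabs} of 0 lets the function find the largest label itself, and a @code{minmapsize} of -1 avoids memory-mapping):

@example
gal_data_t *indexs=gal_label_indexs(labels, 0, -1, 1);
size_t *tenth=indexs[10].array;  /* Indices of label 10. */
printf("Label 10 has %zu elements; its first pixel's "
       "1D index is %zu.\n", indexs[10].size, tenth[0]);
@end example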
@end deftypefun
@deftypefun size_t gal_label_watershed (gal_data_t @code{*values}, gal_data_t @code{*indexs}, gal_data_t @code{*label}, size_t @code{*topinds}, int @code{min0_max1})
@cindex Watershed algorithm
@cindex Algorithm: watershed
Use the watershed algorithm@footnote{The watershed algorithm was initially introduced by @url{https://doi.org/10.1109/34.87344, Vincent and Soille}.
It starts from the minima and puts the pixels in, one by one, to grow them until they touch (create a watershed).
For more, also see the Wikipedia article: @url{https://en.wikipedia.org/wiki/Watershed_%28image_processing%29}.} to ``over-segment'' the pixels in the @code{indexs} dataset based on values in the @code{values} dataset.
Internally, each local extrema (maximum or minimum, based on @code{min0_max1}) and its surrounding pixels will be given a unique label.
For demonstration, see Figures 8 and 9 of Akhlaghi and Ichikawa @url{http://arxiv.org/abs/1505.01664,2015}.
If @code{topinds!=NULL}, it is assumed to point to an already allocated space to write the index of each clump's local extrema, otherwise, it is ignored.
The @code{values} dataset must have a 32-bit floating point type (@code{GAL_TYPE_FLOAT32}, see @ref{Library data types}) and will only be read by this function.
@code{indexs} must contain the indices of the elements/pixels that will be over-segmented by this function and have a @code{GAL_TYPE_SIZE_T} type, see the description of @code{gal_label_indexs}, above.
The final labels will be written in the respective positions of @code{labels}, which must have a @code{GAL_TYPE_INT32} type and be the same size as @code{values}.
When @code{indexs} is already sorted, this function will ignore @code{min0_max1}.
To judge if the dataset is sorted or not (by the values the indices correspond to in @code{values}, not the actual indices), this function will look into the bits of @code{indexs->flag}, for the respective bit flags, see @ref{Generic data container}.
If @code{indexs} is not already sorted, this function will sort it according to the values of the respective pixel in @code{values}.
The increasing/decreasing order will be determined by @code{min0_max1}.
Note that if this function is called on multiple threads @emph{and} @code{values} points to a different array on each thread, this function will not return a reasonable result.
In this case, please sort @code{indexs} prior to calling this function (see @code{gal_qsort_index_multi_d} in @ref{Qsort functions}).
When @code{indexs} is decreasing (increasing), or @code{min0_max1} is
@code{1} (@code{0}), local minima (maxima), are considered rivers
(watersheds) and given a label of @code{GAL_LABEL_RIVER} (see above).
Note that rivers/watersheds will also be formed on the edges of the labeled regions or when the labeled pixels touch a blank pixel.
Therefore this function will need to check for the presence of blank values.
To be most efficient, it is thus recommended to use @code{gal_blank_present} (with @code{updateflag=1}) prior to calling this function (see @ref{Library blank values}).
Once the flag has been set, no other function (including this one) that needs special behavior for blank pixels will have to parse the dataset to see if it has blank values any more.
If you are sure your dataset does not have blank values (by the design of your software), to avoid an extra parsing of the dataset and improve performance, you can set the two bits manually (see the description of @code{flags} in @ref{Generic data container}):
@example
input->flag |= GAL_DATA_FLAG_BLANK_CH; /* Set bit to 1. */
input->flag &= ~GAL_DATA_FLAG_HASBLANK; /* Set bit to 0. */
@end example
@end deftypefun
@deftypefun void gal_label_clump_significance (gal_data_t @code{*values}, gal_data_t @code{*std}, gal_data_t @code{*label}, gal_data_t @code{*indexs}, struct gal_tile_two_layer_params @code{*tl}, size_t @code{numclumps}, size_t @code{minarea}, int @code{variance}, int @code{keepsmall}, gal_data_t @code{*sig}, gal_data_t @code{*sigind})
@cindex Clump
This function is usually called after @code{gal_label_watershed}, and is
used as a measure to identify which over-segmented ``clumps'' are real and
which are noise.
A measurement is done on each clump (using the @code{values} and @code{std} datasets, see below).
To help in multi-threaded environments, the operation is only done on pixels which are indexed in @code{indexs}.
It is expected for @code{indexs} to be sorted by their values in @code{values}.
If not sorted, the measurement may not be reliable.
If sorted in a decreasing order, then clump building will start from their highest value and vice-versa.
See the description of @code{gal_label_watershed} for more on @code{indexs}.
Each ``clump'' (identified by a positive integer) is assumed to be surrounded by at least one river/watershed pixel (with a non-positive label).
This function will parse the pixels identified in @code{indexs} and make a measurement on each clump and over all the river/watershed pixels.
The number of clumps (@code{numclumps}) must be given as an input argument and any clump that is smaller than @code{minarea} is ignored (because of scatter).
If @code{variance} is non-zero, then the @code{std} dataset is interpreted as variance, not standard deviation.
The @code{values} and @code{std} datasets must have a @code{float} (32-bit floating point) type.
Also, @code{label} and @code{indexs} must respectively have @code{int32} and @code{size_t} types.
@code{values} and @code{label} must have the same size, but @code{std} can have three possible sizes: 1) a single element (which will be used for the whole dataset), 2) the same size as @code{values} (so a different error can be assigned to every pixel), 3) a single value for each tile, based on the @code{tl} tessellation (see @ref{Tile grid}).
In the last case, a tile/value will be associated to each clump based on its flux-weighted (only positive values) center.
The main output is an internally allocated, 1-dimensional array with one value per label.
The array information (length, type, etc.) will be written into the @code{sig} generic data container.
Therefore @code{sig->array} must be @code{NULL} when this function is called.
After this function, the details of the array (number of elements, type and size, etc) will be written in to the various components of @code{sig}, see the definition of @code{gal_data_t} in @ref{Generic data container}.
The @code{sig} container itself, however, must already be allocated before calling this function.
Optionally (when @code{sigind!=NULL}, similar to @code{sig}) the clump labels of each measurement in @code{sig} will be written in @code{sigind->array}.
If @code{keepsmall} is zero, small clumps (where no measurement is made) will not be included in the output table.
This function was initially intended for multi-threaded environments.
In such cases, you will be writing the clump measurements of different regions in parallel into an array of @code{gal_data_t}s.
You can simply allocate (and initialize) such an array with the @code{gal_data_array_calloc} function in @ref{Arrays of datasets}.
For example, if the @code{gal_data_t} array is called @code{array}, you can pass @code{&array[i]} as @code{sig}.
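For example, a minimal sketch of this scenario (not a complete program: @code{numregions}, as well as the prepared per-region inputs such as @code{indexs[i]} and @code{numclumps[i]}, are hypothetical and assumed to be set by the caller) could look like this:
@example
/* Sketch: one output dataset per region, filled in parallel. */
gal_data_t *array=gal_data_array_calloc(numregions);

/* Within the worker function of thread/region 'i'. */
gal_label_clump_significance(values, std, label, indexs[i], tl,
                             numclumps[i], minarea, 0, 0,
                             &array[i], NULL);
@end example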
Along with some other functions in @code{label.h}, this function was initially written for @ref{Segment}.
The description of the parameter used to measure a clump's significance is fully given in Akhlaghi @url{https://arxiv.org/abs/1909.11230,2019}.
@end deftypefun
@deftypefun void gal_label_grow_indexs (gal_data_t @code{*labels}, gal_data_t @code{*indexs}, int @code{withrivers}, int @code{connectivity})
Grow the (positive) labels of @code{labels} over the pixels in @code{indexs} (see description of @code{gal_label_indexs}).
The pixels (position in @code{indexs}, values in @code{labels}) that are to be ``grown'' must have a value of @code{GAL_LABEL_INIT} in @code{labels} before calling this function.
For a demonstration see Columns 2 and 3 of Figure 10 in Akhlaghi and Ichikawa @url{http://arxiv.org/abs/1505.01664,2015}.
In many aspects, this function is very similar to over-segmentation (watershed algorithm, @code{gal_label_watershed}).
The big difference is that in over-segmentation, local maxima (that are not touching any already labeled pixel) each get a separate label.
However, here the final number of labels will not change.
All pixels that are not directly touching a labeled pixel just get pushed back to the start of the loop, and the loop iterates until its size does not change any more.
This is because in a generic scenario some of the indexed pixels might not be reachable through other indexed pixels.
The next major difference with over-segmentation is that when there is only one label in growth region(s), it is not mandatory for @code{indexs} to be sorted by values.
If there are multiple labeled regions in growth region(s), then values are important and you can use @code{qsort} with @code{gal_qsort_index_single_TYPE_d} to sort the indices by values in a separate array (see @ref{Qsort functions}).
This function looks for positive-valued neighbors of each pixel in @code{indexs} and will label a pixel if it touches one.
Therefore, it is very important that only pixels/labels that are intended for growth have positive values in @code{labels} before calling this function.
Any non-positive (zero or negative) value will be ignored as a label by this function.
Thus, it is recommended that while filling in the @code{indexs} array values, you initialize all the pixels that are in @code{indexs} with @code{GAL_LABEL_INIT}, and set non-labeled pixels that you do not want to grow to @code{0}.
This function will write into both the input datasets.
After this function, some of the non-positive @code{labels} pixels will have a new positive label and the number of useful elements in @code{indexs} will have decreased.
The index of those pixels that could not be labeled will remain inside @code{indexs}.
If @code{withrivers} is non-zero, then pixels that are immediately touching more than one positive value will be given a @code{GAL_LABEL_RIVER} label.
@cindex GNU C library
Note that the @code{indexs->array} is not re-allocated to its new size at the end@footnote{Note that according to the GNU C Library, even a @code{realloc} to a smaller size can also cause a re-write of the whole array, which is not a cheap operation.}.
But since @code{indexs->dsize[0]} and @code{indexs->size} have new values after this function is returned, the extra elements just will not be used until they are ultimately freed by @code{gal_data_free}.
Connectivity is a value between @code{1} (fewest neighbors) and the number of dimensions in the input (most neighbors).
For example, in a 2D dataset, connectivities of @code{1} and @code{2} correspond to 4-connected and 8-connected neighbors respectively.
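For example, a minimal sketch (assuming @code{labels} and @code{indexs} have already been prepared as described above) could be:
@example
/* Sketch: grow the existing positive labels over the pixels in
   'indexs' with 8-connectivity (in 2D) and no river pixels. */
gal_label_grow_indexs(labels, indexs, 0, 2);

/* 'indexs->size' now only counts the pixels that could not be
   labeled. */
@end example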
@end deftypefun
@deftypefun {gal_data_t *} gal_label_measure (gal_data_t @code{*labels}, gal_data_t @code{*values}, int @code{operator}, size_t @code{numthreads}, int @code{flags})
Implement the given arithmetic measurement operator on the pixels that have the same label in @code{labels} (possibly on the @code{values} array) and write the operator's output over the pixels of that label.
For examples of how this function can be useful see the examples of @code{label-*} operators of the Arithmetic program in @ref{Statistical operators}.
The @code{flags} argument is read as bit-flags and the only one currently implemented is the @code{GAL_ARITHMETIC_FLAG_FREE} of @ref{Arithmetic on datasets}.
In other words, if this bit-flag is active, the inputs will be freed.
The measurements that are currently accepted are the arithmetic library operators listed below (from @ref{Arithmetic on datasets}).
Depending on the operator, the @code{values} dataset may not be necessary.
@table @code
@item GAL_ARITHMETIC_OP_LABEL_AREA
Measure the area (number of pixels) of each label and write it into that label's pixels.
This operator does not use the @code{values} dataset and the output has a type of unsigned 32-bit integer.
@item GAL_ARITHMETIC_OP_LABEL_MINIMUM
@itemx GAL_ARITHMETIC_OP_LABEL_MAXIMUM
Measure the minimum/maximum value (from @code{values}) of the pixels of each label (from @code{labels}) and write it into that label's pixels.
The output has the same type as the values dataset.
@end table
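For example, a minimal sketch (assuming @code{labels} has already been prepared; the flags are zero so the inputs are not freed) could be:
@example
/* Sketch: write each label's area over its own pixels, on 4
   threads ('values' is not necessary for this operator). */
gal_data_t *area=gal_label_measure(labels, NULL,
                                   GAL_ARITHMETIC_OP_LABEL_AREA,
                                   4, 0);
@end example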
@end deftypefun
@node Convolution functions, Pooling functions, Labeled datasets, Gnuastro library
@subsection Convolution functions (@file{convolve.h})
Convolution is a very common operation during data analysis and is thoroughly described as part of Gnuastro's @ref{Convolve} program which is fully devoted to this job.
Because of the complete introduction that was presented there, we will skip directly to the convolution functions that are currently available in Gnuastro's library.
As of this version, only spatial domain convolution is available in Gnuastro's libraries.
We have not yet had the time to bring the frequency domain convolution and deconvolution functions of the Convolve program into the library@footnote{Hence any help would be greatly appreciated.}.
@deftypefun {gal_data_t *} gal_convolve_spatial (gal_data_t @code{*tiles}, gal_data_t @code{*kernel}, size_t @code{numthreads}, int @code{edgecorrection}, int @code{convoverch}, int @code{conv_on_blank})
Convolve the given @code{tiles} dataset (possibly a list of tiles, see @ref{List of gal_data_t} and @ref{Tessellation library}) with @code{kernel} on @code{numthreads} threads.
When @code{edgecorrection} is non-zero, it will correct for the edge dimming effects as discussed in @ref{Edges in the spatial domain}.
When @code{conv_on_blank} is non-zero, this function will also attempt convolution over the blank pixels (and therefore give values to the blank pixels that are near non-blank pixels).
@code{tiles} can be a single/complete dataset, but in that case the speed will be very slow.
Therefore, for larger images, it is recommended to give a list of tiles covering a dataset.
To create a tessellation that fully covers an input image, you may use @code{gal_tile_full}, or @code{gal_tile_full_two_layers} to also define channels over your input dataset.
These functions are discussed in @ref{Tile grid}.
You may then pass the list of tiles to this function.
This is the recommended way to call this function, because spatial domain convolution is slow and breaking the job into many small tiles that are processed simultaneously on several threads can greatly speed up the processing.
If the tiles are defined within a channel (a larger tile), by default convolution will be done within the channel, so pixels on the edge of a channel will not be affected by their neighbors that are in another channel.
See @ref{Tessellation} for the necessity of channels in astronomical data analysis.
This behavior may be disabled when @code{convoverch} is non-zero.
In this case, it will ignore channel borders (if they exist) and mix all pixels that cover the kernel within the dataset.
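For example, a minimal sketch (assuming @code{input} and @code{kernel} have already been read into @code{gal_data_t}s; since this is a single/complete dataset with no channels, @code{convoverch} is given a non-zero value) could be:
@example
/* Sketch: convolve a single dataset on 4 threads, with edge
   correction and no convolution over blank pixels. */
gal_data_t *conv=gal_convolve_spatial(input, kernel, 4, 1, 1, 0);
@end example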
@end deftypefun
@deftypefun void gal_convolve_spatial_correct_ch_edge (gal_data_t @code{*tiles}, gal_data_t @code{*kernel}, size_t @code{numthreads}, int @code{edgecorrection}, int @code{conv_on_blank}, gal_data_t @code{*tocorrect})
Correct the edges of channels in an already convolved image when it was initially convolved with @code{gal_convolve_spatial} and @code{convoverch==0}.
In that case, strong boundaries might exist on the channel edges.
So if you need to remove those boundaries at later steps of your processing, you can call this function.
It will only do convolution on the tiles that are near the edge and were affected by the channel borders.
Other pixels in the image will not be touched.
Hence, it is much faster.
When @code{conv_on_blank} is non-zero, this function will also attempt convolution over the blank pixels (and therefore give values to the blank pixels that are near non-blank pixels).
@end deftypefun
@node Pooling functions, Interpolation, Convolution functions, Gnuastro library
@subsection Pooling functions (@file{pool.h})
Pooling is the process of reducing the complexity of the input image (its size and variation of pixel values).
Its underlying concepts, and an analysis of its usefulness, are fully described in @ref{Pooling operators}.
The following functions are available for pooling in Gnuastro.
Just note that, unlike the Arithmetic operators, the output of these functions will contain a correct WCS.
@deftypefun {gal_data_t *} gal_pool_max (gal_data_t @code{*input}, size_t @code{psize}, size_t @code{numthreads})
Return the max-pool of @code{input}, assuming a pool size of @code{psize} pixels.
The number of threads to use can be set with @code{numthreads}.
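For example, a minimal sketch (assuming @code{input} has already been read into a @code{gal_data_t}) could be:
@example
/* Sketch: 2x2 max-pooling of 'input', on 4 threads. */
gal_data_t *pooled=gal_pool_max(input, 2, 4);
@end example
The other pooling functions below are called in the same way.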
@end deftypefun
@deftypefun {gal_data_t *} gal_pool_min (gal_data_t @code{*input}, size_t @code{psize}, size_t @code{numthreads})
Return the min-pool of @code{input}, assuming a pool size of @code{psize} pixels.
The number of threads to use can be set with @code{numthreads}.
@end deftypefun
@deftypefun {gal_data_t *} gal_pool_sum (gal_data_t @code{*input}, size_t @code{psize}, size_t @code{numthreads})
Return the sum-pool of @code{input}, assuming a pool size of @code{psize} pixels.
The number of threads to use can be set with @code{numthreads}.
@end deftypefun
@deftypefun {gal_data_t *} gal_pool_mean (gal_data_t @code{*input}, size_t @code{psize}, size_t @code{numthreads})
Return the mean-pool of @code{input}, assuming a pool size of @code{psize} pixels.
The number of threads to use can be set with @code{numthreads}.
@end deftypefun
@deftypefun {gal_data_t *} gal_pool_median (gal_data_t @code{*input}, size_t @code{psize}, size_t @code{numthreads})
Return the median-pool of @code{input}, assuming a pool size of @code{psize} pixels.
The number of threads to use can be set with @code{numthreads}.
@end deftypefun
@node Interpolation, Warp library, Pooling functions, Gnuastro library
@subsection Interpolation (@file{interpolate.h})
@cindex Sky line
@cindex Interpolation
During data analysis, it happens that parts of the data cannot be given a value, but one is necessary for the higher-level analysis.
For example, a very bright star saturated part of your image and you need to fill in the saturated pixels with some values.
Another common use case is masked sky-lines in 1D spectra that similarly need to be assigned a value for higher-level analysis.
In other situations, you might want a value in an arbitrary point: between the elements/pixels where you have data.
The functions described in this section are for such operations.
@cindex GNU Scientific Library
The parametric interpolations discussed below are wrappers around the interpolation functions of the GNU Scientific Library (or GSL, see @ref{GNU Scientific Library}).
To identify the different GSL interpolation types, Gnuastro's @file{gnuastro/interpolate.h} header file contains macros that are discussed below.
The GSL wrappers provided here are not yet complete because we are too busy.
If you need them, please consider helping us in adding them to Gnuastro's library.
Your contributions would be very welcome and appreciated.
@deffn Macro GAL_INTERPOLATE_NEIGHBORS_METRIC_RADIAL
@deffnx Macro GAL_INTERPOLATE_NEIGHBORS_METRIC_MANHATTAN
@deffnx Macro GAL_INTERPOLATE_NEIGHBORS_METRIC_INVALID
The metric used to find distance for nearest neighbor interpolation.
A radial metric uses the simple Euclidean function to find the distance between two pixels.
A Manhattan metric is the sum of the distances along each dimension, so it is always an integer and is like counting steps (it is also much faster to calculate than the radial metric because it does not need a square root calculation).
@end deffn
@deffn Macro GAL_INTERPOLATE_NEIGHBORS_FUNC_MIN
@deffnx Macro GAL_INTERPOLATE_NEIGHBORS_FUNC_MAX
@deffnx Macro GAL_INTERPOLATE_NEIGHBORS_FUNC_MEAN
@deffnx Macro GAL_INTERPOLATE_NEIGHBORS_FUNC_MEDIAN
@deffnx Macro GAL_INTERPOLATE_NEIGHBORS_FUNC_INVALID
@cindex Saturated stars
The various types of nearest-neighbor interpolation functions for @code{gal_interpolate_neighbors}.
The names are descriptive for the operation they do, so we will not go into much more detail here.
The median operator will be one of the most used, but operators like the maximum are good to fill the center of saturated stars.
@end deffn
@deftypefun {gal_data_t *} gal_interpolate_neighbors (gal_data_t @code{*input}, struct gal_tile_two_layer_params @code{*tl}, uint8_t @code{metric}, size_t @code{numneighbors}, size_t @code{numthreads}, int @code{onlyblank}, int @code{aslinkedlist}, int @code{function})
Interpolate the values in the input dataset using a statistic calculated from the distribution of each element's @code{numneighbors} closest neighbors.
The desired statistic is determined from the @code{function} argument, which takes any of the @code{GAL_INTERPOLATE_NEIGHBORS_FUNC_} macros (see above).
This function is non-parametric and thus agnostic to the input's number of dimensions or the shape of the distribution.
Distance can be defined on different metrics that are identified through @code{metric} (taking values determined by the @code{GAL_INTERPOLATE_NEIGHBORS_METRIC_} macros described above).
If @code{onlyblank} is non-zero, then only blank elements will be interpolated and pixels that already have a value will be left untouched.
This function is multi-threaded and will run on @code{numthreads} threads (see @code{gal_threads_number} in @ref{Multithreaded programming}).
@code{tl} is Gnuastro's tessellation structure used to define tiles over an image and is fully described in @ref{Tile grid}.
When @code{tl!=NULL}, then it is assumed that the @code{input->array} contains one value per tile and interpolation will respect certain tessellation properties, for example, to not interpolate over channel borders.
If several datasets have blank elements in the same positions, you do not need to call this function multiple times: when @code{aslinkedlist} is non-zero, @code{input} will be seen as a @ref{List of gal_data_t}.
In this case, the same neighbors will be used for all the datasets in the list.
Of course, the values for each dataset will be different, so a different value will be written in each dataset, but the neighbor checking that is the most CPU intensive part will only be done once.
This is a non-parametric and robust function for interpolation.
The interpolated values are also always within the range of the non-blank values and strong outliers do not get created.
However, this type of interpolation must be used with care when there are gradients.
This is because it is non-parametric and if there are not enough neighbors, step-like features can be created.
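For example, a minimal sketch (assuming @code{input} has already been read into a @code{gal_data_t} and is not tessellated, hence the @code{NULL} for @code{tl}) could be:
@example
/* Sketch: fill only the blank elements with the median of each
   element's 9 nearest neighbors (radial metric), on 4 threads. */
gal_data_t *filled=
  gal_interpolate_neighbors(input, NULL,
        GAL_INTERPOLATE_NEIGHBORS_METRIC_RADIAL, 9, 4,
        1, 0, GAL_INTERPOLATE_NEIGHBORS_FUNC_MEDIAN);
@end example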
@end deftypefun
@deffn Macro GAL_INTERPOLATE_1D_INVALID
This is just a place-holder to manage errors.
@end deffn
@deffn Macro GAL_INTERPOLATE_1D_LINEAR
[From GSL:] Linear interpolation. This interpolation method does not
require any additional memory.
@end deffn
@deffn Macro GAL_INTERPOLATE_1D_POLYNOMIAL
@cindex Polynomial interpolation
@cindex Interpolation: Polynomial
[From GSL:] Polynomial interpolation.
This method should only be used for interpolating small numbers of points because polynomial interpolation introduces large oscillations, even for well-behaved datasets.
The number of terms in the interpolating polynomial is equal to the number of points.
@end deffn
@deffn Macro GAL_INTERPOLATE_1D_CSPLINE
@cindex Interpolation: Spline
@cindex Cubic spline interpolation
@cindex Spline (cubic) interpolation
[From GSL:] Cubic spline with natural boundary conditions.
The resulting curve is piece-wise cubic on each interval, with matching first and second derivatives at the supplied data-points.
The second derivative is chosen to be zero at the first point and last point.
@end deffn
@deffn Macro GAL_INTERPOLATE_1D_CSPLINE_PERIODIC
[From GSL:] Cubic spline with periodic boundary conditions.
The resulting curve is piece-wise cubic on each interval, with matching first and second derivatives at the supplied data-points.
The derivatives at the first and last points are also matched.
Note that the last point in the data must have the same y-value as the first point, otherwise the resulting periodic interpolation will have a discontinuity at the boundary.
@end deffn
@deffn Macro GAL_INTERPOLATE_1D_AKIMA
@cindex Interpolation: Akima spline
@cindex Akima spline interpolation
@cindex Spline (Akima) interpolation
[From GSL:] Non-rounded Akima spline with natural boundary conditions.
This method uses the non-rounded corner algorithm of Wodicka.
@end deffn
@deffn Macro GAL_INTERPOLATE_1D_AKIMA_PERIODIC
[From GSL:] Non-rounded Akima spline with periodic boundary conditions.
This method uses the non-rounded corner algorithm of Wodicka.
@end deffn
@deffn Macro GAL_INTERPOLATE_1D_STEFFEN
@cindex Steffen interpolation
@cindex Interpolation: Steffen
@cindex Interpolation: monotonic
[From GSL:] Steffen's method@footnote{@url{http://adsabs.harvard.edu/abs/1990A%26A...239..443S}} guarantees the monotonicity of the interpolating function between the given data points.
Therefore, minima and maxima can only occur exactly at the data points, and there can never be spurious oscillations between data points.
The interpolated function is piece-wise cubic in each interval.
The resulting curve and its first derivative are guaranteed to be continuous, but the second derivative may be discontinuous.
@end deffn
@deftypefun {gsl_spline *} gal_interpolate_1d_make_gsl_spline (gal_data_t @code{*X}, gal_data_t @code{*Y}, int @code{type_1d})
@cindex GNU Scientific Library
Allocate and initialize a GNU Scientific Library (GSL) 1D @code{gsl_spline} structure using the non-blank elements of @code{Y}.
@code{type_1d} identifies the interpolation scheme and must be one of the @code{GAL_INTERPOLATE_1D_*} macros defined above.
If @code{X==NULL}, the X-axis is assumed to be integers starting from zero (the index of each element in @code{Y}).
Otherwise, the values in @code{X} will be used to initialize the interpolation structure.
Note that when given, @code{X} must @emph{not} contain any blank elements and it must be sorted (in increasing order).
Each interpolation scheme needs a minimum number of elements to successfully operate.
If the number of non-blank values in @code{Y} is less than this number, this function will return a @code{NULL} pointer.
To be as generic and modular as possible, GSL's tools are low-level.
Therefore before doing the interpolation, many steps are necessary (like preparing your dataset, then allocating and initializing @code{gsl_spline}).
The metadata available in Gnuastro's @ref{Generic data container} make it easy to hide all those preparations within this function.
Once @code{gsl_spline} has been initialized by this function, the
interpolation can be evaluated for any X value within the non-blank range
of the input using @code{gsl_spline_eval} or @code{gsl_spline_eval_e}.
For example, in the small program below (@file{sample-interp.c}), we read the first two columns of the table in @file{table.txt} and feed them to this function to later estimate the values in the second column for three selected points.
You can use @ref{BuildProgram} to compile and run this function, see @ref{Library demo programs} for more.
Contents of the @file{table.txt} file:
@example
@verbatim
$ cat table.txt
0 0
1 2
3 6
4 8
6 12
8 16
9 18
@end verbatim
@end example
Contents of the @file{sample-interp.c} file:
@cindex last-in-first-out
@example
#include <stdio.h>
#include <stdlib.h>

#include <gnuastro/table.h>
#include <gnuastro/interpolate.h>

int
main(void)
@{
  size_t i;
  gal_data_t *X, *Y;
  gsl_spline *spline;
  gsl_interp_accel *acc;
  gal_list_str_t *cols=NULL;

  /* Change the values based on your input table. */
  double points[]=@{1.8, 2.5, 7@};

  /* Read the first two columns from `table.txt'.
     IMPORTANT: the list is last-in-first-out, so the output
     column order is the inverse of the input order. */
  gal_list_str_add(&cols, "1", 0);
  gal_list_str_add(&cols, "2", 0);
  Y=gal_table_read("table.txt", NULL, NULL, cols,
                   GAL_TABLE_SEARCH_NAME, 0, 1, -1, 1, NULL);
  X=Y->next;

  /* Allocate the GSL interpolation accelerator and make the
     `gsl_spline' structure. */
  acc=gsl_interp_accel_alloc();
  spline=gal_interpolate_1d_make_gsl_spline(X, Y,
                                 GAL_INTERPOLATE_1D_STEFFEN);

  /* Calculate the respective value for all the given points,
     if `spline' could be allocated. */
  if(spline)
    for(i=0; i<(sizeof points)/(sizeof *points); ++i)
      printf("%f: %f\n", points[i],
             gsl_spline_eval(spline, points[i], acc));

  /* Clean up and return. */
  gal_data_free(X);
  gal_data_free(Y);
  gsl_spline_free(spline);
  gsl_interp_accel_free(acc);
  gal_list_str_free(cols, 0);
  return EXIT_SUCCESS;
@}
@end example
@noindent
Compile and run this program with @ref{BuildProgram} to see the interpolation results for the three points within the program.
@example
$ astbuildprog sample-interp.c --quiet
1.800000: 3.600000
2.500000: 5.000000
7.000000: 14.000000
@end example
@end deftypefun
@deftypefun void gal_interpolate_1d_blank (gal_data_t @code{*in}, int @code{type_1d})
Fill the blank elements of @code{in} using the rest of the elements and the given interpolation.
The interpolation scheme can be set through @code{type_1d}, which accepts any of the @code{GAL_INTERPOLATE_1D_*} macros above.
The interpolation is internally done in 64-bit floating point type (@code{double}).
However, the evaluated/interpolated values (originally blank) will be written (in @code{in}) in its original numeric data type, using C's standard type conversion.
By definition, interpolation is only defined ``between'' valid points.
Therefore, if any number of elements on the start or end of the 1D array are blank, those elements will not be interpolated and will remain blank.
To see if any blank (non-interpolated) elements remain, you can use @code{gal_blank_present} on @code{in} after this function is finished.
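For example, a minimal sketch (assuming @code{in} is a 1D @code{gal_data_t} that may contain blank elements) could be:
@example
/* Sketch: fill the blank elements in-place with Steffen's
   method, then check if any (edge) blanks remain. */
gal_interpolate_1d_blank(in, GAL_INTERPOLATE_1D_STEFFEN);
if( gal_blank_present(in, 1) )
  printf("Blank element(s) remain on the edges.\n");
@end example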
@end deftypefun
@cindex Warp
@cindex Align
@cindex Resampling
@cindex WCS distortion
@cindex Non-linear distortion
@node Warp library, Color functions, Interpolation, Gnuastro library
@subsection Warp library (@file{warp.h})
Warping an image to a new pixel grid is commonly necessary as part of astronomical data reduction; for an introduction, see @ref{Warp}.
For details of how we resample the old pixel grid to the new pixel grid, see @ref{Resampling}.
Gnuastro's Warp program uses the following functions for its default mode (when no linear warps are requested).
Through the following functions, you can directly access those features in your own custom programs.
The linear warping operations of the Warp program have not yet been brought into the library.
If you need them please get in touch with us at @code{bug-gnuastro@@gnu.org}.
For usage examples of this library, please see @ref{Library demo - Warp to another image} or @ref{Library demo - Warp to new grid}.
You are free to provide any valid WCS keywords to the functions defined in this library using the @code{gal_warp_wcsalign_t} data type.
This might be used to align the input image to the standard WCS grid, potentially changing the pixel scale, removing any valid WCS non-linear distortion available, and projecting to any valid WCS projection type.
Further details of the warp library functions and parameters are shown below:
@deffn Macro GAL_WARP_OUTPUT_NAME_WARPED
@deffnx Macro GAL_WARP_OUTPUT_NAME_MAXFRAC
Names of the output datasets (in the @code{name} component of the output @code{gal_data_t}s).
By default the output is only a single dataset, but when the @code{checkmaxfrac} component of the input is non-zero, it will contain two datasets.
@end deffn
@deftp {Type (C @code{struct})} gal_warp_wcsalign_t
The main data container for inputs, output and internal variables to simplify the WCS-aligning functions.
Due to the large number of input variables, this structure makes it easy to call the main functions.
Similar to @code{gal_data_t}, the @code{gal_warp_wcsalign_t} is a structure @code{typedef}'d as a new type, see @ref{Library data container}.
Please note that this structure has elements that are @emph{allocated} dynamically and must be freed after usage.
@code{gal_warp_wcsalign_free} only frees the internal variables, so you are responsible for freeing your own inputs (@code{cdelt}, @code{input}, etc.) and the output.
The internal variables are cached here to avoid repeating CPU-intensive computations.
To prevent from using uninitialized variables, we recommend using the helper function @code{gal_warp_wcsalign_template} to get a clean structure before setting your own variables.
The structure and each of its elements are defined below:
@example
typedef struct
@{
  /* Arguments given (and later freed) by the caller. If 'twcs'
     is given, then the "WCS To build" elements will be
     ignored. */
  gal_data_t *input;
  size_t numthreads;
  double coveredfrac;
  size_t edgesampling;
  gal_data_t *widthinpix;
  uint8_t checkmaxfrac;
  struct wcsprm *twcs;      /* WCS Predefined. */
  gal_data_t *ctype;        /* WCS To build.   */
  gal_data_t *cdelt;        /* WCS To build.   */
  gal_data_t *center;       /* WCS To build.   */

  /* Output (must be freed by caller). */
  gal_data_t *output;

  /* Internal variables (allocated and freed internally). */
  size_t v0;
  size_t nhor;
  size_t ncrn;
  size_t gcrn;
  int isccw;
  gal_data_t *vertices;
@} gal_warp_wcsalign_t;
@end example
@table @code
@item gal_data_t *input
The input dataset.
This dataset must contain the image array (of type @code{GAL_TYPE_FLOAT64}) and its @code{input->wcs} must not be @code{NULL} for the WCS-aligning operations to work, see @ref{Library demo - Warp to new grid}.
@item size_t numthreads
Number of threads to use during the WCS aligning operations.
If the given value is @code{0}, the library will calculate the number of available threads at run-time.
The @code{warp} library functions are @emph{thread-safe} so you can freely enjoy the merits of parallel processing.
@item double coveredfrac
Acceptable fraction of output pixel that is covered by input pixels.
The value should be between 0 and 1 (inclusive).
If the area of an output pixel is covered by less than this fraction, its value will be @code{NaN}.
For more, see the description of @option{--coveredfrac} in @ref{Invoking astwarp}.
@item size_t edgesampling
Set the number of extra vertices along each edge of the output pixel's polygon to account for potential curvature due to projection or distortion.
A value of @code{0} is usually enough for this (so the pixel is only defined by a four-vertex polygon).
Greater values increase memory usage and program execution time.
For more, please see the description of @option{--edgesampling} in @ref{Align pixels with WCS considering distortions}.
@item gal_data_t *widthinpix
Output image size (width and height) in number of pixels.
If a @code{NULL} pointer is passed, the WCS-aligning operations will estimate the output image size internally such that it contains the full input.
This dataset should have a type of @code{GAL_TYPE_SIZE_T} and contain exactly two @emph{odd} values.
This ensures that the center of the central pixel lies at the requested central coordinate (note that an image with an even number of pixels doesn't have a ``central'' pixel!).
@item struct wcsprm *twcs
The target grid WCS which must follow the standard WCSLIB structure.
You can read it from a file using @code{gal_wcs_read} or create an entirely new one with @code{gal_wcs_create} and later free it with @code{gal_wcs_free}, see @ref{World Coordinate System}.
If this element is given, the @code{ctype}, @code{cdelt} and @code{center} elements (which are used to construct a WCS internally) are ignored.
Please note that the @code{wcsprm} structure doesn't contain the image size.
To set the final image size, you should use @code{widthinpix}.
@item gal_data_t *ctype
The output's projection type.
The dataset has to have the type @code{GAL_TYPE_STRING}, containing exactly two strings.
Both strings will be directly passed to WCSLIB and should conform to the FITS standard's @code{CTYPEi} keywords, see the description of @option{--ctype} in @ref{Align pixels with WCS considering distortions}.
For example, @code{"RA---TAN"} and @code{"DEC--TAN"}, or @code{"RA---HPX"} and @code{"DEC--HPX"}.
@item gal_data_t *cdelt
Output pixel scale (the size of a pixel in the WCS units, i.e., the units of the @code{CUNITi} keywords in FITS, usually degrees).
The dataset should have a type of @code{GAL_TYPE_FLOAT64} and contain exactly two values.
Hint: to convert arcsec to degrees, just divide by 3600.
@item gal_data_t *center
WCS coordinate of the center of the central pixel of the output.
The units depend on the WCS, for example, if the @code{CUNITi} keywords are @code{deg}, it is in degrees.
This dataset should have a type of @code{GAL_TYPE_FLOAT64} and contain exactly two values.
@item uint8_t checkmaxfrac
When this is non-zero, the output will be a two-element @ref{List of gal_data_t}.
The second element shows the @url{https://en.wikipedia.org/wiki/Moir%C3%A9_pattern, Moir@'e pattern} of the warp.
For more, see @ref{Moire pattern in coadding and its correction}.
@end table
@end deftp
@deftypefun gal_warp_wcsalign_t gal_warp_wcsalign_template (void)
A high-level helper function that returns a clean @code{gal_warp_wcsalign_t} structure with all values initialized.
This function returns a copy of a statically allocated structure, so you do not need to free the returned structure.
The Warp library decides on the program flow based on this structure.
Uninitialized pointers can point to random space in RAM, which can create segmentation faults or, even worse, produce unnoticed side-effects.
It is therefore good practice to manually set unused pointers to @code{NULL} and give blank values to numbers.
Since there are many variables and pointers in @code{gal_warp_wcsalign_t}, it is easy to forget @emph{initializing} some of them.
We therefore recommend using this function to minimize human error.
@end deftypefun
@deftypefun void gal_warp_wcsalign (gal_warp_wcsalign_t *wa)
A high-level function to align the input dataset's pixels to its WCS coordinates and write the result in @code{wa->output}.
This function assumes that the input variables have already been set in the @code{wa} structure.
The input variables are clearly shown in the definition of @code{gal_warp_wcsalign_t}.
It will call the lower level functions below to do the job and will free the internal variables afterwards.
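For example, a minimal sketch of aligning an image to its own WCS (assuming @code{input} has already been read as a @code{GAL_TYPE_FLOAT64} dataset with a valid @code{input->wcs}, and that the template's defaults are acceptable for the unset elements) could be:
@example
/* Sketch: start from a clean template, set the inputs, warp. */
gal_warp_wcsalign_t wa=gal_warp_wcsalign_template();
wa.input=input;
wa.numthreads=0;        /* Find num. of threads at run-time. */
wa.coveredfrac=1.0;
wa.edgesampling=0;
gal_warp_wcsalign(&wa); /* Internal variables freed inside.  */

/* ... use 'wa.output' ..., then free it and the input. */
gal_data_free(wa.output);
gal_data_free(input);
@end example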
@end deftypefun
The following low-level functions are called from the high-level @code{gal_warp_wcsalign} function.
They are provided here for scenarios where fine-grained control over the thread workflow is necessary, see @ref{Multithreaded programming}.
@deftypefun void gal_warp_wcsalign_init (gal_warp_wcsalign_t *wa)
Low-level function to initialize all the elements inside the @code{wa} structure assuming that the input variables have been set.
The input variables are clearly shown in the definition of @code{gal_warp_wcsalign_t}.
This includes sanity checking the input arguments, as well as allocating the output image's empty pixels (that can be filled with @code{gal_warp_wcsalign_onpix}, possibly on threads).
@end deftypefun
@deftypefun void gal_warp_wcsalign_onpix (gal_warp_wcsalign_t *nl, size_t ind)
Low-level function that fills pixel @code{ind} (counting from 0) in the already initialized output image.
@end deftypefun
@deftypefun {void *} gal_warp_wcsalign_onthread (void *inparam)
Low-level worker function that can be passed to the high-level @code{gal_threads_spin_off} or the lower-level @code{pthread_create} with some modifications, see @ref{Multithreaded programming}.
@end deftypefun
@deftypefun void gal_warp_wcsalign_free (gal_warp_wcsalign_t *wa)
Low-level function to free the internal variables inside @code{wa} only.
The caller must free the input pointers themselves; this function will not free them (they may be necessary in other parts of the caller's higher-level architecture).
@end deftypefun
@deftypefun void gal_warp_pixelarea (gal_warp_wcsalign_t *wa)
Calculate each input pixel's area based on its WCS and save it to a copy of the input image with only one difference: the pixel values now show pixel area.
For examples on its usage, see @ref{Pixel information images}.
@end deftypefun
@node Color functions, Git wrappers, Warp library, Gnuastro library
@subsection Color functions (@file{color.h})
@cindex Colors
The available pre-defined colors in Gnuastro are shown and discussed in @ref{Vector graphics colors}.
This part of Gnuastro is currently in charge of mapping the color names to the color IDs and returning the red-green-blue fractions of each color.
On a terminal that supports 24-bit (true color), you can see the full list of color names and a demo of each color with this command:
@example
$ astconvertt --listcolors
@end example
@noindent
For each color we have a separate macro that starts with @code{GAL_COLOR_}, and ends with the color name in all-caps.
@deffn Macro GAL_COLOR_INVALID
@deffnx Macro GAL_COLOR_MEDIUMVIOLETRED
@deffnx Macro GAL_COLOR_DEEPPINK
@deffnx Macro GAL_COLOR_*
The integer identifiers for each of the named colors in Gnuastro.
Except for the first one (@code{GAL_COLOR_INVALID}), we currently have 140 colors from the @url{https://en.wikipedia.org/wiki/Web_colors#Extended_colors, extended web colors}.
The full list of colors and a demo can be visually inspected on the command-line with the @command{astconvertt --listcolors} command and is also shown in @ref{Vector graphics colors}.
The macros have the same names, just in full-caps.
@end deffn
@noindent
The functions below can be used to interact with the pre-defined colors:
@deftypefun uint8_t gal_color_name_to_id (char @code{*name})
Given the name of a color, return the identifier.
The name matching is not case-sensitive.
@end deftypefun
@deftypefun {char *} gal_color_id_to_name (uint8_t @code{color})
Given the ID of a color, return its name.
@end deftypefun
@deftypefun void gal_color_in_rgb (uint8_t @code{color}, float @code{*f})
Given the identifier of a color, write the color's red-green-blue fractions in the space that @code{f} points to.
The caller must have already allocated the space for three 32-bit floating point numbers before calling this function.
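For example, a minimal sketch using the three functions of this section together could be:
@example
/* Sketch: get the RGB fractions of a named color. */
float f[3];
uint8_t id=gal_color_name_to_id("deeppink");
gal_color_in_rgb(id, f);
printf("%s: R=%.3f, G=%.3f, B=%.3f\n",
       gal_color_id_to_name(id), f[0], f[1], f[2]);
@end example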
@end deftypefun
@node Git wrappers, Python interface, Color functions, Gnuastro library
@subsection Git wrappers (@file{git.h})
@cindex Git
@cindex libgit2
Git is one of the most common tools for version control and it can often be useful during development, for example, see the @code{COMMIT} keyword in @ref{Output FITS files}.
At installation time, Gnuastro will also check for the existence of libgit2, and store the result in the @code{GAL_CONFIG_HAVE_LIBGIT2} macro, see @ref{Configuration information} and @ref{Optional dependencies}.
@file{gnuastro/git.h} includes @file{gnuastro/config.h} internally, so you will not have to include both for this macro.
@deftypefun {char *} gal_git_describe ( )
When libgit2 is present and the program is called within a directory that is version controlled, this function will return a string containing the commit description (similar to Gnuastro's unofficial version number, see @ref{Version numbering}).
If there are uncommitted changes in the running directory, it will add a `@code{-dirty}' suffix to the description.
When there is no tag in the commit history, this function will return a uniquely abbreviated commit object name as a fallback.
This function is used for generating the value of the @code{COMMIT} keyword in @ref{Output FITS files}.
The output string is similar to the output of the following command:
@example
$ git describe --dirty --always
@end example
Space for the output string is allocated within this function, so after using the value you have to @code{free} the output string.
If libgit2 is not installed or the program calling this function is not within a version controlled directory, then the output will be the @code{NULL} pointer.
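For example, a minimal sketch of its usage could be:
@example
/* Sketch: print the commit description if it could be found. */
char *describe=gal_git_describe();
if(describe)
  @{
    printf("COMMIT: %s\n", describe);
    free(describe);
  @}
@end example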
@end deftypefun
@node Python interface, Unit conversion library, Git wrappers, Gnuastro library
@subsection Python interface (@file{python.h})
@url{https://en.wikipedia.org/wiki/Python_(programming_language), Python} is a high-level interpreted programming language that is used by some for data analysis.
Python itself is written in C, which is the same language that Gnuastro is written in.
Hence Gnuastro's library can be directly used in Python wrappers.
The functions in this section provide some low-level features to simplify the creation of Python modules that may want to use Gnuastro's advanced and powerful features directly.
To see why Gnuastro was written in C, please see @ref{Why C}.
@cartouche
@noindent
@strong{Python interface may not be built:} the features described in this section are only available when Gnuastro's library was built with Python support (see below for the conditions; this can be controlled with the @option{--with-python} and @option{--without-python} configuration options).
For more on these configuration options, see @ref{Gnuastro configure options}.
To see if the Gnuastro library that you are linking with has these features, you can check the value of the @code{GAL_CONFIG_HAVE_PYTHON} macro, see @ref{Configuration information}.
@end cartouche
The Gnuastro Python Package is built using CPython.
This entails using Python wrappers around currently existing Gnuastro library functions to build @url{https://docs.python.org/3/extending/extending.html#, Python Extension Modules}.
It also makes use of the @url{https://numpy.org/doc/stable/reference/c-api/index.html, NumPy C-API} for dealing with data arrays.
Writing an interface between these and Gnuastro can be simplified using the functions below.
Since many of these functions depend on the Gnuastro library itself, it is more convenient to package them with the library to facilitate the work on the Python package.
This set of functions will expand as Gnuastro's own Python module (pyGnuastro) grows.
The Python interface of Gnuastro's library is built and installed by default if a Python 3.0.0 or greater with NumPy is found in @code{$PATH}.
Users may disable this interface with the @option{--without-python} option to @code{./configure} when they installed Gnuastro, see @ref{Gnuastro configure options}.
If you have problems in a Python virtual env, see @ref{Optional dependencies}.
Because Python is an optional dependency of Gnuastro, the following functions may not be available on some systems.
To check if the installed Gnuastro library was compiled with the following functions, you can use the @code{GAL_CONFIG_HAVE_PYTHON} macro which is defined in @file{gnuastro/config.h}, see @ref{Configuration information}.
@deftypefun int gal_python_type_to_numpy (uint8_t @code{type})
Returns the NumPy datatype corresponding to a certain Gnuastro @code{type}, see @ref{Library data types}.
@end deftypefun
@deftypefun uint8_t gal_python_type_from_numpy (int @code{type})
Returns Gnuastro's numerical datatype that corresponds to the input NumPy @code{type}.
For Gnuastro's recognized data types, see @ref{Library data types}.
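For example, a minimal sketch of a round-trip conversion with this function and @code{gal_python_type_to_numpy} (assuming the library was configured with Python support) could be:
@example
/* Sketch: map a Gnuastro type to NumPy's identifier and back. */
int npytype=gal_python_type_to_numpy(GAL_TYPE_FLOAT32);
uint8_t galtype=gal_python_type_from_numpy(npytype);
/* 'galtype' is now GAL_TYPE_FLOAT32 again. */
@end example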
@end deftypefun
@node Unit conversion library, Spectral lines library, Python interface, Gnuastro library
@subsection Unit conversion library (@file{units.h})
Datasets can contain values in various formats or units.
The functions in this section are defined to facilitate the easy conversion between them and are declared in @file{units.h}.
If there are certain conversions that are useful for your work, please get in touch.
@deftypefun int gal_units_extract_decimal (char @code{*convert}, const char @code{*delimiter}, double @code{*args}, size_t @code{n})
Parse the input @code{convert} string with a certain delimiter (for example, @code{01:23:45}, where the delimiter is @code{":"}) as multiple numbers (for example, 1,23,45) and write them as an array in the space that @code{args} is pointing to.
The expected number of values in the string is specified by the @code{n} argument (3 in the example above).
If the function succeeds, it will return 1, otherwise it will return 0 and the values may not be fully written into @code{args}.
If the number of values parsed in the string is different from @code{n}, this function will fail.
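For example, a minimal sketch of its usage could be:
@example
/* Sketch: parse a sexagesimal string into three numbers. */
double vals[3];
char str[]="01:23:45";
if( gal_units_extract_decimal(str, ":", vals, 3) )
  printf("%g, %g, %g\n", vals[0], vals[1], vals[2]);
@end example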
@end deftypefun
@deftypefun double gal_units_ra_to_degree (char @code{*convert})
@cindex Right Ascension
Convert the input Right Ascension (RA) string (in the format of hours, minutes and seconds either as @code{_h_m_s} or @code{_:_:_}) to degrees (a single floating point number).
@end deftypefun
@deftypefun double gal_units_dec_to_degree (char @code{*convert})
@cindex Declination
Convert the input Declination (Dec) string (in the format of degrees, arc-minutes and arc-seconds either as @code{_d_m_s} or @code{_:_:_}) to degrees (a single floating point number).
@end deftypefun
@deftypefun {char *} gal_units_degree_to_ra (double @code{decimal}, int @code{usecolon})
@cindex Right Ascension
Convert the input Right Ascension (RA) degree (a single floating point number) to old/standard notation (in the format of hours, minutes and seconds of @code{_h_m_s}).
If @code{usecolon!=0}, then the delimiters between the components will be colons: @code{_:_:_}.
@end deftypefun
@deftypefun {char *} gal_units_degree_to_dec (double @code{decimal}, int @code{usecolon})
@cindex Declination
Convert the input Declination (Dec) degree (a single floating point number) to old/standard notation (in the format of degrees, arc-minutes and arc-seconds of @code{_d_m_s}).
If @code{usecolon!=0}, then the delimiters between the components will be colons: @code{_:_:_}.
@end deftypefun
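For example, a minimal sketch using the four functions above for a round-trip conversion (assuming the returned strings are allocated and should be freed by the caller) could be:
@example
/* Sketch: convert sexagesimal RA/Dec to degrees and back. */
char rastr[]="10h41m28.5s", decstr[]="-58:45:12";
double ra=gal_units_ra_to_degree(rastr);
double dec=gal_units_dec_to_degree(decstr);
char *raout=gal_units_degree_to_ra(ra, 0);    /* '_h_m_s'. */
char *decout=gal_units_degree_to_dec(dec, 1); /* '_:_:_'.  */
printf("%f %f %s %s\n", ra, dec, raout, decout);
free(raout);
free(decout);
@end example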
@deftypefun double gal_units_counts_to_mag (double @code{counts}, double @code{zeropoint})
@cindex Magnitude
Convert counts to magnitudes through the given zero point.
For more on the equation, see @ref{Brightness flux magnitude}.
@end deftypefun
@deftypefun double gal_units_mag_to_counts (double @code{mag}, double @code{zeropoint})
@cindex Magnitude
Convert magnitudes to counts through the given zero point.
For more on the equation, see @ref{Brightness flux magnitude}.
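For example, a minimal sketch demonstrating that this function and @code{gal_units_counts_to_mag} are inverses of each other could be:
@example
/* Sketch: counts to magnitude (and back), zero point 22.5. */
double mag=gal_units_counts_to_mag(100.0, 22.5);
double counts=gal_units_mag_to_counts(mag, 22.5); /* == 100 */
@end example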
@end deftypefun
@deftypefun double gal_units_mag_to_luminosity (double @code{mag}, double @code{mag_absolute_ref}, double @code{distance_modulus})
Convert the observed magnitude of a source into its luminosity knowing the absolute magnitude of a reference object and the (corrected) distance modulus.
The reference object is usually taken to be the Sun, see table 3 of Willmer @url{https://arxiv.org/abs/1804.07788,2018} for values in common filters.
Regarding the distance modulus see the description of @option{--distancemodulus} in @ref{CosmicCalculator basic cosmology calculations}.
In the absence of SED-based estimates, you can use @code{gal_cosmology_to_absolute_mag} (which is the function behind @option{--absmagconv} described in that section).
@end deftypefun
@deftypefun double gal_units_luminosity_to_mag (double @code{luminosity}, double @code{mag_absolute_ref}, double @code{distance_modulus})
The inverse of @code{gal_units_mag_to_luminosity}.
@end deftypefun
@deftypefun double gal_units_mag_to_sb (double @code{mag}, double @code{area_arcsec2})
@cindex Magnitude
@cindex Surface Brightness
Calculate the surface brightness of a given magnitude, over a certain area in units of arcsec@mymath{^2}.
For more on the equation, see @ref{Brightness flux magnitude}.
@end deftypefun
@deftypefun double gal_units_sb_to_mag (double @code{sb}, double @code{area_arcsec2})
Calculate the magnitude of a given surface brightness, over a certain area in units of arcsec@mymath{^2}.
For more on the equation, see @ref{Brightness flux magnitude}.
@end deftypefun
@deftypefun double gal_units_counts_to_sb (double @code{counts}, double @code{zeropoint_ab}, double @code{area_arcsec2})
Calculate the surface brightness of a given count level, over a certain area in units of arcsec@mymath{^2}, assuming a certain AB zero point.
For more on the equation, see @ref{Brightness flux magnitude}.
@end deftypefun
@deftypefun double gal_units_sb_to_counts (double @code{sb}, double @code{zeropoint_ab}, double @code{area_arcsec2})
Calculate the counts corresponding to a given surface brightness, over a certain area in units of arcsec@mymath{^2}.
For more on the equation, see @ref{Brightness flux magnitude}.
@end deftypefun
@deftypefun double gal_units_counts_to_jy (double @code{counts}, double @code{zeropoint_ab})
@cindex Jansky (Jy)
@cindex AB Magnitude
@cindex Magnitude, AB
Convert counts to Janskys through an AB magnitude-based zero point.
For more on the equation, see @ref{Brightness flux magnitude}.
@end deftypefun
@deftypefun double gal_units_au_to_pc (double @code{au})
@cindex Parsecs
@cindex Astronomical Units (AU)
Convert the input value (assumed to be in Astronomical Units) to Parsecs.
For the conversion equation, see the description of @code{au-to-pc} operator in @ref{Arithmetic operators}.
@end deftypefun
@deftypefun double gal_units_zeropoint_change (double @code{counts}, double @code{zeropoint_in}, double @code{zeropoint_out})
@cindex Zero point change
Convert the zero point of @code{counts} (which is @code{zeropoint_in}) to @code{zeropoint_out}.
@end deftypefun
@deftypefun double gal_units_counts_to_nanomaggy (double @code{counts}, double @code{zeropoint_ab})
@cindex Nanomaggy
@cindex Magnitude (nanomaggy)
Convert counts to Nanomaggy (with fixed zero point of 22.5) through an AB magnitude-based zero point.
This is just a wrapper around @code{gal_units_zeropoint_change} with @code{zeropoint_out=22.5}.
@end deftypefun
@deftypefun double gal_units_nanomaggy_to_counts (double @code{counts}, double @code{zeropoint_ab})
Convert Nanomaggy (with fixed zero point of 22.5) to counts through an AB magnitude-based zero point.
This is just a wrapper around @code{gal_units_zeropoint_change} with @code{zeropoint_in=22.5}.
@end deftypefun
@deftypefun double gal_units_pc_to_au (double @code{pc})
Convert the input value (assumed to be in Parsecs) to Astronomical Units (AUs).
For the conversion equation, see the description of @code{au-to-pc} operator in @ref{Arithmetic operators}.
@end deftypefun
@deftypefun double gal_units_ly_to_pc (double @code{ly})
@cindex Light-year
Convert the input value (assumed to be in Light-years) to Parsecs.
For the conversion equation, see the description of @code{ly-to-pc} operator in @ref{Arithmetic operators}.
@end deftypefun
@deftypefun double gal_units_pc_to_ly (double @code{pc})
Convert the input value (assumed to be in Parsecs) to Light-years.
For the conversion equation, see the description of @code{ly-to-pc} operator in @ref{Arithmetic operators}.
@end deftypefun
@deftypefun double gal_units_ly_to_au (double @code{ly})
Convert the input value (assumed to be in Light-years) to Astronomical Units.
For the conversion equation, see the description of @code{ly-to-pc} operator in @ref{Arithmetic operators}.
@end deftypefun
@deftypefun double gal_units_au_to_ly (double @code{au})
Convert the input value (assumed to be in Astronomical Units) to Light-years.
For the conversion equation, see the description of @code{ly-to-pc} operator in @ref{Arithmetic operators}.
@end deftypefun
@node Spectral lines library, Cosmology library, Unit conversion library, Gnuastro library
@subsection Spectral lines library (@file{speclines.h})
Gnuastro's library has the following macros and functions for dealing with spectral lines.
All these functions are declared in @file{gnuastro/speclines.h}.
@cindex H-alpha
@cindex H-beta
@cindex H-gamma
@cindex H-delta
@cindex H-epsilon
@cindex OII doublet
@cindex SII doublet
@cindex Lyman-alpha
@cindex Lyman limit
@cindex NII doublet
@cindex Doublet: NII
@cindex Doublet: OII
@cindex Doublet: SII
@cindex OIII doublet
@cindex MgII doublet
@cindex CIII doublet
@cindex Balmer limit
@cindex Doublet: OIII
@cindex Doublet: MgII
@cindex Doublet: CIII
@deffn Macro GAL_SPECLINES_INVALID
@deffnx Macro GAL_SPECLINES_Ne_VIII_770
@deffnx Macro GAL_SPECLINES_Ne_VIII_780
@deffnx Macro GAL_SPECLINES_Ly_epsilon
@deffnx Macro GAL_SPECLINES_Ly_delta
@deffnx Macro GAL_SPECLINES_Ly_gamma
@deffnx Macro GAL_SPECLINES_C_III_977
@deffnx Macro GAL_SPECLINES_N_III_990
@deffnx Macro GAL_SPECLINES_N_III_991_51
@deffnx Macro GAL_SPECLINES_N_III_991_58
@deffnx Macro GAL_SPECLINES_Ly_beta
@deffnx Macro GAL_SPECLINES_O_VI_1032
@deffnx Macro GAL_SPECLINES_O_VI_1038
@deffnx Macro GAL_SPECLINES_Ar_I_1067
@deffnx Macro GAL_SPECLINES_Ly_alpha
@deffnx Macro GAL_SPECLINES_N_V_1238
@deffnx Macro GAL_SPECLINES_N_V_1243
@deffnx Macro GAL_SPECLINES_Si_II_1260
@deffnx Macro GAL_SPECLINES_Si_II_1265
@deffnx Macro GAL_SPECLINES_O_I_1302
@deffnx Macro GAL_SPECLINES_C_II_1335
@deffnx Macro GAL_SPECLINES_C_II_1336
@deffnx Macro GAL_SPECLINES_Si_IV_1394
@deffnx Macro GAL_SPECLINES_O_IV_1397
@deffnx Macro GAL_SPECLINES_O_IV_1400
@deffnx Macro GAL_SPECLINES_Si_IV_1403
@deffnx Macro GAL_SPECLINES_N_IV_1486
@deffnx Macro GAL_SPECLINES_C_IV_1548
@deffnx Macro GAL_SPECLINES_C_IV_1551
@deffnx Macro GAL_SPECLINES_He_II_1640
@deffnx Macro GAL_SPECLINES_O_III_1661
@deffnx Macro GAL_SPECLINES_O_III_1666
@deffnx Macro GAL_SPECLINES_N_III_1747
@deffnx Macro GAL_SPECLINES_N_III_1749
@deffnx Macro GAL_SPECLINES_Al_III_1855
@deffnx Macro GAL_SPECLINES_Al_III_1863
@deffnx Macro GAL_SPECLINES_Si_III
@deffnx Macro GAL_SPECLINES_C_III_1909
@deffnx Macro GAL_SPECLINES_N_II_2143
@deffnx Macro GAL_SPECLINES_O_III_2321
@deffnx Macro GAL_SPECLINES_C_II_2324
@deffnx Macro GAL_SPECLINES_C_II_2325
@deffnx Macro GAL_SPECLINES_Fe_XI_2649
@deffnx Macro GAL_SPECLINES_He_II_2733
@deffnx Macro GAL_SPECLINES_Mg_V_2783
@deffnx Macro GAL_SPECLINES_Mg_II_2796
@deffnx Macro GAL_SPECLINES_Mg_II_2803
@deffnx Macro GAL_SPECLINES_Fe_IV_2829
@deffnx Macro GAL_SPECLINES_Fe_IV_2836
@deffnx Macro GAL_SPECLINES_Ar_IV_2854
@deffnx Macro GAL_SPECLINES_Ar_IV_2868
@deffnx Macro GAL_SPECLINES_Mg_V_2928
@deffnx Macro GAL_SPECLINES_He_I_2945
@deffnx Macro GAL_SPECLINES_O_III_3133
@deffnx Macro GAL_SPECLINES_He_I_3188
@deffnx Macro GAL_SPECLINES_He_II_3203
@deffnx Macro GAL_SPECLINES_O_III_3312
@deffnx Macro GAL_SPECLINES_Ne_V_3346
@deffnx Macro GAL_SPECLINES_Ne_V_3426
@deffnx Macro GAL_SPECLINES_O_III_3444
@deffnx Macro GAL_SPECLINES_N_I_3466_50
@deffnx Macro GAL_SPECLINES_N_I_3466_54
@deffnx Macro GAL_SPECLINES_He_I_3488
@deffnx Macro GAL_SPECLINES_Fe_VII_3586
@deffnx Macro GAL_SPECLINES_Fe_VI_3663
@deffnx Macro GAL_SPECLINES_H_19
@deffnx Macro GAL_SPECLINES_H_18
@deffnx Macro GAL_SPECLINES_H_17
@deffnx Macro GAL_SPECLINES_H_16
@deffnx Macro GAL_SPECLINES_H_15
@deffnx Macro GAL_SPECLINES_H_14
@deffnx Macro GAL_SPECLINES_O_II_3726
@deffnx Macro GAL_SPECLINES_O_II_3729
@deffnx Macro GAL_SPECLINES_H_13
@deffnx Macro GAL_SPECLINES_H_12
@deffnx Macro GAL_SPECLINES_Fe_VII_3759
@deffnx Macro GAL_SPECLINES_H_11
@deffnx Macro GAL_SPECLINES_H_10
@deffnx Macro GAL_SPECLINES_H_9
@deffnx Macro GAL_SPECLINES_Fe_V_3839
@deffnx Macro GAL_SPECLINES_Ne_III_3869
@deffnx Macro GAL_SPECLINES_He_I_3889
@deffnx Macro GAL_SPECLINES_H_8
@deffnx Macro GAL_SPECLINES_Fe_V_3891
@deffnx Macro GAL_SPECLINES_Fe_V_3911
@deffnx Macro GAL_SPECLINES_Ne_III_3967
@deffnx Macro GAL_SPECLINES_H_epsilon
@deffnx Macro GAL_SPECLINES_He_I_4026
@deffnx Macro GAL_SPECLINES_S_II_4069
@deffnx Macro GAL_SPECLINES_Fe_V_4071
@deffnx Macro GAL_SPECLINES_S_II_4076
@deffnx Macro GAL_SPECLINES_H_delta
@deffnx Macro GAL_SPECLINES_He_I_4144
@deffnx Macro GAL_SPECLINES_Fe_II_4179
@deffnx Macro GAL_SPECLINES_Fe_V_4181
@deffnx Macro GAL_SPECLINES_Fe_II_4233
@deffnx Macro GAL_SPECLINES_Fe_V_4227
@deffnx Macro GAL_SPECLINES_Fe_II_4287
@deffnx Macro GAL_SPECLINES_Fe_II_4304
@deffnx Macro GAL_SPECLINES_O_II_4317
@deffnx Macro GAL_SPECLINES_H_gamma
@deffnx Macro GAL_SPECLINES_O_III_4363
@deffnx Macro GAL_SPECLINES_Ar_XIV
@deffnx Macro GAL_SPECLINES_O_II_4415
@deffnx Macro GAL_SPECLINES_Fe_II_4417
@deffnx Macro GAL_SPECLINES_Fe_II_4452
@deffnx Macro GAL_SPECLINES_He_I_4471
@deffnx Macro GAL_SPECLINES_Fe_II_4489
@deffnx Macro GAL_SPECLINES_Fe_II_4491
@deffnx Macro GAL_SPECLINES_N_III_4510
@deffnx Macro GAL_SPECLINES_Fe_II_4523
@deffnx Macro GAL_SPECLINES_Fe_II_4556
@deffnx Macro GAL_SPECLINES_Fe_II_4583
@deffnx Macro GAL_SPECLINES_Fe_II_4584
@deffnx Macro GAL_SPECLINES_Fe_II_4630
@deffnx Macro GAL_SPECLINES_N_III_4634
@deffnx Macro GAL_SPECLINES_N_III_4641
@deffnx Macro GAL_SPECLINES_N_III_4642
@deffnx Macro GAL_SPECLINES_C_III_4647
@deffnx Macro GAL_SPECLINES_C_III_4650
@deffnx Macro GAL_SPECLINES_C_III_5651
@deffnx Macro GAL_SPECLINES_Fe_III_4658
@deffnx Macro GAL_SPECLINES_He_II_4686
@deffnx Macro GAL_SPECLINES_Ar_IV_4711
@deffnx Macro GAL_SPECLINES_Ar_IV_4740
@deffnx Macro GAL_SPECLINES_H_beta
@deffnx Macro GAL_SPECLINES_Fe_VII_4893
@deffnx Macro GAL_SPECLINES_Fe_IV_4903
@deffnx Macro GAL_SPECLINES_Fe_II_4924
@deffnx Macro GAL_SPECLINES_O_III_4959
@deffnx Macro GAL_SPECLINES_O_III_5007
@deffnx Macro GAL_SPECLINES_Fe_II_5018
@deffnx Macro GAL_SPECLINES_Fe_III_5085
@deffnx Macro GAL_SPECLINES_Fe_VI_5146
@deffnx Macro GAL_SPECLINES_Fe_VII_5159
@deffnx Macro GAL_SPECLINES_Fe_II_5169
@deffnx Macro GAL_SPECLINES_Fe_VI_5176
@deffnx Macro GAL_SPECLINES_Fe_II_5198
@deffnx Macro GAL_SPECLINES_N_I_5200
@deffnx Macro GAL_SPECLINES_Fe_II_5235
@deffnx Macro GAL_SPECLINES_Fe_IV_5236
@deffnx Macro GAL_SPECLINES_Fe_III_5270
@deffnx Macro GAL_SPECLINES_Fe_II_5276
@deffnx Macro GAL_SPECLINES_Fe_VII_5276
@deffnx Macro GAL_SPECLINES_Fe_XIV
@deffnx Macro GAL_SPECLINES_Ca_V
@deffnx Macro GAL_SPECLINES_Fe_II_5316_62
@deffnx Macro GAL_SPECLINES_Fe_II_5316_78
@deffnx Macro GAL_SPECLINES_Fe_VI_5335
@deffnx Macro GAL_SPECLINES_Fe_VI_5424
@deffnx Macro GAL_SPECLINES_Cl_III_5518
@deffnx Macro GAL_SPECLINES_Cl_III_5538
@deffnx Macro GAL_SPECLINES_Fe_VI_5638
@deffnx Macro GAL_SPECLINES_Fe_VI_5677
@deffnx Macro GAL_SPECLINES_C_III_5698
@deffnx Macro GAL_SPECLINES_Fe_VII_5721
@deffnx Macro GAL_SPECLINES_N_II_5755
@deffnx Macro GAL_SPECLINES_C_IV_5801
@deffnx Macro GAL_SPECLINES_C_IV_5812
@deffnx Macro GAL_SPECLINES_He_I_5876
@deffnx Macro GAL_SPECLINES_O_I_6046
@deffnx Macro GAL_SPECLINES_Fe_VII_6087
@deffnx Macro GAL_SPECLINES_O_I_6300
@deffnx Macro GAL_SPECLINES_S_III_6312
@deffnx Macro GAL_SPECLINES_Si_II_6347
@deffnx Macro GAL_SPECLINES_O_I_6364
@deffnx Macro GAL_SPECLINES_Fe_II_6369
@deffnx Macro GAL_SPECLINES_Fe_X
@deffnx Macro GAL_SPECLINES_Fe_II_6516
@deffnx Macro GAL_SPECLINES_N_II_6548
@deffnx Macro GAL_SPECLINES_H_alpha
@deffnx Macro GAL_SPECLINES_N_II_6583
@deffnx Macro GAL_SPECLINES_S_II_6716
@deffnx Macro GAL_SPECLINES_S_II_6731
@deffnx Macro GAL_SPECLINES_O_I_7002
@deffnx Macro GAL_SPECLINES_Ar_V
@deffnx Macro GAL_SPECLINES_He_I_7065
@deffnx Macro GAL_SPECLINES_Ar_III_7136
@deffnx Macro GAL_SPECLINES_Fe_II_7155
@deffnx Macro GAL_SPECLINES_Ar_IV_7171
@deffnx Macro GAL_SPECLINES_Fe_II_7172
@deffnx Macro GAL_SPECLINES_C_II_7236
@deffnx Macro GAL_SPECLINES_Ar_IV_7237
@deffnx Macro GAL_SPECLINES_O_I_7254
@deffnx Macro GAL_SPECLINES_Ar_IV_7263
@deffnx Macro GAL_SPECLINES_He_I_7281
@deffnx Macro GAL_SPECLINES_O_II_7320
@deffnx Macro GAL_SPECLINES_O_II_7331
@deffnx Macro GAL_SPECLINES_Ni_II_7378
@deffnx Macro GAL_SPECLINES_Ni_II_7411
@deffnx Macro GAL_SPECLINES_Fe_II_7453
@deffnx Macro GAL_SPECLINES_N_I_7468
@deffnx Macro GAL_SPECLINES_S_XII
@deffnx Macro GAL_SPECLINES_Ar_III_7751
@deffnx Macro GAL_SPECLINES_He_I_7816
@deffnx Macro GAL_SPECLINES_Ar_I_7868
@deffnx Macro GAL_SPECLINES_Ni_III
@deffnx Macro GAL_SPECLINES_Fe_XI_7892
@deffnx Macro GAL_SPECLINES_He_II_8237
@deffnx Macro GAL_SPECLINES_Pa_20
@deffnx Macro GAL_SPECLINES_Pa_19
@deffnx Macro GAL_SPECLINES_Pa_18
@deffnx Macro GAL_SPECLINES_O_I_8446
@deffnx Macro GAL_SPECLINES_Pa_17
@deffnx Macro GAL_SPECLINES_Ca_II_8498
@deffnx Macro GAL_SPECLINES_Pa_16
@deffnx Macro GAL_SPECLINES_Ca_II_8542
@deffnx Macro GAL_SPECLINES_Pa_15
@deffnx Macro GAL_SPECLINES_Cl_II
@deffnx Macro GAL_SPECLINES_Pa_14
@deffnx Macro GAL_SPECLINES_Fe_II_8617
@deffnx Macro GAL_SPECLINES_Ca_II_8662
@deffnx Macro GAL_SPECLINES_Pa_13
@deffnx Macro GAL_SPECLINES_N_I_8680
@deffnx Macro GAL_SPECLINES_N_I_8703
@deffnx Macro GAL_SPECLINES_N_I_8712
@deffnx Macro GAL_SPECLINES_Pa_12
@deffnx Macro GAL_SPECLINES_Pa_11
@deffnx Macro GAL_SPECLINES_Fe_II_8892
@deffnx Macro GAL_SPECLINES_Pa_10
@deffnx Macro GAL_SPECLINES_S_III_9069
@deffnx Macro GAL_SPECLINES_Pa_9
@deffnx Macro GAL_SPECLINES_S_III_9531
@deffnx Macro GAL_SPECLINES_Pa_epsilon
@deffnx Macro GAL_SPECLINES_C_I_9824
@deffnx Macro GAL_SPECLINES_C_I_9850
@deffnx Macro GAL_SPECLINES_S_VIII
@deffnx Macro GAL_SPECLINES_He_I_10028
@deffnx Macro GAL_SPECLINES_He_I_10031
@deffnx Macro GAL_SPECLINES_Pa_delta
@deffnx Macro GAL_SPECLINES_S_II_10287
@deffnx Macro GAL_SPECLINES_S_II_10320
@deffnx Macro GAL_SPECLINES_S_II_10336
@deffnx Macro GAL_SPECLINES_Fe_XIII
@deffnx Macro GAL_SPECLINES_He_I_10830
@deffnx Macro GAL_SPECLINES_Pa_gamma
@deffnx Macro GAL_SPECLINES_INVALID_MAX
Internal values/identifiers for recognized spectral lines, as is clear from their names.
They are based on the UV and optical table of galaxy emission lines of Drew Chojnowski@footnote{@url{http://astronomy.nmsu.edu/drewski/tableofemissionlines.html}}.
Note the first and last macros: neither corresponds to an actual line, but their integer values are respectively one less and one more than those of the first and last line identifiers, which is useful when parsing the lines automatically:
@code{GAL_SPECLINES_INVALID} has a value of zero, giving you a fixed integer that never corresponds to a line.
@code{GAL_SPECLINES_INVALID_MAX} is the total number of pre-defined lines, plus one.
So you can parse all the known lines with a @code{for} loop like this:
@example
for(i=1;i<GAL_SPECLINES_INVALID_MAX;++i)
@end example
@end deffn
@deffn Macro GAL_SPECLINES_ANGSTROM_*
Wavelength (in Angstroms) of the named lines.
The @code{*} can take any of the line names of the @code{GAL_SPECLINES_*} Macros above.
@end deffn
@deffn Macro GAL_SPECLINES_NAME_*
Names (as literal strings without any spaces) that can be used to refer to the lines in your program; they can be converted to and from line identifiers using the functions below.
The @code{*} can take any of the line names of the @code{GAL_SPECLINES_*} Macros above.
@end deffn
@deftypefun {char *} gal_speclines_line_name (int @code{linecode})
Return the literal string (name) of the given spectral line identifier (for
example @code{GAL_SPECLINES_H_alpha}, see the macros above).
@end deftypefun
@deftypefun int gal_speclines_line_code (char @code{*name})
Return the spectral line identifier of the given standard name (for example
@code{GAL_SPECLINES_NAME_H_alpha}, see the macros above).
@end deftypefun
@deftypefun double gal_speclines_line_angstrom (int @code{linecode})
Return the wavelength (in Angstroms) of the given line.
@end deftypefun
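As a minimal sketch of combining the macros and functions above (assuming this fragment sits inside a @code{main} function that has included this library's header), the following loop prints the name and wavelength of every pre-defined line:
@example
int i;
for(i=1; i<GAL_SPECLINES_INVALID_MAX; ++i)
  printf("%s: %g Angstroms\n", gal_speclines_line_name(i),
         gal_speclines_line_angstrom(i));
@end example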
@deftypefun double gal_speclines_line_redshift (double @code{obsline}, double @code{restline})
@cindex Rest-frame
Return the redshift at which a line with a rest-frame wavelength of
@code{restline} would be observed at the wavelength @code{obsline} (in other
words, @mymath{z={\rm obsline}/{\rm restline}-1}).
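For example, a minimal sketch (taking the rest-frame wavelength of H-alpha to be roughly 6563 Angstroms, instead of using the pre-defined macros):
@example
double z=gal_speclines_line_redshift(8000.0, 6563.0);
@end example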
@end deftypefun
@deftypefun double gal_speclines_line_redshift_code (double @code{obsline}, int @code{linecode})
Return the redshift at which the given observed wavelength (@code{obsline}) corresponds to one of the pre-defined spectral lines of the macros above.
For example, to find the redshift at which the H-alpha line would be observed at a wavelength of 8000 Angstroms, you can call this function like this (the returned value will be roughly 0.22, since the rest-frame wavelength of H-alpha is about 6563 Angstroms):
@example
gal_speclines_line_redshift_code(8000, GAL_SPECLINES_H_alpha);
@end example
@end deftypefun
@node Cosmology library, SAO DS9 library, Spectral lines library, Gnuastro library
@subsection Cosmology library (@file{cosmology.h})
This library does the main cosmological calculations that are commonly necessary in extra-galactic astronomical studies.
The main variable in this context is the redshift (@mymath{z}).
The cosmological input parameters in the functions below are @code{H0}, @code{o_lambda_0}, @code{o_matter_0}, @code{o_radiation_0} which respectively represent the current (at redshift 0) expansion rate (Hubble constant in units of km/sec/Mpc), cosmological constant (@mymath{\Lambda}), matter and radiation densities.
All these functions are declared in @file{gnuastro/cosmology.h}.
For a more extended introduction/discussion of the cosmological parameters, please see @ref{CosmicCalculator}.
@deftypefun double gal_cosmology_age (double @code{z}, double @code{H0}, double @code{o_lambda_0}, double @code{o_matter_0}, double @code{o_radiation_0})
Returns the age of the universe at redshift @code{z} in units of Giga
years.
@end deftypefun
@deftypefun double gal_cosmology_proper_distance (double @code{z}, double @code{H0}, double @code{o_lambda_0}, double @code{o_matter_0}, double @code{o_radiation_0})
Returns the proper distance to an object at redshift @code{z} in units of
Mega parsecs.
@end deftypefun
@deftypefun double gal_cosmology_comoving_volume (double @code{z}, double @code{H0}, double @code{o_lambda_0}, double @code{o_matter_0}, double @code{o_radiation_0})
Returns the comoving volume (over @mymath{4\pi} steradian) up to @code{z} in
units of Mega parsecs cubed.
@end deftypefun
@deftypefun double gal_cosmology_critical_density (double @code{z}, double @code{H0}, double @code{o_lambda_0}, double @code{o_matter_0}, double @code{o_radiation_0})
Returns the critical density at redshift @code{z} in units of
@mymath{g/cm^3}.
@end deftypefun
@deftypefun double gal_cosmology_angular_distance (double @code{z}, double @code{H0}, double @code{o_lambda_0}, double @code{o_matter_0}, double @code{o_radiation_0})
Return the angular diameter distance to an object at redshift @code{z} in
units of Mega parsecs.
@end deftypefun
@deftypefun double gal_cosmology_luminosity_distance (double @code{z}, double @code{H0}, double @code{o_lambda_0}, double @code{o_matter_0}, double @code{o_radiation_0})
Return the luminosity distance to an object at redshift @code{z}
in units of Mega parsecs.
@end deftypefun
@deftypefun double gal_cosmology_distance_modulus (double @code{z}, double @code{H0}, double @code{o_lambda_0}, double @code{o_matter_0}, double @code{o_radiation_0})
Return the distance modulus at redshift @code{z} (with no units).
@end deftypefun
@deftypefun double gal_cosmology_to_absolute_mag (double @code{z}, double @code{H0}, double @code{o_lambda_0}, double @code{o_matter_0}, double @code{o_radiation_0})
Return the conversion from apparent to absolute magnitude for an object at redshift @code{z}.
This value has to be added to the apparent magnitude to give the absolute magnitude of an object at redshift @code{z}.
@end deftypefun
@deftypefun double gal_cosmology_velocity_from_z (double @code{z})
Return the velocity (in km/s) corresponding to the given redshift (@code{z}).
@end deftypefun
@deftypefun double gal_cosmology_z_from_velocity (double @code{v})
Return the redshift corresponding to the given velocity (@code{v} in km/s).
@end deftypefun
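As a minimal sketch of calling these functions (the cosmological parameter values below are only placeholders; replace them with the values you trust):
@example
#include <stdio.h>
#include <stdlib.h>
#include <gnuastro/cosmology.h>

int
main(void)
@{
  /* Placeholder cosmological parameters (not a recommendation). */
  double z=1.0, H0=70.0, olambda=0.7, omatter=0.3, oradiation=0.0;

  printf("At z=%g: age = %f Giga years, "
         "luminosity distance = %f Mpc\n", z,
         gal_cosmology_age(z, H0, olambda, omatter, oradiation),
         gal_cosmology_luminosity_distance(z, H0, olambda,
                                           omatter, oradiation));
  return EXIT_SUCCESS;
@}
@end example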
@node SAO DS9 library, , Cosmology library, Gnuastro library
@subsection SAO DS9 library (@file{ds9.h})
@cindex SAO DS9
This library operates on the output files of SAO DS9@footnote{@url{https://sites.google.com/cfa.harvard.edu/saoimageds9}}.
SAO DS9 is one of the most commonly used FITS image and cube viewers today, with an easy-to-use graphical user interface (GUI), see @ref{SAO DS9}.
Besides merely displaying FITS data, it can also produce certain kinds of files that are useful in common analysis.
For example, on DS9's GUI, it is very easy to define a (possibly complex) polygon as a ``region''.
You can then save that ``region'' into a file and using the functions below, feed the polygon into Gnuastro's programs (or your custom programs).
@deffn Macro GAL_DS9_COORD_MODE_IMG
@deffnx Macro GAL_DS9_COORD_MODE_WCS
@deffnx Macro GAL_DS9_COORD_MODE_INVALID
Macros to identify the coordinate mode of the DS9 file.
Their names are sufficiently descriptive.
The last one (@code{INVALID}) is for sanity checks (for example, to know if the mode is already selected).
@end deffn
@deftypefun {gal_data_t *} gal_ds9_reg_read_polygon (char @code{*filename})
@cindex SAO DS9 region file
@cindex Region file (SAO DS9)
Returns an allocated generic data container (@code{gal_data_t}, with an array of @code{GAL_TYPE_FLOAT64}) containing the vertices of a polygon within the SAO DS9 region file given by @code{filename}.
Since SAO DS9 region files are 2 dimensional, if there are @mymath{N} vertices in the SAO DS9 region file, the returned dataset will have @mymath{2\times N} elements (the first two elements belonging to the first vertex, and so on).
The mode to interpret the vertex coordinates is also read from the SAO DS9 region file and written into the @code{status} attribute of the output @code{gal_data_t}.
The coordinate mode can be one of the @code{GAL_DS9_COORD_MODE_*} macros mentioned above.
It is assumed that the file begins with @code{# Region file format: DS9} and that it has at least two more lines:
a line containing the coordinate mode (the line should only contain @code{icrs}, @code{fk5} or @code{image}),
and a line with the polygon vertices in this format: @code{polygon(V1X,V1Y,V2X,V2Y,...)}, where @code{V1X} and @code{V1Y} are the horizontal and vertical coordinates of the first vertex, and so on.
For example, here is a minimal acceptable SAO DS9 region file:
@example
# Region file format: DS9
icrs
polygon(53.187414,-27.779152,53.159507,-27.759633,...)
@end example
@end deftypefun
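As a minimal sketch of using this function (with a hypothetical region file called @file{polygon.reg} in the format above):
@example
size_t i;
gal_data_t *polygon=gal_ds9_reg_read_polygon("polygon.reg");
double *v=polygon->array;

/* Report the coordinate mode (read into 'status'). */
if(polygon->status==GAL_DS9_COORD_MODE_IMG)
  printf("Vertices are in image (pixel) coordinates.\n");
else
  printf("Vertices are in WCS coordinates.\n");

/* Print the two coordinates of each vertex. */
for(i=0; i<polygon->size/2; ++i)
  printf("Vertex %zu: %f, %f\n", i+1, v[2*i], v[2*i+1]);

gal_data_free(polygon);
@end example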
@node Library demo programs, , Gnuastro library, Library
@section Library demo programs
In this final section of @ref{Library}, we give some example Gnuastro programs to demonstrate various features in the library.
All these programs have been tested; once Gnuastro is installed, you can compile and run them with Gnuastro's @ref{BuildProgram} program, which will take care of linking issues.
If you do not have any FITS file to experiment on, you can use those that are generated by Gnuastro after @command{make check} in the @file{tests/} directory, see @ref{Quick start}.
@menu
* Library demo - reading a FITS image:: Read a FITS image into memory.
* Library demo - inspecting neighbors:: Inspect the neighbors of a pixel.
* Library demo - multi-threaded operation:: Doing an operation on threads.
* Library demo - reading and writing table columns:: Simple Column I/O.
* Library demo - Warp to another image:: Output pixel grid and WCS from another image.
* Library demo - Warp to new grid:: Define a new pixel grid and WCS to resample the input.
@end menu
@node Library demo - reading a FITS image, Library demo - inspecting neighbors, Library demo programs, Library demo programs
@subsection Library demo - reading a FITS image
The following simple program demonstrates how to read a FITS image into memory and use the @code{void *array} pointer of the @ref{Generic data container}.
For easy linking/compilation of this program along with a first run, see @ref{BuildProgram} (in short: compile, link and run @file{myprogram.c} with this command: `@command{astbuildprog myprogram.c}').
Before running, also change the @code{filename} and @code{hdu} variable values to specify an existing FITS file and/or extension/HDU.
This is just intended to demonstrate how to use the @code{array} pointer of @code{gal_data_t}.
Hence it does not do important sanity checks; for example, in real datasets you may also have blank pixels.
In such cases, this program will return a NaN value (see @ref{Blank pixels}).
So for general statistical information of a dataset, it is much better to use Gnuastro's @ref{Statistics} program which can deal with blank pixels and many other issues in a generic dataset.
To encourage good coding practices, this program contains a copyright notice with a placeholder for your name and your email (as you customize it for your own purpose).
Always keep a one-line description and copyright notice like this in all your source files; such ``metadata'' is very important to accompany every source file you write.
Of course, when you write the source file from scratch and just learn how to use a single function from this manual, only your name/year should appear.
The existing name of the original author of this example program is only for cases where you copy-paste this whole file.
@example
/* Reading a FITS image into memory.
*
* The following simple program demonstrates how to read a FITS image
* into memory and use the 'void *array' pointer. This is just intended
* to demonstrate how to use the array pointer of 'gal_data_t'.
*
* Copyright (C) 2025 Your Name <your@@email.address>
* Copyright (C) 2020-2025 Mohammad Akhlaghi <mohammad@@akhlaghi.org>
*
* This program is free software: you can redistribute it and/or modify
* it under the terms of the GNU General Public License as published by
* the Free Software Foundation, either version 3 of the License, or
* (at your option) any later version.
*
* This program is distributed in the hope that it will be useful, but
* WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
* General Public License for more details.
*
* You should have received a copy of the GNU General Public License
* along with this program. If not, see <http://www.gnu.org/licenses/>.
*/
#include <stdio.h>
#include <stdlib.h>
#include <gnuastro/fits.h> /* includes gnuastro's data.h and type.h */
#include <gnuastro/statistics.h>
int
main(void)
@{
size_t i;
float *farray;
double sum=0.0f;
gal_data_t *image;
char *filename="img.fits", *hdu="1";
/* Read `img.fits' (HDU: 1) as a float32 array. */
image=gal_fits_img_read_to_type(filename, hdu, GAL_TYPE_FLOAT32,
-1, 1, NULL);
/* Use the allocated space as a single precision floating
* point array (recall that `image->array' has `void *'
* type, so it is not directly usable). */
farray=image->array;
/* Calculate the sum of all the values. */
for(i=0; i<image->size; ++i)
sum += farray[i];
/* Report the sum. */
printf("Sum of values in %s (hdu %s) is: %f\n",
filename, hdu, sum);
/* Clean up and return. */
gal_data_free(image);
return EXIT_SUCCESS;
@}
@end example
@node Library demo - inspecting neighbors, Library demo - multi-threaded operation, Library demo - reading a FITS image, Library demo programs
@subsection Library demo - inspecting neighbors
The following simple program shows how you can inspect the neighbors of a pixel using the @code{GAL_DIMENSION_NEIGHBOR_OP} function-like macro that was introduced in @ref{Dimensions}.
For easy linking/compilation of this program along with a first run see @ref{BuildProgram}.
Before running, also change the file name and HDU (first and second arguments to @code{gal_fits_img_read_to_type}) to specify an existing FITS file and/or extension/HDU.
To encourage good coding practices, this program contains a copyright notice with a placeholder for your name and your email (as you customize it for your own purpose).
Always keep a one-line description and copyright notice like this in all your source files; such ``metadata'' is very important to accompany every source file you write.
Of course, when you write the source file from scratch and just learn how to use a single function from this manual, only your name/year should appear.
The existing name of the original author of this example program is only for cases where you copy-paste this whole file.
@example
/* Inspecting the neighbors of a pixel.
*
* The following simple program shows how you can inspect the neighbors
* of a pixel using the GAL_DIMENSION_NEIGHBOR_OP function-like macro.
*
* Copyright (C) 2025 Your Name <your@@email.address>
* Copyright (C) 2020-2025 Mohammad Akhlaghi <mohammad@@akhlaghi.org>
*
* This program is free software: you can redistribute it and/or modify
* it under the terms of the GNU General Public License as published by
* the Free Software Foundation, either version 3 of the License, or
* (at your option) any later version.
*
* This program is distributed in the hope that it will be useful, but
* WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
* General Public License for more details.
*
* You should have received a copy of the GNU General Public License
* along with this program. If not, see <http://www.gnu.org/licenses/>.
*/
#include <stdio.h>
#include <error.h>
#include <stdlib.h>
#include <gnuastro/fits.h>
#include <gnuastro/dimension.h>
int
main(void)
@{
double sum;
float *array;
size_t i, num, *dinc;
gal_data_t *input=gal_fits_img_read_to_type("input.fits", "1",
GAL_TYPE_FLOAT32, -1, 1,
NULL);
/* To avoid the `void *' pointer and have `dinc'. */
array=input->array;
dinc=gal_dimension_increment(input->ndim, input->dsize);
/* Go over all the pixels. */
for(i=0;i<input->size;++i)
@{
num=0;
sum=0.0f;
GAL_DIMENSION_NEIGHBOR_OP( i, input->ndim, input->dsize,
input->ndim, dinc,
@{++num; sum+=array[nind];@} );
printf("%zu: num: %zu, sum: %f\n", i, num, sum);
@}
/* Clean up and return. */
gal_data_free(input);
free(dinc);
return EXIT_SUCCESS;
@}
@end example
@node Library demo - multi-threaded operation, Library demo - reading and writing table columns, Library demo - inspecting neighbors, Library demo programs
@subsection Library demo - multi-threaded operation
The following simple program shows how to use Gnuastro to simplify spinning off threads and distributing different jobs between the threads.
The relevant thread-related functions are defined in @ref{Gnuastro's thread related functions}.
For easy linking/compilation of this program, along with a first run, see Gnuastro's @ref{BuildProgram}.
Before running, also change the @code{filename} and @code{hdu} variable values to specify an existing FITS file and/or extension/HDU.
This is a very simple program to open a FITS image, distribute its pixels between different threads and print the value of each pixel and the thread it was assigned to.
The actual operation is very simple (and would not usually be done with threads in a real-life program).
It is intentionally chosen to put more focus on the important steps in spinning off threads and how the worker function (which is called by each thread) can identify the job-IDs it should work on.
For example, instead of an array of pixels, you can define an array of tiles or any other context-specific structures as separate targets.
The important thing is that each action should have its own unique ID (counting from zero, as is done in an array in C).
You can then follow the process below and use each thread to work on all the targets that are assigned to it.
Recall that spinning off threads is itself an expensive process and we do not want to spin-off one thread for each target (see the description of @code{gal_threads_dist_in_threads} in @ref{Gnuastro's thread related functions}).
There are many (more complicated, real-world) examples of using @code{gal_threads_spin_off} in Gnuastro's actual source code; you can see them by searching for the @code{gal_threads_spin_off} function from the top source directory (after unpacking the tarball), for example, with this command:
@example
$ grep -r gal_threads_spin_off ./
@end example
To encourage good coding practices, this program contains a copyright notice with a placeholder for your name and your email (as you customize it for your own purpose).
Always keep a one-line description and copyright notice like this in all your source files; such ``metadata'' is very important to accompany every source file you write.
Of course, when you write the source file from scratch and just learn how to use a single function from this manual, only your name/year should appear.
The existing name of the original author of this example program is only for cases where you copy-paste this whole file.
The code of this demonstration program is shown below.
This program was also built and run when you ran @code{make check} during the building of Gnuastro (@code{tests/lib/multithread.c}), so it is already tested for your system and you can safely use it as a guide.
@example
/* Demo of Gnuastro's high-level multi-threaded interface.
*
* This is a very simple program to open a FITS image, distribute its
* pixels between different threads and print the value of each pixel
* and the thread it was assigned to.
*
* Copyright (C) 2025 Your Name <your@@email.address>
* Copyright (C) 2020-2025 Mohammad Akhlaghi <mohammad@@akhlaghi.org>
*
* This program is free software: you can redistribute it and/or modify
* it under the terms of the GNU General Public License as published by
* the Free Software Foundation, either version 3 of the License, or
* (at your option) any later version.
*
* This program is distributed in the hope that it will be useful, but
* WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
* General Public License for more details.
*
* You should have received a copy of the GNU General Public License
* along with this program. If not, see <http://www.gnu.org/licenses/>.
*/
#include <stdio.h>
#include <stdlib.h>
#include <gnuastro/fits.h>
#include <gnuastro/threads.h>
/* This structure can keep all information you want to pass onto the
* worker function on each thread. */
struct params
@{
gal_data_t *image; /* Dataset to print values of. */
@};
/* This is the main worker function which will be called by the
* different threads. `gal_threads_params' is defined in
* `gnuastro/threads.h' and contains the pointer to the parameter we
* want. Note that the input argument and returned value of this
* function always must have `void *' type. */
void *
worker_on_thread(void *in_prm)
@{
/* Low-level definitions to be done first. */
struct gal_threads_params *tprm=(struct gal_threads_params *)in_prm;
struct params *p=(struct params *)tprm->params;
/* Subsequent definitions. */
float *array=p->image->array;
size_t i, index, *dsize=p->image->dsize;
/* Go over all the actions (pixels in this case) that were assigned
* to this thread. */
for(i=0; tprm->indexs[i] != GAL_BLANK_SIZE_T; ++i)
@{
/* For easy reading. */
index = tprm->indexs[i];
/* Print the information. */
printf("(%zu, %zu) on thread %zu: %g\n", index%dsize[1]+1,
index/dsize[1]+1, tprm->id, array[index]);
@}
/* Wait for all the other threads to finish, then return. */
if(tprm->b) pthread_barrier_wait(tprm->b);
return NULL;
@}
/* High-level function (called by the operating system). */
int
main(void)
@{
struct params p;
char *filename="input.fits", *hdu="1";
size_t numthreads=gal_threads_number();
/* We are using `-1' for `minmapsize' to ensure that the image is
* read into memory and `1' for `quietmmap' (which can also be
* zero), see the "Memory management" section in the book. */
int quietmmap=1;
size_t minmapsize=-1;
/* Read the image into memory as a float32 data type. */
p.image=gal_fits_img_read_to_type(filename, hdu, GAL_TYPE_FLOAT32,
minmapsize, quietmmap, NULL);
/* Print some basic information before the actual contents: */
printf("Pixel values of %s (HDU: %s) on %zu threads.\n", filename,
hdu, numthreads);
printf("Used to check the compiled library's capability in opening "
"a FITS file, and also spinning off threads.\n");
/* A small sanity check: this is only intended for 2D arrays (to
* print the coordinates of each pixel). */
if(p.image->ndim!=2)
@{
fprintf(stderr, "only 2D images are supported.");
exit(EXIT_FAILURE);
@}
/* Spin-off the threads and do the processing on each thread. */
gal_threads_spin_off(worker_on_thread, &p, p.image->size, numthreads,
minmapsize, quietmmap);
/* Clean up and return. */
gal_data_free(p.image);
return EXIT_SUCCESS;
@}
@end example
@node Library demo - reading and writing table columns, Library demo - Warp to another image, Library demo - multi-threaded operation, Library demo programs
@subsection Library demo - reading and writing table columns
Tables are some of the most common inputs to, and outputs of programs.
This section contains a small program for reading and writing tables using the constructs described in @ref{Table input output}.
For easy linking/compilation of this program, along with a first run, see Gnuastro's @ref{BuildProgram}.
Before running, also set the following file and column names in the first two lines of @code{main}.
The input and output names may be @file{.txt} or @file{.fits} tables; @code{gal_table_read} and @code{gal_table_write} can read and write both formats.
For plain text tables see @ref{Gnuastro text table format}.
If you do not have any table in text file format to use as your input, you can use the table that is generated in @ref{Sufi simulates a detection} section.
This example program reads three columns from a table.
The first two columns are selected by their name (@code{NAME1} and @code{NAME2}) and the third is selected by its number: column 10 (counting from 1).
Gnuastro's column selection is discussed in @ref{Selecting table columns}.
The first and second columns can be any type, but this program will convert them to @code{int32_t} and @code{float} respectively for its internal usage.
However, the third column must be double for this program; if it is not, the program will abort with an error.
Having the columns in memory, it will print them out along with their sum (just a simple application; you can do whatever you want at this stage).
Reading the table finishes here.
The rest of the program is a demonstration of writing a table.
While parsing the rows, this program will change the first column (to be counters) and multiply the second by 10 (so the output will be different).
Then it will define the order of the output columns by setting the @code{next} element (to create a @ref{List of gal_data_t}).
Before writing, the program will also set names for the columns (units and comments can be defined in a similar manner).
Writing the columns to a file is then done through a simple call to @code{gal_table_write}.
The operations that are shown in this example program are not necessary all the time.
For example, in many cases, you know the numerical data type of the column before writing your program (see @ref{Numeric data types}), so type checking and copying to a specific type will not be necessary.
To encourage good coding practices, this program contains a copyright notice with a placeholder for your name and your email (as you customize it for your own purpose).
Always keep a one-line description and copyright notice like this in all your source files; such ``metadata'' is very important to accompany every source file you write.
Of course, when you write the source file from scratch and just learn how to use a single function from this manual, only your name/year should appear.
The existing name of the original author of this example program is only for cases where you copy-paste this whole file.
@example
@verbatim
/* Reading and writing table columns.
*
* This example program reads three columns from a table. Having the
* columns in memory, it will print them out along with their sum. The
* rest of the program is a demonstration of writing a table.
*
* Copyright (C) 2025 Your Name <your@email.address>
* Copyright (C) 2020-2025 Mohammad Akhlaghi <mohammad@akhlaghi.org>
*
* This program is free software: you can redistribute it and/or modify
* it under the terms of the GNU General Public License as published by
* the Free Software Foundation, either version 3 of the License, or
* (at your option) any later version.
*
* This program is distributed in the hope that it will be useful, but
* WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
* General Public License for more details.
*
* You should have received a copy of the GNU General Public License
* along with this program. If not, see <http://www.gnu.org/licenses/>.
*/
#include <stdio.h>
#include <stdlib.h>
#include <gnuastro/table.h>
int
main(void)
{
/* File names and column names (which may also be numbers). */
char *c1_name="NAME1", *c2_name="NAME2", *c3_name="10";
char *inname="input.fits", *hdu="1", *outname="out.fits";
/* Internal parameters. */
float *array2=NULL;
double *array3=NULL;
int32_t *array1=NULL;
size_t i, counter=0;
gal_data_t *c1=NULL;
gal_data_t *c2=NULL;
gal_data_t *col, *columns;
gal_list_str_t *column_ids=NULL;
/* Define the columns to read. */
gal_list_str_add(&column_ids, c1_name, 0);
gal_list_str_add(&column_ids, c2_name, 0);
gal_list_str_add(&column_ids, c3_name, 0);
/* The columns were added in reverse, so correct it. */
gal_list_str_reverse(&column_ids);
/* Read the desired columns. */
columns = gal_table_read(inname, hdu, NULL, column_ids,
GAL_TABLE_SEARCH_NAME, 0, 1, -1, 1, NULL, NULL);
/* Go over the columns, we will assume that you do not know their type
* a-priori, so we will check */
counter=1;
for(col=columns; col!=NULL; col=col->next)
switch(counter++)
{
case 1: /* First column: we want it as int32_t. */
c1=gal_data_copy_to_new_type(col, GAL_TYPE_INT32);
array1 = c1->array;
break;
case 2: /* Second column: we want it as float. */
c2=gal_data_copy_to_new_type(col, GAL_TYPE_FLOAT32);
array2 = c2->array;
break;
case 3: /* Third column: it MUST be double. */
if(col->type!=GAL_TYPE_FLOAT64)
{
fprintf(stderr, "Column %s must be float64 type, it is "
"%s", c3_name, gal_type_name(col->type, 1));
exit(EXIT_FAILURE);
}
array3 = col->array;
break;
default:
exit(EXIT_FAILURE);
}
/* As an example application we will just print them out. In the
* meantime (just for a simple demonstration), change the first
* array value to the counter and multiply the second by 10. */
for(i=0;i<c1->size;++i)
{
printf("%zu: %d + %f + %f = %f\n", i+1, array1[i], array2[i],
array3[i], array1[i]+array2[i]+array3[i]);
array1[i] = i+1;
array2[i] *= 10;
}
/* Link the first two columns as a list. */
c1->next = c2;
c2->next = NULL;
/* Set names for the columns and write them out. */
c1->name = "COUNTER";
c2->name = "VALUE";
gal_table_write(c1, NULL, NULL, GAL_TABLE_FORMAT_BFITS, outname,
"MY-COLUMNS", 0, 0);
/* The names were not allocated, so to avoid cleaning-up problems,
* we will set them to NULL. */
c1->name = c2->name = NULL;
/* Clean up and return. */
gal_data_free(c1);
gal_data_free(c2);
gal_list_data_free(columns);
gal_list_str_free(column_ids, 0); /* strings were not allocated. */
return EXIT_SUCCESS;
}
@end verbatim
@end example
@node Library demo - Warp to another image, Library demo - Warp to new grid, Library demo - reading and writing table columns, Library demo programs
@subsection Library demo - Warp to another image
Gnuastro's warp library (that you can access by including @file{gnuastro/warp.h}) allows you to resample an image from one pixel grid to another entirely based on its WCS, using WCSLIB (while accounting for distortions if necessary; see @ref{Warp library}).
The Warp library uses a pixel-mixing or area-based resampling approach, which is fully described in @ref{Resampling}.
The most generic use cases of this library are already available in the @ref{Invoking astwarp} program.
For a related demo (where the output grid and WCS are constructed from scratch), see @ref{Library demo - Warp to new grid}.
In the example below, we are warping the @code{input.fits} file to the same pixel grid and WCS as the @code{reference.fits} image (assuming it is in HDU @code{0}).
You can download the FITS files in the @ref{Color channels in same pixel grid} section and use them as @code{input.fits} and @code{reference.fits} files.
Feel free to change these names to your own test file names.
This can be useful when you have a complex grid and WCS containing various keywords such as non-linear distortion coefficients, etc.
For example datasets, see the description of the @option{--gridfile} option in @ref{Align pixels with WCS considering distortions}.
To compile the demonstration program below, copy and paste the contents in a plain-text file (let's assume you named it @file{align-to-img.c}) and use @ref{BuildProgram} with this command: `@command{astbuildprog align-to-img.c}'.
Please note that the demo program does not perform many sanity checks to avoid making it too complex and to highlight this particular feature in the library.
For a more robust program containing all the necessary sanity checks, see Gnuastro's Warp program source code (see @ref{Program source}).
To encourage good coding practices, this program contains a copyright notice with a placeholder for your name and your email (as you customize it for your own purpose).
Always keep a one-line description and copyright notice like this in all your source files; such ``metadata'' is very important to accompany every source file you write.
Of course, when you write the source file from scratch and just learn how to use a single function from this manual, only your name/year should appear.
The existing name of the original author of this example program is only for cases where you copy-paste this whole file.
@example
@verbatim
/* Warp to another image.
*
* In the example below, we are warping the input.fits file to the same
* pixel grid and WCS as reference.fits image.
*
* Copyright (C) 2025 Your Name <your@email.address>
* Copyright (C) 2022-2025 Pedram Ashofteh-Ardakani <pedramardakani@pm.me>
*
* This program is free software: you can redistribute it and/or modify
* it under the terms of the GNU General Public License as published by
* the Free Software Foundation, either version 3 of the License, or
* (at your option) any later version.
*
* This program is distributed in the hope that it will be useful, but
* WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
* General Public License for more details.
*
* You should have received a copy of the GNU General Public License
* along with this program. If not, see <http://www.gnu.org/licenses/>.
*/
#include <stdio.h>
#include <stdlib.h>
#include <gnuastro/wcs.h> /* contains gnuastro's fits.h */
#include <gnuastro/warp.h> /* contains gnuastro's data.h */
#include <gnuastro/array.h> /* contains gnuastro's type.h */
int
main(void)
{
/* Input file's name and HDU. */
char *filename="input.fits", *hdu="1";
/* Reference file's name and HDU. */
char *gridfile="reference.fits", *gridhdu="0";
/* Output file name. */
char *outname="align-to-img.fits";
/* Low-level variables needed to read the reference file's size. */
int nwcs;
size_t ndim, *dsize;
/* Initialize the 'wa' struct with empty values and NULL pointers. */
gal_warp_wcsalign_t wa=gal_warp_wcsalign_template();
/* Read the input image and its WCS. */
wa.input=gal_array_read_one_ch_to_type(filename, hdu, NULL,
GAL_TYPE_FLOAT64, -1, 0, NULL);
wa.input->wcs=gal_wcs_read(filename, hdu, 0, 0, 0, &wa.input->nwcs,
NULL);
/* Prepare the warp input structure, use all threads available. */
wa.coveredfrac=1; wa.edgesampling=0; wa.numthreads=0;
/* Set the target grid to be the same as reference.fits on hdu 0. */
wa.twcs=gal_wcs_read(gridfile, gridhdu, 0, 0, 0, &nwcs, NULL);
if(wa.twcs==NULL)
{
fprintf(stderr, "%s (hdu %s): no WCS! Can't continue\n",
gridfile, gridhdu);
exit(EXIT_FAILURE);
}
/* Read the output image size (from the reference image). Note that
* 'dsize' will be freed when freeing 'widthinpix'. */
dsize=gal_fits_img_info_dim(gridfile, gridhdu, &ndim, NULL);
/* Convert the 'dsize' to a 'gal_data_t' so the library can use it. */
wa.widthinpix=gal_data_alloc(dsize, GAL_TYPE_SIZE_T, 1, &ndim,
NULL, 1, -1, 0, NULL, NULL, NULL);
/* Do the warp, then convert the output to a 32-bit float (the default
* float64 is too much for observational data and just wastes
* storage!). But if you are warping mock data before adding noise
* (where you do have float64 level precision), remove the type
* conversion line. */
gal_warp_wcsalign(&wa);
wa.output=gal_data_copy_to_new_type_free(wa.output, GAL_TYPE_FLOAT32);
/* WARNING: make sure there is no file with the same name as the
* output ('align-to-img.fits') or the result will be appended to
* its final HDU. */
gal_fits_img_write(wa.output, outname, NULL, 0);
/* Clean up. */
gal_data_free(wa.input);
gal_data_free(wa.output);
gal_data_free(wa.widthinpix);
/* Give control back to the operating system. */
return EXIT_SUCCESS;
}
@end verbatim
@end example
@node Library demo - Warp to new grid, , Library demo - Warp to another image, Library demo programs
@subsection Library demo - Warp to new grid
Gnuastro's warp library (that you can access by including @file{gnuastro/warp.h}) allows you to resample an image from one pixel grid to another entirely based on its WCS, using WCSLIB (while accounting for distortions if necessary; see @ref{Warp library}).
The Warp library uses a pixel-mixing or area-based resampling approach, which is fully described in @ref{Resampling}.
The most generic use cases of this library are already available in the @ref{Invoking astwarp} program.
For a related demo (where the output grid and WCS are imported from another file), see @ref{Library demo - Warp to another image}.
In the example below, we'll assume you have the SDSS image downloaded in @ref{Downloading and validating input data}.
After downloading the image as described there, you will have @file{r.fits} in your current directory.
We will therefore use @file{r.fits} as the input to the rest of the program here.
The image is not aligned to the celestial coordinates, so we will align the pixel and WCS coordinates, but set the center of the pixel grid to be at an (RA,Dec) of (202.4173735,47.3374525).
We also give it a @code{TAN} projection and a pixel scale of 0.27 arcseconds.
However, we will let the Warp library measure the proper output image size that will contain the aligned image.
To compile the demonstration program below, copy and paste the contents in a plain-text file (let's assume you named it @file{align-to-new.c}) and use @ref{BuildProgram} with this command: `@command{astbuildprog align-to-new.c}'.
Please note that the demo program does not perform many sanity checks to avoid making it too complex and to highlight this particular feature in the library.
For a more robust program containing all the necessary sanity checks, see Gnuastro's Warp program source code (see @ref{Program source}).
To encourage good coding practices, this program contains a copyright notice with a placeholder for your name and your email (as you customize it for your own purpose).
Always keep a one-line description and copyright notice like this in all your source files; such ``metadata'' is very important to accompany every source file you write.
Of course, when you write the source file from scratch and just learn how to use a single function from this manual, only your name/year should appear.
The existing name of the original author of this example program is only for cases where you copy-paste this whole file.
@example
@verbatim
/* Warp an image to a new grid.
*
* In the example below, we will use 'r.fits' as the input. The image
* is not aligned to the celestial coordinates, so we will align the
* pixel and WCS coordinates. We also give it a TAN projection.
* However, we will let the Warp library measure the proper output
* image size that will contain the aligned image.
*
* Copyright (C) 2025 Your Name <your@email.address>
* Copyright (C) 2022-2025 Pedram Ashofteh-Ardakani <pedramardakani@pm.me>
*
* This program is free software: you can redistribute it and/or modify
* it under the terms of the GNU General Public License as published by
* the Free Software Foundation, either version 3 of the License, or
* (at your option) any later version.
*
* This program is distributed in the hope that it will be useful, but
* WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
* General Public License for more details.
*
* You should have received a copy of the GNU General Public License
* along with this program. If not, see <http://www.gnu.org/licenses/>.
*/
#include <stdio.h>
#include <stdlib.h>
#include <gnuastro/wcs.h> /* Contains gnuastro's fits.h */
#include <gnuastro/warp.h> /* Contains gnuastro's data.h */
#include <gnuastro/array.h> /* Contains gnuastro's type.h */
int
main(void)
{
/* Input file's name and HDU. */
char *filename="r.fits", *hdu="0";
/* Output file name. */
char *outname="align-to-new.fits";
/* RA/Dec of the center of the central pixel of output. Please
* change the center based on your input. */
double center[]={202.4173735, 47.3374525};
/* Coordinate and Projection algorithms of output. */
char *ctype[2]={"RA---TAN", "DEC--TAN"};
/* Output pixel scale (in units of degrees/pixel). */
double cdelt[]={0.27/3600, 0.27/3600};
/* For intermediate steps. */
size_t two=2;
/* Initialize the 'wa' struct with empty values and NULL pointers. */
gal_warp_wcsalign_t wa=gal_warp_wcsalign_template();
/* Set the width (and height!) of the output in pixels (as a 1D and
* 2 element 'gal_data_t'). When it is NULL, the library will
* calculate the appropriate width to fully fit the input image
* after alignment. */
wa.widthinpix=NULL;
/* Set the number of threads to use. If the value is '0', the
* library will estimate the maximum available threads at
* run-time on the host operating system. */
wa.numthreads=0;
/* Read the input image and its WCS. */
wa.input=gal_array_read_one_ch_to_type(filename, hdu, NULL,
GAL_TYPE_FLOAT64, -1, 0, NULL);
wa.input->wcs=gal_wcs_read(filename, hdu, 0, 0, 0, &wa.input->nwcs,
NULL);
/* Prepare the warp input structure. */
wa.coveredfrac=1; wa.edgesampling=0;
wa.ctype=gal_data_alloc(ctype, GAL_TYPE_STRING, 1, &two, NULL, 1,
-1, 0, NULL, NULL, NULL);
wa.cdelt=gal_data_alloc(cdelt, GAL_TYPE_FLOAT64, 1, &two, NULL, 1,
-1, 0, NULL, NULL, NULL);
wa.center=gal_data_alloc(center, GAL_TYPE_FLOAT64, 1, &two, NULL, 1,
-1, 0, NULL, NULL, NULL);
/* Do the warp, then convert it to a 32-bit float. */
gal_warp_wcsalign(&wa);
wa.output=gal_data_copy_to_new_type_free(wa.output, GAL_TYPE_FLOAT32);
/* WARNING: make sure there is no file with the same name as the
* output ('align-to-new.fits') or the result will be appended to
* its final HDU. */
gal_fits_img_write(wa.output, outname, NULL, 0);
/* Remove the pointers to arrays that we didn't allocate (and thus,
* should not be freed by 'gal_data_free' below). */
wa.cdelt->array=wa.center->array=wa.ctype->array=NULL;
/* Clean up. */
gal_data_free(wa.cdelt); gal_data_free(wa.ctype);
gal_data_free(wa.input); gal_data_free(wa.output);
gal_data_free(wa.center); gal_data_free(wa.widthinpix);
/* Give control back to the operating system. */
return EXIT_SUCCESS;
}
@end verbatim
@end example
@node Developing, Other useful software, Library, Top
@chapter Developing
The basic idea of GNU Astronomy Utilities is for an interested astronomer to be able to easily understand the code of any of the programs or libraries, to modify the code if s/he feels there is an improvement, and finally, to be able to add new programs or libraries for their own benefit, and for the larger community if they are willing to share them.
In short, we hope that at least from the software point of view, the ``obscurantist faith in the expert's special skill and in his personal knowledge and authority'' can be broken, see @ref{Science and its tools}.
With this aim in mind, Gnuastro was designed to have a very basic, simple, and easy to understand architecture for any interested inquirer.
This chapter starts with very general design choices, in particular @ref{Why C} and @ref{Program design philosophy}.
It will then get a little more technical about the Gnuastro code and file/directory structure in @ref{Coding conventions} and @ref{Program source}.
@ref{The TEMPLATE program} discusses a minimal (and working) template to help in creating new programs or easier learning of a program's internal structure.
Some other general issues about documentation, building and debugging are then discussed.
This chapter concludes with how you can learn about the development and get involved in @ref{Gnuastro project webpage}, @ref{Developing mailing lists} and @ref{Contributing to Gnuastro}.
@menu
* Why C:: Why Gnuastro is designed in C.
* Program design philosophy:: General ideas behind the package structure.
* Coding conventions:: Gnuastro coding conventions.
* Program source:: Conventions for the code.
* Documentation:: Documentation is an integral part of Gnuastro.
* Building and debugging:: Build and possibly debug during development.
* Test scripts:: Understanding the test scripts.
* Bash programmable completion:: Auto-completions for better user experience.
* Developer's checklist:: Checklist to finalize your changes.
* Gnuastro project webpage:: Central hub for Gnuastro activities.
* Developing mailing lists:: Stay up to date with Gnuastro's development.
* Contributing to Gnuastro:: Share your changes with all users.
@end menu
@node Why C, Program design philosophy, Developing, Developing
@section Why C programming language?
@cindex C programming language
@cindex C++ programming language
@cindex Java programming language
@cindex Python programming language
Currently the programming languages that are commonly used in scientific applications are C++@footnote{@url{https://isocpp.org/}}, Java@footnote{@url{https://en.wikipedia.org/wiki/Java_(programming_language)}}, Python@footnote{@url{https://www.python.org/}}, and Julia@footnote{@url{https://julialang.org/}} (which is a newcomer but swiftly gaining ground).
One of the main reasons behind choosing these is their high-level abstractions.
However, GNU Astronomy Utilities is fully written in the C programming language@footnote{@url{https://en.wikipedia.org/wiki/C_(programming_language)}}.
The reasons can be summarized as simplicity, portability and efficiency/speed.
All three are very important in scientific software and we will discuss them below.
@cindex ANSI C
@cindex ISO C90
@cindex Ritchie, Dennis
@cindex Kernighan, Brian
@cindex Stroustrup, Bjarne
Simplicity can best be demonstrated in a comparison of the main books of C++ and C.
The ``C programming language''@footnote{Brian Kernighan, Dennis Ritchie. @emph{The C programming language}. Prentice Hall, Inc., Second edition, 1988.
It is also commonly known as K&R and is based on the ANSI C and ISO C90 standards.} book, written by the authors of C, is only 286 pages and covers a very good fraction of the language, it has also remained unchanged from 1988.
C is the main programming language of nearly all operating systems and there is no plan of any significant update.
On the other hand, the most recent ``C++ programming language''@footnote{Bjarne Stroustrup.
@emph{The C++ programming language}. Addison-Wesley Professional; 4 edition, 2013.} book, also written by its author, has 1366 pages and its fourth edition came out in 2013! As discussed in @ref{Science and its tools}, it is very important for other scientists to be able to readily read the code of a program at will with minimum requirements.
@cindex Object oriented programming
In C++ or Java, inheritance in the object oriented programming paradigm and their internal functions make the code very easy to write for a programmer who is deeply invested in those objects and understands all their relations well.
But it simultaneously makes reading the program for a first time reader (a curious scientist who wants to know only how a small step was done) extremely hard.
Before understanding the methods, the scientist has to invest a lot of time and energy in understanding those objects and their relations.
But in C, everything is done with basic language types for example @code{int}s or @code{float}s and their pointers to define arrays.
So when an outside reader is only interested in one part of the program, that part is all they have to understand.
Recently it is also becoming common to write scientific software in Python, or a combination of it with C or C++.
Python is a high level scripting language which does not need compilation.
It is very useful when you want to do something on the go and do not want to be halted by the troubles of compiling, linking, memory checking, etc.
When the datasets are small and the job is temporary, this ability of Python is great and is highly encouraged.
A very good example might be plotting, in which Python is undoubtedly one of the best.
But as the datasets increase in size and the processing becomes more complicated, the speed of Python scripts significantly decreases.
So when the program does not change too often and is widely used in a large community, mostly on large data sets (like astronomical images), using Python will waste a lot of valuable research-hours.
It is possible to wrap C or C++ functions with Python to fix the speed issue.
But this creates further complexity, because the interested scientist has to master two programming languages and their connection (which is not trivial).
Like C++, Python is object oriented, so as explained above, it needs a high level of experience with that particular program to reasonably understand its inner workings.
To make things worse, since it is mainly for on-the-go programming@footnote{Note that Python is good for fast programming, not fast programs.}, it can undergo significant changes.
One recent example is how Python 2.x and Python 3.x are not compatible.
Lots of research teams that invested heavily in Python 2.x cannot benefit from Python 3.x or future versions any more.
Some converters are available, but since they are automatic, lots of complications might arise in the conversion@footnote{For example see Jenness @url{https://arxiv.org/abs/1712.00461,2017}, which describes how LSST is managing the transition.}.
If a research project begins using Python 3.x today, there is no telling how compatible their investments will be when Python 4.x or 5.x comes out.
@cindex JVM: Java virtual machine
@cindex Java Virtual Machine (JVM)
Java is also fully object-oriented, but uses a different paradigm: its compilation generates a hardware-independent @emph{bytecode}, and a @emph{Java Virtual Machine} (JVM) is required for the actual execution of this bytecode on a computer.
Java also evolved with time and tried to remain backward compatible, but inevitably this evolution required discontinuities and replacements of a few Java components, which were first declared @emph{deprecated} and then removed from later versions.
@cindex Reproducibility
This stems from the core principles of high-level languages like Python or Java: that they evolve significantly on the scale of roughly 5 to 10 years.
They are therefore useful when you want to solve a short-term problem and you are ready to pay the high cost of keeping your software up to date with all the changes in the language.
This is fine for private companies, but usually too expensive for scientific projects that have limited funding for a fixed period.
As a result, the reproducibility of the result (the ability to regenerate it in the future, which is a core principle of any scientific result) and the reusability of all the investments that went into the science software will be lost to future generations! Rebuilding all the dependencies of a software package in an obsolete language is not easy, and may even be impossible.
Future-proof code (as long as current operating systems will be used) is therefore written in C.
The portability of C is best demonstrated by the fact that C++, Java and Python are themselves part of the C-family of programming languages, which also includes Julia, Perl, and many other languages.
C libraries can be immediately included in C++, and it is easy to write wrappers for them in all C-family programming languages.
This will allow other scientists to benefit from C libraries using any C-family language that they prefer.
As a result, Gnuastro's library is already usable in C and C++, and wrappers will be@footnote{@url{http://savannah.gnu.org/task/?13786}} added for higher-level languages like Python, Julia and Java.
@cindex Low level programming
@cindex Programming, low level
The final reason was speed.
This is another very important aspect of C which is not independent of simplicity (the first reason discussed above).
The abstractions provided by the higher-level languages (which also makes learning them harder for a newcomer) come at the cost of speed.
Since C is a low-level language@footnote{Low-level languages are those that directly operate the hardware like assembly languages.
So C is actually a high-level language, but it can be considered one of the lowest-level languages among all high-level languages.} (closer to the hardware), it has direct access to the CPU@footnote{For instance, @emph{long double} numbers with at least a 64-bit mantissa are not accessible in Python or Java.}, is generally faster in its execution, and is much less complex for both the human reader @emph{and} the computer.
The benefits of simplicity for a human were discussed above.
Simplicity for the computer translates into more efficient (faster) programs.
This creates a much closer relation between the scientist/programmer (or their program) and the actual data and processing.
The GNU coding standards@footnote{@url{http://www.gnu.org/prep/standards/}} also encourage the use of C over all other languages when generality of usage and ``high speed'' is desired.
@node Program design philosophy, Coding conventions, Why C, Developing
@section Program design philosophy
@cindex Gnulib
The core processing functions of each program (and all libraries) are written mostly with the basic ISO C90 standard.
We do make lots of use of the GNU additions to the C language in the GNU C library@footnote{Gnuastro uses many GNU additions to the C library.
However, thanks to the GNU Portability library (Gnulib) which is included in the Gnuastro tarball, users of non-GNU/Linux operating systems can also benefit from all these features when using Gnuastro.}, but these functions are mainly used in the user interface functions (reading your inputs and preparing them prior to or after the analysis).
The actual algorithms, which most scientists would be more interested in, are much closer to ISO C90.
For this reason, program source files that deal with user interface issues and those doing the actual processing are clearly separated, see @ref{Program source}.
If anything particular to the GNU C library is used in the processing functions, it is explained in the comments in between the code.
@cindex GNU Coreutils
All the Gnuastro programs provide very low level and modular operations (modeled on GNU Coreutils).
Almost all the basic command-line programs like @command{ls}, @command{cp} or @command{rm} on GNU/Linux operating systems are part of GNU Coreutils.
This enables you to use shell scripting languages (for example, GNU Bash) to operate on a large number of files or do very complex things through the creative combinations of these tools that the authors had never dreamed of.
We have put a few simple examples in @ref{Tutorials}.
@cindex @LaTeX{}
@cindex GNU Bash
@cindex Python Matplotlib
@cindex Matplotlib, Python
@cindex PGFplots in @TeX{} or @LaTeX{}
For example, all the analysis output can be saved as ASCII tables which can be fed into your favorite plotting program to inspect visually.
Python's Matplotlib is very useful for fast plotting of the tables to immediately check your results.
If you want to include the plots in a document, you can use the PGFplots package within @LaTeX{}; no attempt is made to include such operations in Gnuastro.
In short, Bash can act as a glue to connect the inputs and outputs of all these various Gnuastro programs (and other programs) in any fashion.
Of course, Gnuastro's programs are just front-ends to the main workhorse (@ref{Gnuastro library}), allowing a user to create their own programs (for example, with @ref{BuildProgram}).
So once the functions within programs become mature enough, they will be moved within the libraries for even more general applications.
The advantage of this architecture is that the programs become small and transparent: the starting and finishing point of every program is clearly demarcated.
For nearly all operations of modest complexity on a modern computer (with fast file input-output), the read/write overhead is insignificant compared to the actual processing a program does.
Therefore the complexity which arises from sharing memory in a large application is simply not worth the speed gain.
Gnuastro's design is heavily influenced by Eric Raymond's ``The Art of Unix Programming''@footnote{Eric S. Raymond, 2004, @emph{The Art of Unix Programming}, Addison-Wesley Professional Computing Series.} which beautifully describes the design philosophy and practice that led to the success of Unix-based operating systems@footnote{KISS principle: Keep It Simple, Stupid!}.
@node Coding conventions, Program source, Program design philosophy, Developing
@section Coding conventions
@cindex GNU coding standards
@cindex Gnuastro coding convention
In Gnuastro, we try our best to follow the GNU coding standards.
Added to those, Gnuastro defines the following conventions.
It is very important for readability that the whole package follows the same convention.
@itemize
@item
The code must be easy to read by eye.
So when the order of several lines within a function does not matter (for example, when defining variables at the start of a function), put the lines in order of increasing length and group variables of similar type, so that this half-pyramid of declarations becomes clearly visible.
If the reader is interested, a simple search will show them the variable they are interested in.
However, this visual aid greatly helps in general inspections of the code and helps the reader get a grasp of the function's processing.
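For example, a hypothetical function might start with a declaration block like the following (the variable names are only for illustration):
@example
int i;
char *hdu;
size_t numpix;
float threshold;
double *convolved=NULL;
@end example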
@item
A function that cannot be fully displayed (vertically) in your monitor is probably too long and may be more useful if it is broken up into multiple functions.
A length of 40 lines is usually a good reference.
When the start and end of a function are clearly visible in one glance, the function is much easier to understand.
This is most important for low-level functions (which usually define a lot of variables).
Since low-level functions do most of the processing, they will also be the most interesting part of a program for an inquiring astronomer.
This convention is less important for higher level functions that do not define too many variables and whose only purpose is to run the lower-level functions in a specific order and with checks.
@cindex Optimization flag
@cindex GCC: GNU Compiler Collection
@cindex GNU Compiler Collection (GCC)
In general, you can be very liberal in breaking up functions into smaller parts: when the optimizations are turned on, the GNU Compiler Collection (GCC) will automatically inline small functions.
So you do not have to worry about decreasing the speed.
By default Gnuastro will compile with the @option{-O3} optimization flag.
@cindex Buffers (Emacs)
@cindex Emacs buffers
@item
All Gnuastro hand-written text files (C source code, Texinfo documentation source, and version control commit messages) should normally be no more than @strong{75} characters per line.
Monitors today are certainly much wider, but with this limit, reading the functions becomes much easier.
Also for the developers, it allows multiple files (or multiple views of one file) to be displayed beside each other on wide monitors.
Emacs's buffers are excellent for this capability, setting a buffer width of 80 with `@key{C-u 80 C-x 3}' will allow you to view and work on several files or different parts of one file using the wide monitors common today.
Emacs buffers can also be used as a shell prompt and compile the program (with @key{M-x compile}), and 80 characters is the default width in most terminal emulators.
If you use Emacs, Gnuastro sets the 75 character @code{fill-column} variable automatically for you; see the cartouche below.
For long comments, you can press @key{Alt-q} in Emacs to automatically break them into separate lines.
For long literal strings, you can use the fact that in C, two strings immediately after each other are concatenated, for example, @code{"The first part, " "and the second part."}.
Note the space character at the end of the first part.
Since they are now separated, you can easily break a long literal string into several lines and adhere to the maximum 75 character line length policy.
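For example, a long (hypothetical) message can respect the line-length convention like this:
@example
printf("The first part of a very long message, "
       "and the second part: the compiler joins "
       "them into a single string.\n");
@end example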
@cindex Header file
@item
The headers required by each source file (ending with @file{.c}) should be included inside of it.
All the headers a complete program needs should @emph{not} be collected into another header that is included in all the source files (for example, @file{main.h}).
Although most `professional' programmers choose this single header method, Gnuastro is primarily written for professional/inquisitive astronomers (who are generally amateur programmers).
The list of header files included provides valuable general information and helps the reader.
@file{main.h} may only include the header file(s) that define types that the main program structure needs, see @file{main.h} in @ref{Program source}.
Those particular header files that are included in @file{main.h} can of course be ignored (not included) in separate source files.
@item
The headers should be classified (by an empty line) into separate
groups:
@enumerate
@cindex GNU C library
@cindex Gnulib: GNU Portability Library
@cindex GNU Portability Library (Gnulib)
@item
@code{#include <config.h>}: This must be the first code line (not commented or blank) in each source file @emph{within Gnuastro}.
It sets macros that the GNU Portability Library (Gnulib) will use for a unified environment (GNU C Library), even when the user is building on a system that does not use the GNU C library.
@item
The C library header files, for example, @file{stdio.h}, @file{stdlib.h}, or @file{math.h}.
@item
Installed library header files, including Gnuastro's installed headers (for example @file{cfitsio.h} or @file{gsl/gsl_rng.h}, or @file{gnuastro/fits.h}).
@item
Gnuastro's internal headers (that are not installed), for example @file{gnuastro-internal/options.h}.
@item
For programs, the @file{main.h} file (which is needed by the next group of headers).
@item
That particular program's header files, for example, @file{mkprof.h}, or @file{noisechisel.h}.
@end enumerate
@noindent
Although the include order within each group does not matter functionally, sort the headers of each group by length, as described above.
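For example, the top of a hypothetical program's source file might look like the following (the header names after the first group are chosen only for illustration):
@example
#include <config.h>

#include <math.h>
#include <stdio.h>
#include <stdlib.h>

#include <gnuastro/fits.h>
#include <gnuastro/threads.h>

#include <gnuastro-internal/options.h>

#include "main.h"

#include "ui.h"
#include "myprog.h"
@end example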
@item
All function names, variables, etc., should be in lower case.
Macros and constant global @code{enum}s should be in upper case.
@item
For the naming of exported header files, functions, variables, macros, and library functions, we adopt similar conventions to those used by the GNU Scientific Library (GSL)@footnote{@url{https://www.gnu.org/software/gsl/design/gsl-design.html#SEC15}}.
In particular, in order to avoid clashes with the names of functions and variables coming from other libraries the name-space `@code{gal_}' is prefixed to them.
GAL stands for @emph{G}NU @emph{A}stronomy @emph{L}ibrary.
@item
All installed header files should be in the @file{lib/gnuastro} directory (under the top Gnuastro source directory).
After installation, they will be put in the @file{$prefix/include/gnuastro} directory (see @ref{Installation directory} for @file{$prefix}).
Therefore with this convention Gnuastro's headers can be included in internal (to Gnuastro) and external (a library user) source files with the same line
@example
# include <gnuastro/headername.h>
@end example
Note that the GSL convention for header file names is @file{gsl_specialname.h}, so your include directive for a GSL header must be something like @code{#include <gsl/gsl_specialname.h>}.
Gnuastro does not follow this GSL guideline because of the repeated @code{gsl} in the include directive.
It can be confusing and cause bugs for beginners.
All Gnuastro (and GSL) headers must be located within a unique directory and will not be mixed with other headers.
Therefore the `@file{gsl_}' prefix to the header file names is redundant@footnote{For GSL, this prefix has an internal technical application: GSL's architecture mixes installed and not-installed headers in the same directory.
This prefix is used to identify their installation status.
Therefore this filename prefix in GSL is an internal technical issue (for developers, not users).}.
@item
@cindex GNU coding standards
All installed functions and variables should also include the base-name of the file in which they are defined as a prefix, using underscores to separate words@footnote{The convention of using underscores to separate words is called ``snake case'' (or ``snake_case'').
This is also recommended by the GNU coding standards.}.
The same applies to exported macros, but in upper case.
For example, in Gnuastro's top source directory, the prototype of function @code{gal_box_border_from_center} is in @file{lib/gnuastro/box.h}, and the macro @code{GAL_POLYGON_MAX_CORNERS} is defined in @file{lib/gnuastro/polygon.h}.
This is necessary to give any user (who is not familiar with the library structure) the ability to follow the code.
This convention does make the function names longer (a little harder to write), but the extra documentation it provides plays an important role in Gnuastro and is worth the cost.
@item
@cindex GNU Emacs
@cindex Trailing space
There should be no trailing white space in a line.
To do this automatically every time you save a file in Emacs, add the following line to your @file{~/.emacs} file.
@example
(add-hook 'before-save-hook 'delete-trailing-whitespace)
@end example
@item
@cindex Tabs are evil
There should be no tabs in the indentation@footnote{If you use Emacs, Gnuastro's @file{.dir-locals.el} file will automatically never use tabs for indentation.
To make this a default in all your Emacs sessions, you can add the following line to your @file{~/.emacs} file: @command{(setq-default indent-tabs-mode nil)}}.
@item
@cindex GNU Emacs
@cindex Function groups
@cindex Groups of similar functions
Individual, contextually similar, functions in a source file are separated by 5 blank lines, so that they are easily seen to form a group when skimming the source code by eye.
In Emacs you can use @key{CTRL-u 5 CTRL-o}.
@item
One group of contextually similar functions in a source file is separated from another with 20 blank lines.
In Emacs you can use @key{CTRL-u 20 CTRL-o}.
Each group of functions has a short descriptive title summarizing the functions in that group.
This title is surrounded by asterisks (@key{*}) to make it clearly distinguishable.
Such contextual grouping and clear titles are very important for easily understanding the code.
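For example, a (hypothetical) group title may look like this:
@example
/**************************************************************/
/*****************      Input reading       ******************/
/**************************************************************/
@end example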
@item
Always read the comments before the patch of code under it.
Similarly, try to add as many comments as you can regarding every patch of code.
Effectively, we want someone to get a good feeling of the steps, without having to read the C code and only by reading the comments.
This follows similar principles as @url{https://en.wikipedia.org/wiki/Literate_programming, Literate programming}.
@end itemize
The last two conventions are not common and might benefit from a short discussion here.
For a professional developer with good experience in advanced text editor operations, the last two conventions are redundant.
However, recall that Gnuastro aspires to be friendly to eyes that are unfamiliar with, and inexperienced in, programming.
In other words, as discussed in @ref{Science and its tools}, we want the code to appear welcoming to someone who is completely new to coding (and text editors) and only has a scientific curiosity.
Newcomers to coding and development, who are curious enough to venture into the code, will probably not be using (or have any knowledge of) advanced text editors.
They will see the raw code in the web page or on a simple text editor (like Gedit) as plain text.
Trying to learn and understand a file with dense functions that are only spaced with one or two blank lines can be very daunting for a newcomer.
But when they scroll through the file and see clear titles and meaningful spacing between groups of similar functions, we are helping them find and focus on the part they are most interested in more quickly and easily.
@cartouche
@cindex GNU Emacs
@noindent
@strong{GNU Emacs, the recommended text editor:} GNU Emacs is an extensible and easily customizable text editor which many programmers rely on for developing due to its countless features.
Among them, it allows specification of certain settings that are applied to a single file or to all files in a directory and its sub-directories.
In order to harmonize code coming from different contributors, Gnuastro comes with a @file{.dir-locals.el} file which automatically configures Emacs to satisfy most of the coding conventions above when you are using it within Gnuastro's directories.
Thus, Emacs users can readily start hacking into Gnuastro.
If you are new to developing, we strongly recommend this editor.
Emacs was the first project released by GNU and is still one of its flagship projects.
Some resources can be found at:
@table @asis
@item Official manual
At @url{https://www.gnu.org/software/emacs/manual/emacs.html}.
This is a great and very complete manual, which has been improved for over 30 years, and is the best starting point to learn Emacs.
It just requires a little patience and practice, but rest assured that you will be rewarded.
If you install Emacs, you also have access to this manual on the command-line with the following command (see @ref{Info}).
@example
$ info emacs
@end example
@item A guided tour of emacs
At @url{https://www.gnu.org/software/emacs/tour/}.
A short visual tour of Emacs, officially maintained by the Emacs developers.
@item Unofficial mini-manual
At @url{https://tuhdo.github.io/emacs-tutor.html}.
A shorter manual which contains nice animated images of using Emacs.
@end table
@end cartouche
@node Program source, Documentation, Coding conventions, Developing
@section Program source
@cindex Source file navigation
@cindex Navigating source files
@cindex Program structure convention
@cindex Convention for program source
@cindex Gnuastro program structure convention
Besides the fact that all the programs share some functions that were explained in @ref{Library}, everything else about each program is completely independent.
Recall that Gnuastro is written for an active astronomer/scientist (not a passive one who just uses the software).
It must thus be easily navigable.
Hence there are fixed source files (that contain fixed operations) that must be present in all programs; these are discussed fully in @ref{Mandatory source code files}.
To easily understand the explanations in this section you can use @ref{The TEMPLATE program} which contains the bare minimum code for one working program.
This template can also be used to easily add new utilities: just copy and paste the directory and replace @code{TEMPLATE} with your program's name.
@menu
* Mandatory source code files:: Description of files common to all programs.
* The TEMPLATE program:: Template for easy creation of a new program.
@end menu
@node Mandatory source code files, The TEMPLATE program, Program source, Program source
@subsection Mandatory source code files
Some programs might need lots of source files and if there is no fixed convention, navigating them can become very hard for a new inquirer into the code.
The following source files exist in every program's source directory (which is located in @file{bin/progname}).
For small programs, these files are enough.
Larger programs will need more files and developers are encouraged to define any number of new files.
It is just important that the following list of files exist and do what is described here.
When creating other source files, please choose filenames that are a complete single word: do not abbreviate (abbreviations are cryptic).
For a minimal program containing all these files, see @ref{The TEMPLATE program}.
@vtable @file
@item main.c
@cindex @code{main} function
Each executable has a @code{main} function, which is located in @file{main.c}.
Therefore this file is the starting point when reading any program's source code.
No actual processing functions should be defined in this file; the function(s) in this file are only meant to connect the most high-level steps of each program.
Generally, @code{main} will first call the top user interface function to read user input and make all the preparations.
Then it will pass control to the top processing function for that program.
The functions to do both these jobs must be defined in other source files.
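As a rough, hypothetical sketch (using the file, structure and function naming conventions that are described in the items below; this is not copied from any actual program), @file{main.c} might thus look like this:
@example
#include <config.h>

#include <stdlib.h>
#include <string.h>

#include "main.h"

#include "ui.h"

int
main(int argc, char *argv[])
@{
  struct prognameparams p;

  /* Initialize all elements of the parameters structure. */
  memset(&p, 0, sizeof p);

  /* Read user input and make all the preparations. */
  ui_read_check_inputs_setup(argc, argv, &p);

  /* Pass control to the top processing function. */
  progname(&p);

  /* Return successfully. */
  return EXIT_SUCCESS;
@}
@end example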
@item main.h
@cindex Top root structure
@cindex @code{prognameparams}
@cindex Root parameter structure
@cindex Main parameters C structure
All the major parameters which will be used in the program must be stored in a structure which is defined in @file{main.h}.
The name of this structure is usually @code{prognameparams}, for example, @code{cropparams} or @code{noisechiselparams}.
So @code{#include "main.h"} will be a staple in all the program's source files.
It is also regularly the first (and only) argument of many of the program's functions which greatly helps in readability.
Keeping all the major parameters of a program in this structure has the major benefit that most functions will only need one argument: a pointer to this structure.
This will significantly facilitate the job of the programmer, the inquirer and the computer.
All the programs in Gnuastro are designed to be low-level, small and independent parts, so this structure should not get too large.
@cindex @code{p}
The main root structure of all programs contains at least one instance of the @code{gal_options_common_params} structure.
This structure will keep the values to all common options in Gnuastro's programs (see @ref{Common options}).
This top root structure is conveniently called @code{p} (short for parameters) by all the functions in the programs and the common options parameters within it are called @code{cp}.
With this convention any reader can immediately understand where to look for the definition of one parameter.
For example, you know that @code{p->cp.output} is in the common parameters while @code{p->threshold} is in the program's parameters.
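For example, a hypothetical program's @file{main.h} may define its root structure like the sketch below (except for @code{cp}, the member names are only for illustration):
@example
struct prognameparams
@{
  /* Parameters of the common options (in all programs). */
  struct gal_options_common_params cp;

  /* Program-specific parameters. */
  char  *inputname;     /* Name of input file.      */
  float  threshold;     /* Threshold for detection. */
@};
@end example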
@cindex Structure de-reference operator
@cindex Operator, structure de-reference
With this basic root structure, the source code of functions can potentially become full of structure de-reference operators (@command{->}) which can make the code very unreadable.
In order to avoid this, whenever a structure element is used more than a couple of times in a function, a variable of the same type, and with the same name (so it can be searched), should be defined and given the value of that structure element at definition time.
Here is an example:
@example
char *hdu=p->cp.hdu;
float threshold=p->threshold;
@end example
@item args.h
@cindex GNU C library
@cindex Argp argument parser
The options particular to each program are defined in this file.
Each option is defined by a block of parameters in @code{program_options}.
These blocks are all you should modify in this file; leave the bottom group of definitions untouched.
These are fed directly into the GNU C library's Argp facilities; it is recommended to have a look at the Argp documentation to better understand what is going on, although this is not required here.
Each element of the block defining an option is described under @code{argp_option} in @file{bootstrapped/lib/argp.h} (from Gnuastro's top source directory).
Note that the last few elements of this structure are Gnuastro additions (not documented in the standard Argp manual).
The values to these last elements are defined in @file{lib/gnuastro/type.h} and @file{lib/gnuastro-internal/options.h} (from Gnuastro's top source directory).
@item ui.h
Besides declaring the exported functions of @code{ui.c}, this header also keeps the ``key''s to every program-specific option.
The first class of keys is for the options that have a short-option version (a single letter, see @ref{Options}).
The character that is defined here is the option's short option name.
The list of available alphabet characters can be seen in the comments.
Recall that some common options also take some characters, for those, see @file{lib/gnuastro-internal/options.h}.
The second group of options are those that do not have a short option alternative.
Only the first in this group needs an explicit value (@code{1000}); the rest are automatically given values by C's @code{enum} definition, so the actual value is irrelevant and must never be used directly: always use the name.
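For example, the option keys of a hypothetical program may be defined like the sketch below in @file{ui.h} (the option names are only for illustration):
@example
enum option_keys_enum
@{
  /* With a short-option version. */
  UI_KEY_THRESHOLD = 't',
  UI_KEY_WIDTH     = 'w',

  /* Only with a long version (automatically valued). */
  UI_KEY_FIRSTONLYLONG = 1000,
  UI_KEY_SECONDONLYLONG
@};
@end example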
@item ui.c
@cindex User interface functions
@cindex Functions for user interface
Everything related to reading the user input arguments and options, checking the configuration files and checking the consistency of the input parameters before the actual processing is run should be done in this file.
Since most of these functions are alike between programs, with only the internal checks and structure parameters differing, we recommend going through the @code{ui.c} of @ref{The TEMPLATE program}, or of several other programs, for a better understanding.
The most high-level function in @file{ui.c} is named @code{ui_read_check_inputs_setup}.
It accepts the raw command-line inputs and a pointer to the root structure for that program (see the explanation for @file{main.h}).
This is the function that @code{main} calls.
The basic idea of the functions in this file is that the processing functions should need a minimum number of such checks.
With this convention, an inquirer who wants to understand only one part of the code (mostly the processing part, not the user-input details and sanity checks) can easily do so in the later files.
It also makes all the errors related to input appear before the processing begins which is more convenient for the user.
@item progname.c, progname.h
@cindex Top processing source file
The high-level processing functions in each program are in a file named @file{progname.c}, for example, @file{crop.c} or @file{noisechisel.c}.
The function within these files which @code{main} calls is also named after the program, for example:
@example
void
crop(struct cropparams *p)
@end example
@noindent
or
@example
void
noisechisel(struct noisechiselparams *p)
@end example
@noindent
In this manner, if an inquirer is interested in the processing steps, they can immediately come and check this file for the first processing step without having to go through @file{main.c} and @file{ui.c} first.
In most situations, any failure in any step of the programs will result in an informative error message and an immediate abort in the program.
So there is usually no need for return values.
Under more complicated situations where a return value might be necessary, @code{void} will be replaced with an @code{int} in the examples above.
This value must be directly returned by @code{main}, so it has to be an @code{int}.
@item authors-cite.h
@cindex Citation information
This header file keeps the global variables for the program's authors and its BibTeX record for citation.
They are used in the outputs of the common options @option{--version} and @option{--cite}, see @ref{Operating mode options}.
@item progname-complete.bash
@cindex GNU Bash
@cindex Bash auto-complete
@cindex Completion in the shell
@cindex Bash programmable completion
@cindex Autocomplete (in the shell/Bash)
This shell script is used for implementing auto-completion features when running Gnuastro's programs within GNU Bash.
For more on the concept of shell auto-completion and how it is managed in Gnuastro, see @ref{Bash programmable completion}.
These files assume a set of common shell functions that have the prefix @code{_gnuastro_autocomplete_} in their name and are defined in @file{bin/complete.bash.in} (of the source directory, and under version control) and @file{bin/complete.bash.built} (built during the building of Gnuastro in the build directory).
During Gnuastro's build, all these Bash completion files are merged into one installed file; the user can then @code{source} it in their Bash startup file, for example, see @ref{Quick start}.
@end vtable
@node The TEMPLATE program, , Mandatory source code files, Program source
@subsection The TEMPLATE program
The extra creativity offered by libraries comes at a cost: you have to actually write your @code{main} function and get your hands dirty in managing user inputs: are all the necessary parameters given a value? is the input in the correct format? do the options and the inputs correspond? and many other similar checks.
So when an operation has well-defined inputs and outputs and is commonly needed, it is much more worthwhile to simply use all the great features that Gnuastro has already defined for such operations.
To make it easier to learn/apply the internal program infrastructure discussed in @ref{Mandatory source code files}, Gnuastro ships with a template program in its version-controlled source (see @ref{Version controlled source}).
The template program is not distributed in the Gnuastro tarball, so it does not confuse people using the tarball.
The @file{bin/TEMPLATE} directory in Gnuastro's Git repository contains the bare minimum files necessary to define a new program and all the basic/necessary files/functions are pre-defined there.
Below you can see a list of initial steps to take for customizing this template.
We just assume that after cloning Gnuastro's history, you have already bootstrapped Gnuastro, if not, please see @ref{Bootstrapping}.
@enumerate
@item
Select a name for your new program (for example, @file{myprog}).
@item
Copy the @file{TEMPLATE} directory to a directory with your program's name:
@example
$ cp -R bin/TEMPLATE bin/myprog
@end example
@item
As with all source files in Gnuastro, all the files in the template also have a copyright notice at their top.
Open all the files and correct these notices: 1) the first line contains a single-line description of the program, 2) in the second line only the name of your program needs to be fixed, and 3) add your name and email as a ``Contributing author''.
As your program grows, you will need to add new files, do not forget to add this notice in those new files too, just put your name and email under ``Original author'' and correct the copyright years.
@item
Open @file{configure.ac} in the top Gnuastro source.
This file manages the operations that are done when a user runs @file{./configure}.
Going down the file, you will notice repetitive parts for each program.
You will notice that the program names follow an alphabetic ordering in each part.
There is also a commented line/patch for the @file{TEMPLATE} program in each part.
You can copy one line/patch (from the program above or below your desired name for example) and paste it in the proper place for your new program.
Then correct the names of the copied program to your new program name.
There are multiple places where this has to be done, so be patient and go down to the bottom of the file.
Finally, add @file{bin/myprog/Makefile} to @code{AC_CONFIG_FILES}; only here, the ordering depends on the length of the name (it is not alphabetical).
@item
Open @file{Makefile.am} in the top Gnuastro source.
Similar to the previous step, add your new program similar to all the other programs.
Here there are only two places: 1) at the top where we define the conditionals (three lines per program), and 2) immediately under it as part of the value for @code{SUBDIRS}.
@item
Open @file{doc/Makefile.am} and similar to @file{Makefile.am} (above), add the proper entries for the man page of your program to be created (here, the variable that keeps all the man pages to be created is @code{dist_man_MANS}).
Then scroll down and add a rule to build the man page similar to the other existing rules (in alphabetical order).
Do not forget to add a short one-line description here, it will be displayed on top of the man page.
@item
Change @code{TEMPLATE.c} and @code{TEMPLATE.h} to @code{myprog.c} and
@code{myprog.h} in the file names:
@example
$ cd bin/myprog
$ mv TEMPLATE.c myprog.c
$ mv TEMPLATE.h myprog.h
@end example
@item
@cindex GNU Grep
Correct all occurrences of @code{TEMPLATE} in the input files to @code{myprog} (in short or long format).
You can get a list of all occurrences with the following command.
If you use Emacs, it will be able to parse the Grep output and open the proper file and line automatically.
So this step can be very easy.
@example
$ grep --color -nHi -e template *
@end example
@item
Run the following commands to rebuild the configuration and build system,
and then to configure and build Gnuastro (which now includes your exciting
new program).
@example
$ autoreconf -f
$ ./configure
$ make
@end example
@item
You are done! You can now start customizing your new program to do your special processing.
When it is complete, just do not forget to add checks also, so it can be tested at least once on a user's system with @command{make check}, see @ref{Test scripts}.
Finally, if you would like to share it with all Gnuastro users, inform us so we merge it into Gnuastro's main history.
@end enumerate
@node Documentation, Building and debugging, Program source, Developing
@section Documentation
Documentation (this book) is an integral part of Gnuastro (see @ref{Science and its tools}).
Documentation is not considered a separate project and must be written by its developers.
Users can make edits/corrections, but the initial writing must be by the developer.
So, no change is considered valid for implementation unless the respective parts of the book have also been updated.
The following procedure can be a good suggestion to take when you have a new idea and are about to start implementing it.
The steps below are not a requirement, the important thing is that when you send your work to be included in Gnuastro, the book and the code have to both be fully up-to-date and compatible, with the purpose of the update very clearly explained.
You can follow any strategy you like, the following strategy was what we have found to be most useful until now.
@enumerate
@item
Edit the book and fully explain your desired change, such that your idea is completely embedded in the general context of the book with no sense of discontinuity for a first time reader.
This will allow you to plan the idea much more accurately and in the general context of Gnuastro (a particular program or library).
Later on, when you are coding, this general context will significantly help you as a road-map.
A very important part of this process is the program/library introduction.
These first few paragraphs explain the purposes of the program or library and are fundamental to Gnuastro.
Before actually starting to code, explain your idea's purpose thoroughly in the start of the respective/new section you wish to work on.
While actually writing its purpose for a new reader, you will probably get some valuable and interesting ideas that you had not thought of before.
This has occurred several times during the creation of Gnuastro.
If an introduction already exists, embed or blend your idea's purpose with the existing introduction.
We emphasize that doing this is equally useful for you (as the programmer) as it is useful for the user (reader).
Recall that the purpose of a program is very important, see @ref{Program design philosophy}.
As you have already noticed for every program/library, it is very important that the basics of the science and technique be explained in separate subsections prior to the `Invoking Programname' subsection.
If you are writing a new program or your addition to an existing program involves a new concept, also include such subsections and explain the concepts so a person completely unfamiliar with the concepts can get a general initial understanding.
You do not have to go deep into the details, just enough to get an interested person (with absolutely no background) started with some good pointers/links to where they can continue studying if they are more interested.
If you feel you cannot do that, then you have probably not understood the concept yourself.
If you feel you do not have the time, then think about yourself as the reader in one year: you will forget almost all the details, so now that you have done all the theoretical preparations, add a few more hours and document it.
Then, in one year, when you find a bug or want to add a new feature, you will not have to prepare as much.
Bear in mind that your only limitation on length is the fatigue of the reader after reading a long text, nothing else.
So as long as you keep it relevant/interesting for the reader, there is no page number limit/cost.
It might also help if you start discussing the usage of your idea in the `Invoking ProgramName' subsection (explaining the options and arguments you have in mind) at this stage too.
Actually starting to write it here will really help you later when you are coding.
@item
After you have finished adding your initial intended plan to the book, then start coding your change or new program within the Gnuastro source files.
While you are coding, you will notice that some things should be different from what you wrote in the book (your initial plan).
So correct them as you are actually coding, but do not worry too much about missing a few things (see the next step).
@item
After your work has been fully implemented, read the section documentation from the start and check that you did not miss any change made while coding.
Also, ensure that the context is fairly continuous for a first-time reader (who has not seen the book or known Gnuastro before your change).
@item
If the change is notable, also update the @file{NEWS} file.
@end enumerate
@node Building and debugging, Test scripts, Documentation, Developing
@section Building and debugging
@cindex GNU Libtool
@cindex GNU Autoconf
@cindex GNU Automake
@cindex GNU build system
To build the various programs and libraries in Gnuastro, the GNU build system is used which defines the steps in @ref{Quick start}.
It consists of GNU Autoconf, GNU Automake and GNU Libtool which are collectively known as GNU Autotools.
They provide a very portable system to check the host's environment and compile Gnuastro based on that.
They also make installing everything in their standard places very easy for the programmer.
Most of the files with small-caps (lower-case) names that you see in the top source directory of the tarball are created by these three tools (see @ref{Version controlled source}).
To facilitate the building and testing of your work during development, Gnuastro comes with two useful scripts:
@table @file
@cindex @file{developer-build}
@item developer-build
This is more fully described in @ref{Configure and build in RAM}.
During development, you will usually run this command only once (at the start of your work).
@cindex @file{tests/during-dev.sh}
@item tests/during-dev.sh
This script is designed to be run each time you make a change and want to test your work (with some possible input and output).
The script itself is heavily commented and thoroughly describes the best way to use it, so we will not repeat it here.
For a usage example, see @ref{Forking tutorial}.
As a short summary: you specify the build directory, an output directory (in which the built program is run, and which also contains the inputs), the program's short name, and the arguments and options that it should be run with.
This script will then build Gnuastro, go to the output directory and run the built executable from there.
One option for the output directory might be your desktop, so you can easily see the output files and delete them when you are finished.
The main purpose of these scripts is to keep your source directory clean and facilitate your development.
@end table
@cindex Debugging
@cindex Optimization
By default all the programs are compiled with optimization flags for increased speed.
A side effect of optimization is that valuable debugging information is lost.
All the libraries are also linked as shared libraries by default.
Shared libraries further complicate the debugging process and significantly slow down the compilation (the @command{make} command).
So during development it is recommended to configure Gnuastro as follows:
@example
$ ./configure --enable-debug
@end example
@noindent
In @file{developer-build} you can ask for this behavior through the
@option{--debug} option, see @ref{Separate build and source directories}.
In order to understand the building process, you can go through the Autoconf, Automake and Libtool manuals; like all GNU manuals, they provide both a great tutorial and technical documentation.
The ``A small Hello World'' section in Automake's manual (in chapter 2) can be a good starting guide after you have read the separate introductions.
@node Test scripts, Bash programmable completion, Building and debugging, Developing
@section Test scripts
@cindex Test scripts
@cindex Gnuastro test scripts
As explained in @ref{Tests}, for every program some simple tests are written to check the various independent features of the program.
All the tests are placed in the @file{tests/} directory.
The @file{tests/prepconf.sh} script is the first `test' that will be run.
It will copy all the configuration files from the various directories to a @file{tests/.gnuastro} directory (which it will make) so the various tests can set the default values.
This script will also make sure the programs do not go searching for user or system-wide configuration files, to avoid mixing in values from different Gnuastro versions on the system.
For each program, the tests are placed inside directories with the program name.
Each test is written as a shell script.
The last line of this script is the test which runs the program with certain parameters.
The return value of this script determines the fate of the test, see the ``Support for test suites'' chapter of the Automake manual for a very nice and complete explanation.
In every script, two variables are defined at first: @code{prog} and @code{execname}.
The first specifies the program name and the second the location of the executable.
@cindex Build tree
@cindex Source tree
@cindex @file{developer-build}
The most important thing to have in mind about all the test scripts is that they are run from inside the @file{tests/} directory of the ``build tree'', which can be different from the directory they are stored in (known as the ``source tree'')@footnote{The @file{developer-build} script also uses this feature to keep the source and build directories separate (see @ref{Separate build and source directories}).}.
This distinction is made by GNU Autoconf and Automake (which configure, build and install Gnuastro) so that you can install the program even if you do not have write access to the directory keeping the source files.
See the ``Parallel build trees (a.k.a. VPATH builds)'' section in the Automake manual for a nice explanation.
Because of this, any necessary inputs that are distributed in the tarball@footnote{In many cases, the inputs of a test are the outputs of previous tests; this does not apply to that class of inputs, because all outputs of previous tests are already in the ``build tree''.}, for example, the catalogs necessary for checks in MakeProfiles and Crop, must be identified with the @command{$topsrc} prefix instead of @command{../} (for the top source directory that is unpacked).
This @command{$topsrc} variable points to the source tree where the script can find the source data (it is defined in @file{tests/Makefile.am}).
The executables and other test products were built in the build tree (where they are being run), so they do not need to be prefixed with that variable.
This is also true for images or files that were produced by other tests.
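As a rough, hypothetical sketch (the program, option and file names are only for illustration; @code{$topsrc} is defined in @file{tests/Makefile.am}), a test script may look like this:
@example
# Basic settings, as described above.
prog=crop
execname=../bin/$prog/ast$prog

# The catalog is distributed in the tarball, so it needs the
# $topsrc prefix; `convolved.fits' was made by a previous test,
# so it is already in the build tree and needs no prefix.
$execname convolved.fits --catalog=$topsrc/tests/$prog/cat.txt
@end example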
@node Bash programmable completion, Developer's checklist, Test scripts, Developing
@section Bash programmable completion
@cartouche
@strong{Under development:} While work on TAB completion is ongoing, it is not yet fully ready, please see the notice at the start of @ref{Shell TAB completion}.
@end cartouche
@cindex Bash auto-complete
@cindex Completion in the shell
@cindex Bash programmable completion
@cindex Autocomplete (in the shell/Bash)
Gnuastro provides Programmable completion facilities in Bash.
This greatly helps users reach their desired result with minimal keystrokes, and helps them spend less time figuring out option names and their acceptable values.
Gnuastro's completion script not only completes the half-written commands, but also prints suggestions based on previous arguments.
Imagine a scenario where we need to download three columns containing the right ascension, declination, and parallax from the Gaia DR3 dataset.
We are not sure how these columns are abbreviated or spelled, so we call the command below and store the column names in a file such as @file{gaia-dr3-columns.txt}.
@example
$ astquery gaia --information > gaia-dr3-columns.txt
@end example
@noindent
Then we need to memorize or copy the column names of interest, and specify an output FITS file name such as @file{gaia.fits}:
@example
$ astquery gaia --dataset=dr3 --output=gaia.fits \
           --column=ra,dec,parallax
@end example
@noindent
However, this is much easier using the auto-completion feature:
@example
$ astquery gaia --dataset=dr3 --output=gaia.fits --column=@key{[TAB]}
@end example
@noindent
After pressing @key{[TAB]}, a full list of the Gaia DR3 dataset's column names will be displayed.
Typing the first key of the desired column and pressing @key{[TAB]} again will limit the displayed list to only the matching ones until the desired column is found.
@menu
* Bash TAB completion tutorial:: Fast tutorial to get you started on concepts.
* Implementing TAB completion in Gnuastro:: How Gnuastro uses Bash auto-completion features.
@end menu
@node Bash TAB completion tutorial, Implementing TAB completion in Gnuastro, Bash programmable completion, Bash programmable completion
@subsection Bash TAB completion tutorial
When a user presses the @key{[TAB]} key while typing commands, Bash will inspect the input to find a relevant ``completion specification'', or @command{compspec}.
If available, the @command{compspec} will generate a list of possible suggestions to complete the current word.
A custom @command{compspec} can be generated for any command using @i{bash completion builtins}@footnote{@url{https://www.gnu.org/software/bash/manual/html_node/Programmable-Completion-Builtins.html}} and the bash variables that start with the @code{COMP} keyword@footnote{@url{https://www.gnu.org/software/bash/manual/html_node/Bash-Variables.html}}.
First, let's see a quick example of how you can make a completion script in just one line of code.
With the command below, we are asking Bash to give us three suggestions for @command{echo}: @code{foo}, @code{bar} and @code{bAr}.
Please run it in your terminal for the next steps.
@example
$ complete -W "foo bar bAr" echo
@end example
The possible completion suggestions are fed into @command{complete} using the @option{-W} option followed by a list of space delimited words.
Let's see it in action:
@example
$ echo @key{[TAB][TAB]}
bar bAr foo
@end example
Nicely done!
Just note that the strings are sorted alphabetically, not in the original order.
Also, an arbitrary number of space characters are printed between them (based on the number of suggestions and terminal size, etc.).
Now, if you type @samp{f} and press @key{[TAB]}, Bash will automatically figure out that you wanted @code{foo} and it will be completed right away:
@example
$ echo f@key{[TAB]}
$ echo foo
@end example
@noindent
However, nothing will happen if you type @samp{b} and press @key{[TAB]} only @i{once}.
This is because of the ambiguity: there is not enough information to figure out which suggestion you want: @code{bar} or @code{bAr}?
So, if you press @key{[TAB]} twice, it will print out all the options that start with @samp{b}:
@example
$ echo b@key{[TAB][TAB]}
bar bAr
$ echo ba@key{[TAB]}
$ echo bar
@end example
Not bad for a simple program.
But what if you need more control?
By passing the @option{-F} option to @command{complete} instead of @option{-W}, it will run a @i{function} for generating the suggestions, instead of using a static string.
For example, let's assume that the expected value after @code{foo} is the number of files in the current directory.
Since the logic is getting more complex, let's write and save the commands below into a shell script with an arbitrary name such as @file{completion-tutorial.sh}:
@example
@verbatim
$ cat completion-tutorial.sh
_echo(){
    if [ "$3" == "foo" ]; then
        COMPREPLY=( $(ls | wc -l) )
    else
        COMPREPLY=( $(compgen -W "foo bar bAr" -- "$2") )
    fi
}
complete -F _echo echo
@end verbatim
@end example
@noindent
We will look at it in detail soon.
But for now, let's @command{source} the file into your current terminal and check if it works as expected:
@example
$ source completion-tutorial.sh
$ echo @key{[TAB][TAB]}
foo bar bAr
$ echo foo @key{[TAB]}
$ touch empty.txt
$ echo foo @key{[TAB]}
@end example
@noindent
Success!
As you see, this allows for setting up highly customized completion scripts.
Now let's have a closer look at the @file{completion-tutorial.sh} completion script from above.
First, the @samp{-F} option in front of the @command{complete} command indicates that we want the shell to execute the @command{_echo} function whenever @command{echo} is called.
As a convention, the function name should be the same as the program name, but prefixed with an underscore (@samp{_}).
Within the @command{_echo} function, we're checking if @code{$3} is equal to @option{foo}.
In Bash's auto-completion, @code{$3} means the word @b{before} the current cursor position.
In fact, these are the arguments that the @command{_echo} function is receiving:
@table @code
@item $1
The name of the command, here it is @samp{echo}.
@item $2
The current word being completed (empty unless we are in the middle of typing a word).
@item $3
The word before the word being completed.
@end table
To tell the completion script what to reply with, we use the @command{COMPREPLY} array.
This array holds all the suggestions that @command{complete} will show for the user in the end.
In the example above, we simply give it the string output of @samp{ls | wc -l}.
Finally, we have the @command{compgen} command.
According to the Bash programmable completion builtins manual, the command @code{compgen [OPTION] [WORD]} generates possible completion matches for @code{[WORD]} according to @code{[OPTIONS]}.
Using the @samp{-W} option asks @command{compgen} to generate a list of words from an input string.
This is known as @i{Word Splitting}@footnote{@url{https://www.gnu.org/software/bash/manual/html_node/Word-Splitting.html}}.
@command{compgen} will automatically use the @command{$IFS} variable to split the string into a list of words.
You can check the default delimiters by calling:
@example
$ printf %q "$IFS"
@end example
@noindent
The default value of @command{$IFS} might be @samp{ \t\n}.
This means the SPACE, TAB, and New-line characters.
Finally, notice the @samp{-- "$2"} in this command:
@example
COMPREPLY=( $(compgen -W "foo bar bAr" -- "$2") )
@end example
@noindent
Here, the @samp{--} instructs @command{compgen} to only reply with a list of words that match @command{$2}, i.e. the current word being completed.
That is why when you type the letter @samp{b}, @command{complete} will reply only with its matches (@samp{bar} and @samp{bAr}), and will exclude @samp{foo}.
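You can try this filtering directly on the command-line; @command{compgen} is a Bash built-in, so it can also be run interactively:
@example
$ compgen -W "foo bar bAr" -- b
bar
bAr
@end example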
Let's get a little more realistic, and develop a very basic completion script for one of Gnuastro's programs.
Since the @option{--help} option will list all the options available in Gnuastro's programs, we are going to use its output and create a very basic TAB completion for it.
Note that the actual TAB completion in Gnuastro is a little more complex than this and fully described in @ref{Implementing TAB completion in Gnuastro}.
But this is a good exercise to get started.
We will use @command{asttable} as the demo, and the goal is to suggest all options that this program has to offer.
You can print all of them (with a lot of extra information) with this command:
@example
$ asttable --help
@end example
Let's write an @command{awk} script that prints all of the long options.
When printing the option names we can safely ignore the short options, because a user who knows about the short options already knows exactly what they want!
Also, due to their single-character length, they will be too cryptic without their descriptions.
One way to catch the long options is through @command{awk} as shown below.
We only keep the lines that 1) start with white space, 2) have @samp{-} as their first non-white character, and 3) contain @samp{--} followed by alphanumeric characters.
Within those lines, if the first word ends in a comma (@samp{,}), the first word is the short option, so we want the second word (which is the long option).
Otherwise, the first word is the long option.
But for options that take a value, this will also include the format of the value (for example, @option{--column=STR}).
So with a @command{sed} command, we remove everything that is after the equal sign, but keep the equal sign itself (to highlight to the user that this option should have a value).
@example
$ asttable --help \
    | awk '/^ / && $1 ~ /^-/ && /--+[a-zA-Z0-9]*/ @{ \
           if($1 ~ /,$/) name=$2; \
           else          name=$1; \
           print name@}' \
    | sed -e's|=.*|=|'
@end example
If we wanted to show all the options to the user, we could simply feed the output of the command above into @command{compgen} and then @code{COMPREPLY}.
But, we need @emph{smarter} completions: we want to offer suggestions based on the previous options that have already been typed in.
Just beware!
Sometimes the program might not act as you expected.
In that case, debug messages can clear things up: add an @command{echo} command before the completion function ends and check all the current variables.
This can save a lot of headaches, since things can get complex.
Take the option @code{--wcsfile=} for example.
This option accepts a FITS file.
Usually, the user is trying to feed a FITS file from the current directory.
So it would be nice if we could help them and print only a list of FITS files sitting in the current directory -- or whatever directory they have typed in so far.
But there's a catch.
When splitting the user's input line, Bash will consider @samp{=} as a separate word.
To avoid getting caught in changing the @command{IFS} or @command{WORDBREAKS} values, we will simply check for @samp{=} and act accordingly.
That is, if the previous word is a @samp{=}, we will ignore it and take the word before that as the previous word.
Also, if the current word is a @samp{=}, ignore it completely.
Taking all of that into consideration, the code below might serve well:
@verbatim
_asttable(){
    if [ "$2" = "=" ]; then word=""
    else                    word="$2"
    fi

    if [ "$3" = "=" ]; then prev="${COMP_WORDS[COMP_CWORD-2]}"
    else                    prev="${COMP_WORDS[COMP_CWORD-1]}"
    fi

    case "$prev" in
        --wcsfile)
            COMPREPLY=( $(compgen -f -X "!*.[fF][iI][tT][sS]" \
                                  -- "$word") )
            ;;
    esac
}
complete -o nospace -F _asttable asttable
@end verbatim
@noindent
To test the code above, write it into @file{asttable-tutorial.sh}, and load it into your running terminal with this command:
@example
$ source asttable-tutorial.sh
@end example
If you then go to a directory that has at least one FITS file (with a @file{.fits} suffix, among other files), you can check out the function by typing the following command.
You will see that only files ending in @file{.fits} are shown, not any other file.
@example
$ asttable --wcsfile=@key{[TAB][TAB]}
@end example
The code above first identifies the current and previous words.
It then checks if the previous word is equal to @code{--wcsfile} and if so, fills @code{COMPREPLY} array with the necessary suggestions.
We are using @code{case} here (instead of @code{if}) because in a real scenario, we need to check many more values and @code{case} is far better suited for such cases (cleaner and more efficient code).
The @option{-f} option in @command{compgen} indicates we're looking for a file.
The @option{-X} option @emph{filters out} the filenames that match the next regular expression pattern.
Therefore we should start the regular expression with @samp{!} if we want the files matching the regular expression.
The @code{-- "$word"} component collects only filenames that match the current word being typed.
And last but not least, the @samp{-o nospace} option in the @command{complete} command instructs the completion script to @emph{not} append a white space after each suggestion.
That is important because, with the long format of an option, the value is clearer when it sticks to the option name with an @samp{=} sign.
You have now written a very basic and working TAB completion script that can easily be generalized to include more options (and be good for a single/simple program).
However, Gnuastro has many programs that share many similar things and the options are not independent.
Also, complex situations do often come up: for example, some people use a @file{.fit} suffix for FITS files and others do not even use a suffix at all!
So in practice, things need to get a little more complicated, but the core concept is what you learnt in this section.
We just modularize the process (breaking logically independent steps into separate functions to use in different situations).
In @ref{Implementing TAB completion in Gnuastro}, we will review the generalities of Gnuastro's implementation of Bash TAB completion.
@node Implementing TAB completion in Gnuastro, , Bash TAB completion tutorial, Bash programmable completion
@subsection Implementing TAB completion in Gnuastro
The basics of Bash auto-completion were reviewed in @ref{Bash TAB completion tutorial}.
Gnuastro is a very complex package of many programs that share many similar features, so implementing those principles in an easily maintainable manner requires a modular solution.
As a result, Bash's TAB completion is implemented as multiple files in Gnuastro:
@table @asis
@item @file{bin/completion.bash.built} (in build directory, automatically created)
This file contains the values of all Gnuastro options or arguments that take fixed strings as values (not file names).
For example, the names of Arithmetic's operators (see @ref{Arithmetic operators}), or spectral line names (like @option{--obsline} in @ref{CosmicCalculator input options}).
This file is created automatically during the building of Gnuastro.
The recipe to build it is available in Gnuastro's top-level @file{Makefile.am} (under the target @code{bin/completion.bash}).
It parses the respective Gnuastro source file that contains the necessary user-specified strings.
All the acceptable values are then stored as shell variables (within a function).
@item @file{bin/completion.bash.in} (in source directory, under version control)
All the low-level completion functions that are common to all programs are stored here.
It thus contains functions that will parse the command-line or files, or suggest the completion replies.
@item @file{PROGNAME-complete.bash} (in source directory, under version control)
All Gnuastro programs contain a @file{PROGNAME-complete.bash} script within their source (for more on the fixed files of each program, see @ref{Program source}).
This file contains the very high-level (program-specific) Bash programmable completion features; they are almost entirely built from the generic functions defined in the Gnuastro-generic Bash completion file (@file{bin/completion.bash.in}).
The top-level function that is called by Bash should be called @code{_gnuastro_autocomplete_PROGNAME} and its last line should be the @command{complete} command of Bash which calls this function.
The contents of @code{_gnuastro_autocomplete_PROGNAME} are almost identical for all the programs, it is just a very high-level function that either calls @code{_gnuastro_autocomplete_PROGNAME_arguments} to manage suggestions for the program's arguments or @code{_gnuastro_autocomplete_PROGNAME_option_value} to manage suggestions for the program's option values.
@end table
@noindent
The scripts above follow these conventions.
After reviewing the list (and the small hypothetical sketch after it), please also look into the functions themselves for examples of each point.
@itemize
@item
No global shell variables in any completion script: the contents of the files above are directly loaded into the user's environment.
So to keep the user's environment clean and avoid annoyance to the users, everything should be defined as shell functions, and any variable within the functions should be set as @code{local}.
@item
All the function names should start with `@code{_gnuastro_autocomplete_}', again to avoid populating the user's function name-space with possibly conflicting names.
@item
Outputs of functions should be written in the @code{local} variables of the higher-level functions that called them.
@end itemize
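For example, a low-level function and its caller may be sketched like this (these are hypothetical illustrations, not Gnuastro's actual internal functions):
@example
@verbatim
_gnuastro_autocomplete_list_fits(){
    # 'list' is declared as 'local' in the higher-level function
    # that calls this one, so setting it here adds no global
    # variable to the user's environment.
    list="$(ls *.[fF][iI][tT][sS] 2> /dev/null)"
}

_gnuastro_autocomplete_myprog_option_value(){
    local list=""
    _gnuastro_autocomplete_list_fits
    COMPREPLY=( $(compgen -W "$list" -- "$2") )
}
@end verbatim
@end example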
@node Developer's checklist, Gnuastro project webpage, Bash programmable completion, Developing
@section Developer's checklist
This is a checklist of things to do after applying your changes/additions
in Gnuastro:
@enumerate
@item
If the change is non-trivial, write test(s) in the @file{tests/progname/} directory to test the change(s)/addition(s) you have made.
Then add their file names to @file{tests/Makefile.am}.
@item
If your change involves a change in the command-line behavior of a Gnuastro program or script (for example, adding a new option or argument), create or update the respective @file{PROGNAME-complete.bash} file (in @file{bin/PROGNAME/}) as described in @ref{Bash programmable completion}.
@item
Run @command{$ make check} to make sure everything is working correctly.
@item
Make sure the documentation (this book) is completely up to date with your
changes, see @ref{Documentation}.
@item
Commit the change to your issue branch (see @ref{Production workflow} and @ref{Forking tutorial}).
Afterwards, run Autoreconf to generate the appropriate version number:
@example
$ autoreconf -f
@end example
@cindex Making a distribution package
@item
Finally, to make sure everything will be built, installed and checked correctly, run the following command (after re-configuring and rebuilding).
To greatly speed up the process, use multiple threads (8 in the example below; change it appropriately):
@example
$ make distcheck -j8
@end example
@noindent
This command will create a distribution file (ending with @file{.tar.gz}) and try to compile it in the most general cases, then it will run the tests on what it has built in its own mini-environment.
If @command{$ make distcheck} finishes successfully, you can safely send your changes to us to be implemented, or use them for your own purposes.
See @ref{Production workflow} and @ref{Forking tutorial}.
@end enumerate
@node Gnuastro project webpage, Developing mailing lists, Developer's checklist, Developing
@section Gnuastro project webpage
@cindex Bug
@cindex Issue
@cindex Tracker
@cindex GNU Savannah
@cindex Report a bug
@cindex Management hub
@cindex Feature request
@cindex Central management
@url{https://savannah.gnu.org/projects/gnuastro/, Gnuastro's central management hub}@footnote{@url{https://savannah.gnu.org/projects/gnuastro/}} is located on @url{https://savannah.gnu.org/, GNU Savannah}@footnote{@url{https://savannah.gnu.org/}}.
Savannah is the central software development management system for many GNU projects.
Through this central hub, you can view the list of activities that the developers are engaged in, their activity on the version controlled source, and other things.
Each defined activity in the development cycle is known as an `issue' (or `item').
An issue can be a bug (see @ref{Report a bug}), or a suggested feature (see @ref{Suggest new feature}) or an enhancement or generally any @emph{one} job that is to be done.
In Savannah, issues are classified into three categories or `trackers':
@table @asis
@cindex Mailing list: bug-gnuastro
@item Support
This tracker is a way for (possibly anonymous) users to get in touch with the Gnuastro developers.
It is a complement to the bug-gnuastro mailing list (see @ref{Report a bug}).
Anyone can post an issue to this tracker.
The developers will not submit an issue to this list.
They will only reassign the issues in this list to the other two trackers if they are valid@footnote{Some of the issues registered here might be due to a mistake on the user's side, not an actual bug in the program.}.
Ideally (when the developers have time to spend on Gnuastro; please do not forget that Gnuastro is a volunteer effort), there should be no open items in this tracker.
@item Bugs
This tracker contains all the known bugs in Gnuastro (problems with
the existing tools).
@item Tasks
The items in this tracker contain the future plans (or new
features/capabilities) that are to be added to Gnuastro.
@end table
@noindent
All the trackers can be browsed by a (possibly anonymous) visitor, but to edit and comment on the Bugs and Tasks trackers, you have to be registered on Savannah.
When posting an issue to a tracker, it is very important to choose the `Category' and `Item Group' options accurately.
The first contains a list of all Gnuastro's programs, along with `Installation', `New program' and `Webpage'.
The `Item Group' contains the nature of the issue: for example, whether it is a `Crash' in the software (a bug), a problem in the documentation (also a bug), a feature request, or an enhancement.
The set of horizontal links on the top of the page (starting with `Main' and `Homepage', and finishing with `News') is the easiest way to access these trackers (and other major aspects of the project) from any part of the project web page.
Hovering your mouse over them will open a drop down menu that will link you to the different things you can do on each tracker (for example, `Submit new' or `Browse').
When you browse each tracker, you can use the ``Display Criteria'' link above the list to limit the displayed issues to what you are interested in.
The `Category' and `Item Group' (explained above) are a good starting point.
@cindex Mailing list: gnuastro-devel
Any new issue that is submitted to any of the trackers, and any comment that is posted on an issue, is directly forwarded to the gnuastro-devel mailing list (@url{https://lists.gnu.org/mailman/listinfo/gnuastro-devel}, see @ref{Developing mailing lists} for more).
This allows anyone interested to stay up to date on the overall development activity in Gnuastro, and also provides an alternative (to Savannah) archive of the development discussions.
Therefore, it is not recommended to post directly to this mailing list; instead, do all the activities (for example, adding new issues, or commenting on existing ones) on Savannah.
@cartouche
@noindent
@strong{Do I need to be a member in Savannah to contribute to Gnuastro?}
No.
The full version controlled history of Gnuastro is available for anonymous download or cloning.
See @ref{Production workflow} for a description of Gnuastro's Integration-Manager Workflow.
In short, you can either send in patches, or make your own fork.
If you choose the latter, you can push your changes to your own fork and inform us.
We will then pull your changes and merge them into the main project.
Please see @ref{Forking tutorial} for a tutorial.
@end cartouche
@node Developing mailing lists, Contributing to Gnuastro, Gnuastro project webpage, Developing
@section Developing mailing lists
To keep the developers and interested users up to date with the activity
and discussions within Gnuastro, there are two mailing lists which you can
subscribe to:
@table @asis
@item @command{gnuastro-devel@@gnu.org}
@itemx (at @url{https://lists.gnu.org/mailman/listinfo/gnuastro-devel})
@cindex Mailing list: gnuastro-devel
All the posts made in the support, bugs and tasks discussions of @ref{Gnuastro project webpage} are also sent to this mailing address and archived.
By subscribing to this list you can stay up to date with the discussions that are going on between the developers before, during and (possibly) after working on an issue.
All discussions are either in the context of bugs or tasks which are done on Savannah and circulated to all interested people through this mailing list.
Therefore it is not recommended to post anything directly to this mailing list.
Any mail that Savannah sends to this list has a link under the title ``Reply to this item at:''.
That link will take you directly to the issue discussion page, where you can read the discussion history or join it.
While you are posting comments on the Savannah issues, be sure to update the meta-data.
For example, if the task/bug is not assigned to anyone and you would like to take it, change the ``Assigned to'' box, or if you want to report that it has been applied, change the status and so on.
All these meta-data changes are also clearly reported in the circulated email.
@item @command{gnuastro-commits@@gnu.org}
@itemx (at @url{https://lists.gnu.org/mailman/listinfo/gnuastro-commits})
@cindex Mailing list: gnuastro-commits
This mailing list is defined to circulate all commits that are done in Gnuastro's version controlled source, see @ref{Version controlled source}.
If you have any ideas or suggestions on the commits, please use the bug and task trackers on Savannah to follow up on the discussion; do not post to this list.
All the commits that are made for an already defined issue or task will state the respective ID so you can find it easily.
@end table
@node Contributing to Gnuastro, , Developing mailing lists, Developing
@section Contributing to Gnuastro
You have this great idea or have found a good fix to a problem which you would like to implement in Gnuastro.
You have also become familiar with the general design of Gnuastro in the previous sections of this chapter (see @ref{Developing}) and want to start working on and sharing your new addition/change with the whole community as part of the official release.
This is great and your contribution is most welcome.
This section and the next (see @ref{Developer's checklist}) are written in the hope of making it as easy as possible for you to share your great idea with the community.
@cindex FSF
@cindex Free Software Foundation
In this section we discuss the final steps you have to take: legal and technical.
From the legal perspective, the copyright of any work you do on Gnuastro has to be assigned to the Free Software Foundation (FSF) and the GNU operating system, or you have to sign a disclaimer.
We do this to ensure that Gnuastro can remain free in the future, see @ref{Copyright assignment}.
From the technical point of view, in this section we also discuss commit guidelines (@ref{Commit guidelines}) and the general version control workflow of Gnuastro in @ref{Production workflow}, along with a tutorial in @ref{Forking tutorial}.
Recall that before starting the work on your idea, be sure to checkout the
bugs and tasks trackers in @ref{Gnuastro project webpage} and announce your
work there so you do not end up spending time on something others have
already worked on, and also to attract similarly interested developers to
help you.
@menu
* Copyright assignment:: Copyright has to be assigned to the FSF.
* Commit guidelines:: Guidelines for commit messages.
* Production workflow:: Submitting your commits (work) for inclusion.
* Forking tutorial:: Tutorial on workflow steps with Git.
@end menu
@node Copyright assignment, Commit guidelines, Contributing to Gnuastro, Contributing to Gnuastro
@subsection Copyright assignment
@cindex Free Software Foundation
Gnuastro's copyright is owned by the Free Software Foundation (FSF) to ensure that Gnuastro always remains free.
The FSF has also provided a @url{https://www.fsf.org/licensing/contributor-faq, Contributor FAQ} to further clarify the reasons, so we encourage you to read it.
Professor Eben Moglen, of the Columbia University Law School, has given a nice summary of the reasons for this at @url{https://www.gnu.org/licenses/why-assign}.
Below we quote it verbatim for self-consistency (in case you are offline or reading in print).
@quotation
Under US copyright law, which is the law under which most free software programs have historically been first published, there are very substantial procedural advantages to registration of copyright.
And despite the broad right of distribution conveyed by the GPL, enforcement of copyright is generally not possible for distributors: only the copyright holder or someone having assignment of the copyright can enforce the license.
If there are multiple authors of a copyrighted work, successful enforcement depends on having the cooperation of all authors.
In order to make sure that all of our copyrights can meet the record keeping and other requirements of registration, and in order to be able to enforce the GPL most effectively, FSF requires that each author of code incorporated in FSF projects provide a copyright assignment, and, where appropriate, a disclaimer of any work-for-hire ownership claims by the programmer's employer.
That way we can be sure that all the code in FSF projects is free code, whose freedom we can most effectively protect, and therefore on which other developers can completely rely.
@end quotation
Please get in touch with the Gnuastro maintainer (currently Mohammad Akhlaghi, mohammad -at- akhlaghi -dot- org) to follow the procedures.
It is possible to do this for each change (good for a single contribution), and also more generally for all the changes/additions you do in the future within Gnuastro.
So even if you have already assigned the copyright of your work on another GNU package to the FSF, it has to be done again for Gnuastro.
The FSF has staff working on these legal issues and the maintainer will get you in touch with them to do the paperwork.
The maintainer will just be informed in the end so your contributions can be merged within the Gnuastro source code.
Gnuastro will gratefully acknowledge (see @ref{Acknowledgments and short history}) all the people who have assigned their copyright to the FSF and have thus helped to guarantee the freedom and reliability of Gnuastro.
The Free Software Foundation will also acknowledge your copyright contributions in the Free Software Supporter: @url{https://www.fsf.org/free-software-supporter} which will circulate to a very large community (225,910 people in July 2021).
See the archives for some examples and subscribe to receive interesting updates.
The very active code contributors (or developers) will also be recognized as project members on the Gnuastro project web page (see @ref{Gnuastro project webpage}) and can be given a @code{gnu.org} email address.
So your very valuable contribution and copyright assignment will not be forgotten and is highly appreciated by a very large community.
If you are reluctant to sign an assignment, a disclaimer is also acceptable.
@cartouche
@noindent
@strong{Do I need a disclaimer from my university or employer?} It depends on the contract with your university or employer.
From the FSF's @file{/gd/gnuorg/conditions.text}: ``If you are employed to do programming, or have made an agreement with your employer that says it owns programs you write, we need a signed piece of paper from your employer disclaiming rights to'' Gnuastro.
The FSF's copyright clerk will kindly help you decide, please consult the following email address: ``assign -at- gnu -dot- org''.
@end cartouche
@node Commit guidelines, Production workflow, Copyright assignment, Contributing to Gnuastro
@subsection Commit guidelines
To be able to cleanly integrate your work with the other developers, @strong{never commit on the @file{master} branch} (see @ref{Production workflow} for a complete discussion and @ref{Forking tutorial} for a cookbook example).
In short, leave @file{master} only for changes you fetch, or pull from the official repository (see @ref{Synchronizing}).
In the Gnuastro commit messages, we strive to follow these standards.
Note that in the early phases of Gnuastro's development, we are experimenting and so if you notice earlier commits do not satisfy some of the guidelines below, it is because they predate that guideline.
@table @asis
@item Commit title
The commits have to start with one short descriptive title.
The title is separated from the body with one blank line.
Run @command{git log} to see some of the most recent commit messages as an example.
In general, the title should satisfy the following conditions (a hypothetical example is given after the list):
@itemize
@item
It is best for the title to be short, about 60 (or even 50) characters.
Most emulated command-line terminals are about 80 characters wide.
However, we should also allow for the commit hashes which are printed in @command{git log --oneline}, and also branch names or the graph structure outputs of @command{git log} which are also commonly used.
@item
The title should not finish with any full-stops or periods (`@key{.}').
@end itemize
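For example, here is a hypothetical title following these conditions (the program and option names are only place-holders):
@example
Crop: new --hypothetical option to demonstrate titles
@end example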
@item Commit body
@cindex Mailing list: gnuastro-commits
The body of the commit message is separated from the title by one empty line.
Recall that anyone who has subscribed to the @command{gnuastro-commits} mailing list will get the commit in their email after it has been pushed to @file{master}.
People will also read them when they synchronize with the main Gnuastro repository (see @ref{Synchronizing}).
Finally, the commit messages will later be used to update the @file{NEWS} file on each release.
Therefore the commit message body plays a very important role in the development of Gnuastro, so please adhere to the following guidelines.
@itemize
@item
The body should be very descriptive.
Start the commit message body by explaining what changes your commit makes from a user's perspective (added, changed, or removed options, or arguments to programs or libraries, or modified algorithms, or new installation step, etc.).
@item
@cindex Mailing list: gnuastro-commits
Try to explain the committed contents as best as you can.
Recall that the readers of your commit message do not necessarily have your current background.
After some time you will also forget the context, so this request is not just for others@footnote{@url{http://catb.org/esr/writings/unix-koans/prodigy.html}}.
Therefore be very descriptive and explain as much as possible: what the bug/task was, justify the way you fixed it and discuss other possible solutions that you might not have included.
For the last item, it is best to discuss them thoroughly as comments in the appropriate section of the code, but only give a short summary in the commit message.
Note that all added and removed source code lines will also be circulated in the @command{gnuastro-commits} mailing list.
@item
Like all of Gnuastro's other text files, the lines in the commit body should not be longer than 75 characters, see @ref{Coding conventions}.
This is to ensure that on standard terminal emulators (with 80 character width), the @command{git log} output can be cleanly displayed (note that the commit message is indented in the output of @command{git log}).
If you use Emacs, Gnuastro's @file{.dir-locals.el} file will ensure that your commits satisfy this condition (using @key{M-q}).
@item
@cindex Mailing list: gnuastro-commits
When the commit is related to a task or a bug, please include the respective ID (in the format of @code{bug/task #ID}; note the space) in the commit message (from @ref{Gnuastro project webpage}) so interested people can follow up on the discussion that took place there.
If the commit fixes a bug or finishes a task, the recommended way is to add a line after the body with `@code{This fixes bug #ID.}', or `@code{This finishes task #ID.}'.
Do not assume that the reader has internet access to check the bug's full description when reading the commit message, so give a short introduction too.
@end itemize
@end table
Below you can see a good commit message example (do not forget to read it; it has tips for you).
After reading this, please run @command{git log} on the @code{master} branch and read some of the recent commits for more realistic examples.
@example
The first line should be the title of the commit

An empty line is necessary after the title so Git does not confuse
lines. This top paragraph of the body of the commit usually describes
the reason this commit was done. Therefore it usually starts with
"Until now ...". It is very useful to explain the reason behind the
change, things that are not immediately obvious when looking into the
code. You do not need to list the names of the files, or what lines
have been changed, do not forget that the code changes are fully stored
within Git :-).

In the second paragraph (or any later paragraph!) of the body, we
describe the solution and why (not "how"!) the particular solution was
implemented. So we usually start this part of the commit body with
"With this commit ...". Again, you do not need to go into the details
that can be seen from the 'git diff' command (like the file names that
have been changed or the code that has been implemented). The important
thing here is the things that are not immediately obvious from looking
into the code.

You can continue the explanation and it is encouraged to be very
explicit about the "human factor" of the change as much as possible, not
technical details.
@end example
@node Production workflow, Forking tutorial, Commit guidelines, Contributing to Gnuastro
@subsection Production workflow
Fortunately `Pro Git' has done a wonderful job in explaining the different workflows in Chapter 5@footnote{@url{http://git-scm.com/book/en/v2/Distributed-Git-Distributed-Workflows}} and in particular the ``Integration-Manager Workflow'' explained there.
The implementation of this workflow is nicely explained in Section 5.2@footnote{@url{http://git-scm.com/book/en/v2/Distributed-Git-Contributing-to-a-Project}} under ``Forked-Public-Project''.
We have also prepared a short tutorial in @ref{Forking tutorial}.
Anything on the master branch should always be tested and ready to be built and used.
As described in `Pro Git', there are two methods for you to contribute to Gnuastro in the Integration-Manager Workflow:
@enumerate
@item
You can send commit patches by email as fully explained in `Pro Git'.
This is good for your first few contributions.
Just note that raw patches (containing only the diff) do not have any meta-data (author name, date, etc.).
Therefore they will not allow us to fully acknowledge your contributions as an author in Gnuastro: in the @file{AUTHORS} file and at the start of the PDF book.
These author lists are created automatically from the version controlled source.
To receive full acknowledgment when submitting a patch, it is thus advised to use Git's @code{format-patch} tool (a short sketch is given after this list).
See Pro Git's @url{https://git-scm.com/book/en/v2/Distributed-Git-Contributing-to-a-Project#Public-Project-over-Email, Public project over email} section for a nice explanation.
If you would like to get more heavily involved in Gnuastro's development, then you can try the next solution.
@item
You can have your own forked copy of Gnuastro on any hosting site you like (Codeberg, Gitlab, GitHub, BitBucket, etc.) and inform us when your changes are ready so we merge them in Gnuastro.
This is more suited for people who commonly contribute to the code (see @ref{Forking tutorial}).
@end enumerate
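As an example of the first (emailed patches) method, here is a minimal sketch; the branch name @file{my-fix} is only a hypothetical place-holder for the branch holding your commits (started from @file{master}):
@example
$ git checkout my-fix      # Hypothetical branch with your commits.
$ git format-patch master  # Writes one '.patch' file per commit.
$ ls 000*.patch            # The files to attach to your email.
@end example
@noindent
Unlike a raw diff, these files keep your authorship meta-data, so your contribution can be fully acknowledged.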
In both cases, your commits (with your name and information) will be preserved, and your contributions will thus be fully recorded in the history of Gnuastro, in the @file{AUTHORS} file and in this book (second page in the PDF format) once they have been incorporated into the official repository.
Needless to say, in such cases, be sure to follow the bug or task trackers (or subscribe to the @command{gnuastro-devel} mailing list) and contact us beforehand so you do not do something that someone else is already working on.
In that case, you can get in touch with them and help the job go on faster, see @ref{Gnuastro project webpage}.
This workflow is currently mostly borrowed from the general recommendations of Git@footnote{@url{https://github.com/git/git/blob/master/Documentation/SubmittingPatches}} and GitHub.
But since Gnuastro is currently under heavy development, these might change and evolve to better suit our needs.
@node Forking tutorial, , Production workflow, Contributing to Gnuastro
@subsection Forking tutorial
This is a tutorial on the second suggested method (commonly known as forking) for submitting your modifications to Gnuastro (see @ref{Production workflow}).
To start, please create an @emph{empty} repository on your hosting service web page (we recommend Codeberg since it is fully free software@footnote{See @url{https://www.gnu.org/software/repo-criteria-evaluation.html} for an evaluation of the major existing repositories.
Gnuastro uses GNU Savannah (which also has the highest ranking in the evaluation), but for starters, Codeberg may be easier (it is fully free software).}).
By empty, we mean that you don't let the web service fill your new repository with a @file{README.md} file (they usually have a check-box for this).
Also, since Gnuastro is a public repository, it is much easier if you define your project as a public repository (not a private one).
If this is your first hosted repository on the web page, you also have to upload your public SSH key@footnote{For example, see this explanation provided by Codeberg: @url{https://docs.codeberg.org/security/ssh-key}.} for the @command{git push} command below to work.
Here we will assume you use the name @file{janedoe} to refer to yourself everywhere and that you choose @file{gnuastro} as the name of your Gnuastro fork.
Any online hosting service will give you an address (similar to the `@file{git@@codeberg.org:...}' below) of the empty repository you have created using their web page, use that address in the third line below.
@example
$ git clone git://git.sv.gnu.org/gnuastro.git
$ cd gnuastro
$ git remote add janedoe git@@codeberg.org:janedoe/gnuastro.git
$ git push janedoe master
@end example
The full Gnuastro history is now pushed onto your hosting service and the @file{janedoe} remote is now also following your @file{master} branch.
If you run @command{git remote show REMOTENAME} for the @file{origin} and @file{janedoe} remotes, you will see their difference: you can only pull (fetch) from the first, but you can also push to the second.
This nicely summarizes the main idea behind this workflow: you push to your remote repository, we pull from it and merge it into @file{master}, then you finalize it by pulling from the main repository.
To test (compile) your changes during your work, you will need to bootstrap the version controlled source, see @ref{Bootstrapping} for a full description.
The cloning process above is only necessary for your first-time setup; you do not need to repeat it.
However, please repeat the steps below for each independent issue you intend to work on.
Let's assume you have found a bug in @file{lib/statistics.c}'s median calculating function.
Before actually doing anything, please announce it (see @ref{Report a bug}) so everyone knows you are working on it, or to confirm if others are not already working on it.
With the commands below, you make a branch, check it out, correct the bug, and check that it is indeed fixed.
But before all of this, make sure that you are on the @file{master} branch and that your @file{master} branch is up to date with the main Gnuastro repository with the first two commands.
@example
$ git checkout master
$ git pull
$ git checkout -b bug-median-stats # Choose a descriptive name
$ emacs lib/statistics.c
@end example
With the commands above, you have opened your favorite text editor (if it is not Emacs, feel free to use any other!) and are starting to make changes.
Making changes will usually involve checking the compilation and outputs of the parts you have changed.
Gnuastro already has some facilities to help you in your checks during/after development.
@table @file
@item developer-build
This script does a full build (from the configuration phase to producing the final distribution tarball).
During the process, if there is any error or crash, it will abort.
This allows you to find problems that you hadn't predicted while modifying the files.
This script is described more completely in @ref{Separate build and source directories}.
Here is an example of running this script from scratch (the @file{junk} is just a place-holder for a URL):
@example
$ ./developer-build -p junk
@end example
If you just want a fast build to start your developing, the recommended way is to run it in debugging mode like below:
@example
$ ./developer-build -d
@end example
Without debugging mode, building Gnuastro can take several minutes because of compiler optimizations (they significantly improve the run-time of the programs, but slow down the compilation phase).
During development, you rarely need high speed at @emph{run-time}.
This is because once you find the bug, you can decrease the size of the dataset to be very small and not be affected by run-time optimizations.
However, during development, you do need a high speed at @emph{build-time} to see the changes fast and also need debugging flags (for example to run with Valgrind).
Debugging flags are lost in the default highly-optimized build.
@item tests/during-dev.sh
This script is most commonly used during the development of a new feature within the library or programs (it is also mentioned in @ref{Building and debugging}).
It assumes that you have built Gnuastro with the @file{./developer-build} script (usually in debugging mode).
In other words, it assumes that all the built products are in the @file{build} directory.
It has internal variables to set the name of the program you are testing, the name of its arguments and options, as well as the location that the built program should be run in.
It is heavily commented, so we recommend reading those comments and will not go into more detail here.
@item make pdf
When making changes in the book, you can run this in the @file{build} directory to see your changes in the final PDF before committing (a short sketch is given after this list).
Furthermore, if you add or update an example code block of the book, you should copy-paste it into a text editor and check that it runs correctly (typos are very common and can be very annoying for first-time readers).
If there are no problems, you can add your modification and commit it.
@end table
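For example, here is a minimal sketch of checking your book edits; the directory name @file{build} and the viewer @command{evince} are only assumptions (use your own build directory and PDF viewer; the PDF's exact location may also differ):
@example
$ cd build
$ make pdf
$ evince doc/gnuastro.pdf
@end example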
Once you have implemented your bug fix and made sure that it works, through the checks above, you are ready to stage, commit and push your changes with the commands below.
Since Gnuastro is a large project, commit messages have to follow certain standards; they are described in @ref{Commit guidelines}.
Please read that section carefully, and view previous commits (with @code{git log}) before writing the commit message:
@example
$ git add lib/statistics.c
$ git commit
$ git push janedoe bug-median-stats
@end example
Your new branch is now on your hosted repository.
Through the respective tracker on Savannah (see @ref{Gnuastro project webpage}) you can then let the other developers know that your @file{bug-median-stats} branch is ready.
They will pull your work, test it themselves and if it is ready to be merged into the main Gnuastro history, they will merge it into the @file{master} branch.
After that is done, you can simply checkout your local @file{master} branch and pull all the changes from the main repository.
After the pull, you can run `@command{git log}' as shown below to see how @file{bug-median-stats} has been merged into @file{master}.
To finalize, you can push all the changes to your hosted repository and delete the branch:
@example
$ git checkout master
$ git pull
$ git log --oneline --graph --decorate --all
$ git push janedoe master
$ git branch -d bug-median-stats # delete local branch
$ git push janedoe --delete bug-median-stats # delete remote branch
@end example
Just as a reminder, always keep your work on each issue in a separate local and remote branch so work can progress on them independently.
After you make your announcement, other people might contribute to the branch before merging it into @file{master}, so this is very important.
As a final reminder: before starting each issue branch from @file{master}, be sure to run @command{git pull} in @file{master} as shown above.
This will enable you to start your branch (work) from the most recent commit and thus simplify the final merging of your work.
@node Other useful software, GNU Free Doc License, Developing, Top
@appendix Other useful software
In this appendix, the installation of programs and libraries that are not direct Gnuastro dependencies is discussed.
However, they can be useful when working with Gnuastro.
@menu
* SAO DS9:: Viewing FITS images.
* TOPCAT:: Plotting tables of data.
* PGPLOT:: Plotting directly in C.
@end menu
@node SAO DS9, TOPCAT, Other useful software, Other useful software
@section SAO DS9
@cindex SAO DS9
@cindex FITS image viewer
@url{http://ds9.si.edu,SAO DS9} is not a requirement of Gnuastro; it is a FITS image viewer.
It is therefore a useful tool to visually inspect the images/cubes of your Gnuastro inputs or outputs (for tables, see @ref{TOPCAT}).
In Gnuastro we have an installed script to run DS9 or TOPCAT on any number of FITS files (depending on it being an image or table), see @ref{Viewing FITS file contents with DS9 or TOPCAT} (which also includes a @file{.desktop} file for GUI integration).
After installing DS9, you can easily use that script to open any FITS file (table, image or cube).
Like the other packages, it might already be available in your distribution's repositories, but these may be outdated.
DS9 is also pre-compiled for many common operating systems in the download section of its own web page:
@enumerate
@item
Find your operating system in @url{https://ds9.si.edu/download}.
Here are some tips when trying to find the proper directory:
@itemize
@item
Many GNU/Linux operating systems are compatible with Debian or Fedora, so if you don't find your operating system's name, probably the latest Debian or Fedora will also work for you.
@item
macOS uses the low-level ``Darwin'' kernel.
Therefore, if you have a macOS, also consider those directories that start with @code{darwin}.
@item
The CPU architectures (as suffixes) at the end of the directory names can be classified like this:
@table @code
@item @code{x86}
Intel CPUs.
@item @code{arm64}
Apple's M1 CPUs.
@end table
@end itemize
@item
Within the operating system directories, you will find a compressed tarball that you need to download (choose the latest one).
@item
Unpack the tarball with a command like below:
@example
$ tar -xf ds9.XXXXXXX.X.X.X.tar.gz
@end example
@item
This should produce a simple @file{ds9} file.
Before installing, it is good to actually test it like below:
@example
$ ./ds9
@end example
@item
If the command above opened DS9 with no error, you can safely install it with this command:
@example
$ rm ds9*.tar.gz
$ sudo mv ds9* /usr/local/bin
@end example
@item
Go to your home directory and try running DS9 with the two commands below.
If your shell cannot find it, then you need to add @file{/usr/local/bin} to your @code{PATH}, see @ref{Installation directory}.
@example
$ cd
$ ds9
@end example
@end enumerate
@cartouche
@noindent
@strong{Install without root permissions:} If you do not have root permissions, you can simply replace @file{/usr/local/bin} in the command above with @file{$HOME/.local/bin}.
If this directory is not in your @code{PATH}, you can simply add it with the command below (in your startup file, e.g., @file{~/.bashrc}).
For more on @code{PATH} and the startup files, see @ref{Installation directory}.
@example
export PATH="$HOME/.local/bin:$PATH"
@end example
@end cartouche
@noindent
Below you can see a list of known issues in some operating systems that we have found so far.
You should be able to identify any potential error when running DS9 from the command-line like above.
@itemize
@item
There might be a complaint about the Xss library, which you can find in your distribution package management system.
@item
You might also get an @command{XPA} related error.
In this case, you have to add the following line to your @file{~/.bashrc} and @file{~/.profile} file (you will have to log out and back in again for the latter):
@example
export XPA_METHOD=local
@end example
@item
Your system may not have the SSL library in its standard library path, in this case, put this command in your startup file (for example, @file{~/.bashrc}):
@example
export LD_LIBRARY_PATH="$LD_LIBRARY_PATH:/usr/local/ssl/lib"
@end example
@end itemize
@node TOPCAT, PGPLOT, SAO DS9, Other useful software
@section TOPCAT
@cindex VOTable
@cindex Table viewer
@url{http://www.star.bris.ac.uk/~mbt/topcat, TOPCAT} is not a requirement of Gnuastro; it is a table viewer and plotter (supporting many input formats, including FITS and VOTable).
TOPCAT is therefore a useful tool to visually inspect the tables of your Gnuastro inputs or outputs (for images, see @ref{SAO DS9}).
In Gnuastro we have an installed script to run DS9 or TOPCAT on any number of FITS files (depending on it being an image or table), see @ref{Viewing FITS file contents with DS9 or TOPCAT} (which also includes a @file{.desktop} file for GUI integration).
After installing TOPCAT, you can easily use that script to open any FITS file (table, image or cube).
TOPCAT is a very large package with many capabilities to visualize tables (as plots).
It also has an @url{http://www.star.bris.ac.uk/~mbt/topcat/#docs, extensive documentation} that you can read for optimally using it.
TOPCAT is written in Java, so it just needs a relatively recent (from the last decade) Java Virtual Machine (JVM) and Java Runtime Environment (JRE).
Your operating system's package manager probably offers a relatively recent Java, and there is a good chance that it is already installed.
So before trying to install Java, try running TOPCAT.
If it complains about not finding a suitable Java environment, then search your operating system's package manager.
To install TOPCAT, you just need to download the two files below and run the commands that follow.
The first (@file{topcat-full.jar}) is the main TOPCAT Java ARchive (JAR).
A JAR is a compressed package of Java files and definitions that should be run with a special Java command.
But to avoid bothering users with the details of how to call Java, TOPCAT also provides a simple shell script (the second downloaded file below) that is easier to call and will do all the internal checks and call Java properly.
@example
$ wget http://www.star.bris.ac.uk/~mbt/topcat/topcat-full.jar
$ wget http://www.star.bris.ac.uk/~mbt/topcat/topcat
$ chmod +x topcat
$ ./topcat # Just for a check to see if everything works!
$ sudo mv topcat-full.jar topcat /usr/local/bin/
@end example
@noindent
Once the two TOPCAT files are copied into the system-wide directory, you can easily open tables with a command like the one below from anywhere in your operating system.
@example
$ topcat table.fits
@end example
@cartouche
@noindent
@strong{Install without root permissions:} If you do not have root permissions, you can simply replace @file{/usr/local/bin} in the command above with @file{$HOME/.local/bin}.
If this directory is not in your @code{PATH}, you can simply add it with the command below (in your startup file, e.g., @file{~/.bashrc}).
For more on @code{PATH} and the startup files, see @ref{Installation directory}.
@example
export PATH="$HOME/.local/bin:$PATH"
@end example
@end cartouche
@node PGPLOT, , TOPCAT, Other useful software
@section PGPLOT
@cindex PGPLOT
@cindex C, plotting
@cindex Plotting directly in C
PGPLOT is a package for making plots in C.
It is not directly needed by Gnuastro, but can be used by WCSLIB, see @ref{WCSLIB}.
As explained in @ref{WCSLIB}, you can install WCSLIB without it too.
It is very old (the most recent version was released in early 2001!), but it remains one of the main packages for plotting directly in C.
WCSLIB can use this package to make plots, if you ask it to.
If you are interested you can also use it for your own purposes.
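For example, here is a minimal, hypothetical sketch of calling PGPLOT from C.
It assumes you have built the C binding (the @command{make cpg} step described below) and enabled the X-window driver; the compilation command after it is also an assumption that may need adjusting on your system (in particular the exact list of libraries).
@example
/* Minimal sketch: draw a line with PGPLOT's C binding. */
#include <cpgplot.h>

int main(void)
{
  float x[2] = {0.0f, 1.0f};              /* Two points that    */
  float y[2] = {0.0f, 1.0f};              /* define a line.     */

  if( cpgbeg(0, "/XWINDOW", 1, 1) != 1 )  /* Open the X driver. */
    return 1;
  cpgenv(0.0f, 1.0f, 0.0f, 1.0f, 0, 1);   /* Set the axes.      */
  cpglab("x", "y", "Minimal PGPLOT sketch");
  cpgline(2, x, y);                       /* Draw the line.     */
  cpgend();                               /* Close the device.  */
  return 0;
}
@end example
@noindent
Something like @command{gcc sketch.c -lcpgplot -lpgplot -lX11 -lgfortran -lm} may compile it.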
@cindex Python Matplotlib
@cindex Matplotlib, Python
@cindex PGFplots in @TeX{} or @LaTeX{}
If you want to embed plotting code within your C program, PGPLOT is currently one of your best options.
However, the recommended alternative is to write the raw data for the plots into text files and feed them into any of the various more modern and capable plotting tools separately, for example, the Matplotlib library in Python or PGFplots in @LaTeX{}.
This will also significantly help code readability.
Let's get back to PGPLOT for the sake of WCSLIB.
Installing it is a little tricky (mainly because it is so old!).
You can download the most recent version from the FTP link on its web page@footnote{@url{http://www.astro.caltech.edu/~tjp/pgplot/}} and unpack it with the @command{tar -xf} command.
Let's assume the directory you have unpacked it to is @file{PGPLOT}; most probably it is @file{/home/username/Downloads/pgplot/}.
Open the @file{drivers.list} file:
@example
$ gedit drivers.list
@end example
@noindent
Remove the @code{!} from the following lines and save the file when
you are done:
@example
PSDRIV 1 /PS
PSDRIV 2 /VPS
PSDRIV 3 /CPS
PSDRIV 4 /VCPS
XWDRIV 1 /XWINDOW
XWDRIV 2 /XSERVE
@end example
@noindent
Do not choose GIF or VGIF; there is a problem in their code.
Open the @file{PGPLOT/sys_linux/g77_gcc.conf} file:
@example
$ gedit PGPLOT/sys_linux/g77_gcc.conf
@end example
@noindent
Change the line saying @code{FCOMPL="g77"} to @code{FCOMPL="gfortran"}, and save it.
This is a very important step during the compilation of the code if you are on GNU/Linux.
You now have to create a directory in @file{/usr/local}; do not forget to replace @file{PGPLOT} with the directory you unpacked PGPLOT into:
@example
$ su
# mkdir /usr/local/pgplot
# cd /usr/local/pgplot
# cp PGPLOT/drivers.list ./
@end example
To make the Makefile, type the following command:
@example
# PGPLOT/makemake PGPLOT linux g77_gcc
@end example
@noindent
It should finish by saying: @command{Determining object file dependencies}.
You have done the hard part! The rest is easy: run these three commands in order:
@example
# make
# make clean
# make cpg
@end example
Finally, you have to add the directory you just made to the @code{LD_LIBRARY_PATH} environment variable, and define the @code{PGPLOT_DIR} environment variable.
To do that, you have to edit your @file{.bashrc} file:
@example
$ cd ~
$ gedit .bashrc
@end example
@noindent
Copy these lines into the text editor and save it:
@cindex @file{LD_LIBRARY_PATH}
@example
PGPLOT_DIR="/usr/local/pgplot/"; export PGPLOT_DIR
LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/usr/local/pgplot/
export LD_LIBRARY_PATH
@end example
@noindent
You need to log out and log back in again so these definitions take effect.
After you have logged back in, you want to see the result of all this labor, right? Tim Pearson has done that for you; create a temporary directory in your home directory and copy all the demonstration files into it:
@example
$ cd ~
$ mkdir temp
$ cd temp
$ cp /usr/local/pgplot/pgdemo* ./
$ ls
@end example
You will see a lot of @file{pgdemoXX} files, where @code{XX} is a number.
To execute them, type the following command and drink your coffee while looking at all the beautiful plots! You are now ready to create your own.
@example
$ ./pgdemoXX
@end example
@node GNU Free Doc License, GNU General Public License, Other useful software, Top
@appendix GNU Free Doc. License
@cindex GNU Free Documentation License
@include fdl.texi
@node GNU General Public License, Index, GNU Free Doc License, Top
@appendix GNU Gen. Pub. License v3
@cindex GPL
@cindex GNU General Public License (GPL)
@include gpl-3.0.texi
@c Print the index and finish:
@node Index, , GNU General Public License, Top
@unnumbered Index: Macros, structures and functions
All Gnuastro library's exported macros start with @code{GAL_}, and its exported structures and functions start with @code{gal_}.
This abbreviation stands for @emph{G}NU @emph{A}stronomy @emph{L}ibrary.
The next element in the name is the name of the header which declares or defines them, so to use the @code{gal_array_fset_const} function, you have to @code{#include <gnuastro/array.h>}.
See @ref{Gnuastro library} for more.
The @code{pthread_barrier} constructs are our implementation and are only available on systems that do not have them, see @ref{Implementation of pthread_barrier}.
@printindex fn
@unnumbered Index
@printindex cp
@bye
|