\input texinfo @c -*-texinfo-*-
@c 19980817: TeX-based systems (texi2dvi, texi2dvi --pdf) require texinfo.tex
@c Hypertext systems (texi2html, makeinfo) do not require texinfo.tex
@ignore
$Header: /cvsroot/nco/nco/doc/nco.texi,v 1.86 2002/01/30 15:50:49 hmb Exp $
Purpose: TeXinfo documentation for NCO suite
After editing any hyperlink locations, use
C-c C-u C-a texinfo-all-menus-update
C-c C-u C-e texinfo-every-node-update
Usage:
cd ~/nco/doc;texi2dvi nco.texi; texi2html -monolithic -verbose nco.texi; makeinfo nco.texi; dvips -o nco.ps nco.dvi;texi2dvi --pdf nco.texi
texi2html -monolithic -verbose nco.texi
texi2dvi nco.texi
texi2dvi --pdf nco.texi
makeinfo nco.texi
texi2html -monolithic -verbose nco.texi
dvips -o nco.ps nco.dvi
ps2pdf -dMaxSubsetPct=100 -dCompatibilityLevel=1.2 -dSubsetFonts=true -dEmbedAllFonts=true nco.ps nco.pdf
cd ~/nco/doc;/usr/bin/scp index.shtml nco_news.shtml ChangeLog TODO README VERSION nco.html nco.dvi nco.info* nco.ps nco.pdf nco.texi nco.sourceforge.net:/home/groups/n/nc/nco/htdocs;cd -
cd ~/nco/doc;scp -p index.shtml nco_news.shtml ChangeLog TODO README VERSION nco.html nco.dvi nco.info* nco.ps nco.pdf nco.texi www.cgd.ucar.edu:/web/web-data/cms/nco;cd -
Resources:
Octave TeXInfo manual is a good example of clean TeXInfo structure
Copyright (C) 1995--2002 Charlie Zender
Permission is granted to copy, distribute and/or modify this document
under the terms of the GNU Free Documentation License, Version 1.1
or any later version published by the Free Software Foundation;
with no Invariant Sections, no Front-Cover Texts, and no Back-Cover
Texts. The license is available online at
http://www.gnu.ai.mit.edu/copyleft/fdl.html
@end ignore
@c Start of header
@setfilename nco.info
@settitle @acronym{NCO} @value{nco-edition} User's Guide
@c Uncomment following line to produce guide in smallbook format
@c @smallbook
@c Merge function index into concept index
@syncodeindex fn cp
@c end of header
@ignore
@ifinfo
@format
START-INFO-DIR-ENTRY
* NCO: (nco.info). User's Guide for the netCDF Operator suite
END-INFO-DIR-ENTRY
@end format
@end ifinfo
@end ignore
@c Set smallbook if printing in smallbook format so example of
@c smallbook font is actually written using smallbook.
@c In bigbook, a kludge is used for TeX output
@c set smallbook
@clear smallbook
@c Define edition, date, ...
@set nco-edition 2.0.0
@set doc-edition 2.0.0
@set copyright-years 1995--2002
@set update-year 2001
@set update-date 1 April 2001
@set update-month April 2001
@c Experiment with smaller amounts of whitespace between chapters and sections
@tex
\global\chapheadingskip = 15pt plus 4pt minus 2pt
\global\secheadingskip = 12pt plus 3pt minus 2pt
\global\subsecheadingskip = 9pt plus 2pt minus 2pt
@end tex
@c Experiment with smaller amounts of whitespace between paragraphs in the 8.5 by 11 inch format
@ifclear smallbook
@tex
\global\parskip 6pt plus 1pt
@end tex
@end ifclear
@finalout
@ifinfo
This file documents @acronym{NCO}, a collection of utilities to manipulate and
analyze netCDF files.
Copyright @copyright{} @value{copyright-years} Charlie Zender
This is the first edition of the @cite{NCO User's Guide},@*
and is consistent with version 2 of @file{texinfo.tex}.
Permission is granted to copy, distribute and/or modify this document
under the terms of the @acronym{GNU} Free Documentation License, Version 1.1
or any later version published by the Free Software Foundation;
with no Invariant Sections, no Front-Cover Texts, and no Back-Cover
Texts. The license is available online at
@uref{http://www.gnu.ai.mit.edu/copyleft/fdl.html}
@ignore
Permission is granted to process this file through TeX and print the
results, provided the printed document carries copying permission
notice identical to this one except for the removal of this paragraph
(this paragraph not being relevant to the printed manual).
@end ignore
Portions of this document were extracted verbatim from Unidata netCDF
documentation, particularly ``NetCDF Operators and Utilities'' by Russ
Rew and Steve Emmerson.
@end ifinfo
@setchapternewpage odd
@titlepage
@ifhtml
<meta name="Author" CONTENT="Charles S. Zender">
<meta name="Keywords" CONTENT="NCO documentation, NCO User's Guide,
netCDF, operator, GCM, CCM, scientific data, ncdiff, ncea, ncecat,
ncflint, ncks, ncra, ncrcat, ncrename, ncwa">
@end ifhtml
@c fxm: Direct HTML commands have no effect
@ifhtml
<body text="#000000" link="#0000EF" vlink="#008080" alink="#FF0000">
<font face="Arial">
@end ifhtml
@ignore
@end ignore
@title NCO User's Guide
@subtitle A suite of netCDF operators
@subtitle Edition @value{doc-edition}, for @acronym{NCO} Version @value{nco-edition}
@subtitle @value{update-month}
@author by Charles S. Zender
@author Department of Earth System Science
@author University of California at Irvine
@ifhtml
<p>WWW readers: Having trouble finding the section you want?</p>
<p>Search for keywords in the (hyper) index at the end of this page</p>
@end ifhtml
@c Include Distribution inside titlepage so that headings are turned off
@page
@vskip 0pt plus 1filll
Copyright @copyright{} @value{copyright-years} Charlie Zender.
@sp 2
This is the first edition of the @cite{NCO User's Guide},@*
and is consistent with version 2 of @file{texinfo.tex}.
@sp 2
Published by Charlie Zender@*
Department of Earth System Science@*
University of California at Irvine@*
Irvine, CA 92697-3100 USA@*
Permission is granted to copy, distribute and/or modify this document
under the terms of the @acronym{GNU} Free Documentation License, Version 1.1
or any later version published by the Free Software Foundation;
with no Invariant Sections, no Front-Cover Texts, and no Back-Cover
Texts. The license is available online at
@uref{http://www.gnu.ai.mit.edu/copyleft/fdl.html}
@sp 2
@c Cover art by Robynn Rudel
@end titlepage
@node Top, Foreword, (dir), (dir)
@comment node-name, next, previous, up
@menu
* Foreword::
* Summary::
* Introduction::
* Strategies::
* Common features::
* Operators::
* Contributing::
* General Index::
@detailmenu
--- The Detailed Node Listing ---
Introduction
* Availability::
* Compatability::
* Libraries::
* netCDF 2.x vs. 3.x::
* Help and Bug reports::
Operating systems compatible with @acronym{NCO}
* Windows Operating System::
Operator Strategies
* Philosophy::
* Climate model paradigm::
* Output files::
* Appending::
* Addition Subtraction Multiplication and Interpolation::
* Averaging vs. Concatenating::
* Large numbers of input files::
* Large files and Memory::
* Memory usage::
* Operator limitations::
Averagers vs. Concatenators
* Concatenation::
* Averaging::
* Interpolating::
Features common to most operators
* Specifying input files::
* Remote storage::
* File retention::
* Variable subsetting::
* Coordinate variables::
* Fortran indexing::
* Hyperslabs::
* Wrapped coordinates::
* Stride::
* Missing values::
* Operation Types::
* Type conversion::
* Suppressing interactive prompts::
* History attribute::
* NCAR CSM Conventions::
* ARM Conventions::
* Operator version::
Accessing files stored remotely
* DODS::
Reference manual for all operators
* ncatted netCDF Attribute Editor::
* ncdiff netCDF Differencer::
* ncea netCDF Ensemble Averager::
* ncecat netCDF Ensemble Concatenator::
* ncflint netCDF File Interpolator::
* ncks netCDF Kitchen Sink::
* ncra netCDF Record Averager::
* ncrcat netCDF Record Concatenator::
* ncrename netCDF Renamer::
* ncwa netCDF Weighted Averager::
@command{ncwa} netCDF Weighted Averager
* Masking condition::
* Normalization::
@end detailmenu
@end menu
@node Foreword, Summary, Top, Top
@unnumbered Foreword
@cindex foreword
@acronym{NCO} is the result of software needs that arose while I worked on
projects funded by @acronym{NCAR}, @acronym{NASA}, and @acronym{ARM}.
Thinking they might prove useful as tools or templates to others,
I am pleased to provide them freely to the scientific community.
Many users (most of whom I have never met) have encouraged the
development of @acronym{NCO}.
Thanks especially to Jan Polcher, Keith Lindsay, Arlindo da Silva, John
Sheldon, and William Weibel for stimulating suggestions and correspondence.
Your encouragement motivated me to complete the @cite{NCO User's Guide}.
So if you like @acronym{NCO}, send me a note!
I should mention that @acronym{NCO} is not connected to or officially
endorsed by Unidata, @acronym{ACD}, @acronym{ASP}, @acronym{CGD}, or
Nike.@*
@sp 1
@noindent
Charlie Zender@*
@sp 1
@noindent
May 1997@*
Boulder, Colorado@*
@ignore
I suppose a major version change entitles me to write another Foreword.
In the last five years a lot of work has been done refining @acronym{NCO}.
@acronym{NCO} is now an honest-to-goodness open source project.
It appears to be much healthier for it.
The illustrious list of institutions which do not endorse @acronym{NCO}
continues to grow, and now includes @acronym{UCI}.
@sp 1
@noindent
Charlie Zender@*
@sp 1
@noindent
October 2000@*
Irvine, California@*
@end ignore
@node Summary, Introduction, Foreword, Top
@unnumbered Summary
@cindex operators
@cindex summary
This manual describes @acronym{NCO}, which stands for netCDF Operators.
@acronym{NCO} is a suite of programs known as @dfn{operators}.
Each operator is a standalone, command-line program executed at
the @acronym{UNIX} (or @acronym{NT}) shell level, like @command{ls} or @command{mkdir}.
The operators take netCDF file(s) (or @acronym{HDF4} files) as input, perform an
operation (e.g., averaging or hyperslabbing), and produce a netCDF file
as output.
The operators are primarily designed to aid manipulation and analysis of
data.
The examples in this documentation are typical applications of the
operators for processing climate model output.
This reflects their origin, but the operators are as general as netCDF
itself.
@node Introduction, Strategies, Summary, Top
@chapter Introduction
@cindex introduction
@menu
* Availability::
* Compatability::
* Libraries::
* netCDF 2.x vs. 3.x::
* Help and Bug reports::
@end menu
@node Availability, Compatability, Introduction, Introduction
@section Availability
@cindex @acronym{NCO} availability
@cindex source code
The complete @acronym{NCO} source distribution is currently distributed as a
@dfn{compressed tarfile} from
@uref{http://sourceforge.net/projects/nco}
and
@uref{ftp://ftp.cgd.ucar.edu/pub/zender/nco/nco.tar.gz}.
The compressed tarfile must be uncompressed and untarred before building
@acronym{NCO}.
Uncompress the file with @samp{gunzip nco.tar.gz}.
Extract the source files from the resulting tarfile with @samp{tar -xvf
nco.tar}.
@acronym{GNU} @code{tar} lets you perform both operations in one step with
@samp{tar -xvzf nco.tar.gz}.
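Put together, a typical extraction session looks like this (the tarfile
name is illustrative and may differ for a given release):
@example
gunzip nco.tar.gz
tar -xvf nco.tar
# or, with GNU tar, both steps at once
tar -xvzf nco.tar.gz
@end example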
@cindex documentation
@cindex WWW documentation
@cindex on-line documentation
@cindex @acronym{HTML}
@cindex @TeX{}info
@cindex Info
@cindex @cite{User's Guide}
@cindex @cite{NCO User's Guide}
The documentation for @acronym{NCO} is called the @cite{NCO User's Guide}.
The @cite{User's Guide} is available in Postscript, @acronym{HTML}, @acronym{DVI},
@TeX{}info, and Info formats.
These formats are included in the source distribution in the files
@file{nco.ps}, @file{nco.html}, @file{nco.dvi}, @file{nco.texi}, and
@file{nco.info*}, respectively.
All the documentation descends from a single source file,
@file{nco.texi}
@footnote{
To produce these formats, @file{nco.texi} was simply run through the
freely available programs @code{texi2dvi}, @code{dvips},
@code{texi2html}, and @code{makeinfo}.
Due to a bug in @TeX{}, the resulting Postscript file, @file{nco.ps},
contains the Table of Contents as the final pages.
Thus if you print @file{nco.ps}, remember to insert the Table of
Contents after the cover sheet before you staple the manual.
}.
Hence the documentation in every format is very similar.
However, some of the complex mathematical expressions needed to describe
@command{ncwa} can only be displayed in the Postscript and @acronym{DVI} formats.
@cindex @acronym{NCO} homepage
If you want to quickly see what the latest improvements in @acronym{NCO} are
(without downloading the entire source distribution), visit the @acronym{NCO}
homepage at
@uref{http://nco.sourceforge.net}.
The @acronym{HTML} version of the @cite{User's Guide} is also available online through
the World Wide Web at @acronym{URL}
@uref{http://nco.sourceforge.net/nco.html}.
@cindex netCDF
To build and use @acronym{NCO}, you must have netCDF installed.
The netCDF homepage is
@uref{http://www.unidata.ucar.edu/packages/netcdf}.
New @acronym{NCO} releases are announced on the netCDF list and on the
@code{nco-announce} mailing list
@uref{http://lists.sourceforge.net/mailman/listinfo/nco-announce}.
@ignore
This tests incorporates an image using the @code{@@image} command.
@image{/data/zender/ps/odxc,6in,}
@end ignore
@node Compatability, Libraries, Availability, Introduction
@section Operating systems compatible with @acronym{NCO}
@cindex @acronym{OS}
@cindex Microsoft
@cindex Windows
@cindex compatibility
@cindex portability
@cindex installation
@acronym{NCO} has been successfully ported and tested on the following platforms:
@acronym{GNU}/Linux, SunOS 4.1.x, Solaris 2.x, @acronym{IRIX} 5.x and
6.x (including 64-bit architectures), @acronym{UNICOS} 8.x--10.x,
@acronym{AIX} 4.x, @acronym{DEC OSF}, and Windows @acronym{NT4}.
If you port the code to a new operating system, please send me a note
and any patches you required.
@cindex @acronym{UNIX}
The major prerequisite for installing @acronym{NCO} on a particular platform is
the successful, prior installation of the netCDF libraries themselves.
Unidata has shown a commitment to maintaining netCDF on all popular @acronym{UNIX}
platforms, and is moving towards full support for the Microsoft Windows
operating system (@acronym{OS}).
Given this, the only difficulty in implementing @acronym{NCO} on a particular
platform is standardization of various C and Fortran interface and
system calls.
The C-code has been tested for @acronym{ANSI} compliance by compiling with @acronym{GNU}
@code{gcc -ansi -pedantic}.
@cindex @acronym{ANSI}
Certain branches in the code were required to satisfy the native SGI and
SunOS @command{cc} compilers, which are strictly @acronym{ANSI} compliant and do not
allow variable-size arrays, a nice feature supported by @acronym{GNU},
@acronym{UNICOS}, Solaris, and @acronym{AIX} compilers.
The most time-intensive portion of @acronym{NCO} execution is spent in
arithmetic operations, e.g., multiplication, averaging, subtraction.
Until August, 1999, these operations were performed in Fortran by
default.
This was a design decision based on the speed of Fortran-based object
code vs. C-based object code in late 1994.
Since 1994 native C compilers have improved their vectorization
capabilities and it has become advantageous to replace all Fortran
subroutines with C subroutines.
Furthermore, this greatly simplifies the task of compiling on nominally
unsupported platforms.
As of August 1999, @acronym{NCO} is built entirely in C by default.
This allows @acronym{NCO} to compile on any machine with an @acronym{ANSI} C compiler.
Furthermore, @acronym{NCO} automatically takes advantage of extensions
to @acronym{ANSI} C when compiled with the @acronym{GNU} compiler
collection, @acronym{GCC}.
As of July 2000 and @acronym{NCO} version 1.2, @acronym{NCO} no longer
supports performing arithmetic operations in Fortran to improve speed.
Supporting Fortran involves maintaining two sets of routines for every
arithmetic operation.
The @code{USE_FORTRAN_ARITHMETIC} flag is retained in the Makefile and
the file containing the Fortran code, @file{nc_fortran.F}, is still
distributed with @acronym{NCO} in case a volunteer decides to resurrect them.
If you would like to volunteer to maintain @file{nc_fortran.F} please
contact me.
Otherwise the Fortran hooks will be completely removed in the next major
release.
@ignore
It is still possible to request Fortran routines to perform arithmetic
operations, however.
@cindex preprocessor tokens
@cindex @code{USE_FORTRAN_ARITHMETIC}
This can be accomplished by defining the preprocessor token
@code{USE_FORTRAN_ARITHMETIC} and rebuilding @acronym{NCO}.
@cindex performance
As its name suggests, the @code{USE_FORTRAN_ARITHMETIC} token instructs
@acronym{NCO} to attempts to interface the C routines with Fortran
arithmetic.
Although using Fortran calls instead of C reduces the portability
and increases the maintenance of the @acronym{NCO} operators, it may also increase
the performance of the numeric operators.
Presumably this will depend on your machine type, the quality of the C
and Fortran compilers, and the size of the data files
@footnote{If you decide to test the efficiency of the averagers compiled
with @code{USE_FORTRAN_ARITHMETIC} versus the default C averagers I would be most
interested to hear the results.
Please E-mail me the results including the size of the datasets, the
platform, and the change in the wallclock time for execution.}.
@end ignore
@menu
* Windows Operating System::
@end menu
@node Windows Operating System, , Compatability, Compatability
@subsection Compiling @acronym{NCO} for Microsoft Windows @acronym{OS}
@cindex Windows
@cindex Microsoft
@cindex @code{USE_FORTRAN_ARITHMETIC}
@acronym{NCO} has been successfully ported and tested on the Microsoft
Windows @acronym{NT} 4.0 operating system.
The switches necessary to accomplish this are included in the standard
distribution of @acronym{NCO}.
Using the freely available Cygwin (formerly gnu-win32) development
environment
@footnote{The Cygwin package is available from@*
@code{http://sourceware.cygnus.com/cygwin}@*
Currently, Cygwin 20.x comes with the @acronym{GNU} C/C++/Fortran
compilers (@command{gcc}, @command{g++}, @command{g77}).
These @acronym{GNU} compilers may be used to build the netCDF distribution
itself.}, the compilation process is very similar to installing @acronym{NCO} on a
@acronym{UNIX} system.
@cindex preprocessor tokens
@cindex @code{USE_FORTRAN_ARITHMETIC}
@cindex @code{Cygwin}
@cindex @code{WIN32}
@cindex @file{GNUmakefile}
The preprocessor token @code{PVM_ARCH} should be set to @code{WIN32}.
Note that defining @code{WIN32} has the side effect of disabling
Internet features of @acronym{NCO} (see below).
Unless you have a Fortran compiler (like @command{g77} or @command{f90})
available, no other tokens are required.
Users with fast Fortran compilers may wish to activate the Fortran
arithmetic routines.
To do this, define the preprocessor token @code{USE_FORTRAN_ARITHMETIC}
in the makefile which comes with @acronym{NCO}, @file{Makefile}, or in the
compilation shell.
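A minimal sketch of a Cygwin build follows; it assumes the
@file{Makefile} reads @code{PVM_ARCH} from the environment, so adjust
the mechanism to whatever your build setup expects (and define
@code{USE_FORTRAN_ARITHMETIC} only if a Fortran compiler is available):
@example
# Assumption: the Makefile honors PVM_ARCH set in the environment (sh syntax)
export PVM_ARCH=WIN32
make
@end example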
@cindex @acronym{UNIX}
The least portable section of the code is the use of standard @acronym{UNIX} and
Internet protocols (e.g., @code{ftp}, @code{rcp}, @code{scp},
@code{getuid}, @code{gethostname}, and header files
@file{<arpa/nameser.h>} and
@file{<resolv.h>}).
@cindex @code{ftp}
@cindex @code{rcp}
@cindex @code{scp}
@cindex @acronym{SSH}
@cindex remote files
Fortunately, these @acronym{UNIX}y calls are only invoked by the single @acronym{NCO}
subroutine which is responsible for retrieving files stored on remote
systems (@pxref{Remote storage}).
In order to support @acronym{NCO} on the Microsoft Windows platforms,
this single feature was disabled (on Windows @acronym{OS} only).
This was required by Cygwin 18.x---newer versions of Cygwin may support
these protocols (let me know if this is the case).
The @acronym{NCO} operators should behave identically on Windows and
@acronym{UNIX} platforms in all other respects.
@node Libraries, netCDF 2.x vs. 3.x, Compatability, Introduction
@section Libraries
@cindex libraries
@cindex @code{LD_LIBRARY_PATH}
@cindex dynamic linking
@cindex static linking
Like all executables, the @acronym{NCO} operators can be built using dynamic
linking.
@cindex performance
@cindex operator speed
@cindex speed
@cindex execution time
This reduces the size of the executable and can result in significant
performance enhancements on multiuser systems.
Unfortunately, if your library search path (usually the
@env{LD_LIBRARY_PATH} environment variable) is not set correctly, or if
the system libraries have been moved, renamed, or deleted since @acronym{NCO} was
installed, it is possible an @acronym{NCO} operator will fail with a message that
it cannot find a dynamically loaded (aka @dfn{shared object} or
@samp{.so}) library.
This usually produces a distinctive error message, such as
@samp{ld.so.1:@- /usr/local/bin/ncea:@- fatal:@- libsunmath.@-so.1:@- can't
open@- file:@- errno@-=2}.
If you received an error message like this, ask your system
administrator to diagnose whether the library is truly missing
@footnote{The @command{ldd} command, if it is available on your system,
will tell you where the executable is looking for each dynamically
loaded library. Use, e.g., @code{ldd `which ncea`}.}, or whether you
simply need to alter your library search path.
As a final remedy, you can reinstall @acronym{NCO} with all operators statically
linked.
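For example, a diagnostic session might look like the following
(sh-style syntax; the directory added to the search path is purely
illustrative):
@example
# Show which shared libraries the operator tries to load at run time
ldd `which ncea`
# If a required library lives in a non-standard directory, extend the
# search path; replace /usr/local/lib with the actual library location
export LD_LIBRARY_PATH=/usr/local/lib:$LD_LIBRARY_PATH
@end example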
@node netCDF 2.x vs. 3.x, Help and Bug reports, Libraries, Introduction
@section netCDF 2.x vs. 3.x
@cindex netCDF 2.x
@cindex netCDF 3.x
netCDF version 2.x was released in 1993.
@acronym{NCO} (specifically @code{ncks}) began with netCDF 2.x in 1994.
netCDF 3.0 was released in 1996, and we were eager to reap the
performance advantages of the newer netCDF implementation.
One netCDF 3.x interface call (@code{nc_inq_libvers}) was added to
@acronym{NCO} in January, 1998, to aid in maintenance and debugging.
In March, 2001, the final conversion of @acronym{NCO} to netCDF 3.x
was completed (coincidentally on the same day netCDF 3.5 was released).
@acronym{NCO} versions 2.0 and higher are built with the
@code{-DNO_NETCDF_2} flag to ensure no netCDF 2.x interface calls
are used.
@cindex @code{NO_NETCDF_2}
@cindex @acronym{HDF}
@cindex Hierarchical Data Format
However, the ability to compile @acronym{NCO} with only netCDF 2.x calls
is worth maintaining because @acronym{HDF} version 4
@footnote{The Hierarchical Data Format, or @acronym{HDF}, is another
self-describing data format similar to, but more elaborate than, netCDF.}
(available from @uref{http://hdf.ncsa.uiuc.edu, HDF})
supports only the netCDF 2.x library calls
(see @uref{http://hdf.ncsa.uiuc.edu/UG41r3_html/SDS_SD.fm12.html#47784}).
Note that there are multiple versions of @acronym{HDF}.
Currently @acronym{HDF} version 4.x supports netCDF 2.x and thus
@acronym{NCO} version 1.2.x.
If @acronym{NCO} version 1.2.x (or earlier) is built with only netCDF
2.x calls then all @acronym{NCO} operators should work with
@acronym{HDF4} files as well as netCDF files
@footnote{One must link the @acronym{NCO} code to the @acronym{HDF4}
@acronym{MFHDF} library instead of the usual netCDF library.
However, the @acronym{MFHDF} library only supports netCDF 2.x calls.
Thus I will try to keep this capability in @acronym{NCO} as long as it
is not too much trouble.}.
@cindex @code{NETCDF2_ONLY}
The preprocessor token @code{NETCDF2_ONLY} exists
in @acronym{NCO} version 1.2.x to eliminate all netCDF 3.x calls.
Only versions of @acronym{NCO} numbered 1.2.x and earlier have this
capability.
The @acronym{NCO} 1.2.x branch will be maintained with bugfixes only
(no new features) until @acronym{HDF} begins to fully support the
netCDF 3.x interface (which is employed by @acronym{NCO} 2.x).
If, at compilation time, @code{NETCDF2_ONLY} is defined, then
@acronym{NCO} version 1.2.x will not use any netCDF 3.x calls and, if
linked properly, the resulting @acronym{NCO} operators will work with
@acronym{HDF4} files.
The @file{Makefile} supplied with @acronym{NCO} 1.2.x has been written
to simplify building in this @acronym{HDF} capability.
When @acronym{NCO} is built with @code{make HDF4=Y}, the @file{Makefile}
will set all required preprocessor flags and library links to build
with the @acronym{HDF4} libraries (which are assumed to reside under
@code{/usr/local/hdf4}, edit the @file{Makefile} to suit your
installation).
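For example, assuming the @acronym{HDF4} libraries reside under the
default location mentioned above:
@example
# Build NCO 1.2.x against the HDF4 MFHDF library
# (HDF4 assumed to be installed under /usr/local/hdf4)
make HDF4=Y
@end example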
@acronym{HDF} version 5.x became available in 1999, but did not support
netCDF (or, for that matter, Fortran) as of December 1999.
By early 2001, @acronym{HDF} version 5.x did support Fortran90.
However, support for netCDF 3.x in @acronym{HDF} 5.x is incomplete.
Much of the HDF5-netCDF3 interface is complete, however, and it may be
separately downloaded from the
@uref{http://hdf.ncsa.uiuc.edu/HDF5/papers/netcdfh5.html, HDF5-netCDF}
website.
Now that @acronym{NCO} uses only netCDF 3.x system calls we are
eager for HDF5 to complete their netCDF 3.x support.
@node Help and Bug reports, , netCDF 2.x vs. 3.x, Introduction
@section Help and Bug reports
@cindex reporting bugs
@cindex bugs, reporting
@cindex core dump
@cindex help
@cindex features, requesting
We generally receive three categories of mail from users: requests for
help, bug reports, and requests for new features.
Notes saying the equivalent of ``Hey, @acronym{NCO} continues to work great
and it saves me more time every day than it took to write this note'' are
a distant fourth.
There is a different protocol for each type of request.
Our request is that you communicate with the project via @acronym{NCO} Project
Forums.
Before posting to the @acronym{NCO} forums described below, you should
first @uref{https://sourceforge.net/account/register.php, register}
your name and email address with SourceForge.net or else all of your
postings will be attributed to ``nobody''.
Once registered you may choose to ``monitor'' any forum and to receive
(or not) email when there are any postings.
If you would like @acronym{NCO} to include a new feature, first check to see
if that feature is already on the
@uref{file:./TODO, TODO} list.
If it is, please consider implementing that feature yourself and sending
us the patch!
If the feature is not yet on the list then send a note to the
@uref{http://sourceforge.net/forum/forum.php?forum_id=9829, NCO
Discussion forum}.
Please read the manual before reporting a bug or posting a request for
help.
Sending questions whose answers are not in the manual is the best
way to motivate us to write more documentation.
We would also like to accentuate the contrapositive of this statement.
If you think you have found a real bug @emph{the most helpful thing you
can do is simplify the problem to a manageable size and report it}.
The first thing to do is to make sure you are running the latest
publicly released version of @acronym{NCO}.
Once you have read the manual, if you are still unable to get @acronym{NCO}
to perform a documented function, write a help request.
Follow the same procedure as described below for reporting bugs
(after all, it might be a bug).
That is, describe what you are trying to do, and include the complete
commands (with @samp{-D 5}), error messages, and version of @acronym{NCO}.
Post your help request to the
@uref{http://sourceforge.net/forum/forum.php?forum_id=9830, NCO Help forum}.
If you think you are using the right command, but @acronym{NCO} is misbehaving,
then you might have found a bug.
A core dump, segmentation violation, or incorrect numerical answer is
always considered a high-priority bug.
How do you simplify a problem that may be revealing a bug?
Cut out extraneous variables, dimensions, and metadata from the
offending files and re-run the command until it no longer breaks.
Then back up one step and report the problem.
Usually the file(s) will be very small, i.e., one variable with one or
two small dimensions ought to suffice.
Include in the report your run-time environment, the exact error
messages (and run the operator with @samp{-D 5} to increase the
verbosity of the debugging output), and a copy, or the publicly
accessible location, of the file(s).
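For instance, a report might include a transcript such as the following
(operator and file names here are purely illustrative):
@example
ncks -D 5 in.nc out.nc
@end example
@noindent
together with the exact error messages produced and the @acronym{NCO} version.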
Post the bug report to the
@uref{http://sourceforge.net/bugs/?group_id=3331, NCO Project buglist}.
@node Strategies, Common features, Introduction, Top
@chapter Operator Strategies
@menu
* Philosophy::
* Climate model paradigm::
* Output files::
* Appending::
* Addition Subtraction Multiplication and Interpolation::
* Averaging vs. Concatenating::
* Large numbers of input files::
* Large files and Memory::
* Memory usage::
* Operator limitations::
@end menu
@node Philosophy, Climate model paradigm, Strategies, Strategies
@section @acronym{NCO} operator philosophy
@cindex philosophy
The main design goal has been to produce operators that can be invoked
from the command line to perform useful operations on netCDF files.
Many scientists work with models and observations which produce too much
data to analyze in tabular format.
Thus, it is often natural to reduce and massage this raw or primary
level data into summary, or second level data, e.g., temporal or spatial
averages.
These second level data may become the inputs to graphical and
statistical packages, and are often more suitable for archival and
dissemination to the scientific community.
@acronym{NCO} performs a suite of operations useful in manipulating data from the
primary to the second level state.
@cindex @acronym{NCL}
@cindex Perl
@cindex Yorick
Higher level interpretive languages (e.g., @acronym{IDL}, Yorick,
Matlab, @acronym{NCL}, Perl, Python),
and lower level compiled languages (e.g., C, Fortran) can always perform
any task performed by @acronym{NCO}, but often with more overhead.
@acronym{NCO}, on the other hand, is limited to a much smaller set of arithmetic
and metadata operations than these full blown languages.
Another goal has been to implement enough command line switches so that
frequently used sequences of these operators can be executed from a
shell script or batch file.
Finally, @acronym{NCO} was written to consume the absolute minimum amount of
system memory required to perform a given job.
The arithmetic operators are extremely efficient; their exact memory
usage is detailed in @ref{Memory usage}.
@node Climate model paradigm, Output files, Philosophy, Strategies
@section Climate model paradigm
@cindex climate modeling
@cindex @acronym{NCAR}
@cindex @acronym{GCM}
@acronym{NCO} was developed at @acronym{NCAR} to aid analysis and manipulation of
datasets produced by General Circulation Models (@acronym{GCM}s).
Datasets produced by @acronym{GCM}s share many features with all gridded
scientific datasets and so provide a useful paradigm for the explication
of the @acronym{NCO} operator set.
Examples in this manual use a @acronym{GCM} paradigm because latitude, longitude,
time, temperature and other fields related to our natural environment
are as easy to visualize for the layman as the expert.
@node Output files, Appending, Climate model paradigm, Strategies
@section Temporary output files
@cindex data safety
@cindex error tolerance
@cindex safeguards
@cindex temporary output files
@acronym{NCO} operators are designed to be reasonably fault tolerant, so that
if there is a system failure or the user aborts the operation (e.g.,
with @kbd{C-c}), then no data is lost.
The user-specified @var{output-file} is only created upon successful
completion of the operation
@footnote{The @command{ncrename} operator is an exception to this rule.
@xref{ncrename netCDF Renamer}.}.
This is accomplished by performing all operations in a temporary copy
of @var{output-file}.
The name of the temporary output file is constructed by appending
@code{.pid@var{<process ID>}.@var{<operator name>}.tmp} to the
user-specified @var{output-file} name.
When the operator completes its task with no fatal errors, the temporary
output file is moved to the user-specified @var{output-file}.
Note the construction of a temporary output file uses more disk space
than just overwriting existing files ``in place'' (because there may be
two copies of the same file on disk until the @acronym{NCO} operation successfully
concludes and the temporary output file overwrites the existing
@var{output-file}).
@cindex performance
@cindex operator speed
@cindex speed
@cindex execution time
Also, note this feature increases the execution time of the operator
by approximately the time it takes to copy the @var{output-file}.
Finally, note this feature allows the @var{output-file} to be the same
as the @var{input-file} without any danger of ``overlap''.
@cindex overwriting files
@cindex appending to files
Other safeguards exist to protect the user from inadvertently
overwriting data.
If the @var{output-file} specified for a command is a pre-existing file,
then the operator will prompt the user whether to overwrite (erase) the
existing @var{output-file}, attempt to append to it, or abort the
operation.
However, in processing large amounts of data, too many interactive
questions can be a curse to productivity.
Therefore @acronym{NCO} also implements two ways to override its own safety
features, the @samp{-O} and @samp{-A} switches.
Specifying @samp{-O} tells the operator to overwrite any existing
@var{output-file} without prompting the user interactively.
Specifying @samp{-A} tells the operator to attempt to append to any
existing @var{output-file} without prompting the user interactively.
These switches are useful in batch environments because they suppress
interactive keyboard input.
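For example, using @command{ncks} (file names are illustrative):
@example
# Overwrite any existing out.nc without an interactive prompt
ncks -O in.nc out.nc
# Attempt to append to an existing out.nc without an interactive prompt
ncks -A in.nc out.nc
@end example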
@node Appending, Addition Subtraction Multiplication and Interpolation, Output files, Strategies
@section Appending variables to a file
A frequently useful operation is adding variables from one file to
another.
@cindex concatenation
@cindex appending variables
@cindex merging files
@cindex pasting variables
This is referred to as @dfn{appending}, although some prefer the
terminology @dfn{merging} @footnote{The terminology @dfn{merging} is
reserved for an (unwritten) operator which replaces hyperslabs of a
variable in one file with hyperslabs of the same variable from another
file.} or @dfn{pasting}.
Appending is often confused with what @acronym{NCO} calls @dfn{concatenation}.
In @acronym{NCO}, concatenation refers to splicing a variable along the record
dimension.
Appending, on the other hand, refers to adding variables from one file
to another
@footnote{Yes, the terminology is confusing.
By all means mail me if you think of a better nomenclature.
Should @acronym{NCO} use @dfn{paste} instead of @dfn{append}?
}.
In this sense, @command{ncks} can append variables from one file to another
file.
This capability is invoked by naming two files on the command line,
@var{input-file} and @var{output-file}.
When @var{output-file} already exists, the user is prompted whether to
@dfn{overwrite}, @dfn{append/replace}, or @dfn{exit} from the command.
Selecting @dfn{overwrite} tells the operator to erase the existing
@var{output-file} and replace it with the results of the operation.
Selecting @dfn{exit} causes the operator to exit---the @var{output-file}
will not be touched in this case.
Selecting @dfn{append/replace} causes the operator to attempt to place
the results of the operation in the existing @var{output-file}
(@pxref{ncks netCDF Kitchen Sink}).
@node Addition Subtraction Multiplication and Interpolation, Averaging vs. Concatenating, Appending, Strategies
@section Addition Subtraction Multiplication and Interpolation
Users comfortable with @acronym{NCO} semantics may find it easier to perform
some simple mathematical operations in @acronym{NCO} rather than higher level
languages.
@command{ncdiff} (@pxref{ncdiff netCDF Differencer}) can be used for
subtraction and broadcasting.
@command{ncflint} (@pxref{ncflint netCDF File Interpolator}) can be used
for addition, subtraction, multiplication and interpolation.
Sequences of these commands can accomplish simple but powerful
operations at the command line.
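For example, subtracting the variables in one file from those in another
is a single command (file names are illustrative):
@example
# Store the difference in1.nc minus in2.nc in out.nc
ncdiff in1.nc in2.nc out.nc
@end example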
@node Averaging vs. Concatenating, Large numbers of input files, Addition Subtraction Multiplication and Interpolation, Strategies
@section Averagers vs. Concatenators
The most frequently used operators of @acronym{NCO} are probably the averagers and
concatenators.
Because there are so many permutations of averaging (e.g., across files,
within a file, over the record dimension, over other dimensions, with or
without weights and masks) and of concatenating (across files, along the
record dimension, along other dimensions), there are currently no fewer
than five operators which tackle these two purposes: @command{ncra},
@command{ncea}, @command{ncwa}, @command{ncrcat}, and @command{ncecat}.
These operators do share many capabilities @footnote{Currently
@command{ncea} and @command{ncrcat} are symbolically linked to the @command{ncra}
executable, which behaves slightly differently based on its invocation
name (i.e., @samp{argv[0]}).
These three operators share the same source code, but merely have
different inner loops.}, but each has its unique specialty.
Two of these operators, @command{ncrcat} and @command{ncecat}, are for
concatenating hyperslabs across files.
The other two operators, @command{ncra} and @command{ncea}, are for averaging
hyperslabs across files
@footnote{The third averaging operator, @command{ncwa}, is the most
sophisticated averager in @acronym{NCO}.
However, @command{ncwa} is in a different class than @command{ncra} and
@command{ncea} because it can only operate on a single file per invocation
(as opposed to multiple files).
On that single file, however, @command{ncwa} provides a richer set of
averaging options---including weighting, masking, and broadcasting.}.
First, let's describe the concatenators, then the averagers.
@menu
* Concatenation::
* Averaging::
* Interpolating::
@end menu
@node Concatenation, Averaging, Averaging vs. Concatenating, Averaging vs. Concatenating
@subsection Concatenators @command{ncrcat} and @command{ncecat}
@cindex @command{ncecat}
@cindex @command{ncrcat}
Joining independent files together along a record coordinate is called
@dfn{concatenation}.
@command{ncrcat} is designed for concatenating record variables, while
@command{ncecat} is designed for concatenating fixed length variables.
Consider 5 files, @file{85.nc}, @file{86.nc}, @dots{}
@file{89.nc} each containing a year's worth of data.
Say you wish to create from them a single file, @file{8589.nc}
containing all the data, i.e., spanning all 5 years.
If the annual files make use of the same record variable, then
@command{ncrcat} will do the job nicely with, e.g., @code{ncrcat 8?.nc
8589.nc}.
The number of records in the input files is arbitrary and can vary from
file to file.
@xref{ncrcat netCDF Record Concatenator}, for a complete description of
@command{ncrcat}.
However, suppose the annual files have no record variable, and thus
their data is all fixed length.
@cindex ensemble
For example, the files may not be conceptually sequential, but rather
members of the same group, or @dfn{ensemble}.
Members of an ensemble may have no reason to contain a record dimension.
@command{ncecat} will create a new record dimension (named @var{record} by
default) with which to glue together the individual files into the single
ensemble file.
If @command{ncecat} is used on files which contain an existing record
dimension, that record dimension will be converted into a fixed length
dimension of the same name and a new record dimension will be created.
Consider five realizations, @file{85a.nc}, @file{85b.nc}, @dots{}
@file{85e.nc} of 1985 predictions from the same climate model.
Then @code{ncecat 85?.nc 85_ens.nc} glues the individual realizations
together into the single file, @file{85_ens.nc}.
If an input variable was dimensioned [@code{lat},@code{lon}], it will have
dimensions [@code{record},@code{lat},@code{lon}] in the output file.
A restriction of @command{ncecat} is that the hyperslabs of the processed
variables must be the same from file to file.
Normally this means all the input files are the same size, and contain
data on different realizations of the same variables.
@xref{ncecat netCDF Ensemble Concatenator}, for a complete description of
@command{ncecat}.
Note that @command{ncrcat} cannot concatenate fixed-length variables,
whereas @command{ncecat} can concatenate both fixed-length and record
variables.
To conserve system memory, use @command{ncrcat} rather than
@command{ncecat} when concatenating record variables.
@node Averaging, Interpolating, Concatenation, Averaging vs. Concatenating
@subsection Averagers @command{ncea}, @command{ncra}, and @command{ncwa}
@cindex @command{ncea}
@cindex @command{ncra}
@cindex @command{ncwa}
The differences between the averagers @command{ncra} and @command{ncea} are
analogous to the differences between the concatenators.
@command{ncra} is designed for averaging record variables from at least one
file, while @command{ncea} is designed for averaging fixed length variables
from multiple files.
@command{ncra} performs a simple arithmetic average over the record
dimension of all the input files, with each record having an equal
weight in the average.
@command{ncea} performs a simple arithmetic average of all the input files,
with each file having an equal weight in the average.
Note that @command{ncra} cannot average fixed-length variables,
but @command{ncea} can average both fixed-length and record variables.
To conserve system memory, use @command{ncra} rather than
@command{ncea} where possible (e.g., if each @var{input-file} is one record
long).
The file output from @command{ncea} will have the same dimensions (meaning
dimension names as well as sizes) as the input hyperslabs
(@pxref{ncea netCDF Ensemble Averager}, for a complete description of
@command{ncea}).
The file output from @command{ncra} will have the same dimensions as
the input hyperslabs except for the record dimension, which will have a
size of 1 (@pxref{ncra netCDF Record Averager}, for a complete
description of @command{ncra}).
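For instance (a sketch reusing the annual files @file{85.nc} through
@file{89.nc} described above, with illustrative output names), both
commands below produce a five year average, but @command{ncra} averages
all records of all five files into a single record, whereas
@command{ncea} averages the five files record-by-record:
@example
ncra 85.nc 86.nc 87.nc 88.nc 89.nc 8589_ra.nc
ncea 85.nc 86.nc 87.nc 88.nc 89.nc 8589_ea.nc
@end example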
@node Interpolating, , Averaging, Averaging vs. Concatenating
@subsection Interpolator @command{ncflint}
@cindex @command{ncflint}
@command{ncflint} can interpolate data between two files.
Since no other operators have this ability, the description of
interpolation is given fully on the @command{ncflint} reference page
(@pxref{ncflint netCDF File Interpolator}).
Note that this capability also allows @command{ncflint} to linearly rescale
any data in a netCDF file, e.g., to convert between differing units.
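A sketch of that rescaling usage (this assumes the @samp{-w} weight
option and that the same file may safely be named twice): multiplying
every field in @file{in.nc} by 100 may be written as
@example
ncflint -w 100,0 in.nc in.nc out.nc
@end example
@noindent
since the output is simply the weighted sum of the two inputs.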
@node Large numbers of input files, Large files and Memory, Averaging vs. Concatenating, Strategies
@section Working with large numbers of input files
@cindex files, numerous input
@cindex @code{-n @var{loop}}
Occasionally one desires to digest (i.e., concatenate or average)
hundreds or thousands of input files.
One brave user, for example, recently created a five year time-series of
satellite observations by using @command{ncecat} to join thousands of daily
data files together.
Unfortunately, data archives (e.g., @acronym{NASA EOSDIS}) are unlikely to
distribute netCDF files conveniently named in a format the @samp{-n
@var{loop}} switch (which automatically generates arbitrary numbers of
input filenames) understands.
If there is not a simple, arithmetic pattern to the input filenames
(e.g., @file{h00001.nc}, @file{h00002.nc}, @dots{} @file{h90210.nc})
then the @samp{-n @var{loop}} switch is useless.
Moreover, when the input files are so numerous that the input filenames
are too lengthy (when strung together as a single argument) to be passed
by the calling shell to the @acronym{NCO} operator
@footnote{The exact length which exceeds the operating system internal
limit for command line lengths varies from @acronym{OS} to @acronym{OS}
and from shell to shell.
@acronym{GNU} @code{bash} may not have any arbitrary fixed limits to the size of
command line arguments.
Many @acronym{OS}s cannot handle command line arguments longer than a
few thousand characters.
When this occurs, the @acronym{ANSI} C-standard @code{argc}-@code{argv}
method of passing arguments from the calling shell to a C-program (i.e.,
an @acronym{NCO} operator) breaks down.}, then the following strategy
has proven useful to specify the input filenames to @acronym{NCO}.
Write a script that creates symbolic links between the irregular input
filenames and a set of regular, arithmetic filenames that the @samp{-n
@var{loop}} switch understands.
The @acronym{NCO} operator will then succeed at automatically generating the
filenames with the @samp{-n @var{loop}} option (which circumvents any
@acronym{OS} and shell limits on command line size).
You can remove the symbolic links once the operator completes its task.
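A minimal sketch of such a script follows (Bourne shell; the input
pattern @file{irregular*.nc}, the prefix @file{h}, and the count of 100
files are purely illustrative and must be adapted to your data):
@example
idx=1
for fl in irregular*.nc ; do
  ln -s $fl `printf "h%04d.nc" $idx`  # h0001.nc, h0002.nc, ...
  idx=`expr $idx + 1`
done
ncrcat -n 100,4,1 h0001.nc out.nc     # assumes 100 linked files
@end example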
@node Large files and Memory, Memory usage, Large numbers of input files, Strategies
@section Working with large files
@cindex large files
@dfn{Large files} are those files that are comparable in size to the
amount of memory (@acronym{RAM}) in your computer.
Many users of @acronym{NCO} work with files larger than 100 Mb.
Files this large not only push the current edge of storage technology,
they present special problems for programs which attempt to access the
entire file at once, such as @command{ncea} and @command{ncecat}.
@cindex swap space
If you need to work with a 300 Mb file on a machine with only 32 Mb of
memory then you will need large amounts of swap space (virtual
memory on disk) and @acronym{NCO} will work slowly, or else
@acronym{NCO} will fail.
There is no easy solution for this and the best strategy is to work on a
machine with massive amounts of memory and swap space.
@cindex server
@cindex @acronym{UNICOS}
@cindex Cray
That is, if your local machine has problems working with large files,
try running @acronym{NCO} from a more powerful machine, such as a
network server.
Certain machine architectures, e.g., Cray @acronym{UNICOS}, have special
commands which allow one to increase the amount of interactive memory.
@cindex @code{ilimit}
@cindex @code{core dump}
If you get a core dump on a Cray system (e.g., @samp{Error exit (core
dumped)}), try increasing the available memory by using the
@code{ilimit} command.
@cindex speed
The speed of the @acronym{NCO} operators also depends on file size.
When processing large files the operators may appear to hang, or do
nothing, for long periods of time.
In order to see what the operator is actually doing, it is useful to
activate a more verbose output mode.
This is accomplished by supplying a number greater than 0 to the
@samp{-D @var{debug_level}} switch.
@cindex @var{debug_level}
@cindex debugging
When the @var{debug_level} is nonzero, the operators report their
current status to the terminal through the @var{stderr} facility.
Using @samp{-D} does not slow the operators down.
Choose a @var{debug_level} between 1 and 3 for most situations, e.g.,
@code{ncea -D 2 85.nc 86.nc 8586.nc}.
A full description of how to estimate the actual amount of memory the
multi-file @acronym{NCO} operators consume is given in @ref{Memory usage}.
@node Memory usage, Operator limitations, Large files and Memory, Strategies
@section Approximate @acronym{NCO} memory requirements
@cindex memory requirements
The multi-file operators currently comprise the record operators,
@command{ncra} and @command{ncrcat}, and the ensemble operators, @command{ncea}
and @command{ncecat}.
The record operators require @emph{much less} memory than the ensemble
operators.
This is because the record operators are designed to operate on a single
record of a file at a time, while the ensemble operators must retrieve
an entire variable at a time into memory.
Let @math{MS} be the peak sustained memory demand of an operator,
@math{FT} be the memory required to store the entire contents of all the
variables to be processed in an input file,
@math{FR} be the memory required to store the entire contents of a
single record of each of the variables to be processed in an input file,
@math{VR} be the memory required to store a single record of the
largest record variable to be processed in an input file,
@math{VT} be the memory required to store the largest variable
to be processed in an input file,
and @math{VI} be the memory required to store the largest variable
which is not processed, but is copied from the initial file to the
output file.
All operators require @math{MI = VI} during the initial copying of
variables from the first input file to the output file.
This is the @emph{initial} (and transient) memory demand.
The @emph{sustained} memory demand is that memory required by the
operators during the processing (i.e., averaging, concatenation)
phase which lasts until all the input files have been processed.
The operators have the following memory requirements:
@command{ncrcat} requires @math{MS <= VR}.
@command{ncecat} requires @math{MS <= VT}.
@command{ncra} requires @math{MS = 2FR + VR}.
@command{ncea} requires @math{MS = 2FT + VT}.
@command{ncdiff} requires @math{MS <= 2VT}.
@command{ncflint} requires @math{MS <= 2VT}.
Note that only variables which are processed, i.e., averaged or
concatenated, contribute to @math{MS}.
Memory is never allocated to hold variables which do not appear in the
output file (@pxref{Variable subsetting}).
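To make these requirements concrete, consider a sketch with invented
sizes: a single @code{NC_FLOAT} (4 byte) variable dimensioned
[@code{time}=365, @code{lat}=64, @code{lon}=128], where @code{time} is
the record dimension.
Then @math{VT} is roughly 12 Mb (@math{365 * 64 * 128 * 4} bytes) while
@math{VR} is only 32 kb (@math{64 * 128 * 4} bytes), so @command{ncrcat}
sustains about 32 kb of memory for this variable whereas
@command{ncecat} may require about 12 Mb.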
@node Operator limitations, , Memory usage, Strategies
@section Performance limitations of the operators
@enumerate
@item
@cindex buffering
No buffering of data is performed during @command{ncvarget} and
@command{ncvarput} operations.
@cindex performance
@cindex operator speed
@cindex speed
@cindex execution time
Hyperslabs too large to hold in core memory will suffer substantial
performance penalties because of this.
@item
@cindex monotonic coordinates
Since coordinate variables are assumed to be monotonic, the search for
the indices bracketing the user-specified limits could employ a faster
algorithm, such as bisection, rather than the two-sided incremental
search currently implemented.
@item
@cindex @var{C_format}
@cindex @var{FORTRAN_format}
@cindex @var{signedness}
@cindex @var{scale_format}
@cindex @var{add_offset}
@var{C_format}, @var{FORTRAN_format}, @var{signedness},
@var{scale_format} and @var{add_offset} attributes are ignored by
@command{ncks} when printing variables to screen.
@item
@cindex Yorick
Some random access operations on large files on certain architectures
(e.g., 400 Mb on @acronym{UNICOS}) are @emph{much} slower with these operators
than with similar operations performed using languages that bypass the
netCDF interface (e.g., Yorick).
The cause for this is not understood at present.
@end enumerate
@node Common features, Operators, Strategies, Top
@chapter Features common to most operators
Many features have been implemented in more than one operator and are
described here for brevity.
The description of each feature is preceded by a box listing the
operators for which the feature is implemented.
Command line switches for a given feature are consistent across all
operators wherever possible.
If no ``key switches'' are listed for a feature, then that particular
feature is automatic and cannot be controlled by the user.
@menu
* Specifying input files::
* Remote storage::
* File retention::
* Variable subsetting::
* Coordinate variables::
* Fortran indexing::
* Hyperslabs::
* Wrapped coordinates::
* Stride::
* Missing values::
* Operation Types::
* Type conversion::
* Suppressing interactive prompts::
* History attribute::
* NCAR CSM Conventions::
* ARM Conventions::
* Operator version::
@end menu
@node Specifying input files, Remote storage, Common features, Common features
@section Specifying input files
@cindex globbing
@cindex regular expressions
@cindex @code{NINTAP}
@cindex Processor, @acronym{CCM}
@cindex @acronym{CCM} Processor
@cindex @code{-n @var{loop}}
@cindex @code{-p @var{input-path}}
@cindex @var{input-path}
@cartouche
@noindent
Availability: All operators@*
Key switches: @samp{-n}, @samp{-p}@*
@end cartouche
It is important that the user be able to specify multiple input files
without tediously typing in each by its full name.
@cindex @acronym{UNIX}
There are four different ways of specifying input files to @acronym{NCO}:
explicitly typing each, using @acronym{UNIX} shell wildcards, using the
@acronym{NCO} @samp{-p} switch, and using the @acronym{NCO} @samp{-n} switch.
To illustrate these methods, consider the simple problem of using
@command{ncra} to average five input files, @file{85.nc}, @file{86.nc},
@dots{} @file{89.nc}, and store the results in @file{8589.nc}.
Here are the four methods in order.
They produce identical answers.
@example
ncra 85.nc 86.nc 87.nc 88.nc 89.nc 8589.nc
ncra 8[56789].nc 8589.nc
ncra -p @var{input-path} 85.nc 86.nc 87.nc 88.nc 89.nc 8589.nc
ncra -n 5,2,1 85.nc 8589.nc
@end example
The first method (explicitly specifying all filenames) works by brute
force.
The second method relies on the operating system shell to @dfn{glob}
(expand) the @dfn{regular expression} @code{8[56789].nc}.
The shell passes valid filenames which match the expansion to
@command{ncra}.
The third method uses the @samp{-p @var{input-path}} argument to specify
the directory where all the input files reside.
@acronym{NCO} prepends @var{input-path} (e.g., @file{/data/usrname/model}) to all
@var{input-files} (but not to @var{output-file}).
Thus, using @samp{-p}, the path to any number of input files need only
be specified once.
Note @var{input-path} need not end with @samp{/}; the @samp{/} is
automatically generated if necessary.
The last method passes (with @samp{-n}) syntax concisely describing
the entire set of filenames
@footnote{The @samp{-n} option is a backward compatible superset of the
@code{NINTAP} option from the @acronym{NCAR} @acronym{CCM} Processor.}.
@cindex multi-file operators
@cindex files, multiple
This option is only available with the @dfn{multi-file operators}:
@command{ncra}, @command{ncrcat}, @command{ncea}, and @command{ncecat}.
By definition, multi-file operators are able to process an arbitrary
number of @var{input-files}.
This option is very useful for abbreviating lists of filenames
representable as
@var{alphanumeric_prefix}+@var{numeric_suffix}+@file{.}+@var{filetype}
where @var{alphanumeric_prefix} is a string of arbitrary length and
composition, @var{numeric_suffix} is a fixed width field of digits, and
@var{filetype} is a standard filetype indicator.
For example, in the file @file{ccm3_h0001.nc}, we have
@var{alphanumeric_prefix} = @file{ccm3_h}, @var{numeric_suffix} =
@file{0001}, and @var{filetype} = @file{nc}.
@acronym{NCO} is able to decode lists of such filenames encoded using the
@samp{-n} option.
The simpler (3-argument) @samp{-n} usage takes the form
@code{-n @var{file_number},@var{digit_number},@var{numeric_increment}}
where @var{file_number} is the number of files, @var{digit_number} is
the fixed number of numeric digits comprising the @var{numeric_suffix},
and @var{numeric_increment} is the constant, integer-valued difference
between the @var{numeric_suffix} of any two consecutive files.
The value of @var{alphanumeric_prefix} is taken from the input file,
which serves as a template for decoding the filenames.
In the example above, the encoding @code{-n 5,2,1} along with the input
file name @file{85.nc} tells @acronym{NCO} to
construct five (5) filenames identical to the template @file{85.nc}
except that the final two (2) digits are a numeric suffix to be
incremented by one (1) for each successive file.
Currently @var{filetype} may be empty, @file{nc},
@file{cdf}, @file{hdf}, or @file{hd5}.
If present, these @var{filetype} suffixes (and the preceding @file{.})
are ignored by @acronym{NCO} as it uses the @samp{-n} arguments to locate,
evaluate, and compute the @var{numeric_suffix} component of filenames.
@cindex wrapped filenames
Recently the @samp{-n} option has been extended to allow convenient
specification of filenames with ``circular'' characteristics.
This means it is now possible for @acronym{NCO} to automatically generate
filenames which increment regularly until a specified maximum value, and
then wrap back to begin again at a specified minimum value.
The corresponding @samp{-n} usage becomes more complex, taking one or
two additional arguments for a total of four or five, respectively:
@code{-n
@var{file_number},@var{digit_number},@var{numeric_increment}[,@var{numeric_max}[,@var{numeric_min}]]}
where @var{numeric_max}, if present, is the maximum integer-value of
@var{numeric_suffix} and @var{numeric_min}, if present, is the minimum
integer-value of @var{numeric_suffix}.
Consider, for example, the problem of specifying non-consecutive input
files where the filename suffixes end with the month index.
In climate modeling it is common to create summertime and wintertime
averages which contain the averages of the months June--July--August,
and December--January--February, respectively:
@example
ncra -n 3,2,1 85_06.nc 85_0608.nc
ncra -n 3,2,1,12 85_12.nc 85_1202.nc
ncra -n 3,2,1,12,1 85_12.nc 85_1202.nc
@end example
The first example shows that three arguments to the @samp{-n} option
suffice to specify consecutive months (@code{06, 07, 08}) which do not
``wrap'' back to a minimum value.
The second example shows how to use the optional fourth and fifth
elements of the @samp{-n} option to specify a wrap value to @acronym{NCO}.
The fourth argument to @samp{-n}, if present, specifies the maximum
integer value of @var{numeric_suffix}.
In this case the maximum value is 12, and will be formatted as @file{12}
in the filename string.
The fifth argument to @samp{-n}, if present, specifies the minimum
integer value of @var{numeric_suffix}.
The default minimum filename suffix is 1, which is formatted as
@file{01} in this case.
Thus the second and third examples have the same effect, that is, they
automatically generate, in order, the filenames @file{85_12.nc},
@file{85_01.nc}, and @file{85_02.nc} as input to @acronym{NCO}.
@node Remote storage, File retention, Specifying input files, Common features
@section Accessing files stored remotely
@cindex @code{rcp}
@cindex @code{scp}
@cindex @file{.rhosts}
@cindex @acronym{NCAR MSS}
@cindex @acronym{MSS}
@cindex Mass Store System
@cindex @acronym{URL}
@cindex @code{ftp}
@cindex remote files
@cindex synchronous file access
@cindex asynchronous file access
@cartouche
@noindent
Availability: All operators@*
Key switches: @samp{-p}, @samp{-l}, @samp{-R}@*
@end cartouche
All @acronym{NCO} operators can retrieve files from remote sites as well as the
local file system.
A remote site can be an anonymous @acronym{FTP} server, a machine on which the
user has @command{rcp} or @command{scp} privileges, or @acronym{NCAR}'s Mass
Storage System (@acronym{MSS}).
To access a file via an anonymous @acronym{FTP} server, supply the remote file's
@acronym{URL}.
To access a file using @command{rcp} or @command{scp}, specify the
Internet address of the remote file.
Of course in this case you must have @command{rcp} or @command{scp}
privileges which allow transparent (no password entry required) access
to the remote machine.
This means that @file{~/.rhosts} or @file{~/.ssh/authorized_keys} must
be set accordingly on both local and remote machines.
@cindex @command{msrcp}
@cindex @command{msread}
@cindex @command{nrnet}
To access a file on @acronym{NCAR}'s @acronym{MSS}, specify the full @acronym{MSS} pathname of the
remote file.
@acronym{NCO} will attempt to detect whether the local machine has direct
(synchronous) @acronym{MSS} access.
In this case, @acronym{NCO} attempts to use the @acronym{NCAR}
@command{msrcp} command
@footnote{The @command{msrcp} command must be in the user's path and
located in one of the following directories: @code{/usr/local/bin},
@code{/usr/bin}, @code{/opt/local/bin}, or @code{/usr/local/dcs/bin}.
}, or, failing that, @code{/usr/local/bin/msread}.
Otherwise @acronym{NCO} attempts to retrieve the @acronym{MSS} file
through the (asynchronous) Masnet Interface Gateway System
(@acronym{MIGS}) using the @command{nrnet} command.
The following examples show how one might analyze files stored on
remote systems.
@example
ncks -H -l ./ ftp://ftp.cgd.ucar.edu/pub/zender/nco/in.nc
ncks -H -l ./ dust.ps.uci.edu:/home/zender/nco/in.nc
ncks -H -l ./ /ZENDER/nco/in.nc
ncks -H -l ./ mss:/ZENDER/nco/in.nc
ncks -H -l ./ -p http://www.cdc.noaa.gov/cgi-bin/nph-nc/Datasets/ncep.reanalysis.dailyavgs/surface air.sig995.1975.nc
@end example
@noindent
The first example will work verbatim on your system if your system is
connected to the Internet and is not behind a firewall.
The second example will work on your system if you have @command{rcp}
or @command{scp} access to the machine @code{dust.ps.uci.edu}.
The third and fourth examples will work from @acronym{NCAR} computers with
local access to the @command{msrcp}, @command{msread}, or @command{nrnet} commands.
The fifth command will work if your local version of @acronym{NCO} was built with
@acronym{DODS} capability (@pxref{DODS}).
The above commands can be rewritten using the @samp{-p @var{input-path}}
option as follows:
@cindex @code{-p @var{input-path}}
@cindex @var{input-path}
@cindex @code{-l @var{output-path}}
@cindex @var{output-path}
@example
ncks -H -p ftp://ftp.cgd.ucar.edu/pub/zender/nco -l ./ in.nc
ncks -H -p dust.ps.uci.edu:/home/zender/nco -l ./ in.nc
ncks -H -p /ZENDER/nco -l ./ in.nc
ncks -H -p mss:/ZENDER/nco -l ./ in.nc
@end example
@noindent
Using @samp{-p} is recommended because it clearly separates the
@var{input-path} from the filename itself, sometimes called the
@dfn{stub}.
@cindex stub
When @var{input-path} is not explicitly specified using @samp{-p}, @acronym{NCO}
internally generates an @var{input-path} from the first input filename.
The automatically generated @var{input-path} is constructed by stripping
the input filename of everything following the final @samp{/} character
(i.e., removing the stub).
The @samp{-l @var{output-path}} option tells @acronym{NCO} where to store the
remotely retrieved file and the output file.
Often the path to a remotely retrieved file is quite different than the
path on the local machine where you would like to store the file.
If @samp{-l} is not specified then @acronym{NCO} internally generates an
@var{output-path} by simply setting @var{output-path} equal to
@var{input-path} stripped of any machine names.
If @samp{-l} is not specified and the remote file resides on the @acronym{NCAR}
@acronym{MSS} system, then the leading character of @var{input-path}, @samp{/}, is
also stripped from @var{output-path}.
Specifying @var{output-path} as @samp{-l ./} tells @acronym{NCO} to store the
remotely retrieved file and the output file in the current directory.
Note that @samp{-l .} is equivalent to @samp{-l ./} though the latter is
syntactically more clear.
@menu
* DODS::
@end menu
@node DODS, , Remote storage, Remote storage
@subsection @acronym{DODS}
@cindex @acronym{DODS}
@cindex HTTP protocol
@cindex @env{DODS_ROOT}
@cindex Distributed Oceanographic Data System
The Distributed Oceanographic Data System (@acronym{DODS}) provides replacements
for common data interface libraries like netCDF.
The @acronym{DODS} versions of these libraries implement network transparent
access to data using the HTTP protocol.
@acronym{NCO} may be @acronym{DODS}-enabled by linking @acronym{NCO} to the @acronym{DODS} libraries.
Examples of how to do this are given in the @acronym{DODS} documentation and
in the @file{Makefile} distributed with @acronym{NCO}.
Building @acronym{NCO} with @code{make DODS=Y} adds the (non-intuitive) commands
to link to the @acronym{DODS} libraries installed in the @env{$DODS_ROOT}
directory.
You will probably need to visit the
@uref{http://www.unidata.ucar.edu/packages/dods, DODS Homepage}
to learn which libraries to obtain and link to for the
@acronym{DODS}-enabled @acronym{NCO} executables.
Once @acronym{NCO} is @acronym{DODS}-enabled the operators are @acronym{DODS} clients.
All @acronym{DODS} clients have network transparent access to any files controlled
by a @acronym{DODS} server.
Simply specify the path to the file in @acronym{URL} notation
@example
ncks -C -d lon,0 -v lon -l ./ -p http://www.cdc.noaa.gov/cgi-bin/nph-nc/Datasets/ncep.reanalysis.dailyavgs/surface air.sig995.1975.nc foo.nc
@end example
@noindent
@acronym{NCO} operates on these remote files without having to transfer the
files to the local disk.
@acronym{DODS} causes all the I/O to appear to @acronym{NCO} as if the files were local.
The advantage to this is that only the required data (e.g., the variable
or hyperslab specified) are transferred over the network.
The advantages of this are obvious if you are examining small parts of
large files stored at remote locations.
Note that the remote retrieval features of @acronym{NCO} can be used to
retrieve @emph{any} file, including non-netCDF files, via @command{SSH},
anonymous @acronym{FTP}, or @command{msrcp}.
Often this method is quicker than using a browser, or running an @acronym{FTP}
session from a shell window yourself.
For example, say you want to obtain a @acronym{JPEG} file from a weather server.
@example
ncks -p ftp://weather.edu/pub/pix/jpeg -l ./ storm.jpg
@end example
@noindent
In this example, @command{ncks} automatically performs an anonymous @acronym{FTP}
login to the remote machine and retrieves the specified file.
When @command{ncks} attempts to read the local copy of @file{storm.jpg}
as a netCDF file, it fails and exits, leaving @file{storm.jpg} in
the current directory.
@node File retention, Variable subsetting, Remote storage, Common features
@section Retention of remotely retrieved files
@cindex file deletion
@cindex file removal
@cindex file retention
@cindex @code{-R}
@cartouche
@noindent
Availability: All operators@*
Key switches: @samp{-R}@*
@end cartouche
In order to conserve local file system space, files retrieved from
remote locations are automatically deleted from the local file system
once they have been processed.
Many @acronym{NCO} operators were constructed to work with numerous large (e.g.,
200 Mb) files.
Retrieval of multiple files from remote locations is done serially.
Each file is retrieved, processed, then deleted before the cycle
repeats.
In cases where it is useful to keep the remotely-retrieved files on the
local file system after processing, the automatic removal feature may be
disabled by specifying @samp{-R} on the command line.
@node Variable subsetting, Coordinate variables, File retention, Common features
@section Including/Excluding specific variables
@cindex @code{-v @var{var}}
@cindex @code{-x}
@cartouche
@noindent
Availability: @command{ncdiff}, @command{ncea}, @command{ncecat}, @command{ncflint},
@command{ncks}, @command{ncra}, @command{ncrcat}, @command{ncwa}@*
Key switches: @samp{-v}, @samp{-x}@*
@end cartouche
Variable subsetting is implemented with the @samp{-v @var{var}[,@dots{}]} and
@samp{-x} options.
A list of variables to extract is specified following the @samp{-v}
option, e.g., @samp{-v time,lat,lon}.
Not using the @samp{-v} option is equivalent to specifying all
variables.
The @samp{-x} option causes the list of variables specified with
@samp{-v} to be excluded rather than extracted.
Thus @samp{-x} saves typing when you wish to extract more than
half of the variables in a file.
@cindex memory requirements
Remember, if you are stretching the limits of your system's memory by
averaging or concatenating large files, then the easiest solution is
often to use the @samp{-v} option to retain only the variables you
really need (@pxref{Memory usage}).
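For example (a sketch with placeholder variable names), the first
command below extracts only @code{time}, @code{lat}, and @code{lon},
while the second extracts everything @emph{except} those three
variables:
@example
ncks -v time,lat,lon in.nc out.nc
ncks -x -v time,lat,lon in.nc out.nc
@end example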
@node Coordinate variables, Fortran indexing, Variable subsetting, Common features
@section Including/Excluding coordinate variables
@cindex @code{-C}
@cindex @code{-c}
@cartouche
@noindent
Availability: @command{ncdiff}, @command{ncea}, @command{ncecat}, @command{ncflint},
@command{ncks}, @command{ncra}, @command{ncrcat}, @command{ncwa}@*
Key switches: @samp{-C}, @samp{-c}@*
@end cartouche
By default, coordinate variables associated with any variable appearing
in the @var{output-file} will also appear in the @var{output-file}, even
if they are not explicitly specified, e.g., with the @samp{-v} switch.
Thus variables with a latitude coordinate @code{lat} always carry the
values of @code{lat} with them into the @var{output-file}.
This feature can be disabled with @samp{-C}, which causes @acronym{NCO} to not
automatically add coordinates to the variables appearing in the
@var{output-file}.
However, using @samp{-C} does not preclude the user from including some
coordinates in the output file simply by explicitly selecting the
coordinates with the @samp{-v} option.
The @samp{-c} option, on the other hand, is a shorthand way of
automatically specifying that @emph{all} coordinate variables in the
@var{input-files} should appear in the @var{output-file}.
Thus @samp{-c} allows the user to select all the coordinate variables
without having to know their names.
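As a sketch (again with the placeholder variable @code{T}), the first
command below writes @code{T} without its associated coordinate
variables, while the second writes every coordinate variable in the
input file in addition to @code{T}:
@example
ncks -C -v T in.nc out.nc
ncks -c -v T in.nc out.nc
@end example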
@node Fortran indexing, Hyperslabs, Coordinate variables, Common features
@section C & Fortran index conventions
@cindex index conventions
@cindex Fortran index convention
@cindex C index convention
@cindex @code{-F}
@cartouche
@noindent
Availability: @command{ncdiff}, @command{ncea}, @command{ncecat}, @command{ncflint},
@command{ncks}, @command{ncra}, @command{ncrcat}, @command{ncwa}@*
Key switches: @samp{-F}@*
@end cartouche
By default, @acronym{NCO} uses C-style (0-based) indices for all I/O.
The @samp{-F} switch tells @acronym{NCO} to switch to reading and writing with
Fortran index conventions.
In Fortran, indices begin counting from 1 (rather than 0), and
dimensions are ordered from fastest varying to slowest varying.
Consider a file @file{85.nc} containing 12 months of data in the record
dimension @code{time}.
The following hyperslab operations produce identical results, a
June-July-August average of the data:
@example
ncra -d time,5,7 85.nc 85_JJA.nc
ncra -F -d time,6,8 85.nc 85_JJA.nc
@end example
Printing variable @var{three_dmn_var} in file @file{in.nc} first with C
indexing conventions, then with Fortran indexing conventions results in
the following output formats:
@example
% ncks -H -v three_dmn_var in.nc
lat[0]=-90 lev[0]=1000 lon[0]=-180 three_dmn_var[0]=0
@dots{}
% ncks -F -H -v three_dmn_var in.nc
lon(1)=-180 lev(1)=1000 lat(1)=-90 three_dmn_var(1)=0
@end example
@node Hyperslabs, Wrapped coordinates, Fortran indexing, Common features
@section Hyperslabs
@cindex hyperslab
@cindex dimension limits
@cindex coordinate limits
@cindex @code{-d @var{dim},[@var{min}][,[@var{max}]]}
@cartouche
@noindent
Availability: @command{ncdiff}, @command{ncea}, @command{ncecat}, @command{ncflint},
@command{ncks}, @command{ncra}, @command{ncrcat}, @command{ncwa}@*
Key switches: @samp{-d}@*
@end cartouche
A @dfn{hyperslab} is a subset of a variable's data.
The coordinates of a hyperslab are specified with the @code{-d
@var{dim},[@var{min}][,[@var{max}]]} option.
The bounds of the hyperslab to be extracted are specified by the
associated @var{min} and @var{max} values.
A half-open range is specified by omitting either the @var{min} or
@var{max} parameter but including the separating comma.
The unspecified limit is interpreted as the maximum or minimum value in
the unspecified direction.
A cross-section at a specific coordinate is extracted by specifying only
the @var{min} limit and omitting a trailing comma.
Dimensions not mentioned are passed with no reduction in range.
The dimensionality of variables is not reduced (in the case of a
cross-section, the size of the constant dimension will be one).
If values of a coordinate-variable are used to specify a range or
cross-section, then the coordinate variable must be monotonic (values
either increasing or decreasing).
In this case, command-line values need not exactly match coordinate
values for the specified dimension.
Ranges are determined by seeking the first coordinate value to occur in
the closed range [@var{min},@var{max}] and including all subsequent values until one
falls outside the range.
The coordinate value for a cross-section is the coordinate-variable
value closest to the specified value and must lie within the range of
coordinate-variable values.
Coordinate values should be specified using real notation with a decimal
point required in the value, whereas dimension indices are specified
using integer notation without a decimal point.
Note that this convention is only to differentiate coordinate values
from dimension indices, and is independent of the actual type of netCDF
coordinate variables, if any.
For a given dimension, the specified limits must both be coordinate
values (with decimal points) or dimension indices (no decimal points).
@cindex @code{NC_BYTE}
@cindex @code{NC_CHAR}
User-specified coordinate limits are promoted to double precision values
while searching for the indices which bracket the range.
Thus, hyperslabs on coordinates of type @code{NC_BYTE} and
@code{NC_CHAR} are computed numerically rather than lexically, so the
results are unpredictable.
@cindex wrapped coordinates
The relative magnitude of @var{min} and @var{max} indicate to the
operator whether to expect a @dfn{wrapped coordinate}
(@pxref{Wrapped coordinates}), such as longitude.
If @math{@var{min} > @var{max}}, @acronym{NCO} expects the coordinate to be
wrapped, and a warning message will be printed.
When this occurs, @acronym{NCO} selects all values outside the domain
@math{@var{max} < x < @var{min}}, i.e., all the values exclusive of the
values which would have been selected if @var{min} and @var{max} were
swapped.
If this seems confusing, test your command on just the coordinate
variables with @command{ncks}, and then examine the output to ensure @acronym{NCO}
selected the hyperslab you expected (coordinate wrapping is only
supported by @command{ncks}).
Because of the way wrapped coordinates are interpreted, it is very
important to make sure you always specify hyperslabs in the
monotonically increasing sense, i.e., @math{@var{min} < @var{max}}
(even if the underlying coordinate variable is monotonically
decreasing).
The only exception to this is when you are indeed specifying a wrapped
coordinate.
The distinction is crucial to understand because the points selected by,
e.g., @code{-d longitude,50.,340.}, are exactly the complement of the
points selected by @code{-d longitude,340.,50.}.
Not specifying any hyperslab option is equivalent to specifying full
ranges of all dimensions.
This option may be specified more than once in a single command
(each hyperslabbed dimension requires its own @code{-d} option).
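A few sketches of this syntax (the dimension names and values are
placeholders): the commands below use, in order, dimension indices,
coordinate values, a half-open range, a cross-section, and multiple
@samp{-d} options:
@example
ncks -d time,0,5 in.nc out.nc
ncks -d lat,-20.,20. in.nc out.nc
ncks -d lat,0., in.nc out.nc
ncks -d lev,1000. in.nc out.nc
ncks -d lat,-20.,20. -d lon,0.,90. in.nc out.nc
@end example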
@node Wrapped coordinates, Stride, Hyperslabs, Common features
@section Wrapped coordinates
@cindex wrapped coordinates
@cindex longitude
@cindex @code{-d @var{dim},[@var{min}][,[@var{max}]]}
@cartouche
@noindent
Availability: @command{ncks}@*
Key switches: @samp{-d}@*
@end cartouche
A @dfn{wrapped coordinate} is a coordinate whose values increase or
decrease monotonically (nothing unusual so far), but which represents a
dimension that ends where it begins (i.e., wraps around on itself).
Longitude (i.e., degrees on a circle) is a familiar example of a wrapped
coordinate.
Longitude increases to the East of Greenwich, England, where it is
defined to be zero.
Halfway around the globe, the longitude is 180 degrees East (or West).
Continuing eastward, longitude increases to 360 degrees East at
Greenwich.
The longitude values of most geophysical data are either in the range
[0,360), or [-180,180).
In either case, the Westernmost and Easternmost longitudes are
numerically separated by 360 degrees, but represent contiguous regions
on the globe.
For example, the Saharan desert stretches from roughly 340 to 50 degrees
East.
Extracting the hyperslab of data representing the Sahara from a global
dataset presents special problems when the global dataset is stored
consecutively in longitude from 0 to 360 degrees.
This is because the data for the Sahara will not be contiguous in the
@var{input-file} but is expected by the user to be contiguous in the
@var{output-file}.
In this case, @command{ncks} must invoke special software routines to assemble
the desired output hyperslab from multiple reads of the @var{input-file}.
Assume the domain of the monotonically increasing longitude coordinate
@code{lon} is @math{0 < @var{lon} < 360}.
@command{ncks} will extract a hyperslab which crosses the Greenwich
meridian simply by specifying the westernmost longitude as @var{min} and
the easternmost longitude as @var{max}.
Thus, the following commands extract a hyperslab containing the Saharan desert:
@example
ncks -d lon,340.,50. in.nc out.nc
ncks -d lon,340.,50. -d lat,10.,35. in.nc out.nc
@end example
@noindent
The first example selects data in the same longitude range as the Sahara.
The second example further constrains the data to having the same
latitude as the Sahara.
The coordinate @code{lon} in the @var{output-file}, @file{out.nc}, will
no longer be monotonic!
The values of @code{lon} will be, e.g., @samp{340, 350, 0, 10, 20, 30,
40, 50}.
This can have serious implications should you run @file{out.nc} through
another operation which expects the @code{lon} coordinate to be
monotonically increasing.
Fortunately, the chances of this happening are slim: since @code{lon}
has already been hyperslabbed, there should be no reason to hyperslab
@code{lon} again.
Should you need to hyperslab @code{lon} again, be sure to give
dimensional indices as the hyperslab arguments, rather than coordinate
values (@pxref{Hyperslabs}).
@node Stride, Missing values, Wrapped coordinates, Common features
@section Stride
@cindex stride
@cindex @code{-d @var{dim},[@var{min}][,[@var{max}]]}
@cartouche
@noindent
Availability: @command{ncks}, @command{ncra}, @command{ncrcat}@*
Key switches: @samp{-d}@*
@end cartouche
@command{ncks} offers support for specifying a @dfn{stride} for any
hyperslab, while @command{ncra} and @command{ncrcat} support the @var{stride}
argument only for the record dimension.
The @var{stride} is the spacing between consecutive points in a
hyperslab.
A @var{stride} of 1 means pick all the elements of the hyperslab, but a
@var{stride} of 2 means skip every other element, etc.
Using the @var{stride} option with @command{ncra} and @command{ncrcat} makes
it possible, for instance, to average or concatenate regular intervals
across multi-file input data sets.
The @var{stride} is specified as the optional fourth argument to the
@samp{-d} hyperslab specification:
@code{-d @var{dim},[@var{min}][,[@var{max}]][,[@var{stride}]]}.
Specify @var{stride} as an integer (i.e., no decimal point) following
the third comma in the @samp{-d} argument.
There is no default value for @var{stride}.
Thus using @samp{-d time,,,2} is valid but @samp{-d time,,,2.0} and
@samp{-d time,,,} are not.
When @var{stride} is specified but @var{min} is not, there is an
ambiguity as to whether the extracted hyperslab should begin with (using
C-style, 0-based indexes) element 0 or element @samp{stride-1}.
@acronym{NCO} must resolve this ambiguity and it chooses element 0 as the first
element of the hyperslab when @var{min} is not specified.
Thus @samp{-d time,,,@var{stride}} is syntactically equivalent to
@samp{-d time,0,,@var{stride}}.
This means, for example, that specifying the operation @samp{-d
time,,,2} on the array @samp{1,2,3,4,5} selects the hyperslab @samp{1,3,5}.
To obtain the hyperslab @samp{2,4} instead, simply explicitly specify
the starting index as 1, i.e., @samp{-d time,1,,2}.
For example, consider a file @file{8501_8912.nc} which contains 60
consecutive months of data.
Say you wish to obtain just the March data from this file.
Using 0-based subscripts (@pxref{Fortran indexing}) these
data are stored in records 2, 14, @dots{} 50 so the desired @var{stride}
is 12.
Without the @var{stride} option, the procedure is very awkward.
One could use @command{ncks} five times and then use @command{ncrcat} to
concatenate the resulting files together:
@example
foreach idx (02 14 26 38 50)
ncks -d time,$idx 8501_8912.nc foo.$idx
end
ncrcat foo.?? 8589_03.nc
rm foo.??
@end example
With the @var{stride} option, @command{ncks} performs this hyperslab
extraction in one operation:
@example
ncks -d time,2,,12 8501_8912.nc 8589_03.nc
@end example
@xref{ncks netCDF Kitchen Sink}, for more information on @command{ncks}.
The @var{stride} option is supported by @command{ncra} and @command{ncrcat}
for the record dimension only.
This makes it possible, for instance, to average or concatenate regular
intervals across multi-file input data sets.
@example
ncra -F -d time,3,,12 85.nc 86.nc 87.nc 88.nc 89.nc 8589_03.nc
ncrcat -F -d time,3,,12 85.nc 86.nc 87.nc 88.nc 89.nc 8503_8903.nc
@end example
@node Missing values, Operation Types, Stride, Common features
@section Missing values
@cindex missing values
@cindex data, missing
@cindex averaging data
@cindex @code{missing_value} attribute
@cartouche
@noindent
Availability: @command{ncdiff}, @command{ncea}, @command{ncflint}, @command{ncra},
@command{ncwa}@*
Key switches: None@*
@end cartouche
The phrase @dfn{missing data} refers to data points that are missing,
invalid, or for any reason not intended to be arithmetically processed
in the same fashion as valid data.
@cindex arithmetic operators
The @acronym{NCO} arithmetic operators attempt to handle missing data in an
intelligent fashion.
There are four steps in the @acronym{NCO} treatment of missing data:
@enumerate
@item
Identifying variables which may contain missing data.
@acronym{NCO} follows the convention that missing data should be stored
with the @var{missing_value} specified in the variable's
@code{missing_value} attribute
@footnote{@acronym{NCO} averagers have a bug (TODO 121) which may cause
them to behave incorrectly if the @var{missing_value} = @samp{0.0} for a
variable to be averaged.
The workaround for this bug is to change @var{missing_value} to anything
besides zero.}.
The @emph{only} way @acronym{NCO} recognizes that a variable @emph{may} contain
missing data is if the variable has a @code{missing_value} attribute.
In this case, any elements of the variable which are numerically equal
to the @var{missing_value} are treated as missing data.
@item
Converting the @var{missing_value} to the type of the variable, if
necessary.
Consider a variable @var{var} of type @var{var_type} with a
@code{missing_value} attribute of type @var{att_type} containing the
value @var{missing_value}.
As a guideline, the type of the @code{missing_value} attribute should be
the same as the type of the variable it is attached to.
If @var{var_type} equals @var{att_type} then @acronym{NCO} straightforwardly
compares each value of @var{var} to @var{missing_value} to determine
which elements of @var{var} are to be treated as missing data.
If not, then @acronym{NCO} will internally convert @var{att_type} to
@var{var_type} by using the implicit conversion rules of C, or, if
@var{att_type} is @code{NC_CHAR}
@footnote{For example, the @acronym{DOE} @acronym{ARM} program often uses @var{att_type} =
@code{NC_CHAR} and @var{missing_value} = @samp{-99999.}.
}, by typecasting the results of the C function
@code{strtod(@var{missing_value})}.
@cindex @command{ncatted}
You may use the @acronym{NCO} operator @command{ncatted} to change the
@code{missing_value} attribute and all data whose value equals
@var{missing_value} to a new value
(@pxref{ncatted netCDF Attribute Editor}); a sketch of this usage
appears after this list.
@item
Identifying missing data during arithmetic operations.
@cindex performance
@cindex operator speed
@cindex speed
@cindex execution time
@cindex arithmetic operators
When an @acronym{NCO} arithmetic operator is processing a variable @var{var} with
a @code{missing_value} attribute, it compares each value of @var{var}
to @var{missing_value} before performing an operation.
Note the @var{missing_value} comparison inflicts a performance penalty
on the operator.
Arithmetic processing of variables which contain the
@code{missing_value} attribute always incurs this penalty, even when
none of the data is missing.
Conversely, arithmetic processing of variables which do not contain the
@code{missing_value} attribute never incurs this penalty.
In other words, do not attach a @code{missing_value} attribute to a
variable which does not contain missing data.
This exhortation can usually be obeyed for model generated data, but it
may be harder to know in advance whether all observational data will be
valid or not.
@item
Treatment of any data identified as missing in arithmetic operators.
@cindex @command{ncea}
@cindex @command{ncra}
@cindex @command{ncwa}
@cindex @command{ncdiff}
@cindex @command{ncflint}
@acronym{NCO} averagers (@command{ncra}, @command{ncea}, @command{ncwa}) do not count any
element with the value @var{missing_value} towards the average.
@command{ncdiff} and @command{ncflint} define a @var{missing_value} result
when either of the input values is a @var{missing_value}.
Sometimes the @var{missing_value} may change from file to file in a
multi-file operator, e.g., @command{ncra}.
@acronym{NCO} is written to account for this (it always compares a variable to the
@var{missing_value} assigned to that variable in the current file).
Suffice it to say that, in all known cases, @acronym{NCO} does ``the right thing''.
@end enumerate
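As a sketch of the @command{ncatted} usage mentioned in step 2 (the
variable name @code{T} and the new value are placeholders; consult the
@command{ncatted} reference page for the authoritative syntax), the
following changes the @code{missing_value} attribute of @code{T} to
@samp{1.0e36} and rewrites all data elements currently equal to the old
@var{missing_value}:
@example
ncatted -a missing_value,T,m,f,1.0e36 in.nc
@end example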
@node Operation Types, Type conversion, Missing values, Common features
@section Operation Types
@cindex operation types
@cindex @code{avg}
@cindex @code{sqravg}
@cindex @code{avgsqr}
@cindex @code{min}
@cindex @code{max}
@cindex @code{rmssdn}
@cindex @code{rms}
@cindex @code{ttl}
@cindex @code{sqrt}
@cindex average
@cindex mean
@cindex total
@cindex minimum
@cindex maximum
@cindex root-mean-square
@cindex standard deviation
@cindex variance
@cindex @code{-y}
@cartouche
@noindent
Availability: @command{ncra},@command{ncea},@command{ncwa} @*
Key switches: @samp{-y}@*
@end cartouche
@noindent
The @samp{-y @var{op_typ}} switch allows specification of many different
types of operations.
Set @var{op_typ} to the abbreviated key for the corresponding operation:
@table @code
@item avg
Mean value (default)
@item sqravg
Square of the mean
@item avgsqr
Mean of sum of squares
@item max
Maximum value
@item min
Minimum value
@item rms
Root-mean-square (normalized by N)
@item rmssdn
Root-mean-square (normalized by N-1)
@item sqrt
Square root of the mean
@item ttl
Sum of values
@end table
@noindent
If an operation type is not specified with @samp{-y} then the operator
will perform an arithmetic average by default.
The mathematical definition of each operation is given below.
@xref{ncwa netCDF Weighted Averager}, for additional information on
masks and normalization.
Averaging is the default, and will be described first so the
terminology for the other operations will be familiar.
@ifhtml
<p><b>Note for HTML users</b>:
<br>The definition of mathematical operations involving rank reduction
(e.g., averaging) relies heavily on mathematical expressions which
cannot be easily represented in HTML.
<b>See the printed manual for complete documentation.</b>
@end ifhtml
@ifinfo
Note for Info users:
The definition of mathematical operations involving rank reduction
(e.g., averaging) relies heavily on mathematical expressions which
cannot be easily represented in Info.
See the printed manual for complete documentation.
@end ifinfo
@tex
The masked, weighted average of a variable $x$ can be generally
represented as
$$
\bar x_j = {\sum_{i = 1}^{i = N} \mu_i m_i w_i x_i \over \sum_{i =
1}^{i = N} \mu_i m_i w_i}
$$
where $\bar x_j$ is the $j$'th element of the output hyperslab, $x_i$ is
the $i$'th element of the input hyperslab, $\mu_i$ is 1 unless $x_i$
equals the missing value, $m_i$ is 1 unless $x_i$ is masked, and $w_i$
is the weight.
This formidable-looking formula represents a simple weighted average
whose bells and whistles are all explained below.
It is worth noting, however, that when $\mu_i = m_i = w_i = 1$, the
generic averaging expression above reduces to a simple arithmetic
average.
Furthermore, $m_i = w_i = 1$ for all operators besides @command{ncwa}, but
these variables are included in the discussion below for completeness
and for possible future use in other operators.
The size $J$ of the output hyperslab for a given variable is the product
of all the dimensions of the input variable which are not averaged over.
The size $N$ of the input hyperslab contributing to each $\bar x_j$ is
simply the product of the sizes of all dimensions which are averaged
over (i.e., dimensions specified with @samp{-a}).
Thus $N$ is the number of input elements which @emph{potentially}
contribute to each output element.
An input element $x_i$ contributes to the output element $\bar x_j$ except
in two cases:
@cindex missing values
@enumerate
@item $x_i$ equals the @var{missing value} (@pxref{Missing values}) for the
variable.
@item $x_i$ is located at a point where the masking condition
(@pxref{Masking condition}) is false.
@end enumerate
Points $x_i$ in either of these two categories do not contribute to
$\bar x_j$; they are ignored.
We now define these criteria more rigorously.
Each $x_i$ has an associated Boolean weight $\mu_i$ whose value is 0 or
1 (false or true).
The value of $\mu_i$ is 1 (true) unless $x_i$ equals the @var{missing
value} (@pxref{Missing values}) for the variable.
Thus, for a variable with no @code{missing_value} attribute, $\mu_i$ is
always 1.
All @acronym{NCO} arithmetic operators (@command{ncdiff}, @command{ncra},
@command{ncea}, @command{ncflint}, @command{ncwa}) treat missing values
analogously.
Besides (weighted) averaging, @command{ncwa}, @command{ncra}, and @command{ncea}
also compute some common non-linear operations which may be specified
with the @samp{-y} switch (@pxref{Operation Types}).
The other rank-reducing operations are simple variations of the generic
weighted mean described above.
The total value of $x$ (@code{-y ttl}) is
$$
\bar x_j = \sum_{i = 1}^{i = N} \mu_i m_i w_i x_i
$$
Note that the total is the same as the numerator of the mean
of $x$, and may also be obtained in @command{ncwa} by using the @samp{-N}
switch (@pxref{ncwa netCDF Weighted Averager}).
The minimum value of $x$ (@code{-y min}) is
$$
\bar x_j = \min [ \mu_1 m_1 w_1 x_1, \mu_2 m_2 w_2 x_2, \ldots, \mu_N
m_N w_N x_N ]
$$
Analogously, the maximum value of $x$ (@code{-y max}) is
$$
\bar x_j = \max [ \mu_1 m_1 w_1 x_1, \mu_2 m_2 w_2 x_2, \ldots, \mu_N
m_N w_N x_N ]
$$
Thus the minima and maxima are determined after any weights are applied.
The square of the mean value of $x$ (@code{-y sqravg}) is
$$
\bar x_j = \left( {\sum_{i = 1}^{i = N} \mu_i m_i w_i x_i \over \sum_{i =
1}^{i = N} \mu_i m_i w_i} \right)^2
$$
The mean of the sum of squares of $x$ (@code{-y avgsqr}) is
$$
\bar x_j = {\sum_{i = 1}^{i = N} \mu_i m_i w_i x^2_i \over \sum_{i =
1}^{i = N} \mu_i m_i w_i}
$$
If $x$ represents a deviation from the mean of another variable, $x_i =
y_i - \bar{y}$ (possibly created by @command{ncdiff} in a previous step),
then applying @code{avgsqr} to $x$ computes the approximate variance of
$y$.
Computing the true variance of $y$ requires subtracting 1 from the
denominator, discussed below.
For a large sample size however, the two results will be nearly
indistinguishable.
The root mean square of $x$ (@code{-y rms}) is
$$
\bar x_j = \sqrt{ {\sum_{i = 1}^{i = N} \mu_i m_i w_i x^2_i \over \sum_{i =
1}^{i = N} \mu_i m_i w_i} }
$$
Thus @code{rms} simply computes the square root of the quantity computed
by @code{avgsqr}.
The root mean square of $x$ with standard-deviation-like normalization
(@code{-y rmssdn}) is implemented as follows.
When weights are not specified, this function is the same as the root
mean square of $x$ except one is subtracted from the sum in the
denominator
$$
\bar x_j = \sqrt{ {\sum_{i = 1}^{i = N} \mu_i m_i x^2_i \over -1 +
\sum_{i = 1}^{i = N} \mu_i m_i} }
$$
If $x$ represents the deviation from the mean of another variable,
$x_i = y_i - \bar{y}$, then applying @code{rmssdn} to $x$ computes the
standard deviation of $y$.
In this case the $-1$ in the denominator compensates for the degree of
freedom already used in computing $\bar{y}$ in the numerator.
Consult a statistics book for more details.
When weights are specified it is unclear how to compensate for this
extra degree of freedom.
Weighting the numerator and denominator of the above by $w_i$ and
subtracting one from the denominator is only appropriate when all the
weights are 1.0.
When the weights are arbitrary (e.g., Gaussian weights), subtracting one
from the sum in the denominator does not necessarily remove one degree
of freedom.
Therefore when @code{-y rmssdn} is requested and weights are specified,
@command{ncwa} actually implements the @code{rms} procedure.
@command{ncea} and @command{ncra}, which do not allow weights to be specified,
always implement the @code{rmssdn} procedure when asked.
The square root of the mean of $x$ (@code{-y sqrt}) is
$$
\bar x_j = \sqrt{ {\sum_{i = 1}^{i = N} \mu_i m_i w_i x_i \over \sum_{i =
1}^{i = N} \mu_i m_i w_i} }
$$
@end tex
The definitions of some of these operations are not universally useful.
Mostly they were chosen to facilitate standard statistical
computations within the @acronym{NCO} framework.
We are open to redefining and or adding to the above.
If you are interested in having other statistical quantities
defined in @acronym{NCO} please contact the @acronym{NCO} project (@pxref{Help and Bug
reports}).
@noindent
EXAMPLES
@noindent
Suppose you wish to examine the variable @code{prs_sfc(time,lat,lon)}
which contains a time series of the surface pressure as a function of
latitude and longitude.
Find the minimum value of @code{prs_sfc} over all dimensions:
@example
ncwa -y min -v prs_sfc in.nc foo.nc
@end example
@noindent
Find the maximum value of @code{prs_sfc} at each time interval for each
latitude:
@example
ncwa -y max -v prs_sfc -a lon in.nc foo.nc
@end example
@noindent
Find the root-mean-square value of the time-series of @code{prs_sfc} at
every gridpoint:
@example
ncra -y rms -v prs_sfc in.nc foo.nc
ncwa -y rms -v prs_sfc -a time in.nc foo.nc
@end example
@noindent
The previous two commands give the same answer but @command{ncra} is
preferred because it has a smaller memory footprint.
Also, @command{ncra} leaves the (degenerate) @code{time} dimension in the
output file (which is usually useful) whereas @command{ncwa} removes the
@code{time} dimension.
@noindent
These operations work as expected in multi-file operators.
Suppose that @code{prs_sfc} is stored in multiple timesteps per file
across multiple files, say @file{jan.nc}, @file{feb.nc},
@file{march.nc}.
We can now find the three-month maximum surface pressure at every point.
@example
ncea -y max -v prs_sfc jan.nc feb.nc march.nc out.nc
@end example
@noindent
It is possible to use a combination of these operations to compute
the variance and standard deviation of a field stored in a single file
or across multiple files.
The procedure to compute the temporal standard deviation of the surface
pressure at all points in a single file @file{in.nc} involves three
steps.
@example
ncwa -O -v prs_sfc -a time in.nc out.nc
ncdiff -O -v prs_sfc in.nc out.nc out.nc
ncra -O -y rmssdn out.nc out.nc
@end example
First the output file @file{out.nc} is constructed containing the
temporal mean of @code{prs_sfc}.
Next @file{out.nc} is overwritten with the deviation from the mean.
Finally @file{out.nc} is overwritten with the root-mean-square of
itself.
Note the use of @samp{-y rmssdn} (rather than @samp{-y rms}) in the
final step.
This ensures the standard deviation is correctly normalized by one fewer
than the number of time samples.
The procedure to compute the variance is identical except for the use of
@samp{-y var} instead of @samp{-y rmssdn} in the final step.
The procedure to compute the spatial standard deviation of a field
in a single file @file{in.nc} involves three steps.
@example
ncwa -O -v prs_sfc,gw -a lat,lon -w gw in.nc out.nc
ncdiff -O -v prs_sfc,gw in.nc out.nc out.nc
ncwa -O -y rmssdn -v prs_sfc -a lat,lon -w gw out.nc out.nc
@end example
First the appropriately weighted (with @samp{-w gw}) spatial mean values
are written to the output file.
This example includes the use of a weighting variable specified with
@samp{-w gw}.
When using weights to compute standard deviations one must remember to
include the weights in the initial output files so that they may be used
again in the final step.
The initial output file is then overwritten with the gridpoint
deviations from the spatial mean.
Finally the root-mean-square of the appropriately weighted spatial
deviations is taken.
The procedure to compute the standard deviation of a time-series across
multiple files involves one extra step since all the input must first be
collected into one file.
@example
ncrcat -O -v tpt in.nc in.nc foo1.nc
ncwa -O -a time foo1.nc foo2.nc
ncdiff -O -v tpt foo1.nc foo2.nc foo2.nc
ncra -O -y rmssdn foo2.nc out.nc
@end example
The first step assembles all the data into a single file.
This may require a lot of temporary disk space, but is more or less
required by the @command{ncdiff} operation in the third step.
@node Type conversion, Suppressing interactive prompts, Operation Types, Common features
@section Type conversion
@cindex type conversion
@cartouche
@noindent
Availability: @command{ncea}, @command{ncra}, @command{ncwa}@*
Key switches: None@*
@end cartouche
Type conversion refers to the casting of one fundamental data type
to another, e.g., converting @code{NC_SHORT} (2 bytes) to
@code{NC_DOUBLE} (8 bytes).
As a general rule, type conversions should be avoided for at least two
reasons.
First, type conversions are expensive since they require creating
(temporary) buffers and casting each element of a variable from
the type in which it was stored to some other type.
Second, the dataset's creator probably had a good reason
for storing data as, say, @code{NC_FLOAT} rather than @code{NC_DOUBLE}.
In a scientific framework there is no reason to store data with more
precision than that of the observations themselves.
Thus @acronym{NCO} tries to avoid performing type conversions when performing
arithmetic.
Type conversion during arithmetic in the languages C and Fortran is
performed only when necessary.
All operands in an operation are converted to the most precise type
before the operation takes place.
However, following this parsimonious conversion rule dogmatically
results in numerous headaches.
For example, the average of the two @code{NC_SHORT}s @code{17000s} and
@code{17000s} results in garbage since the intermediate value which
holds their sum is also of type @code{NC_SHORT} and thus cannot
represent values greater than 32,767
@footnote{
@set flg
@tex
$32767 = 2^{15}-1$
@clear flg
@end tex
@ifinfo
@math{32767 = 2^15-1}
@clear flg
@end ifinfo
@ifset flg
@c texi2html does not like @math{}
@math{32767 = 2^15-1}
@clear flg
@end ifset
}.
There are valid reasons for expecting this operation to succeed and
the @acronym{NCO} philosophy is to make operators do what you want, not what is
most pure.
Thus, unlike C and Fortran, but like many other higher level interpreted
languages, @acronym{NCO} arithmetic operators will perform automatic type
conversion when all the following conditions are met
@footnote{Operators began performing type conversions before arithmetic
in @acronym{NCO} version 1.2, August, 2000.
Previous versions never performed unnecessary type conversion for
arithmetic.}:
@enumerate
@item The operator is @command{ncea}, @command{ncra}, or @command{ncwa}.
@command{ncdiff} is not included because subtraction does not benefit from
type conversion.
@item The arithmetic operation could benefit from type conversion.
Operations that could benefit (e.g., from larger representable sums)
include averaging, summation, or any "hard" arithmetic.
Type conversion does not benefit searching for minima and maxima
(@samp{-y min}, or @samp{-y max}).
@item The variable on disk is of type @code{NC_BYTE}, @code{NC_CHAR},
@code{NC_SHORT}, or @code{NC_INT}.
Type @code{NC_DOUBLE} is not type converted because there is no type of
higher precision to convert to.
Type @code{NC_FLOAT} is not type converted because, in our judgement,
the performance penalty of always doing so would outweigh the (extremely
rare) potential benefits.
@end enumerate
When these criteria are all met, the operator converts the variable in
question to type @code{NC_DOUBLE}, performs all the arithmetic
operations, casts the @code{NC_DOUBLE} type back to the original type,
and finally writes the result to disk.
The result written to disk may not be what you expect, because of
incommensurate ranges represented by different types, and because of
(lack of) rounding.
First, continuing the example given above, the average (e.g., @samp{-y avg})
of @code{17000s} and @code{17000s} is written to disk as @code{17000s}.
The type conversion feature of @acronym{NCO} makes this possible since the
arithmetic and intermediate values are stored as @code{NC_DOUBLE}s,
i.e., @code{34000.0d} and only the final result must be represented
as an @code{NC_SHORT}.
Without the type conversion feature of @acronym{NCO}, the average would have
been garbage (albeit predictable garbage near @code{-15768s}).
Similarly, the total (e.g., @samp{-y ttl}) of @code{17000s} and
@code{17000s} written to disk is garbage (actually @code{-31536s}) since
the final result (the true total) of @math{34000} is outside the range
of type @code{NC_SHORT}.
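As a concrete sketch, suppose @code{sht_var} is a hypothetical variable
stored as @code{NC_SHORT}.
The first command below benefits from the automatic promotion to
@code{NC_DOUBLE} during the averaging, while the second may still
produce garbage on disk if the final total exceeds the range of
@code{NC_SHORT}:
@example
ncra -y avg -v sht_var in.nc foo.nc
ncra -y ttl -v sht_var in.nc foo.nc
@end example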
Type conversions use the @code{floor} function to convert floating point
numbers to integers.
Type conversions do not attempt to round floating point numbers to the
nearest integer.
Thus the average of @code{1s} and @code{2s} is computed in double
precision arithmetic as
@math{(@code{1.0d} + @code{2.0d})/2 = @code{1.5d}}.
This result is converted to @code{NC_SHORT} and stored on disk as
@math{@code{floor(1.5d)} = @code{1s}}
@footnote{
@cindex C
The actual type conversions are handled by intrinsic C-language type
conversion, so the @code{floor()} function is not explicitly called, but
the results are the same as if it were.}.
Thus no "rounding up" is performed.
The type conversion rules of C can be stated as follows:
If @var{n} is an integer then any floating point value @var{x}
satisfying
@set flg
@tex
$n \le x < n+1$
@clear flg
@end tex
@ifinfo
@math{n <= x < n+1}
@clear flg
@end ifinfo
@ifset flg
@c texi2html does not like @math{}
@var{n} <= @var{x} < @var{n+1}
@clear flg
@end ifset
will have the value @var{n} when converted to an integer.
@node Suppressing interactive prompts, History attribute, Type conversion, Common features
@section Suppressing interactive prompts
@cindex overwriting files
@cindex appending to files
@cindex force overwrite
@cindex force append
@cindex @code{-O}
@cindex @code{-A}
@cartouche
@noindent
Availability: All operators@*
Key switches: @samp{-O}, @samp{-A}@*
@end cartouche
If the @var{output-file} specified for a command is a pre-existing file,
then the operator will prompt the user whether to overwrite (erase) the
existing @var{output-file}, attempt to append to it, or abort the
operation.
However, in processing large amounts of data, too many interactive
questions can be a curse to productivity.
Therefore @acronym{NCO} also implements two ways to override its own safety
features, the @samp{-O} and @samp{-A} switches.
Specifying @samp{-O} tells the operator to overwrite any existing
@var{output-file} without prompting the user interactively.
Specifying @samp{-A} tells the operator to attempt to append to any
existing @var{output-file} without prompting the user interactively.
These switches are useful in batch environments because they suppress
interactive keyboard input.
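For example, the following sketch (with hypothetical file and variable
names) first overwrites @file{out.nc} unconditionally and then appends
the variable @code{T} to it, without pausing for keyboard input:
@example
ncks -O in.nc out.nc
ncks -A -v T in.nc out.nc
@end example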
@node History attribute, NCAR CSM Conventions, Suppressing interactive prompts, Common features
@section History attribute
@cindex @code{history} attribute
@cindex timestamp
@cindex global attributes
@cindex attributes, global
@cindex @code{-h}
@cartouche
@noindent
Availability: All operators@*
Key switches: @samp{-h}@*
@end cartouche
All operators automatically append a @code{history} global attribute to
any file they modify or create.
The @code{history} attribute consists of a timestamp and the full string
of the invocation command to the operator, e.g., @samp{Mon May 26 20:10:24
1997: ncks in.nc foo.nc}.
The full contents of an existing @code{history} attribute are copied
from the first @var{input-file} to the @var{output-file}.
The timestamps appear in reverse chronological order, with the most
recent timestamp appearing first in the @code{history} attribute.
Since @acronym{NCO} and many other netCDF operators adhere to the @code{history}
convention, the entire data processing path of a given netCDF file may
often be deduced from examination of its @code{history} attribute.
@cindex @command{ncatted}
To avoid information overkill, all operators have an optional switch
(@samp{-h}) to override automatically appending the @code{history}
attribute (@pxref{ncatted netCDF Attribute Editor}).
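For instance, a minimal sketch (hypothetical file names) which copies a
file without appending to its @code{history} attribute is:
@example
ncks -h in.nc out.nc
@end example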
@node NCAR CSM Conventions, ARM Conventions, History attribute, Common features
@section @acronym{NCAR CSM} Conventions
@cindex @acronym{NCAR CSM} conventions
@cindex @acronym{CSM} conventions
@cindex @code{gw}
@cindex @code{ORO}
@cindex @code{date}
@cindex @code{datesec}
@cindex @code{time}
@cartouche
@noindent
Availability: @command{ncdiff}, @command{ncea}, @command{ncecat}, @command{ncflint},
@command{ncra}, @command{ncwa}@*
Key switches: None@*
@end cartouche
@acronym{NCO} recognizes @acronym{NCAR CSM} history tapes, and treats them specially.
If you do not work with @acronym{NCAR CSM} data then you may skip this section.
The @acronym{CSM} netCDF convention is described at
@uref{http://www.cgd.ucar.edu/csm/experiments/output.format.html}.
Most of the @acronym{CSM} netCDF convention is transparent to @acronym{NCO}
@footnote{
The exception is appending/altering the attributes @code{x_op},
@code{y_op}, @code{z_op}, and @code{t_op} for variables which have been
averaged across space and time dimensions.
This feature is scheduled for future inclusion in @acronym{NCO}.
}.
There are no known pitfalls associated with using any @acronym{NCO} operator on
files adhering to this convention
@footnote{
The @acronym{CSM} convention recommends @code{time} be stored in the format
@var{time} since @var{base_time}, e.g., the @code{units} attribute of
@code{time} might be @samp{days since 1992-10-8 15:15:42.5 -6:00}.
A problem with this format occurs when using @command{ncrcat} to
concatenate multiple files together, each with a different
@var{base_time}.
That is, any @code{time} values from files following the first file to
be concatenated should be corrected to the @var{base_time} offset
specified in the @code{units} attribute of @code{time} from the first
file.
The analogous problem has been fixed in @acronym{ARM} files (@pxref{ARM Conventions})
and could be fixed for @acronym{CSM} files if there is sufficient lobbying, and if
Unidata fixes the UDUNITS package to build out of the box on Linux.
}.
However, to facilitate maximum user friendliness, @acronym{NCO} does treat certain
variables in some @acronym{CSM} files specially.
The special functions are not required by the @acronym{CSM} netCDF convention,
but experience has shown they do make life easier.
Currently, @acronym{NCO} determines whether a datafile is a @acronym{CSM} output datafile
simply by checking whether the value of the global attribute
@code{convention} (if it exists) equals @samp{NCAR-CSM}.
Should @code{convention} equal @samp{NCAR-CSM} in the (first)
@var{input-file}, @acronym{NCO} will attempt to treat certain variables specially,
because of their meaning in @acronym{CSM} files.
@acronym{NCO} will not average the following variables often found in @acronym{CSM} files:
@code{ntrm}, @code{ntrn}, @code{ntrk}, @code{ndbase}, @code{nsbase},
@code{nbdate}, @code{nbsec}, @code{mdt}, @code{mhisf}.
These variables contain scalar metadata such as the resolution of the
host @acronym{CSM} model and it makes no sense to change their values.
Furthermore, the @command{ncdiff} operator will not attempt to difference
the following variables: @code{gw}, @code{ORO}, @code{date},
@code{datesec}, @code{hyam}, @code{hybm}, @code{hyai}, @code{hybi}.
These variables represent the Gaussian weights, the orography field,
time fields, and hybrid pressure coefficients.
These are fields which you want to remain unaltered in the differenced
file 99% of the time.
If you decide you would like any of the above @acronym{CSM} fields processed, you
must use @command{ncrename} to rename them first.
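For example, the following sketch (hypothetical file names) renames
@code{ORO} in both input files so that a subsequent @command{ncdiff}
will difference it:
@example
ncrename -v ORO,orography in_1.nc
ncrename -v ORO,orography in_2.nc
ncdiff in_1.nc in_2.nc out.nc
@end example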
@node ARM Conventions, Operator version, NCAR CSM Conventions, Common features
@section @acronym{ARM} Conventions
@cindex @acronym{ARM} conventions
@cindex @code{time_offset}
@cindex @code{base_time}
@cindex @code{time}
@cartouche
@noindent
Availability: @command{ncrcat}@*
Key switches: None@*
@end cartouche
@command{ncrcat} has been programmed to recognize @acronym{ARM} (Atmospheric
Radiation Measurement Program) data files.
If you do not work with @acronym{ARM} data then you may skip this section.
@acronym{ARM} data files store time information in two variables, a scalar,
@code{base_time}, and a record variable, @code{time_offset}.
Subtle but serious problems can arise when these types of files are
blindly concatenated.
Therefore @command{ncrcat} has been specially programmed to be able to
chain together consecutive @acronym{ARM} @var{input-files} and produce
an @var{output-file} which contains the correct time information.
Currently, @command{ncrcat} determines whether a datafile is an @acronym{ARM}
datafile simply by testing for the existence of the variables
@code{base_time}, @code{time_offset}, and the dimension @code{time}.
If these are found in the @var{input-file} then @command{ncrcat} will
automatically perform two non-standard, but hopefully useful,
procedures.
First, @command{ncrcat} will ensure that values of @code{time_offset}
appearing in the @var{output-file} are relative to the @code{base_time}
appearing in the first @var{input-file} (and presumably, though not
necessarily, also appearing in the @var{output-file}).
Second, if a coordinate variable named @code{time} is not found in the
@var{input-files}, then @command{ncrcat} automatically creates the
@code{time} coordinate in the @var{output-file}.
The values of @code{time} are defined by the @acronym{ARM} convention
@math{@var{time} = @var{base_time} + @var{time_offset}}.
Thus, if @var{output-file} contains the @code{time_offset}
variable, it will also contain the @code{time} coordinate.
@cindex @code{history} attribute
@cindex global attributes
@cindex attributes, global
A short message is added to the @code{history} global attribute whenever
these @acronym{ARM}-specific procedures are executed.
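A sketch of such a concatenation, with hypothetical @acronym{ARM} file
names, is simply:
@example
ncrcat arm_970101.nc arm_970102.nc arm_970101_970102.nc
@end example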
@node Operator version, , ARM Conventions, Common features
@section Operator version
@cindex version
@cindex @acronym{RCS}
@cindex @code{-r}
@cartouche
@noindent
Availability: All operators@*
Key switches: @samp{-r}@*
@end cartouche
All operators can be told to print their internal version number and
copyright notice and then quit with the @samp{-r} switch.
The internal version number varies between operators, and indicates the
most recent change to a particular operator's source code.
This is useful in making sure you are working with the most recent
operators.
The version of @acronym{NCO} you are using might be, e.g., 1.2.
However using @samp{-r} on, say, @command{ncks}, will produce something
like @samp{NCO netCDF Operators version 1.2
Copyright (C) 1995--2000 Charlie Zender
ncks version 1.30 (2000/07/31) "Bolivia"}.
This tells you @command{ncks} contains all patches up to version 1.30,
which dates from July 31, 2000.
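For example, to print the version information for @command{ncks}:
@example
ncks -r
@end example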
@node Operators, Contributing, Common features, Top
@chapter Reference manual for all operators
This chapter presents reference pages for each of the operators
individually.
The operators are presented in alphabetical order.
All valid command line switches are included in the syntax statement.
Recall that descriptions of many of these command line switches are
provided only in @ref{Common features}, to avoid redundancy.
Only options specific to, or most useful with, a particular operator are
described in any detail in the sections below.
@menu
* ncatted netCDF Attribute Editor::
* ncdiff netCDF Differencer::
* ncea netCDF Ensemble Averager::
* ncecat netCDF Ensemble Concatenator::
* ncflint netCDF File Interpolator::
* ncks netCDF Kitchen Sink::
* ncra netCDF Record Averager::
* ncrcat netCDF Record Concatenator::
* ncrename netCDF Renamer::
* ncwa netCDF Weighted Averager::
@end menu
@page
@node ncatted netCDF Attribute Editor, ncdiff netCDF Differencer, Operators, Operators
@section @command{ncatted} netCDF Attribute Editor
@cindex attributes
@cindex attribute names
@cindex editing attributes
@findex ncatted
@noindent
SYNTAX
@example
ncatted [-a @var{att_dsc}] [-a @dots{}] [-D] [-h]
[-l @var{path}] [-O] [-p @var{path}] [-R] [-r]
@var{input-file} [@var{output-file}]
@end example
@noindent
DESCRIPTION
@command{ncatted} edits attributes in a netCDF file.
If you are editing attributes then you are spending too much time in the
world of metadata, and @command{ncatted} was written to get you back out as
quickly and painlessly as possible.
@command{ncatted} can @dfn{append}, @dfn{create}, @dfn{delete},
@dfn{modify}, and @dfn{overwrite} attributes (all explained below).
Furthermore, @command{ncatted} allows each editing operation to be applied
to every variable in a file, thus saving you time when you want to
change attribute conventions throughout a file.
@command{ncatted} interprets character attributes as strings.
@cindex @code{history} attribute
@cindex @code{-h}
Because repeated use of @command{ncatted} can considerably increase the size
of the @code{history} global attribute (@pxref{History attribute}), the
@samp{-h} switch is provided to override automatically appending the
command to the @code{history} global attribute in the @var{output-file}.
@cindex missing values
@cindex data, missing
@cindex @code{missing_value} attribute
When @command{ncatted} is used to change the @code{missing_value} attribute,
it changes the associated missing data self-consistently.
If the internal floating point representation of a missing value,
e.g., 1.0e36, differs between two machines then netCDF files produced
on those machines will have incompatible missing values.
This allows @command{ncatted} to change the missing values in files from
different machines to a single value so that the files may then be
concatenated together, e.g., by @command{ncrcat}, without losing any
information.
@xref{Missing values}, for more information.
The key to mastering @command{ncatted} is understanding the meaning of the
structure describing the attribute modification, @var{att_dsc}.
Each @var{att_dsc} contains five elements, which makes using
@command{ncatted} somewhat complicated, but powerful.
The @var{att_dsc} argument structure contains five arguments in the
following order:@*
@var{att_dsc} = @var{att_nm}, @var{var_nm}, @var{mode}, @var{att_type},
@var{att_val}@*
@table @var
@item att_nm
Attribute name.
Example: @code{units}
@item var_nm
Variable name.
Example: @code{pressure}
@item mode
Edit mode abbreviation.
Example: @code{a}.
See below for complete listing of valid values of @var{mode}.
@item att_type
Attribute type abbreviation. Example: @code{c}.
See below for complete listing of valid values of @var{att_type}.
@item att_val
Attribute value. Example: @code{pascal}.
@end table
@noindent
There should be no empty space between these five consecutive
arguments.
The description of these arguments follows in their order of
appearance.
The value of @var{att_nm} is the name of the attribute you want to edit.
The meaning of this should be clear to all users of the @command{ncatted}
operator.
@cindex global attributes
@cindex attributes, global
The value of @var{var_nm} is the name of the variable containing the
attribute (named @var{att_nm}) that you want to edit.
There are two very important and useful exceptions to this rule.
The value of @var{var_nm} can also be used to direct @command{ncatted} to
edit global attributes, or to repeat the editing operation for every
variable in a file.
A value of @var{var_nm} of ``global'' indicates that @var{att_nm} refers
to a global attribute, rather than a particular variable's attribute.
This is the method @command{ncatted} supports for editing global
attributes.
If @var{var_nm} is left blank, on the other hand, then @command{ncatted}
attempts to perform the editing operation on every variable in the file.
This option may be convenient to use if you decide to change the
conventions you use for describing the data.
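For instance, in the following sketch (hypothetical attribute names and
values; the @var{mode} and @var{att_type} abbreviations are explained
below) the first command edits a global attribute and the second edits
the same-named attribute of every variable in the file:
@example
ncatted -a source,global,o,c,"Hypothetical model output" in.nc
ncatted -a missing_value,,o,f,1.0e36 in.nc
@end example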
The value of @var{mode} is a single character abbreviation (@code{a},
@code{c}, @code{d}, @code{m}, or @code{o}) standing for one of
five editing modes:@*
@cindex attributes, appending
@cindex attributes, creating
@cindex attributes, deleting
@cindex attributes, modifying
@cindex attributes, editing
@cindex attributes, overwriting
@table @code
@item a
@dfn{Append}.
Append value @var{att_val} to current @var{var_nm} attribute
@var{att_nm} value, if any.
If @var{var_nm} does not have an attribute @var{att_nm}, there is no
effect.
@item c
@dfn{Create}.
Create variable @var{var_nm} attribute @var{att_nm} with @var{att_val}
if @var{att_nm} does not yet exist.
If @var{var_nm} already has an attribute @var{att_nm}, there is no
effect.
@item d
@dfn{Delete}.
Delete current @var{var_nm} attribute @var{att_nm}.
If @var{var_nm} does not have an attribute @var{att_nm}, there is no
effect.
When @dfn{Delete} mode is selected, the @var{att_type} and @var{att_val}
arguments are superfluous and may be left blank.
@item m
@dfn{Modify}.
Change value of current @var{var_nm} attribute @var{att_nm} to value
@var{att_val}.
If @var{var_nm} does not have an attribute @var{att_nm}, there is no
effect.
@item o
@dfn{Overwrite}.
Write attribute @var{att_nm} with value @var{att_val} to variable
@var{var_nm}, overwriting existing attribute @var{att_nm}, if any.
This is the default mode.
@end table
The value of @var{att_type} is a single character abbreviation (@code{f},
@code{d}, @code{l}, @code{i}, @code{s}, @code{c}, or @code{b}) standing for one of
the six primitive netCDF data types:@*
@table @code
@item f
@dfn{Float}.
Value(s) specified in @var{att_val} will be stored as netCDF intrinsic
type NC_FLOAT.
@item d
@dfn{Double}.
Value(s) specified in @var{att_val} will be stored as netCDF intrinsic
type NC_DOUBLE.
@item i
@dfn{Integer}.
Value(s) specified in @var{att_val} will be stored as netCDF intrinsic
type NC_INT.
@item l
@dfn{Long}.
Value(s) specified in @var{att_val} will be stored as netCDF intrinsic
type NC_LONG.
@item s
@dfn{Short}.
Value(s) specified in @var{att_val} will be stored as netCDF intrinsic
type NC_SHORT.
@item c
@dfn{Char.}
Value(s) specified in @var{att_val} will be stored as netCDF intrinsic
type NC_CHAR.
@item b
@dfn{Byte}.
Value(s) specified in @var{att_val} will be stored as netCDF intrinsic
type NC_BYTE.
@end table
@noindent
The specification of @var{att_type} is optional in @dfn{Delete} mode.
The value of @var{att_val} is what you want to change attribute
@var{att_nm} to contain.
The specification of @var{att_val} is optional in @dfn{Delete} mode.
Attribute values for all types besides NC_CHAR must have an attribute
length of at least one.
Thus @var{att_val} may be a single value or one-dimensional array of
elements of type @code{att_type}.
If the @var{att_val} is not set or is set to empty space,
and the @var{att_type} is NC_CHAR, e.g., @code{-a units,T,o,c,""} or
@code{-a units,T,o,c,}, then the corresponding attribute is set to
have zero length.
When specifying an array of values, it is safest to enclose
@var{att_val} in double or single quotes, e.g.,
@code{-a levels,T,o,s,"1,2,3,4"} or
@code{-a levels,T,o,s,'1,2,3,4'}.
The quotes are strictly unnecessary around @var{att_val} except
when @var{att_val} contains characters which would confuse the calling
shell, such as spaces, commas, and wildcard characters.
@cindex Perl
@cindex @acronym{ASCII}
@acronym{NCO} processing of NC_CHAR attributes is a bit like Perl in that
it attempts to do what you want by default (but this sometimes causes
unexpected results if you want unusual data storage).
@cindex @code{printf()}
@cindex @code{\n} (@acronym{ASCII} LF, linefeed)
@cindex characters, special
@cindex @code{\t} (@acronym{ASCII} HT, horizontal tab)
If the @var{att_type} is NC_CHAR then the argument is interpreted as a
string and it may contain C-language escape sequences, e.g., @code{\n},
which @acronym{NCO} will interpret before writing anything to disk.
@acronym{NCO} translates valid escape sequences and stores the
appropriate @acronym{ASCII} code instead.
Since two-byte escape sequences, e.g., @code{\n}, represent one-byte
@acronym{ASCII} codes, e.g., @acronym{ASCII} 10 (decimal), the stored
string attribute is one byte shorter than the input string length for
each embedded escape sequence.
The most frequently used C-language escape sequences are @code{\n} (for
linefeed) and @code{\t} (for horizontal tab).
These sequences in particular allow convenient editing of formatted text
attributes.
@cindex @code{\a} (@acronym{ASCII} BEL, bell)
@cindex @code{\b} (@acronym{ASCII} BS, backspace)
@cindex @code{\f} (@acronym{ASCII} FF, formfeed)
@cindex @code{\r} (@acronym{ASCII} CR, carriage return)
@cindex @code{\v} (@acronym{ASCII} VT, vertical tab)
@cindex @code{\\} (@acronym{ASCII} \, backslash)
The other valid @acronym{ASCII} codes are @code{\a}, @code{\b}, @code{\f},
@code{\r}, @code{\v}, and @code{\\}.
@xref{ncks netCDF Kitchen Sink}, for more examples of string formatting
(with the @command{ncks} @samp{-s} option) with special characters.
@cindex @code{\'} (protected end quote)
@cindex @code{\"} (protected double quote)
@cindex @code{\?} (protected question mark)
@cindex @code{\\} (protected backslash)
@cindex @code{'} (end quote)
@cindex @code{"} (double quote)
@cindex @code{?} (question mark)
@cindex @code{\} (backslash)
@cindex special characters
@cindex @acronym{ASCII}
Analogous to @code{printf}, other special characters are also allowed by
@command{ncatted} if they are "protected" by a backslash.
The characters @code{"}, @code{'}, @code{?}, and @code{\} may be
input to the shell as @code{\"}, @code{\'}, @code{\?}, and @code{\\}.
@acronym{NCO} simply strips away the leading backslash from these characters
before editing the attribute.
No other characters require protection by a backslash.
Backslashes which precede any other character (e.g., @code{3}, @code{m},
@code{$}, @code{|}, @code{&}, @code{@@}, @code{%}, @code{@{}, and
@code{@}}) will not be filtered and will be included in the attribute.
@cindex strings
@cindex NUL-termination
@cindex NUL
@cindex @code{0} (NUL)
Note that the NUL character @code{\0} which terminates C language
strings is assumed and need not be explicitly specified.
If @code{\0} is input, it will not be translated (because it would
terminate the string in an additional location).
Because of these context-sensitive rules, if you wish to use an attribute of
type NC_CHAR to store data, rather than text strings, you should use
@command{ncatted} with care.
@noindent
EXAMPLES
Append the string "Data version 2.0\n" to the global attribute
@code{history}:
@example
ncatted -O -a history,global,a,c,"Data version 2.0\n" in.nc
@end example
Note the use of embedded C language @code{printf()}-style escape
sequences.
Change the value of the @code{long_name} attribute for variable @code{T}
from whatever it currently is to "temperature":
@example
ncatted -O -a long_name,T,o,c,temperature in.nc
@end example
Delete all existing @code{units} attributes:
@example
ncatted -O -a units,,d,, in.nc
@end example
@noindent
The value of @var{var_nm} was left blank in order to select all
variables in the file.
The values of @var{att_type} and @var{att_val} were left blank because
they are superfluous in @dfn{Delete} mode.
@cindex @code{units}
Modify all existing @code{units} attributes to "meter second-1"
@example
ncatted -O -a units,,m,c,"meter second-1" in.nc
@end example
Overwrite the @code{quanta} attribute of variable
@code{energy} with an array of four integers.
@example
ncatted -O -a quanta,energy,o,s,"010,101,111,121" in.nc
@end example
Demonstrate input of C-language escape sequences (e.g., @code{\n}) and
other special characters (e.g., @code{\"})
@example
ncatted -h -a special,global,o,c,
'\nDouble quote: \"\nTwo consecutive double quotes: \"\"\n
Single quote: Beyond my shell abilities!\nBackslash: \\\n
Two consecutive backslashes: \\\\\nQuestion mark: \?\n' in.nc
@end example
Note that the entire attribute is protected from the shell by single
quotes.
These outer single quotes are necessary for interactive use, but may be
omitted in batch scripts.
@page
@node ncdiff netCDF Differencer, ncea netCDF Ensemble Averager, ncatted netCDF Attribute Editor, Operators
@section @command{ncdiff} netCDF Differencer
@cindex subtraction
@cindex differencing data
@cindex anomalies
@findex ncdiff
@noindent
SYNTAX
@example
ncdiff [-A] [-C] [-c] [-D @var{dbg}]
[-d @var{dim},[@var{min}][,[@var{max}]]] [-F] [-h] [-l @var{path}]
[-O] [-p @var{path}] [-R] [-r] [-v @var{var}[,@dots{}]]
[-x] @var{file_1} @var{file_2} @var{file_3}
@end example
@noindent
DESCRIPTION
@command{ncdiff} subtracts variables in @var{file_2} from the corresponding
variables (those with the same name) in @var{file_1} and stores the
results in @var{file_3}.
@cindex broadcasting variables
Variables in @var{file_2} are @dfn{broadcast} to conform to the
corresponding variable in @var{file_1} if necessary.
Broadcasting a variable means creating data in non-existing dimensions
from the data in existing dimensions.
For example, a two dimensional variable in @var{file_2} can be
subtracted from a four, three, or two (but not one or zero)
dimensional variable (of the same name) in @var{file_1}.
This functionality allows the user to compute anomalies from the mean.
Note that variables in @var{file_1} are @emph{not} broadcast to conform
to the dimensions in @var{file_2}.
@cindex rank
Thus, for @command{ncdiff}, the number of dimensions, or @dfn{rank}, of any
processed variable in @var{file_1} must be greater than or equal to the
rank of the same variable in @var{file_2}.
Furthermore, the size of all dimensions common to both @var{file_1} and
@var{file_2} must be equal.
When computing anomalies from the mean it is often the case that
@var{file_2} was created by applying an averaging operator to a file
with the same dimensions as @var{file_1}, if not @var{file_1} itself.
In these cases, creating @var{file_2} with @command{ncra} rather than
@command{ncwa} will cause the @command{ncdiff} operation to fail.
For concreteness say the record dimension in @var{file_1} is
@code{time}.
If @var{file_2} were created by averaging @var{file_1} over the
@code{time} dimension with the @command{ncra} operator rather than with the
@command{ncwa} operator, then @var{file_2} will have a @code{time}
dimension of size 1 rather than having no @code{time} dimension at all
@cindex degenerate dimensions
@footnote{This is because @command{ncra} collapses the record dimension
to a size of 1 (making it a @dfn{degenerate} dimension), but does not
remove it, while @command{ncwa} removes all dimensions it averages over.
In other words, @command{ncra} changes the size but not the rank of
variables, while @command{ncwa} changes both the size and the rank of
variables.}.
In this case the input files to @command{ncdiff}, @var{file_1} and
@var{file_2}, will have unequally sized @code{time} dimensions which
causes @command{ncdiff} to fail.
To prevent this from occurring, use @command{ncwa} to remove the @code{time}
dimension from @var{file_2}.
An example is given below.
@command{ncdiff} will never difference coordinate variables or variables of
type @code{NC_CHAR} or @code{NC_BYTE}.
This ensures that coordinates (e.g., latitude and longitude) are
physically meaningful in the output file, @var{file_3}.
This behavior is hardcoded.
@cindex @acronym{NCAR CSM} conventions
@cindex @acronym{CSM} conventions
@command{ncdiff} applies special rules to some @acronym{NCAR CSM} fields (e.g.,
@code{ORO}).
See @ref{NCAR CSM Conventions} for a complete description.
Finally, we note that @command{ncflint} (@pxref{ncflint netCDF File
Interpolator}) can also perform file subtraction (as well as
addition, multiplication, and interpolation).
@noindent
EXAMPLES
Say files @file{85_0112.nc} and @file{86_0112.nc} each contain 12 months
of data.
Compute the change in the monthly averages from 1985 to 1986:
@example
ncdiff 86_0112.nc 85_0112.nc 86m85_0112.nc
@end example
The following examples demonstrate the broadcasting feature of
@command{ncdiff}.
Say we wish to compute the monthly anomalies of @code{T} from the yearly
average of @code{T} for the year 1985.
First we create the 1985 average from the monthly data, which is stored
with the record dimension @code{time}.
@example
ncra 85_0112.nc 85.nc
ncwa -O -a time 85.nc 85.nc
@end example
@noindent
The second command, @command{ncwa}, gets rid of the @code{time} dimension
of size 1 that @command{ncra} left in @file{85.nc}.
Now none of the variables in @file{85.nc} has a @code{time} dimension.
A quicker way to accomplish this is to use @command{ncwa} from the
beginning:
@example
ncwa -a time 85_0112.nc 85.nc
@end example
@noindent
We are now ready to use @command{ncdiff} to compute the anomalies for 1985:
@example
ncdiff -v T 85_0112.nc 85.nc t_anm_85_0112.nc
@end example
@noindent
Each of the 12 records in @file{t_anm_85_0112.nc} now contains the
monthly deviation of @code{T} from the annual mean of @code{T} for each
gridpoint.
Say we wish to compute the monthly gridpoint anomalies from the zonal
annual mean.
A @dfn{zonal mean} is a quantity that has been averaged over the
longitudinal (or @var{x}) direction.
First we use @command{ncwa} to average over the longitudinal direction
@code{lon}, creating @file{xavg_85.nc}, the zonal mean of @file{85.nc}.
Then we use @command{ncdiff} to subtract the zonal annual means from the
monthly gridpoint data:
@example
ncwa -a lon 85.nc xavg_85.nc
ncdiff 85_0112.nc xavg_85.nc tx_anm_85_0112.nc
@end example
@noindent
Assuming @file{85_0112.nc} has dimensions @code{time} and @code{lon},
this example only works if @file{xavg_85.nc} has no @code{time} or
@code{lon} dimension.
As a final example, say we have five years of monthly data (i.e., 60
months) stored in @file{8501_8912.nc} and we wish to create a file
which contains the twelve month seasonal cycle of the average monthly
anomaly from the five-year mean of this data.
The following method is just one permutation of many which will
accomplish the same result.
First use @command{ncwa} to create the file containing the five-year mean:
@example
ncwa -a time 8501_8912.nc 8589.nc
@end example
@noindent
Next use @command{ncdiff} to create a file containing the difference of
each month's data from the five-year mean:
@example
ncdiff 8501_8912.nc 8589.nc t_anm_8501_8912.nc
@end example
@noindent
Now use @command{ncks} to group the five January anomalies together in one
file, and use @command{ncra} to create the average anomaly for all five
Januarys.
These commands are embedded in a shell loop so they are repeated for all
twelve months:
@example
foreach idx (01 02 03 04 05 06 07 08 09 10 11 12)
ncks -F -d time,$idx,,12 t_anm_8501_8912.nc foo.$idx
ncra foo.$idx t_anm_8589_$idx.nc
end
@end example
@noindent
Note that @command{ncra} understands the @code{stride} argument so the two
commands inside the loop may be combined into the single command
@example
ncra -F -d time,$idx,,12 t_anm_8501_8912.nc foo.$idx
@end example
@noindent
Finally, use @command{ncrcat} to concatenate the 12 average monthly anomaly
files into one twelve-record file which contains the entire seasonal
cycle of the monthly anomalies:
@example
ncrcat t_anm_8589_??.nc t_anm_8589_0112.nc
@end example
@noindent
@page
@node ncea netCDF Ensemble Averager, ncecat netCDF Ensemble Concatenator, ncdiff netCDF Differencer, Operators
@section @command{ncea} netCDF Ensemble Averager
@cindex averaging data
@cindex ensemble average
@findex ncea
@noindent
SYNTAX
@example
ncea [-A] [-C] [-c] [-D @var{dbg}]
[-d @var{dim},[@var{min}][,[@var{max}]]] [-F] [-h] [-l @var{path}]
[-n @var{loop}] [-O] [-p @var{path}] [-R] [-r] [-v @var{var}[,@dots{}]]
[-x] [-y @var{op_typ}] @var{input-files} @var{output-file}
@end example
@noindent
DESCRIPTION
@command{ncea} performs gridpoint averages of variables across an arbitrary
number (an @dfn{ensemble}) of input files, with each file receiving an
equal weight in the average.
@cindex ensemble
Each variable in the @var{output-file} will be the same size as the same
variable in any one of the @var{input-files}, and all
@var{input-files} must be the same size.
Whereas @command{ncra} only performs averages over the record dimension
(e.g., time), and weights each record in the record dimension evenly,
@command{ncea} averages entire files, and weights each file evenly.
All dimensions, including the record dimension, are treated identically
and preserved in the @var{output-file}.
@xref{Averaging vs. Concatenating}, for a description of the
distinctions between the various averagers and concatenators.
The file is the logical unit of organization for the results of many
scientific studies.
Often one wishes to generate a file which is the gridpoint average of
many separate files.
This may be to reduce statistical noise by combining the results of a
large number of experiments, or it may simply be a step in a procedure
whose goal is to compute anomalies from a mean state.
In any case, when one desires to generate a file whose properties are
the mean of all the input files, then @command{ncea} is the operator to
use.
@command{ncea} assumes coordinate variables are properties common to all of
the experiments and so does not average them across files.
Instead, @command{ncea} copies the values of the coordinate variables from
the first input file to the output file.
@noindent
EXAMPLES
Consider a model experiment which generated five realizations of one
year of data, say 1985.
You can imagine that the experimenter slightly perturbs the
initial conditions of the problem before generating each new solution.
Assume each file contains all twelve months (a seasonal cycle) of data
and we want to produce a single file containing the ensemble average
(mean) seasonal cycle.
Here the numeric filename suffix denotes the experiment number
(@emph{not} the month):
@example
ncea 85_01.nc 85_02.nc 85_03.nc 85_04.nc 85_05.nc 85.nc
ncea 85_0[1-5].nc 85.nc
ncea -n 5,2,1 85_01.nc 85.nc
@end example
@noindent
These three commands produce identical answers.
@xref{Specifying input files}, for an explanation of the distinctions
between these methods.
The output file, @file{85.nc}, is the same size as the input files.
It contains 12 months of data (which might or might not be stored in the
record dimension, depending on the input files), but each value in the
output file is the average of the five values in the input files.
In the previous example, the user could have obtained the ensemble
average values in a particular spatio-temporal region by adding a
hyperslab argument to the command, e.g.,
@example
ncea -d time,0,2 -d lat,-23.5,23.5 85_??.nc 85.nc
@end example
@noindent
In this case the output file would contain only three slices of data in
the @var{time} dimension.
These three slices are the average of the first three slices from the
input files.
Additionally, only data inside the tropics is included.
@page
@node ncecat netCDF Ensemble Concatenator, ncflint netCDF File Interpolator, ncea netCDF Ensemble Averager, Operators
@section @command{ncecat} netCDF Ensemble Concatenator
@cindex concatenation
@cindex ensemble concatenation
@findex ncecat
@noindent
SYNTAX
@example
ncecat [-A] [-C] [-c] [-D @var{dbg}]
[-d @var{dim},[@var{min}][,[@var{max}]]] [-F] [-h] [-l @var{path}]
[-n @var{loop}] [-O] [-p @var{path}] [-R] [-r] [-v @var{var}[,@dots{}]]
[-x] @var{input-files} @var{output-file}
@end example
@noindent
DESCRIPTION
@command{ncecat} concatenates an arbitrary number of input files into a
single output file.
Input files are glued together by creating a record dimension in the
output file.
Input files must be the same size.
Each input file is stored consecutively as a single record in the output
file.
Thus, the size of the output file is the sum of the sizes of the input
files.
@xref{Averaging vs. Concatenating}, for a description of the
distinctions between the various averagers and concatenators.
Consider five realizations, @file{85a.nc}, @file{85b.nc}, @dots{}
@file{85e.nc} of 1985 predictions from the same climate model.
Then @code{ncecat 85?.nc 85_ens.nc} glues the individual realizations
together into the single file, @file{85_ens.nc}.
If an input variable was dimensioned [@code{lat},@code{lon}], it will have
dimensions [@code{record},@code{lat},@code{lon}] in the output file.
A restriction of @command{ncecat} is that the hyperslabs of the processed
variables must be the same from file to file.
Normally this means all the input files are the same size, and contain
data on different realizations of the same variables.
@noindent
EXAMPLES
Consider a model experiment which generated five realizations of one
year of data, say 1985.
You can imagine that the experimenter slightly perturbs the
initial conditions of the problem before generating each new solution.
Assume each file contains all twelve months (a seasonal cycle) of data
and we want to produce a single file containing all the seasonal cycles.
Here the numeric filename suffix denotes the experiment number
(@emph{not} the month):
@example
ncecat 85_01.nc 85_02.nc 85_03.nc 85_04.nc 85_05.nc 85.nc
ncecat 85_0[1-5].nc 85.nc
ncecat -n 5,2,1 85_01.nc 85.nc
@end example
@noindent
These three commands produce identical answers.
@xref{Specifying input files}, for an explanation of the distinctions
between these methods.
The output file, @file{85.nc}, is five times the size of a single
@var{input-file}.
It contains 60 months of data (which might or might not be stored in the
record dimension, depending on the input files).
@page
@node ncflint netCDF File Interpolator, ncks netCDF Kitchen Sink, ncecat netCDF Ensemble Concatenator, Operators
@section @command{ncflint} netCDF File Interpolator
@cindex interpolation
@cindex adding data
@cindex multiplying data
@cindex addition
@findex ncflint
@noindent
SYNTAX
@example
ncflint [-A] [-C] [-c] [-D @var{dbg}]
[-d @var{dim},[@var{min}][,[@var{max}]]] [-F] [-h]
[-i @var{var},@var{val3}]
[-l @var{path}] [-O] [-p @var{path}] [-R] [-r] [-v @var{var}[,@dots{}]]
[-w @var{wgt1}[,@var{wgt2}]] [-x] @var{file_1} @var{file_2} @var{file_3}
@end example
@noindent
DESCRIPTION
@command{ncflint} creates an output file that is a linear combination of
the input files.
This linear combination can be a weighted average, a normalized weighted
average, or an interpolation of the input files.
Coordinate variables are not acted upon in any case; they are simply
copied from @var{file_1}.
There are two conceptually distinct methods of using @command{ncflint}.
The first method is to specify the weight each input file is to have in
the output file.
In this method, the value @var{val3} of a variable in the output file
@var{file_3} is determined from its values @var{val1} and @var{val2} in the
two input files according to
@set flg
@tex
$val3 = wgt1 \times val1 + wgt2 \times val2$
@clear flg
@end tex
@ifinfo
@math{@var{val3} = @var{wgt1}*@var{val1} + @var{wgt2}*@var{val2}}
@clear flg
@end ifinfo
@ifset flg
@c texi2html does not like @math{}
@var{val3} = @var{wgt1}*@var{val1} + @var{wgt2}*@var{val2}
@clear flg
@end ifset
.
Here at least @var{wgt1}, and, optionally, @var{wgt2}, are specified on
the command line with the @samp{-w} switch.
If only @var{wgt1} is specified then @var{wgt2} is automatically
computed as @math{@var{wgt2} = 1 - @var{wgt1}}.
Note that weights larger than 1 are allowed.
Thus it is possible to specify @math{@var{wgt1} = 2} and
@math{@var{wgt2} = -3}.
One can use this functionality to multiply all the values in a given
file by a constant.
The second method of using @command{ncflint} is specifying the
interpolation option with @samp{-i}.
This is really the inverse of the first method in the following sense.
When the user specifies the weights directly, @command{ncflint} has no
work to do besides multiplying the input values by their respective
weights and adding the results together to produce the output values.
This assumes it is the weights that are known a priori.
@cindex arrival value
In another class of cases it is the @dfn{arrival value} (i.e.,
@var{val3}) of a particular variable @var{var} that is known a priori.
In this case, the implied weights can always be inferred by examining
the values of @var{var} in the input files.
This results in one equation in two unknowns, @var{wgt1} and @var{wgt2}:
@set flg
@tex
$val3 = wgt1 \times val1 + wgt2 \times val2$
@clear flg
@end tex
@ifinfo
@math{@var{val3} = @var{wgt1}*@var{val1} + @var{wgt2}*@var{val2}}
@clear flg
@end ifinfo
@ifset flg
@c texi2html does not like @math{}
@var{val3} = @var{wgt1}*@var{val1} + @var{wgt2}*@var{val2}
@clear flg
@end ifset
.
Unique determination of the weights requires imposing the additional
constraint of normalization on the weights:
@math{@var{wgt1} + @var{wgt2} = 1}.
Thus, to use the interpolation option, the user specifies @var{var}
and @var{val3} with the @samp{-i} option.
@command{ncflint} will compute @var{wgt1} and @var{wgt2}, and use these
weights on all variables to generate the output file.
Although @var{var} may have any number of dimensions in the input
files, it must represent a single, scalar value.
@cindex degenerate dimensions
Thus any dimensions associated with @var{var} must be @dfn{degenerate},
i.e., of size one.
If neither @samp{-i} nor @samp{-w} is specified on the command line,
@command{ncflint} defaults to weighting each input file equally in the
output file.
This is equivalent to specifying @samp{-w 0.5} or @samp{-w 0.5,0.5}.
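For example, the following two commands (hypothetical file names) are
equivalent:
@example
ncflint in_1.nc in_2.nc out.nc
ncflint -w 0.5,0.5 in_1.nc in_2.nc out.nc
@end example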
Attempting to specify both @samp{-i} and @samp{-w} methods in the same
command is an error.
@command{ncflint} is programmed not to interpolate variables of type
@code{NC_CHAR} and @code{NC_BYTE}.
This behavior is hardcoded.
@noindent
EXAMPLES
Although it has other uses, the interpolation feature was designed
to interpolate @var{file_3} to a time between existing files.
Consider input files @file{85.nc} and @file{87.nc} containing variables
describing the state of a physical system at times @math{@code{time} =
85} and @math{@code{time} = 87}.
Assume each file contains its timestamp in the scalar variable
@code{time}.
Then, to linearly interpolate to a file @file{86.nc} which describes
the state of the system at time at @code{time} = 86, we would use
@example
ncflint -i time,86 85.nc 87.nc 86.nc
@end example
Say you have observational data covering January and April 1985 in two
files named @file{85_01.nc} and @file{85_04.nc}, respectively.
Then you can estimate the values for February and March by interpolating
the existing data as follows.
Combine @file{85_01.nc} and @file{85_04.nc} in a 2:1 ratio to make
@file{85_02.nc}:
@example
ncflint -w 0.667 85_01.nc 85_04.nc 85_02.nc
ncflint -w 0.667,0.333 85_01.nc 85_04.nc 85_02.nc
@end example
Multiply @file{85.nc} by 3 and by @minus{}2 and add them together to
make @file{tst.nc}:
@example
ncflint -w 3,-2 85.nc 85.nc tst.nc
@end example
@noindent
This is an example of a null operation, so @file{tst.nc} should be
identical (within machine precision) to @file{85.nc}.
Add @file{85.nc} to @file{86.nc} to obtain @file{85p86.nc},
then subtract @file{86.nc} from @file{85.nc} to obtain @file{85m86.nc}
@example
ncflint -w 1,1 85.nc 86.nc 85p86.nc
ncflint -w 1,-1 85.nc 86.nc 85m86.nc
ncdiff 85.nc 86.nc 85m86.nc
@end example
@noindent
Thus @command{ncflint} can be used to mimic @command{ncdiff} operations.
@cindex broadcasting variables
However this is not a good idea in practice because @command{ncflint}
does not broadcast (@pxref{ncdiff netCDF Differencer}) conforming
variables during arithmetic.
Thus the final two commands would produce identical results except that
@command{ncflint} would fail if any variables needed to be broadcast.
@cindex @code{units}
Rescale the dimensional units of the surface pressure @code{prs_sfc}
from Pascals to hectopascals (millibars)
@example
ncflint -O -C -v prs_sfc -w 0.01,0.0 in.nc in.nc out.nc
ncatted -O -a units,prs_sfc,o,c,millibar out.nc
@end example
@noindent
@page
@node ncks netCDF Kitchen Sink, ncra netCDF Record Averager, ncflint netCDF File Interpolator, Operators
@section @command{ncks} netCDF Kitchen Sink
@cindex kitchen sink
@cindex printing files contents
@cindex printing variables
@findex ncks
@noindent
SYNTAX
@example
ncks [-A] [-a] [-C] [-c] [-D]
[-d @var{dim},[@var{min}][,[@var{max}]][,[@var{stride}]]]
[-F] [-H] [-h] [-l @var{path}] [-M] [-m] [-O] [-p @var{path}] [-q]
[-R] [-r] [-s @var{format}] [-u] [-v @var{var}[,@dots{}]] [-x]
@var{input-file} [@var{output-file}]
@end example
@noindent
DESCRIPTION
@command{ncks} combines selected features of @command{ncdump},
@command{ncextr}, and the nccut and ncpaste specifications into one
versatile utility.
@command{ncks} extracts a subset of the data from @var{input-file} and
either prints it as @acronym{ASCII} text to stdout, or writes (or pastes) it to
@var{output-file}, or both.
@command{ncks} will print netCDF data in @acronym{ASCII} format to @code{stdout},
like @command{ncdump}, but with these differences:
@command{ncks} prints data in a tabular format intended to be easy to
search for the data you want, one datum per screen line, with all
dimension subscripts and coordinate values (if any) preceding the datum.
Option @samp{-s} allows the user to format the data using C-style
format strings.
Options @samp{-a}, @samp{-F}, @samp{-H}, @samp{-M}, @samp{-m},
@samp{-q}, @samp{-s}, and @samp{-u} control the formatted appearance of
the data.
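For instance, a sketch (assuming @file{in.nc} contains a coordinate
named @code{lat}) which prints each @code{lat} value with a C-style
format string is:
@example
ncks -H -C -v lat -s "%f\n" in.nc
@end example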
@cindex global attributes
@cindex attributes, global
@command{ncks} will extract (and optionally create a new netCDF file
comprised of) only selected variables from the input file, like
@command{ncextr} but with these differences: Only variables and
coordinates may be specifically included or excluded---all global
attributes and any attribute associated with an extracted variable will
be copied to the screen and/or output netCDF file.
Options @samp{-c}, @samp{-C}, @samp{-v}, and @samp{-x} control which
variables are extracted.
@command{ncks} will extract hyperslabs from the specified variables.
In fact @command{ncks} implements the nccut specification exactly.
Option @samp{-d} controls the hyperslab specification.
Input dimensions that are not associated with any output variable will
not appear in the output netCDF.
This feature removes superfluous dimensions from a netCDF file.
@cindex appending data
@cindex merging files
@command{ncks} will append variables and attributes from the
@var{input-file} to @var{output-file} if @var{output-file} is a
pre-existing netCDF file whose relevant dimensions conform to dimension
sizes of @var{input-file}.
The append features of @command{ncks} are intended to provide a rudimentary
means of adding data from one netCDF file to another, conforming, netCDF
file.
When naming conflicts exists between the two files, data in
@var{output-file} is usually overwritten by the corresponding data from
@var{input-file}.
Thus it is recommended that the user backup @var{output-file} in case
valuable data is accidentally overwritten.
If @var{output-file} exists, the user will be queried whether to
@dfn{overwrite}, @dfn{append}, or @dfn{exit} the @command{ncks} call
completely.
Choosing @dfn{overwrite} destroys the existing @var{output-file} and
creates an entirely new one from the output of the @command{ncks} call.
Append has differing effects depending on the uniqueness of the
variables and attributes output by @command{ncks}: If a variable or
attribute extracted from @var{input-file} does not have a name conflict with
the members of @var{output-file} then it will be added to @var{output-file}
without overwriting any of the existing contents of @var{output-file}.
In this case the relevant dimensions must agree (conform) between the
two files; new dimensions are created in @var{output-file} as required.
@cindex global attributes
@cindex attributes, global
When a name conflict occurs, a global attribute from @var{input-file}
will overwrite the corresponding global attribute from
@var{output-file}.
If the name conflict occurs for a non-record variable, then the
dimensions and type of the variable (and of its coordinate dimensions,
if any) must agree (conform) in both files.
Then the variable values (and any coordinate dimension values)
from @var{input-file} will overwrite the corresponding variable values (and
coordinate dimension values, if any) in @var{output-file}
@footnote{
Those familiar with netCDF mechanics might wish to know what is
happening here: @command{ncks} does not attempt to redefine the variable in
@var{output-file} to match its definition in @var{input-file}, @command{ncks} merely
copies the values of the variable and its coordinate dimensions, if any,
from @var{input-file} to @var{output-file}.
}.
Since there can only be one record dimension in a file, the record
dimension must have the same name (but not necessarily the same size) in
both files if a record dimension variable is to be appended.
If the record dimensions are of differing sizes, the record dimension of
@var{output-file} will become the greater of the two record dimension sizes,
the record variable from @var{input-file} will overwrite any counterpart in
@var{output-file} and fill values will be written to any gaps left in the
rest of the record variables (I think).
In all cases variable attributes in @var{output-file} are superseded by
attributes of the same name from @var{input-file}, and left alone if
there is no name conflict.
Some users may wish to avoid interactive @command{ncks} queries about
whether to overwrite existing data.
For example, batch scripts will fail if @command{ncks} does not receive
responses to its queries.
Options @samp{-O} and @samp{-A} are available to force overwriting
existing files and variables, respectively.
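For example, the following sketches overwrite @file{out.nc} without
prompting, and append to a pre-existing @file{out.nc} without prompting,
respectively:
@example
ncks -O -v three_dmn_var in.nc out.nc
ncks -A -v three_dmn_var in.nc out.nc
@end example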
@unnumberedsubsec Options specific to @command{ncks}
The following list provides a short summary of the features unique to
@command{ncks}.
Features common to many operators are described in
@ref{Common features}.
@table @samp
@cindex alphabetize output
@cindex sort alphabetically
@cindex @code{-a}
@item -a
Do not alphabetize extracted fields.
By default, the specified output variables are extracted, printed, and
written to disk in alphabetical order.
This tends to make long output lists easier to search for particular
variables.
Specifying @code{-a} results in the variables being extracted, printed,
and written to disk in the order in which they were saved in the input
file.
Thus @code{-a} retains the original ordering of the variables.
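For example, this sketch copies @file{in.nc} to @file{out.nc} while
preserving the on-disk ordering of the variables instead of
alphabetizing them:
@example
ncks -a in.nc out.nc
@end example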
@cindex stride
@item -d @var{dim},[@var{min}][,[@var{max}]][,[@var{stride}]]
Add @dfn{stride} argument to hyperslabber.
@xref{Stride}, for a complete description of the @var{stride} argument.
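As a sketch, the following extracts every other @code{time} record
(indices 0, 2, 4, @dots{}), assuming @file{in.nc} has a @code{time}
record dimension:
@example
ncks -d time,0,,2 in.nc out.nc
@end example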
@cindex @code{-H}
@item -H
Print data to screen.
Unless otherwise specified (with @code{-s}), each element of the data
hyperslab is printed on a separate line containing the names, indices,
and values, if any, of all of the variable's dimensions.
The dimension and variable indices refer to the location of the
corresponding data element with respect to the variable as stored on
disk (i.e., not the hyperslab).
@example
% ncks -H -C -v three_dmn_var in.nc
lat[0]=-90 lev[0]=100 lon[0]=0 three_dmn_var[0]=0
lat[0]=-90 lev[0]=100 lon[1]=90 three_dmn_var[1]=1
lat[0]=-90 lev[0]=100 lon[2]=180 three_dmn_var[2]=2
...
lat[1]=90 lev[2]=1000 lon[1]=90 three_dmn_var[21]=21
lat[1]=90 lev[2]=1000 lon[2]=180 three_dmn_var[22]=22
lat[1]=90 lev[2]=1000 lon[3]=270 three_dmn_var[23]=23
@end example
Printing the same variable with the @samp{-F} option shows it indexed
with Fortran conventions:
@example
% ncks -F -H -C -v three_dmn_var in.nc
lon(1)=0 lev(1)=100 lat(1)=-90 three_dmn_var(1)=0
lon(2)=90 lev(1)=100 lat(1)=-90 three_dmn_var(2)=1
lon(3)=180 lev(1)=100 lat(1)=-90 three_dmn_var(3)=2
...
@end example
Printing a hyperslab does not affect the variable or dimension indices
since these indices are relative to the full variable (as stored in the
input file), and the input file has not changed.
However, if the hyperslab is saved to an output file and those values
are printed, the indices will change:
@example
% ncks -H -d lat,90.0 -d lev,1000.0 -v three_dmn_var in.nc out.nc
lat[1]=90 lev[2]=1000 lon[0]=0 three_dmn_var[20]=20
lat[1]=90 lev[2]=1000 lon[1]=90 three_dmn_var[21]=21
lat[1]=90 lev[2]=1000 lon[2]=180 three_dmn_var[22]=22
lat[1]=90 lev[2]=1000 lon[3]=270 three_dmn_var[23]=23
% ncks -H out.nc
lat[0]=90 lev[0]=1000 lon[0]=0 three_dmn_var[0]=20
lat[0]=90 lev[0]=1000 lon[1]=90 three_dmn_var[1]=21
lat[0]=90 lev[0]=1000 lon[2]=180 three_dmn_var[2]=22
lat[0]=90 lev[0]=1000 lon[3]=270 three_dmn_var[3]=23
@end example
@cindex @code{-M}
@item -M
Print to screen the global metadata describing the file.
This includes file summary information and global attributes.
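For example, a minimal sketch that prints only the file summary and
global attributes of @file{in.nc}:
@example
ncks -M in.nc
@end example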
@cindex @code{-m}
@item -m
Print variable metadata to screen (similar to @code{ncdump -h}).
This displays all metadata pertaining to each variable, one variable
at a time.
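For example, this sketch prints the metadata of a single variable:
@example
ncks -m -v three_dmn_var in.nc
@end example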
@cindex @code{-q}
@item -q
Toggle printing of dimension indices and coordinate values when printing
arrays.
The name of each variable will appear flush left in the output.
This is useful when trying to locate specific variables when displaying
many variables with different dimensions.
The mnemonic for this option is ``quiet''.
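As a sketch, the following prints the values of @code{three_dmn_var}
(as in the @samp{-H} example above) without the dimension and
coordinate annotations:
@example
ncks -q -H -C -v three_dmn_var in.nc
@end example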
@cindex @code{-s}
@cindex @code{printf()}
@item -s @var{format}
String format for text output. Accepts C language escape sequences and
@code{printf()} formats.
@cindex @code{-u}
@cindex @code{units}
@item -u
Accompany the printing of a variable's values with its units attribute,
if it exists.
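For example, assuming the @code{lev} coordinate in @file{in.nc} carries
a @code{units} attribute, the following sketch prints its values
together with those units:
@example
ncks -u -H -C -v lev in.nc
@end example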
@end table
@noindent
EXAMPLES
View all data in netCDF @file{in.nc}, printed with Fortran indexing
conventions:
@example
ncks -H -F in.nc
@end example
Copy the netCDF file @file{in.nc} to file @file{out.nc}.
@example
ncks -O in.nc out.nc
@end example
Now the file @file{out.nc} contains all the data from @file{in.nc}.
There are, however, two differences between @file{in.nc} and
@file{out.nc}.
@cindex @code{history} attribute
First, the @code{history} global attribute (@pxref{History attribute})
will contain the command used to create @file{out.nc}.
@cindex alphabetize output
@cindex sort alphabetically
@cindex @code{-a}
Second, the variables in @file{out.nc} will be defined in alphabetical
order.
Of course the internal storage of variables in a netCDF file should be
transparent to the user, but there are cases when alphabetizing a file
is useful (see the description of the @code{-a} switch).
@cindex @code{printf()}
@cindex @code{\n} (linefeed)
@cindex @code{\t} (horizontal tab)
Print variable @code{three_dmn_var} from file @file{in.nc} in the
default format.
Next print @code{three_dmn_var} as an un-annotated text column.
Then print @code{three_dmn_var} with an explicit sign and very high
precision.
Finally, print @code{three_dmn_var} as a comma-separated list.
@example
% ncks -H -C -v three_dmn_var in.nc
lat[0]=-90 lev[0]=100 lon[0]=0 three_dmn_var[0]=0
lat[0]=-90 lev[0]=100 lon[1]=90 three_dmn_var[1]=1
...
lat[1]=90 lev[2]=1000 lon[3]=270 three_dmn_var[23]=23
% ncks -s "%f\n" -H -C -v three_dmn_var in.nc
0.000000
1.000000
...
23.000000
% ncks -s "%+16.10f\n" -H -C -v three_dmn_var in.nc
+0.0000000000
+1.0000000000
...
+23.0000000000
% ncks -s "%f, " -H -C -v three_dmn_var in.nc
0.000000, 1.000000, ..., 23.000000,
@end example
@noindent
The second and third options are useful when pasting data into text
files like reports or papers.
@xref{ncatted netCDF Attribute Editor}, for more details on string
formatting and special characters.
One dimensional arrays of characters stored as netCDF variables are
automatically printed as strings, whether or not they are
NUL-terminated, e.g.,
@example
ncks -v fl_nm in.nc
@end example
@noindent
The @code{%c} formatting code is useful for printing
multidimensional arrays of characters representing fixed-length strings:
@example
ncks -H -s "%c" -v fl_nm_arr in.nc
@end example
@noindent
@cindex @code{core dump}
Using the @code{%s} format code on strings which are not NUL-terminated
(and thus not technically strings) is likely to result in a core dump.
Create netCDF @file{out.nc} containing all variables, and any associated
coordinates, except variable @code{time}, from netCDF @file{in.nc}:
@example
ncks -x -v time in.nc out.nc
@end example
Extract variables @code{time} and @code{pressure} from netCDF @file{in.nc}.
If @file{out.nc} does not exist it will be created.
Otherwise you will be prompted whether to append to or to
overwrite @file{out.nc}:
@example
ncks -v time,pressure in.nc out.nc
ncks -C -v time,pressure in.nc out.nc
@end example
@noindent
The first version of the command creates an @file{out.nc} which contains
@code{time}, @code{pressure}, and any coordinate variables associated
with @code{pressure}.
The @file{out.nc} from the second version is guaranteed to contain only
two variables, @code{time} and @code{pressure}.
Create netCDF @file{out.nc} containing all variables from file @file{in.nc}.
Restrict the dimensions of these variables to a hyperslab.
Print (with @code{-H}) the hyperslabs to the screen for good measure.
The specified hyperslab is: the fifth value in dimension @code{time}; the
half-open range @math{@var{lat} <= 0.} in coordinate @code{lat}; the
half-open range @math{@var{lon} >= 330.} in coordinate @code{lon}; the
closed interval @math{.3 <= @var{band} <= .5} in coordinate @code{band};
and the cross-section closest to 1000. in coordinate @code{lev}.
Note that limits applied to coordinate values are specified with a
decimal point, and limits applied to dimension indices do not have a
decimal point (@pxref{Hyperslabs}).
@example
ncks -H -d time,5 -d lat,,0. -d lon,330., -d band,.3,.5 \
-d lev,1000. in.nc out.nc
@end example
@cindex wrapped coordinates
Assume the domain of the monotonically increasing longitude coordinate
@code{lon} is @math{0 < @var{lon} < 360}.
Here, @code{lon} is an example of a wrapped coordinate.
@command{ncks} will extract a hyperslab which crosses the Greenwich
meridian simply by specifying the westernmost longitude as @var{min} and
the easternmost longitude as @var{max}, as follows:
@example
ncks -d lon,260.,45. in.nc out.nc
@end example
@xref{Wrapped coordinates}, for more details.
@page
@node ncra netCDF Record Averager, ncrcat netCDF Record Concatenator, ncks netCDF Kitchen Sink, Operators
@section @command{ncra} netCDF Record Averager
@cindex averaging data
@cindex record average
@cindex running average
@findex ncra
@noindent
SYNTAX
@example
ncra [-A] [-C] [-c] [-D @var{dbg}]
[-d @var{dim},[@var{min}][,[@var{max}]][,[@var{stride}]]] [-F] [-h] [-l @var{path}]
[-n @var{loop}] [-O] [-p @var{path}] [-R] [-r] [-v @var{var}[,@dots{}]]
[-x] [-y @var{op_typ}] @var{input-files} @var{output-file}
@end example
@noindent
DESCRIPTION
@command{ncra} averages record variables across an arbitrary number of
input files.
@cindex degenerate dimensions
The record dimension is retained as a degenerate (size 1) dimension in
the output variables.
@xref{Averaging vs. Concatenating}, for a description of the
distinctions between the various averagers and concatenators.
Input files may vary in size, but each must have a record dimension.
The record coordinate, if any, should be monotonic (or else non-fatal
warnings may be generated).
@cindex hyperslab
Hyperslabs of the record dimension which include more than one file are
handled correctly.
@cindex stride
@command{ncra} supports the @var{stride} argument to the @samp{-d}
hyperslab option for the record dimension only; @var{stride} is not
supported for non-record dimensions.
@command{ncra} weights each record (e.g., time slice) in the
@var{input-files} equally.
@command{ncra} does not attempt to see if, say, the @code{time} coordinate
is irregularly spaced and thus would require a weighted average in order
to be a true time average.
@noindent
EXAMPLES
Average files @file{85.nc}, @file{86.nc}, @dots{} @file{89.nc}
along the record dimension, and store the results in @file{8589.nc}:
@cindex globbing
@cindex @code{NINTAP}
@cindex Processor
@cindex @acronym{CCM} Processor
@example
ncra 85.nc 86.nc 87.nc 88.nc 89.nc 8589.nc
ncra 8[56789].nc 8589.nc
ncra -n 5,2,1 85.nc 8589.nc
@end example
These three methods produce identical answers.
@xref{Specifying input files}, for an explanation of the distinctions
between these methods.
@cindex fortran
Assume the files @file{85.nc}, @file{86.nc}, @dots{} @file{89.nc} each
contain a record coordinate @var{time} of length 12 defined such that
the third record in @file{86.nc} contains data from March 1986, etc.
@acronym{NCO} knows how to hyperslab the record dimension across files.
Thus, to average data from December, 1985 through February, 1986:
@example
ncra -d time,11,13 85.nc 86.nc 87.nc 8512_8602.nc
ncra -F -d time,12,14 85.nc 86.nc 87.nc 8512_8602.nc
@end example
@noindent
The file @file{87.nc} is superfluous, but does not cause an error.
The @samp{-F} turns on the Fortran (1-based) indexing convention.
@cindex stride
The following uses the @var{stride} option to average all the March
temperature data from multiple input files into a single output file:
@example
ncra -F -d time,3,,12 -v temperature 85.nc 86.nc 87.nc 858687_03.nc
@end example
@xref{Stride}, for a description of the @var{stride} argument.
Assume the @var{time} coordinate is incrementally numbered such that
January, 1985 = 1 and December, 1989 = 60.
Assuming the wildcard @samp{??.nc} only expands to the five desired
files, the following averages June, 1985--June, 1989:
@example
ncra -d time,6.,54. ??.nc 8506_8906.nc
@end example
@page
@node ncrcat netCDF Record Concatenator, ncrename netCDF Renamer, ncra netCDF Record Averager, Operators
@section @command{ncrcat} netCDF Record Concatenator
@cindex concatenation
@cindex record concatenation
@findex ncrcat
@noindent
SYNTAX
@example
ncrcat [-A] [-C] [-c] [-D @var{dbg}]
[-d @var{dim},[@var{min}][,[@var{max}]][,[@var{stride}]]] [-F] [-h] [-l @var{path}]
[-n @var{loop}] [-O] [-p @var{path}] [-R] [-r] [-v @var{var}[,@dots{}]]
[-x] @var{input-files} @var{output-file}
@end example
@noindent
DESCRIPTION
@command{ncrcat} concatenates record variables across an arbitrary number
of input files.
The final record dimension is by default the sum of the lengths of the
record dimensions in the input files.
@xref{Averaging vs. Concatenating}, for a description of the
distinctions between the various averagers and concatenators.
Input files may vary in size, but each must have a record dimension.
The record coordinate, if any, should be monotonic (or else non-fatal
warnings may be generated).
@cindex hyperslab
Hyperslabs of the record dimension which include more than one file are
handled correctly.
@cindex stride
@command{ncrcat} supports the @var{stride} argument to the @samp{-d}
hyperslab option for the record dimension only; @var{stride} is not
supported for non-record dimensions.
@cindex ARM conventions
@command{ncrcat} applies special rules to @acronym{ARM} convention time fields (e.g.,
@code{time_offset}).
See @ref{ARM Conventions} for a complete description.
@noindent
EXAMPLES
Concatenate files @file{85.nc}, @file{86.nc}, @dots{} @file{89.nc}
along the record dimension, and store the results in @file{8589.nc}:
@cindex globbing
@cindex @code{NINTAP}
@cindex Processor
@cindex @acronym{CCM} Processor
@example
ncrcat 85.nc 86.nc 87.nc 88.nc 89.nc 8589.nc
ncrcat 8[56789].nc 8589.nc
ncrcat -n 5,2,1 85.nc 8589.nc
@end example
@noindent
These three methods produce identical answers.
@xref{Specifying input files}, for an explanation of the distinctions
between these methods.
@cindex fortran
Assume the files @file{85.nc}, @file{86.nc}, @dots{} @file{89.nc} each
contain a record coordinate @var{time} of length 12 defined such that
the third record in @file{86.nc} contains data from March 1986, etc.
@acronym{NCO} knows how to hyperslab the record dimension across files.
Thus, to concatenate data from December, 1985--February, 1986:
@example
ncrcat -d time,11,13 85.nc 86.nc 87.nc 8512_8602.nc
ncrcat -F -d time,12,14 85.nc 86.nc 87.nc 8512_8602.nc
@end example
@noindent
The file @file{87.nc} is superfluous, but does not cause an error.
The @samp{-F} turns on the Fortran (1-based) indexing convention.
@cindex stride
The following uses the @var{stride} option to concatenate all the March
temperature data from multiple input files into a single output file:
@example
ncrcat -F -d time,3,,12 -v temperature 85.nc 86.nc 87.nc 858687_03.nc
@end example
@xref{Stride}, for a description of the @var{stride} argument.
Assume the @var{time} coordinate is incrementally numbered such that
January, 1985 = 1 and December, 1989 = 60.
Assuming the wildcard @code{??.nc} only expands to the five desired
files, the following concatenates June, 1985--June, 1989:
@example
ncrcat -d time,6.,54. ??.nc 8506_8906.nc
@end example
@page
@node ncrename netCDF Renamer, ncwa netCDF Weighted Averager, ncrcat netCDF Record Concatenator, Operators
@section @command{ncrename} netCDF Renamer
@cindex renaming variables
@cindex renaming dimensions
@cindex renaming attributes
@cindex variable names
@cindex dimension names
@cindex attribute names
@findex ncrename
@noindent
SYNTAX
@example
ncrename [-a @var{old_name},@var{new_name}] [-a @dots{}] [-D]
[-d @var{old_name},@var{new_name}] [-d @dots{}] [-h] [-l path] [-O] [-p path]
[-R] [-r] [-v @var{old_name},@var{new_name}] [-v @dots{}]
@var{input-file} [@var{output-file}]
@end example
@noindent
DESCRIPTION
@command{ncrename} renames dimensions, variables, and attributes in a
netCDF file.
Each object that has a name in the list of old names is renamed using
the corresponding name in the list of new names.
All the new names must be unique.
Every old name must exist in the input file, unless the name is preceded
by the character @samp{.}.
The validity of the old names is not checked prior to the renaming.
Thus, if an old name is specified without the @samp{.} prefix and is
not present in @var{input-file}, @command{ncrename} will abort.
@cindex data safety
@cindex safeguards
@cindex temporary output files
@command{ncrename} is the exception to the normal rules that the user will
be interactively prompted before an existing file is changed, and that a
temporary copy of an output file is constructed during the operation.
If only @var{input-file} is specified, then @command{ncrename} changes
the names in @var{input-file} in place, without prompting and without
creating a temporary copy of @var{input-file}.
This is because the renaming operation is considered reversible if the
user makes a mistake.
The @var{new_name} can easily be changed back to @var{old_name} by using
@command{ncrename} one more time.
Note that renaming a dimension to the name of a dependent variable can
be used to invert the relationship between an independent coordinate
variable and a dependent variable.
In this case, the named dependent variable must be one-dimensional and
should have no missing values.
Such a variable will become a coordinate variable.
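As a sketch (the names here are hypothetical), if @code{xdim} is a
dimension and @code{depth} is a one-dimensional variable defined on
@code{xdim}, renaming the dimension makes @code{depth} a coordinate
variable:
@example
# xdim and depth are hypothetical names
ncrename -d xdim,depth in.nc
@end example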
@cindex performance
@cindex operator speed
@cindex speed
@cindex execution time
According to the @cite{netCDF User's Guide}, renaming properties in
netCDF files does not incur the penalty of recopying the entire file
when the @var{new_name} is shorter than the @var{old_name}.
@noindent
OPTIONS
@table @samp
@item -a @var{old_name},@var{new_name}
Attribute renaming.
The old and new names of the attribute are specified by the associated
@var{old_name} and @var{new_name} values.
@cindex global attributes
@cindex attributes, global
Global attributes are treated no differently than variable attributes.
This option may be specified more than once.
You cannot change the attribute name for one particular variable (unless
it is uniquely named); all occurrences of the attribute of a given name
will be renamed.
This is considered an oversight and will be addressed in a future
version of @acronym{NCO}.
@item -d @var{old_name},@var{new_name}
Dimension renaming.
The old and new names of the dimension are specified by the associated
@var{old_name} and @var{new_name} values.
This option may be specified more than once.
@item -v @var{old_name},@var{new_name}
Variable renaming.
The old and new names of the variable are specified by the associated
@var{old_name} and @var{new_name} values.
This option may be specified more than once.
@c @cindex interactive prompting
@c @item -i
@c Interactive.
@c @command{ncrename} will prompt for confirmation before overwriting an
@c existing file.
@end table
@noindent
EXAMPLES
Rename the variable @code{p} to @code{pressure} and @code{t} to
@code{temperature} in netCDF @file{in.nc}.
In this case @code{p} must exist in the input file (or @command{ncrename} will
abort), but the presence of @code{t} is optional:
@example
ncrename -v p,pressure -v .t,temperature in.nc
@end example
@cindex coordinate variables
@command{ncrename} does not automatically attach dimensions to variables of
the same name.
If you want to rename a coordinate variable so that it remains a
coordinate variable, you must separately rename both the dimension and
the variable:
@example
ncrename -d lon,longitude -v lon,longitude in.nc
@end example
@cindex global attributes
@cindex attributes, global
@cindex @code{_FillValue} attribute
@cindex @code{missing_value} attribute
Create netCDF @file{out.nc} identical to @file{in.nc} except the attribute
@code{_FillValue} is changed to @code{missing_value} (in all variables
which possess it) and the global attribute @code{Zaire} is changed to
@code{Congo}:
@example
ncrename -a _FillValue,missing_value -a Zaire,Congo in.nc out.nc
@end example
@page
@node ncwa netCDF Weighted Averager, , ncrename netCDF Renamer, Operators
@section @command{ncwa} netCDF Weighted Averager
@cindex averaging data
@cindex weighted average
@cindex masked average
@cindex broadcasting variables
@findex ncwa
@noindent
SYNTAX
@example
ncwa [-A] [-a @var{dim}[,@dots{}]] [-C] [-c] [-D @var{dbg}]
[-d @var{dim},[@var{min}][,[@var{max}]]] [-F] [-h] [-I] [-l @var{path}]
[-M @var{val}] [-m @var{mask}] [-N] [-n] [-O] [-o @var{condition}]
[-p @var{path}] [-R] [-r] [-v @var{var}[,@dots{}]] [-W] [-w @var{weight}]
[-x] [-y @var{op_typ}] @var{input-file} @var{output-file}
@end example
@noindent
DESCRIPTION
@command{ncwa} averages variables in a single file over arbitrary
dimensions, with options to specify weights, masks, and normalization.
@xref{Averaging vs. Concatenating}, for a description of the
distinctions between the various averagers and concatenators.
The default behavior of @command{ncwa} is to arithmetically average every
numerical variable over all dimensions and produce a scalar result.
To average variables over only a subset of their dimensions, specify
these dimensions in a comma-separated list following @samp{-a}, e.g.,
@samp{-a time,lat,lon}.
@cindex arithmetic operators
@cindex hyperslab
@cindex @code{-d @var{dim},[@var{min}][,[@var{max}]]}
As with all arithmetic operators, the operation may be restricted to
an arbitrary hyperslab by employing the @samp{-d} option
(@pxref{Hyperslabs}).
@command{ncwa} also handles values matching the variable's
@code{missing_value} attribute correctly.
Moreover, @command{ncwa} understands how to manipulate user-specified
weights, masks, and normalization options.
With these options, @command{ncwa} can compute sophisticated averages (and
integrals) from the command line.
@var{mask} and @var{weight}, if specified, are broadcast to conform to
the variables being averaged.
@cindex rank
The rank of variables is reduced by the number of dimensions which they
are averaged over.
Thus arrays which are one dimensional in the @var{input-file} and are
averaged by @command{ncwa} appear in the @var{output-file} as scalars.
This allows the user to infer which dimensions may have been averaged.
Note that it is impossible for @command{ncwa} to make a
@var{weight} or @var{mask} of rank @var{W} conform to a @var{var} of
rank @var{V} if @math{@var{W} > @var{V}}.
This situation often arises when coordinate variables (which, by
definition, are one dimensional) are weighted and averaged.
@command{ncwa} assumes you know this is impossible, so in this case it
does not attempt to broadcast @var{weight} or @var{mask} to conform to
@var{var}, nor does it print a warning message, because the situation
is so common.
Specifying @var{dbg} > 2 does cause @command{ncwa} to emit warnings in
these situations, however.
Non-coordinate variables are always masked and weighted if specified.
Coordinate variables, however, may be treated specially.
By default, an averaged coordinate variable, e.g., @code{latitude},
appears in @var{output-file} averaged the same way as any other variable
containing an averaged dimension.
In other words, by default @command{ncwa} weights and masks
coordinate variables like all other variables.
This design decision was intended to be helpful but for some
applications it may be preferable not to weight or mask coordinate
variables just like all other variables.
Consider the following arguments to @command{ncwa}: @code{-a latitude -w
lat_wgt -d latitude,0.,90.} where @code{lat_wgt} is a weight in the
@code{latitude} dimension.
Since, by default, @command{ncwa} weights coordinate variables, the
value of @code{latitude} in the @var{output-file} depends on the weights
in @var{lat_wgt} and is not likely to be 45.---the midpoint latitude of
the hyperslab.
@cindex coordinate variable
@cindex @code{-I}
Option @samp{-I} overrides this default behavior and causes @command{ncwa}
not to weight or mask coordinate variables.
In the above case, this causes the value of @code{latitude} in the
@var{output-file} to be 45.---which is a somewhat appealing result.
Thus, @samp{-I} specifies simple arithmetic averages for the coordinate
variables.
In the case of latitude, @samp{-I} specifies that you prefer to archive
the central latitude of the hyperslab over which variables were averaged
rather than the area weighted centroid of the hyperslab
@footnote{If @code{lat_wgt} contains Gaussian weights then the value of
@code{latitude} in the @var{output-file} will be the area-weighted
centroid of the hyperslab. For the example given, this is about 30
degrees.}.
Note that this default behavior (weighting and masking coordinate
variables unless @samp{-I} is specified) changed on 1998/12/01; before
this date the default was not to weight or mask coordinate variables.
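Continuing the example above (and assuming @file{in.nc} contains the
@code{latitude} coordinate and @code{lat_wgt} weight just discussed),
a sketch of the full command with @samp{-I} is:
@example
ncwa -I -w lat_wgt -a latitude -d latitude,0.,90. in.nc out.nc
@end example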
The mathematical definition of operations involving rank reduction
is given above (@pxref{Operation Types}).
@menu
* Masking condition::
* Normalization::
@end menu
@node Masking condition, Normalization, ncwa netCDF Weighted Averager, ncwa netCDF Weighted Averager
@unnumberedsubsec Masking condition
@cindex masking condition
@tex
Each $x_i$ also has an associated masking weight $m_i$ whose value is 0
or 1 (false or true).
The value of $m_i$ is always 1 unless a @var{mask} is specified (with
@samp{-m}).
As noted above, @var{mask} is broadcast, if possible, to conform
to the variable being averaged.
In this case, the value of $m_i$ depends on the @dfn{masking
condition}.
As expected, $m_i = 1$ when the masking condition is @dfn{true} and $m_i
= 0$ otherwise.
@end tex
The masking condition has the syntax @math{@var{mask}}
@math{@var{condition}} @math{@var{val}}.
Here @var{mask} is the name of the masking variable (specified with
@samp{-m}).
The @var{condition} argument to @samp{-o} may be any one of the six
arithmetic comparatives: @kbd{eq}, @kbd{ne}, @kbd{gt}, @kbd{lt},
@kbd{ge}, @kbd{le}.
@set flg
@tex
These are the Fortran-style character abbreviations for the logical
operations $=$, $\neq$, $>$, $<$, $\ge$, $\le$.
@clear flg
@end tex
@ifinfo
These are the Fortran-style character abbreviations for the logical
operations @math{==}, @math{!=}, @math{>}, @math{<}, @math{>=},
@math{<=}.
@clear flg
@end ifinfo
@ifset flg
@c texi2html does not like @math{}
These are the Fortran-style character abbreviations for the logical
operations @kbd{==}, @kbd{!=}, @kbd{>}, @kbd{<}, @kbd{>=}, @kbd{<=}.
@clear flg
@end ifset
The masking condition defaults to @kbd{eq} (equality).
The @var{val} argument to @samp{-M} is the right hand side of the
@dfn{masking condition}.
Thus for the @var{i}'th element of the hyperslab to be averaged,
the masking condition is
@math{@var{mask_i}} @var{condition} @var{val}.
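For instance, a sketch of specifying the masking condition
@math{@var{ORO} < 0.5} (using the @code{ORO} variable that appears in
the examples below) is:
@example
ncwa -m ORO -M 0.5 -o lt in.nc out.nc
@end example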
@tex
Each $x_i$ is also associated with an additional weight $w_i$ whose
value may be user-specified.
The value of $w_i$ is identically 1 unless the user specifies a
@var{weight} (with @samp{-w}).
In this case, the value of $w_i$ is determined by the @var{weight}
variable in the @var{input-files}.
As noted above, @var{weight} is broadcast, if possible, to conform
to the variable being averaged.
$M$ is the number of input elements $x_i$ which @emph{actually}
contribute to the output element $x_j$.
$M$ is also known as the @dfn{tally} and is defined as
$$
M = \sum_{i = 1}^{i = N} \mu_i m_i
$$
$M$ is identical to the denominator of the generic averaging expression
except for the omission of the weight $w_i$.
Thus $M = N$ whenever no input points are missing values or are masked.
Whether an element contributes to the output, and thus increments
$M$ by one, has more to do with the above two criteria (missing value
and masking) than with the numeric value of the element per se.
For example, $x_i = 0.0$ does contribute to $x_j$ (assuming the
@code{missing_value} attribute is not 0.0 and location $i$ is not
masked).
The value $x_i = 0.0$ will not change the numerator of the generic
averaging expression, but it will change the denominator (unless its
weight $w_i =0.0$ as well).
@end tex
@node Normalization, , Masking condition, ncwa netCDF Weighted Averager
@unnumberedsubsec Normalization
@cindex normalization
@command{ncwa} has one switch which controls the normalization of the
averages appearing in the @var{output-file}.
Option @samp{-N} prevents @command{ncwa} from dividing the weighted sum of
the variable (the numerator in the averaging expression) by the weighted
sum of the weights (the denominator in the averaging expression).
Thus @samp{-N} tells @command{ncwa} to return just the numerator of the
arithmetic expression defining the operation (@pxref{Operation Types}).
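For example, assuming the @code{gw} weight used in the examples below,
this sketch returns the weighted sum over latitude rather than the
weighted average:
@example
ncwa -N -w gw -a lat in.nc out.nc
@end example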
@ignore
The second normalization option tells @command{ncwa} to multiply the
weighted average the variable (given by the averaging expression)
by the tally, @var{M}.
Thus this option is similar to integration---multiplying the mean value
of a quantity by the number of gridpoints to which it applies.
The third normalization option is equivalent to specifying the first two
options simultaneously.
In other words this option causes @command{ncwa} to return @var{M} times
the numerator of the generic averaging expression.
With these normalization options, @command{ncwa} can compute sophisticated
averages (and integrals) from the command line.
@end ignore
@noindent
EXAMPLES
Given file @file{85_0112.nc} (the examples below assume @file{in.nc}
has this structure):
@example
netcdf 85_0112 @{
dimensions:
lat = 64 ;
lev = 18 ;
lon = 128 ;
time = UNLIMITED ; // (12 currently)
variables:
float lat(lat) ;
float lev(lev) ;
float lon(lon) ;
float time(time) ;
float scalar_var ;
float three_dmn_var(lat, lev, lon) ;
float two_dmn_var(lat, lev) ;
float mask(lat, lon) ;
float gw(lat) ;
@}
@end example
Average all variables in @file{in.nc} over all dimensions and store
results in @file{out.nc}:
@example
ncwa in.nc out.nc
@end example
@noindent
Every variable in @file{in.nc} is reduced to a scalar in @file{out.nc}
because, by default, averaging is performed over all dimensions.
Store the zonal (longitudinal) average of @file{in.nc} in @file{out.nc}:
@example
ncwa -a lon in.nc out.nc
@end example
@noindent
Here the tally is simply the size of @code{lon}, or 128.
@cindex @code{gw}
@cindex Gaussian weights
Compute the meridional (latitudinal) average, with values weighted by
the corresponding element of @var{gw}
@footnote{@code{gw} stands for @dfn{Gaussian weight} in the NCAR climate
model.}:
@example
ncwa -w gw -a lat in.nc out.nc
@end example
@noindent
Here the tally is simply the size of @code{lat}, or 64.
The sum of the Gaussian weights is 2.0.
Compute the area average over the tropical Pacific:
@example
ncwa -w gw -a lat,lon -d lat,-20.,20. -d lon,120.,270. \
in.nc out.nc
@end example
@noindent
Here the tally is the number of gridpoints falling within the specified
latitude and longitude ranges, which is considerably smaller than the
@set flg
@tex
$64 \times 128 = 8192$
@clear flg
@end tex
@ifset flg
64 times 128 = 8192
@clear flg
@end ifset
gridpoints of the full domain.
@cindex @code{ORO}
Compute the area average over the globe, but include only points for
which
@set flg
@tex
$ORO < 0.5$
@clear flg
@end tex
@ifset flg
@var{ORO} < 0.5
@clear flg
@end ifset
@footnote{@code{ORO} stands for @dfn{Orography} in the NCAR climate model.
@math{@var{ORO} < 0.5} selects the gridpoints which are covered by ocean.}:
@example
ncwa -m ORO -M 0.5 -o lt -w gw -a lat,lon in.nc out.nc
@end example
@noindent
Assuming 70% of the gridpoints are maritime, the tally is
@set flg
@tex
$0.70 \times 8192 \approx 5734$.
@clear flg
@end tex
@ifset flg
0.70 times 8192, or about 5734.
@clear flg
@end ifset
Compute the annual average over the maritime tropical Pacific:
@example
ncwa -m ORO -M 0.5 -o lt -w gw -a lat,lon,time \
-d lat,-20.0,20.0 -d lon,120.0,270.0 in.nc out.nc
@end example
@node Contributing, General Index, Operators, Top
@chapter Contributing
@cindex contributing
@cindex contributors
We welcome contributions from anyone.
The @acronym{NCO} project homepage at
@uref{https://sourceforge.net/projects/nco}
contains more information on how to contribute.
@table @asis
@item Charlie Zender
Concept, design and implementation of @acronym{NCO} from 1995--2000
@item Henry Butowsky
Min, max, total, and non-linear operations.
Type conversion for arithmetic.
Various hacks.
Migration to netCDF3 API.
@item Bill Kocik
Memory management
@item Juliana Rew
Compatibility with large PIDs
@item Keith Lindsay
Excellent bug reports
@end table
@c @ignore
@c @node CSM Example, General Index, Operators, Top
@c @chapter Example: Analyzing a @acronym{CSM} run
@c This chapter presents an in depth example of using @acronym{NCO} to analyze the
@c results of a @acronym{CSM} run.
@c @end ignore
@c @node Name Index, General Index, Operators, Top
@c @unnumbered Function and Variable Index
@c @printindex fn
@node General Index, , Contributing, Top
@unnumbered General Index
@printindex fn
@c Print the tables of contents
@contents
@c That's all
@bye