File: issue18.txt

package info
lg-issue18 5-4
  • links: PTS
  • area: main
  • in suites: woody
  • size: 2,928 kB
  • ctags: 148
  • sloc: makefile: 36; sh: 4
file content: 10856 lines, 528,876 bytes (mode -rw-r--r--); file content not captured in this extract
6966
6967
6968
6969
6970
6971
6972
6973
6974
6975
6976
6977
6978
6979
6980
6981
6982
6983
6984
6985
6986
6987
6988
6989
6990
6991
6992
6993
6994
6995
6996
6997
6998
6999
7000
7001
7002
7003
7004
7005
7006
7007
7008
7009
7010
7011
7012
7013
7014
7015
7016
7017
7018
7019
7020
7021
7022
7023
7024
7025
7026
7027
7028
7029
7030
7031
7032
7033
7034
7035
7036
7037
7038
7039
7040
7041
7042
7043
7044
7045
7046
7047
7048
7049
7050
7051
7052
7053
7054
7055
7056
7057
7058
7059
7060
7061
7062
7063
7064
7065
7066
7067
7068
7069
7070
7071
7072
7073
7074
7075
7076
7077
7078
7079
7080
7081
7082
7083
7084
7085
7086
7087
7088
7089
7090
7091
7092
7093
7094
7095
7096
7097
7098
7099
7100
7101
7102
7103
7104
7105
7106
7107
7108
7109
7110
7111
7112
7113
7114
7115
7116
7117
7118
7119
7120
7121
7122
7123
7124
7125
7126
7127
7128
7129
7130
7131
7132
7133
7134
7135
7136
7137
7138
7139
7140
7141
7142
7143
7144
7145
7146
7147
7148
7149
7150
7151
7152
7153
7154
7155
7156
7157
7158
7159
7160
7161
7162
7163
7164
7165
7166
7167
7168
7169
7170
7171
7172
7173
7174
7175
7176
7177
7178
7179
7180
7181
7182
7183
7184
7185
7186
7187
7188
7189
7190
7191
7192
7193
7194
7195
7196
7197
7198
7199
7200
7201
7202
7203
7204
7205
7206
7207
7208
7209
7210
7211
7212
7213
7214
7215
7216
7217
7218
7219
7220
7221
7222
7223
7224
7225
7226
7227
7228
7229
7230
7231
7232
7233
7234
7235
7236
7237
7238
7239
7240
7241
7242
7243
7244
7245
7246
7247
7248
7249
7250
7251
7252
7253
7254
7255
7256
7257
7258
7259
7260
7261
7262
7263
7264
7265
7266
7267
7268
7269
7270
7271
7272
7273
7274
7275
7276
7277
7278
7279
7280
7281
7282
7283
7284
7285
7286
7287
7288
7289
7290
7291
7292
7293
7294
7295
7296
7297
7298
7299
7300
7301
7302
7303
7304
7305
7306
7307
7308
7309
7310
7311
7312
7313
7314
7315
7316
7317
7318
7319
7320
7321
7322
7323
7324
7325
7326
7327
7328
7329
7330
7331
7332
7333
7334
7335
7336
7337
7338
7339
7340
7341
7342
7343
7344
7345
7346
7347
7348
7349
7350
7351
7352
7353
7354
7355
7356
7357
7358
7359
7360
7361
7362
7363
7364
7365
7366
7367
7368
7369
7370
7371
7372
7373
7374
7375
7376
7377
7378
7379
7380
7381
7382
7383
7384
7385
7386
7387
7388
7389
7390
7391
7392
7393
7394
7395
7396
7397
7398
7399
7400
7401
7402
7403
7404
7405
7406
7407
7408
7409
7410
7411
7412
7413
7414
7415
7416
7417
7418
7419
7420
7421
7422
7423
7424
7425
7426
7427
7428
7429
7430
7431
7432
7433
7434
7435
7436
7437
7438
7439
7440
7441
7442
7443
7444
7445
7446
7447
7448
7449
7450
7451
7452
7453
7454
7455
7456
7457
7458
7459
7460
7461
7462
7463
7464
7465
7466
7467
7468
7469
7470
7471
7472
7473
7474
7475
7476
7477
7478
7479
7480
7481
7482
7483
7484
7485
7486
7487
7488
7489
7490
7491
7492
7493
7494
7495
7496
7497
7498
7499
7500
7501
7502
7503
7504
7505
7506
7507
7508
7509
7510
7511
7512
7513
7514
7515
7516
7517
7518
7519
7520
7521
7522
7523
7524
7525
7526
7527
7528
7529
7530
7531
7532
7533
7534
7535
7536
7537
7538
7539
7540
7541
7542
7543
7544
7545
7546
7547
7548
7549
7550
7551
7552
7553
7554
7555
7556
7557
7558
7559
7560
7561
7562
7563
7564
7565
7566
7567
7568
7569
7570
7571
7572
7573
7574
7575
7576
7577
7578
7579
7580
7581
7582
7583
7584
7585
7586
7587
7588
7589
7590
7591
7592
7593
7594
7595
7596
7597
7598
7599
7600
7601
7602
7603
7604
7605
7606
7607
7608
7609
7610
7611
7612
7613
7614
7615
7616
7617
7618
7619
7620
7621
7622
7623
7624
7625
7626
7627
7628
7629
7630
7631
7632
7633
7634
7635
7636
7637
7638
7639
7640
7641
7642
7643
7644
7645
7646
7647
7648
7649
7650
7651
7652
7653
7654
7655
7656
7657
7658
7659
7660
7661
7662
7663
7664
7665
7666
7667
7668
7669
7670
7671
7672
7673
7674
7675
7676
7677
7678
7679
7680
7681
7682
7683
7684
7685
7686
7687
7688
7689
7690
7691
7692
7693
7694
7695
7696
7697
7698
7699
7700
7701
7702
7703
7704
7705
7706
7707
7708
7709
7710
7711
7712
7713
7714
7715
7716
7717
7718
7719
7720
7721
7722
7723
7724
7725
7726
7727
7728
7729
7730
7731
7732
7733
7734
7735
7736
7737
7738
7739
7740
7741
7742
7743
7744
7745
7746
7747
7748
7749
7750
7751
7752
7753
7754
7755
7756
7757
7758
7759
7760
7761
7762
7763
7764
7765
7766
7767
7768
7769
7770
7771
7772
7773
7774
7775
7776
7777
7778
7779
7780
7781
7782
7783
7784
7785
7786
7787
7788
7789
7790
7791
7792
7793
7794
7795
7796
7797
7798
7799
7800
7801
7802
7803
7804
7805
7806
7807
7808
7809
7810
7811
7812
7813
7814
7815
7816
7817
7818
7819
7820
7821
7822
7823
7824
7825
7826
7827
7828
7829
7830
7831
7832
7833
7834
7835
7836
7837
7838
7839
7840
7841
7842
7843
7844
7845
7846
7847
7848
7849
7850
7851
7852
7853
7854
7855
7856
7857
7858
7859
7860
7861
7862
7863
7864
7865
7866
7867
7868
7869
7870
7871
7872
7873
7874
7875
7876
7877
7878
7879
7880
7881
7882
7883
7884
7885
7886
7887
7888
7889
7890
7891
7892
7893
7894
7895
7896
7897
7898
7899
7900
7901
7902
7903
7904
7905
7906
7907
7908
7909
7910
7911
7912
7913
7914
7915
7916
7917
7918
7919
7920
7921
7922
7923
7924
7925
7926
7927
7928
7929
7930
7931
7932
7933
7934
7935
7936
7937
7938
7939
7940
7941
7942
7943
7944
7945
7946
7947
7948
7949
7950
7951
7952
7953
7954
7955
7956
7957
7958
7959
7960
7961
7962
7963
7964
7965
7966
7967
7968
7969
7970
7971
7972
7973
7974
7975
7976
7977
7978
7979
7980
7981
7982
7983
7984
7985
7986
7987
7988
7989
7990
7991
7992
7993
7994
7995
7996
7997
7998
7999
8000
8001
8002
8003
8004
8005
8006
8007
8008
8009
8010
8011
8012
8013
8014
8015
8016
8017
8018
8019
8020
8021
8022
8023
8024
8025
8026
8027
8028
8029
8030
8031
8032
8033
8034
8035
8036
8037
8038
8039
8040
8041
8042
8043
8044
8045
8046
8047
8048
8049
8050
8051
8052
8053
8054
8055
8056
8057
8058
8059
8060
8061
8062
8063
8064
8065
8066
8067
8068
8069
8070
8071
8072
8073
8074
8075
8076
8077
8078
8079
8080
8081
8082
8083
8084
8085
8086
8087
8088
8089
8090
8091
8092
8093
8094
8095
8096
8097
8098
8099
8100
8101
8102
8103
8104
8105
8106
8107
8108
8109
8110
8111
8112
8113
8114
8115
8116
8117
8118
8119
8120
8121
8122
8123
8124
8125
8126
8127
8128
8129
8130
8131
8132
8133
8134
8135
8136
8137
8138
8139
8140
8141
8142
8143
8144
8145
8146
8147
8148
8149
8150
8151
8152
8153
8154
8155
8156
8157
8158
8159
8160
8161
8162
8163
8164
8165
8166
8167
8168
8169
8170
8171
8172
8173
8174
8175
8176
8177
8178
8179
8180
8181
8182
8183
8184
8185
8186
8187
8188
8189
8190
8191
8192
8193
8194
8195
8196
8197
8198
8199
8200
8201
8202
8203
8204
8205
8206
8207
8208
8209
8210
8211
8212
8213
8214
8215
8216
8217
8218
8219
8220
8221
8222
8223
8224
8225
8226
8227
8228
8229
8230
8231
8232
8233
8234
8235
8236
8237
8238
8239
8240
8241
8242
8243
8244
8245
8246
8247
8248
8249
8250
8251
8252
8253
8254
8255
8256
8257
8258
8259
8260
8261
8262
8263
8264
8265
8266
8267
8268
8269
8270
8271
8272
8273
8274
8275
8276
8277
8278
8279
8280
8281
8282
8283
8284
8285
8286
8287
8288
8289
8290
8291
8292
8293
8294
8295
8296
8297
8298
8299
8300
8301
8302
8303
8304
8305
8306
8307
8308
8309
8310
8311
8312
8313
8314
8315
8316
8317
8318
8319
8320
8321
8322
8323
8324
8325
8326
8327
8328
8329
8330
8331
8332
8333
8334
8335
8336
8337
8338
8339
8340
8341
8342
8343
8344
8345
8346
8347
8348
8349
8350
8351
8352
8353
8354
8355
8356
8357
8358
8359
8360
8361
8362
8363
8364
8365
8366
8367
8368
8369
8370
8371
8372
8373
8374
8375
8376
8377
8378
8379
8380
8381
8382
8383
8384
8385
8386
8387
8388
8389
8390
8391
8392
8393
8394
8395
8396
8397
8398
8399
8400
8401
8402
8403
8404
8405
8406
8407
8408
8409
8410
8411
8412
8413
8414
8415
8416
8417
8418
8419
8420
8421
8422
8423
8424
8425
8426
8427
8428
8429
8430
8431
8432
8433
8434
8435
8436
8437
8438
8439
8440
8441
8442
8443
8444
8445
8446
8447
8448
8449
8450
8451
8452
8453
8454
8455
8456
8457
8458
8459
8460
8461
8462
8463
8464
8465
8466
8467
8468
8469
8470
8471
8472
8473
8474
8475
8476
8477
8478
8479
8480
8481
8482
8483
8484
8485
8486
8487
8488
8489
8490
8491
8492
8493
8494
8495
8496
8497
8498
8499
8500
8501
8502
8503
8504
8505
8506
8507
8508
8509
8510
8511
8512
8513
8514
8515
8516
8517
8518
8519
8520
8521
8522
8523
8524
8525
8526
8527
8528
8529
8530
8531
8532
8533
8534
8535
8536
8537
8538
8539
8540
8541
8542
8543
8544
8545
8546
8547
8548
8549
8550
8551
8552
8553
8554
8555
8556
8557
8558
8559
8560
8561
8562
8563
8564
8565
8566
8567
8568
8569
8570
8571
8572
8573
8574
8575
8576
8577
8578
8579
8580
8581
8582
8583
8584
8585
8586
8587
8588
8589
8590
8591
8592
8593
8594
8595
8596
8597
8598
8599
8600
8601
8602
8603
8604
8605
8606
8607
8608
8609
8610
8611
8612
8613
8614
8615
8616
8617
8618
8619
8620
8621
8622
8623
8624
8625
8626
8627
8628
8629
8630
8631
8632
8633
8634
8635
8636
8637
8638
8639
8640
8641
8642
8643
8644
8645
8646
8647
8648
8649
8650
8651
8652
8653
8654
8655
8656
8657
8658
8659
8660
8661
8662
8663
8664
8665
8666
8667
8668
8669
8670
8671
8672
8673
8674
8675
8676
8677
8678
8679
8680
8681
8682
8683
8684
8685
8686
8687
8688
8689
8690
8691
8692
8693
8694
8695
8696
8697
8698
8699
8700
8701
8702
8703
8704
8705
8706
8707
8708
8709
8710
8711
8712
8713
8714
8715
8716
8717
8718
8719
8720
8721
8722
8723
8724
8725
8726
8727
8728
8729
8730
8731
8732
8733
8734
8735
8736
8737
8738
8739
8740
8741
8742
8743
8744
8745
8746
8747
8748
8749
8750
8751
8752
8753
8754
8755
8756
8757
8758
8759
8760
8761
8762
8763
8764
8765
8766
8767
8768
8769
8770
8771
8772
8773
8774
8775
8776
8777
8778
8779
8780
8781
8782
8783
8784
8785
8786
8787
8788
8789
8790
8791
8792
8793
8794
8795
8796
8797
8798
8799
8800
8801
8802
8803
8804
8805
8806
8807
8808
8809
8810
8811
8812
8813
8814
8815
8816
8817
8818
8819
8820
8821
8822
8823
8824
8825
8826
8827
8828
8829
8830
8831
8832
8833
8834
8835
8836
8837
8838
8839
8840
8841
8842
8843
8844
8845
8846
8847
8848
8849
8850
8851
8852
8853
8854
8855
8856
8857
8858
8859
8860
8861
8862
8863
8864
8865
8866
8867
8868
8869
8870
8871
8872
8873
8874
8875
8876
8877
8878
8879
8880
8881
8882
8883
8884
8885
8886
8887
8888
8889
8890
8891
8892
8893
8894
8895
8896
8897
8898
8899
8900
8901
8902
8903
8904
8905
8906
8907
8908
8909
8910
8911
8912
8913
8914
8915
8916
8917
8918
8919
8920
8921
8922
8923
8924
8925
8926
8927
8928
8929
8930
8931
8932
8933
8934
8935
8936
8937
8938
8939
8940
8941
8942
8943
8944
8945
8946
8947
8948
8949
8950
8951
8952
8953
8954
8955
8956
8957
8958
8959
8960
8961
8962
8963
8964
8965
8966
8967
8968
8969
8970
8971
8972
8973
8974
8975
8976
8977
8978
8979
8980
8981
8982
8983
8984
8985
8986
8987
8988
8989
8990
8991
8992
8993
8994
8995
8996
8997
8998
8999
9000
9001
9002
9003
9004
9005
9006
9007
9008
9009
9010
9011
9012
9013
9014
9015
9016
9017
9018
9019
9020
9021
9022
9023
9024
9025
9026
9027
9028
9029
9030
9031
9032
9033
9034
9035
9036
9037
9038
9039
9040
9041
9042
9043
9044
9045
9046
9047
9048
9049
9050
9051
9052
9053
9054
9055
9056
9057
9058
9059
9060
9061
9062
9063
9064
9065
9066
9067
9068
9069
9070
9071
9072
9073
9074
9075
9076
9077
9078
9079
9080
9081
9082
9083
9084
9085
9086
9087
9088
9089
9090
9091
9092
9093
9094
9095
9096
9097
9098
9099
9100
9101
9102
9103
9104
9105
9106
9107
9108
9109
9110
9111
9112
9113
9114
9115
9116
9117
9118
9119
9120
9121
9122
9123
9124
9125
9126
9127
9128
9129
9130
9131
9132
9133
9134
9135
9136
9137
9138
9139
9140
9141
9142
9143
9144
9145
9146
9147
9148
9149
9150
9151
9152
9153
9154
9155
9156
9157
9158
9159
9160
9161
9162
9163
9164
9165
9166
9167
9168
9169
9170
9171
9172
9173
9174
9175
9176
9177
9178
9179
9180
9181
9182
9183
9184
9185
9186
9187
9188
9189
9190
9191
9192
9193
9194
9195
9196
9197
9198
9199
9200
9201
9202
9203
9204
9205
9206
9207
9208
9209
9210
9211
9212
9213
9214
9215
9216
9217
9218
9219
9220
9221
9222
9223
9224
9225
9226
9227
9228
9229
9230
9231
9232
9233
9234
9235
9236
9237
9238
9239
9240
9241
9242
9243
9244
9245
9246
9247
9248
9249
9250
9251
9252
9253
9254
9255
9256
9257
9258
9259
9260
9261
9262
9263
9264
9265
9266
9267
9268
9269
9270
9271
9272
9273
9274
9275
9276
9277
9278
9279
9280
9281
9282
9283
9284
9285
9286
9287
9288
9289
9290
9291
9292
9293
9294
9295
9296
9297
9298
9299
9300
9301
9302
9303
9304
9305
9306
9307
9308
9309
9310
9311
9312
9313
9314
9315
9316
9317
9318
9319
9320
9321
9322
9323
9324
9325
9326
9327
9328
9329
9330
9331
9332
9333
9334
9335
9336
9337
9338
9339
9340
9341
9342
9343
9344
9345
9346
9347
9348
9349
9350
9351
9352
9353
9354
9355
9356
9357
9358
9359
9360
9361
9362
9363
9364
9365
9366
9367
9368
9369
9370
9371
9372
9373
9374
9375
9376
9377
9378
9379
9380
9381
9382
9383
9384
9385
9386
9387
9388
9389
9390
9391
9392
9393
9394
9395
9396
9397
9398
9399
9400
9401
9402
9403
9404
9405
9406
9407
9408
9409
9410
9411
9412
9413
9414
9415
9416
9417
9418
9419
9420
9421
9422
9423
9424
9425
9426
9427
9428
9429
9430
9431
9432
9433
9434
9435
9436
9437
9438
9439
9440
9441
9442
9443
9444
9445
9446
9447
9448
9449
9450
9451
9452
9453
9454
9455
9456
9457
9458
9459
9460
9461
9462
9463
9464
9465
9466
9467
9468
9469
9470
9471
9472
9473
9474
9475
9476
9477
9478
9479
9480
9481
9482
9483
9484
9485
9486
9487
9488
9489
9490
9491
9492
9493
9494
9495
9496
9497
9498
9499
9500
9501
9502
9503
9504
9505
9506
9507
9508
9509
9510
9511
9512
9513
9514
9515
9516
9517
9518
9519
9520
9521
9522
9523
9524
9525
9526
9527
9528
9529
9530
9531
9532
9533
9534
9535
9536
9537
9538
9539
9540
9541
9542
9543
9544
9545
9546
9547
9548
9549
9550
9551
9552
9553
9554
9555
9556
9557
9558
9559
9560
9561
9562
9563
9564
9565
9566
9567
9568
9569
9570
9571
9572
9573
9574
9575
9576
9577
9578
9579
9580
9581
9582
9583
9584
9585
9586
9587
9588
9589
9590
9591
9592
9593
9594
9595
9596
9597
9598
9599
9600
9601
9602
9603
9604
9605
9606
9607
9608
9609
9610
9611
9612
9613
9614
9615
9616
9617
9618
9619
9620
9621
9622
9623
9624
9625
9626
9627
9628
9629
9630
9631
9632
9633
9634
9635
9636
9637
9638
9639
9640
9641
9642
9643
9644
9645
9646
9647
9648
9649
9650
9651
9652
9653
9654
9655
9656
9657
9658
9659
9660
9661
9662
9663
9664
9665
9666
9667
9668
9669
9670
9671
9672
9673
9674
9675
9676
9677
9678
9679
9680
9681
9682
9683
9684
9685
9686
9687
9688
9689
9690
9691
9692
9693
9694
9695
9696
9697
9698
9699
9700
9701
9702
9703
9704
9705
9706
9707
9708
9709
9710
9711
9712
9713
9714
9715
9716
9717
9718
9719
9720
9721
9722
9723
9724
9725
9726
9727
9728
9729
9730
9731
9732
9733
9734
9735
9736
9737
9738
9739
9740
9741
9742
9743
9744
9745
9746
9747
9748
9749
9750
9751
9752
9753
9754
9755
9756
9757
9758
9759
9760
9761
9762
9763
9764
9765
9766
9767
9768
9769
9770
9771
9772
9773
9774
9775
9776
9777
9778
9779
9780
9781
9782
9783
9784
9785
9786
9787
9788
9789
9790
9791
9792
9793
9794
9795
9796
9797
9798
9799
9800
9801
9802
9803
9804
9805
9806
9807
9808
9809
9810
9811
9812
9813
9814
9815
9816
9817
9818
9819
9820
9821
9822
9823
9824
9825
9826
9827
9828
9829
9830
9831
9832
9833
9834
9835
9836
9837
9838
9839
9840
9841
9842
9843
9844
9845
9846
9847
9848
9849
9850
9851
9852
9853
9854
9855
9856
9857
9858
9859
9860
9861
9862
9863
9864
9865
9866
9867
9868
9869
9870
9871
9872
9873
9874
9875
9876
9877
9878
9879
9880
9881
9882
9883
9884
9885
9886
9887
9888
9889
9890
9891
9892
9893
9894
9895
9896
9897
9898
9899
9900
9901
9902
9903
9904
9905
9906
9907
9908
9909
9910
9911
9912
9913
9914
9915
9916
9917
9918
9919
9920
9921
9922
9923
9924
9925
9926
9927
9928
9929
9930
9931
9932
9933
9934
9935
9936
9937
9938
9939
9940
9941
9942
9943
9944
9945
9946
9947
9948
9949
9950
9951
9952
9953
9954
9955
9956
9957
9958
9959
9960
9961
9962
9963
9964
9965
9966
9967
9968
9969
9970
9971
9972
9973
9974
9975
9976
9977
9978
9979
9980
9981
9982
9983
9984
9985
9986
9987
9988
9989
9990
9991
9992
9993
9994
9995
9996
9997
9998
9999
10000
10001
10002
10003
10004
10005
10006
10007
10008
10009
10010
10011
10012
10013
10014
10015
10016
10017
10018
10019
10020
10021
10022
10023
10024
10025
10026
10027
10028
10029
10030
10031
10032
10033
10034
10035
10036
10037
10038
10039
10040
10041
10042
10043
10044
10045
10046
10047
10048
10049
10050
10051
10052
10053
10054
10055
10056
10057
10058
10059
10060
10061
10062
10063
10064
10065
10066
10067
10068
10069
10070
10071
10072
10073
10074
10075
10076
10077
10078
10079
10080
10081
10082
10083
10084
10085
10086
10087
10088
10089
10090
10091
10092
10093
10094
10095
10096
10097
10098
10099
10100
10101
10102
10103
10104
10105
10106
10107
10108
10109
10110
10111
10112
10113
10114
10115
10116
10117
10118
10119
10120
10121
10122
10123
10124
10125
10126
10127
10128
10129
10130
10131
10132
10133
10134
10135
10136
10137
10138
10139
10140
10141
10142
10143
10144
10145
10146
10147
10148
10149
10150
10151
10152
10153
10154
10155
10156
10157
10158
10159
10160
10161
10162
10163
10164
10165
10166
10167
10168
10169
10170
10171
10172
10173
10174
10175
10176
10177
10178
10179
10180
10181
10182
10183
10184
10185
10186
10187
10188
10189
10190
10191
10192
10193
10194
10195
10196
10197
10198
10199
10200
10201
10202
10203
10204
10205
10206
10207
10208
10209
10210
10211
10212
10213
10214
10215
10216
10217
10218
10219
10220
10221
10222
10223
10224
10225
10226
10227
10228
10229
10230
10231
10232
10233
10234
10235
10236
10237
10238
10239
10240
10241
10242
10243
10244
10245
10246
10247
10248
10249
10250
10251
10252
10253
10254
10255
10256
10257
10258
10259
10260
10261
10262
10263
10264
10265
10266
10267
10268
10269
10270
10271
10272
10273
10274
10275
10276
10277
10278
10279
10280
10281
10282
10283
10284
10285
10286
10287
10288
10289
10290
10291
10292
10293
10294
10295
10296
10297
10298
10299
10300
10301
10302
10303
10304
10305
10306
10307
10308
10309
10310
10311
10312
10313
10314
10315
10316
10317
10318
10319
10320
10321
10322
10323
10324
10325
10326
10327
10328
10329
10330
10331
10332
10333
10334
10335
10336
10337
10338
10339
10340
10341
10342
10343
10344
10345
10346
10347
10348
10349
10350
10351
10352
10353
10354
10355
10356
10357
10358
10359
10360
10361
10362
10363
10364
10365
10366
10367
10368
10369
10370
10371
10372
10373
10374
10375
10376
10377
10378
10379
10380
10381
10382
10383
10384
10385
10386
10387
10388
10389
10390
10391
10392
10393
10394
10395
10396
10397
10398
10399
10400
10401
10402
10403
10404
10405
10406
10407
10408
10409
10410
10411
10412
10413
10414
10415
10416
10417
10418
10419
10420
10421
10422
10423
10424
10425
10426
10427
10428
10429
10430
10431
10432
10433
10434
10435
10436
10437
10438
10439
10440
10441
10442
10443
10444
10445
10446
10447
10448
10449
10450
10451
10452
10453
10454
10455
10456
10457
10458
10459
10460
10461
10462
10463
10464
10465
10466
10467
10468
10469
10470
10471
10472
10473
10474
10475
10476
10477
10478
10479
10480
10481
10482
10483
10484
10485
10486
10487
10488
10489
10490
10491
10492
10493
10494
10495
10496
10497
10498
10499
10500
10501
10502
10503
10504
10505
10506
10507
10508
10509
10510
10511
10512
10513
10514
10515
10516
10517
10518
10519
10520
10521
10522
10523
10524
10525
10526
10527
10528
10529
10530
10531
10532
10533
10534
10535
10536
10537
10538
10539
10540
10541
10542
10543
10544
10545
10546
10547
10548
10549
10550
10551
10552
10553
10554
10555
10556
10557
10558
10559
10560
10561
10562
10563
10564
10565
10566
10567
10568
10569
10570
10571
10572
10573
10574
10575
10576
10577
10578
10579
10580
10581
10582
10583
10584
10585
10586
10587
10588
10589
10590
10591
10592
10593
10594
10595
10596
10597
10598
10599
10600
10601
10602
10603
10604
10605
10606
10607
10608
10609
10610
10611
10612
10613
10614
10615
10616
10617
10618
10619
10620
10621
10622
10623
10624
10625
10626
10627
10628
10629
10630
10631
10632
10633
10634
10635
10636
10637
10638
10639
10640
10641
10642
10643
10644
10645
10646
10647
10648
10649
10650
10651
10652
10653
10654
10655
10656
10657
10658
10659
10660
10661
10662
10663
10664
10665
10666
10667
10668
10669
10670
10671
10672
10673
10674
10675
10676
10677
10678
10679
10680
10681
10682
10683
10684
10685
10686
10687
10688
10689
10690
10691
10692
10693
10694
10695
10696
10697
10698
10699
10700
10701
10702
10703
10704
10705
10706
10707
10708
10709
10710
10711
10712
10713
10714
10715
10716
10717
10718
10719
10720
10721
10722
10723
10724
10725
10726
10727
10728
10729
10730
10731
10732
10733
10734
10735
10736
10737
10738
10739
10740
10741
10742
10743
10744
10745
10746
10747
10748
10749
10750
10751
10752
10753
10754
10755
10756
10757
10758
10759
10760
10761
10762
10763
10764
10765
10766
10767
10768
10769
10770
10771
10772
10773
10774
10775
10776
10777
10778
10779
10780
10781
10782
10783
10784
10785
10786
10787
10788
10789
10790
10791
10792
10793
10794
10795
10796
10797
10798
10799
10800
10801
10802
10803
10804
10805
10806
10807
10808
10809
10810
10811
10812
10813
10814
10815
10816
10817
10818
10819
10820
10821
10822
10823
10824
10825
10826
10827
10828
10829
10830
10831
10832
10833
10834
10835
10836
10837
10838
10839
10840
10841
10842
10843
10844
10845
10846
10847
10848
10849
10850
10851
10852
10853
10854
10855
10856

           Linux Gazette... making Linux just a little more fun!
                                      
  Copyright  1996-97 Specialized Systems Consultants, Inc. linux@ssc.com
                                      
     _________________________________________________________________
                                      
                       Welcome to Linux Gazette! (tm)
                                      
   Sponsored by:
   
                                 InfoMagic
                                      
   Our sponsors make financial contributions toward the costs of
   publishing Linux Gazette. If you would like to become a sponsor of LG,
   e-mail us at sponsor@ssc.com.
   
     _________________________________________________________________
                                      
     _________________________________________________________________
                                      
                             Table of Contents
                            June 1997 Issue #18
                                      
     _________________________________________________________________
                                      
     * The Front Page
     * The MailBag
          + Help Wanted -- Article Ideas
          + General Mail
     * More 2 Cent Tips
          + A Fast and Simple Printing Tip
           + Grepping Files in a Directory Tree
          + ViRGE Chipset
          + Maintaining Multiple X Sessions
          + Automatic File Transfers
          + Setting Up Newsgroups
          + Color Application in X
          + X With 256 Colors
          + Video Cards on the S3/ViRGE
          + C Source With Line Numbers
          + ncftp Vs. ftplib
          + Domain & Dynamic IP Names
          + netcfg Tool
          + Putting Links to Your Dynamic IP
          + Hard Disk Duplication
          + Untar and Unzip
     * News Bytes
          + News in General
          + Software Announcements
     * The Answer Guy, by James T. Dennis
          + Networking Problems
          + Fetchmail
          + Procmail
           + Tcl/Tk Dependencies
          + /var/log/messages
          + OS Showdown
          + Adding Linux to a DEC XLT-366
          + Configuration Problems of a Soundcard
          + Procmail Idea and Question
          + UUCP/Linux on Caldera
          + ActiveX For Linux
          + What Packages Do I Need?
          + Users And Mounted Disks
          + [q] Map Left Arrow to Backspace
          + Adding Programs to Pull Down Menus
          + Linux and NT
          + pcmcia 28.8 Modems and Linux 1.2.13 Internet Servers
      * bash String Manipulations, by Jim Dennis
     * Brave GNU World, by Michael Stutz
     * Building Your Linux Computer Yourself, by Josh Turial
     * Cleaning Up Your /tmp, The Safe Way, by Guy Geens
     * Clueless at the Prompt: A Column for New Users, by Mike List
     * DiskHog: Using Perl and the WWW to Track System Disk Usage, by
       Ivan Griffin
     * dosemu & MIDI: A User's Report, by Dave Phillips
     * Graphics Muse, by Michael J. Hammel
     * New Release Reviews, by Larry Ayers
          + Bomb: An Interactive Image Generator
           + On-The-Fly Disk Compression
          + Xlock and Xlockmore
     * Red Hat Linux: Linux Installation and Getting Started, by Henry
       Pierce
     * SQL Server and Linux: No Ancient Heavenly Connections, But..., by
       Brian Jepson
     * The Weekend Mechanic, by John M. Fisk
     * The Back Page
          + About This Month's Authors
          + Not Linux
       
   A.L.S. 
   The Answer Guy
   The Weekend Mechanic
   
     _________________________________________________________________
                                      
   TWDT 1 (text)
   TWDT 2 (HTML)
   are files containing the entire issue: one in text format, one in
   HTML. They are provided strictly as a way to save the contents as one
   file for later printing in the format of your choice; there is no
   guarantee of working links in the HTML version.
   
     _________________________________________________________________
                                      
   Got any great ideas for improvements? Send us your comments,
   criticisms, suggestions and ideas.
   
     _________________________________________________________________
                                      
   This page written and maintained by the Editor of Linux Gazette,
   gazette@ssc.com
   
    "Linux Gazette...making Linux just a little more fun!"
    
     _________________________________________________________________
                                      
                                The Mailbag!
                                      
                    Write the Gazette at gazette@ssc.com
                                      
                                 Contents:
                                      
     * Help Wanted -- Article Ideas
     * General Mail
       
     _________________________________________________________________
                                      
                        Help Wanted -- Article Ideas
                                      
     _________________________________________________________________
                                      
   Date: Wed May 28 11:16:14 1997
   Subject: Help wanted: 2.1.40 will not boot
   From: Duncan Simpson, D.P.Simpson@ecs.soton.ac.uk
   
   2.1.40 dies after displaying the message Checking whether the WP bit
   is honored even in supervisor mode...
   
   A few prints hacked in later reveal that it enters the page fault
   handler, detects the bootup test and gets to the end of the C handler
   (do_fault in traps.c). However, it never gets back to continue
   booting---exactly where it gets lost is obscure.
   
   Anyone have any ideas/fixes?
   
   Duncan
   
     _________________________________________________________________
                                      
   Date: Fri, 16 May 1997 16:17:47 -0400
   Subject: CD-ROMs
   From: James S Humphrye, humpjs@aur.alcatel.com
   
   I just found the LG today, and I have read most of the back issues...
   Great job so far! Lots of really useful info in here!
   
   Now to my "problem". I installed Slackware 3.0, which went just fine.
   I had XFree86 and all the goodies working perfectly (no, really, it
   all worked just great!) Then I upgraded my machine to a P150, and
   installed a Trident 9660 PCI video card. Then the X server wasn't
   happy any more. So...I upgraded the kernel sources to 2.0.29, got all
   the required upgrades for GCC, etc. I built a new kernel, and it was
   up and running...sort of.
   
   Despite having compiled in support for both IDE and SCSI CDROMs, I can
   only get the IDE one to work. I have edited the rc.* scripts, launched
   kerneld, run depmod -s, and all the other things the docs recommend.
   
   I have rebuilt the kernel to zdisk about 25 times, trying different
   combinations of built-in and module support, all to no avail. When the
   system boots, the scsi host adapter is not detected (it is an AHA1521,
   located on a SB16/SCSI-2 sound card, and it worked fine under 1.2.13 &
   1.3.18 kernels.) When the aha152x module tries to load, it says it does
   not recognize scd0 as a block device. If I try to mount the SCSI unit,
   it says "init_module: device or resource busy". Any advice would be
   welcome. What I want is to at least be able to use the SCSI CDROM
   under Linux, or better yet, both it and the IDE CDROM...
   
   There are also a bunch of messages generated by depmod about
   unresolved symbols that I don't understand, as well as a bunch of
   lines generated by modprobe that say "cannot locate block-major-XX"
   (XX is a major number, and the ones I see are for devices not
   installed or supported by the kernel.) The second group of messages may
   be unimportant, but I don't know..
   
   Thanks in advance, Steve
   
     _________________________________________________________________
                                      
   Date: Mon, 26 May 1997 12:18:40 -0700
   Subject: Need Help From Linux Gazette
   From: Scott L. Colantonio, scott@burbank.k12.ca.us
   
   Hi... We have Linux boxes located at the remote schools and the
   district office. All remote school clients (Mac, WinNT, Linux)
   attempting to access the district office Linux boxes experience a 75
   second delay on each transaction. On the other hand, we do not
   experience any delay when district office clients (Mac, WinNT, Linux)
   attempt to access the remote school Linux boxes. The delay began when
   we moved all the remote school clients to a separate network (and
   different ISP) than the district office servers.
   
   To provide a map, consider this:
   
   remote school <-> city hall
   city hall <-> Internet
   Internet <-> district office
   
   We experience a 75 second delay: remote school client -> city hall ->
   Internet -> District office Linux box
   
   We do not experience any delay: remote school client -> city hall ->
   Internet
   
   We do not experience any delay: city hall -> Internet -> District
   office Linux box
   
   We do not experience any delay: District office client -> Internet ->
   city hall -> remote school Linux box ...
   
   The remote schools use a Linux box at City Hall for the DNS.
   
   In effect, the problem is isolated to the remote school clients
   connecting to the district office Linux boxes, just one hop away from
   city hall.
   
   As a result, the mail server is now a 75 second delay away from all
   educators in our district. Our Cisco reps do not think, after
   extensive tests, that this is a router configuration problem.
   
   I set up a Microsoft Personal Web Server at the district office to test
   if the delay was universal to our route. Unfortunately, there was no
   delay when remote school clients attempted to access the MS web
   server.
   
   Is this a known Linux network problem? Why is this a one-way problem?
   
   Any help would be greatly appreciated.
   
   Scott L. Colantonio
   
     _________________________________________________________________
                                      
   Date: Thu, 1 May 1997 16:16:58 -0700
   Subject: inetd
   From: Toby Reed, toby@eskimo.com
   
   I have a question for the inetd buffs out there...perhaps something
   like xinetd or a newer version has the capability to do the job, but
   what I want is this:
normal behavior:
connect to inetd
look in /etc/inetd.conf
run program

enhanced behavior:
connect to inetd
find out what hostname used to connect to inetd
look in /etc/inetd.conf.hostname if it exists, if not, use /etc/inetd.conf
run program listed in /etc/inetd.conf

    So if dork1.bob.com has the same IP address as dork2.bob.com, inetd
    would still be able to distinguish between them. In other words, it
    would be similar to the VirtualHost directive in Apache that lets you
    make virtual hosts sharing the same IP address, except with inetd.
   
   Or, depending on the hostname used to access inetd, inetd could
   forward the request to another address.
   
    This would be extremely useful in many limited-budget cases where a
    multitude of IPs is not available. For example, in combination with
    IP masquerading, it would allow a LAN host to be accessed
    transparently both ways on all ports, so long as it was accessed by a
    hostname, not an IP address. No port masquerading or proxies would be
    required unless the service needed was very special. Even non-inetd
    httpd servers would work with this kind of redirection, because the
    forwarded connection would still be handled by the httpd on the
    masqueraded machine.
   
   Anyone know if this already exists or want to add to it so I can
   suggest it to the inetd group?
   
   -Toby
   
     _________________________________________________________________
                                      
   Date: Thu, 8 May 1997 08:05:03 -0700 (PDT)
   Subject: S3 Virge Video Board
   From: Tim Gray & Family, timgray@lambdanet.com
   
    I have a Linux box using an S3 ViRGE video board with 4 MB of RAM.
    The problem is that X refuses to start at any color depth other than
    8bpp. As X is annoying at 8bpp (color flashing on every window, and
    several programs complain about no free colors), is there a way to
    FORCE X to start in 16bpp? Using the command startx -bpp 16 does not
    work, and erasing the 8bpp entry in the XF86Config file causes X to
    self-destruct. Even changing the Depth from 8 to 16 causes errors.
    Anyone have experience with this X server?
   
     _________________________________________________________________
                                      
   Date: Fri, 9 May 1997 09:20:05
   Subject: Linux and NT
   From: Greg McNichol, mcnichol@mcs.net
   
   I am new to LINUX (and NT 4.0 for that matter) and would like any and
   all information I can get my hands on regarding the dual-boot issue.
   Any help is appreciated.
   
   --Greg
   
     _________________________________________________________________
                                      
   Date: Wed, 14 May 1997 00:02:04
   Subject: Help with CD-ROM
   From: Ralph, ralphs@kyrandia.com
   
    I'm relatively new to Linux... not a coder or anything like that; I
    just like messing with new things. Anyway, I have been running Linux
    for about a year now and love the H*** out of it. About two weeks ago
    I was testing some HDs I picked up used, with this nifty plug-and-play
    BIOS I got, and when I went to restore the system back to normal, my
    CD-ROM no longer worked in Linux. I booted back into 95 and it still
    worked, so I tried forcing the darn thing: nothing, nada, zero. I
    booted with the install disks and still no CD-ROM. It's on the 2nd
    EIDE channel, set for cable select. I tried removing the 2nd hard
    drive and moving it there; still nothing. Can anyone give me some
    more suggestions to try?
   
     _________________________________________________________________
                                      
   Date: Thu, 15 May 1997 12:40:27 -0700
   Subject: Programming in C++
   From: Chris Walker, crwalker@cc.weber.edu
   
   Hi, I'm Chris Walker. I'm an undergrad computer science major at Weber
   State University. During my object oriented programming class Linux
    was brought up. The question was asked: "If C++ is so good for
    programs that are spread over different files or machines, why are
    Linux and Unix programmed in C, not C++?" I was hoping that you may
    have an answer. Has anyone converted the Linux source to C++? Would
    there be any advantages/disadvantages?
   
   Thanks, Chris Walker
   
     _________________________________________________________________
                                      
   Date: Thu, 15 May 1997 11:27:17 -0700 (PDT)
   Subject: Programming Serial Ports
   From: Celestino Rey Lopez, claude@idecnet.com
   
    First of all, congratulations on the good job you do with the Linux
    Gazette.
   I'm interested in programming the serial ports in order to get data
   from other computers or devices. In other Unixes it is possible, via
   ioctl, to ask the driver to inform a process with a signal every time
   a character is ready in the port. For example, in HP-UX, the process
    receives a SIGIO signal. In Linux, SIGIO means input/output error. Do
    you know where I can get information about this matter? Are there any
    books covering it?
   
    Thanks in advance, and thanks for providing the Linux community with
    lots of tricks, ideas and information about this amazing operating
   system.
   
   Yours, Celestino Rey Lopez.
   
     _________________________________________________________________
                                      
                                General Mail
                                      
     _________________________________________________________________
                                      
    Date: Fri, 16 May 1997 10:53:18
   Subject: Response to VGA-16 Server in LG Issue 17
   From: Andrew Vanderstock, Andrew.van.der.Stock@member.sage-au.org.au
   
    I'll look into it, even though VGA16 has a very short life. Yes, he
    is correct: there isn't much in the way of testing dual-headedness
    with a Hercules card and VGA16, as both are getting quite long in the
    tooth. VGA16 disappears in a few months to reappear as the argument
    -bpp 4 on most display adapters. One bug fixer managed to re-enable
    Hercules support in the new source tree a while back, so there may be
    life there yet.
   
    Also, there was one 2-cent issue that was a little out of whack with
    regard to linear addressing. The Cirrus chipsets are not fabulous,
    but many people have them built into their computers (laptops, HP
    PCs, etc.).
   
    All I can suggest is that he try startx -- -bpp 16 and see if that
    works. If it doesn't, have a look at the release notes for his
    chipset. If all else fails, report any XFree86 bugs to the bug report
    CGI at www.xfree86.org.
   
   I'll ask the powers that be if I can write an article for you on
   XFree86 3.3, the next version of the current source tree, as it is due
   soon. How many words are your articles generally?
   
   Andrew Vanderstock
   
     _________________________________________________________________
                                      
   Date: Sat, 24 May 1997 01:32:29 -0700
   Subject: Secure Anonymous FTP setup mini-howto spotted, then lost
   From: Alan Bailward, ajb@direct.ca
   
    I once saw, on a friend of mine's Linux box running Slackware 3.1, in
    /usr/docs/faq/HOWTO/mini, a mini-HOWTO on how to set up a secure
    anonymous FTP server. It detailed how to set up all the directories,
    permissions, and so on, so you could upload, have permissions to write
   but not delete on your /incoming, etc etc etc. It looked like a great
   doc, but for the life of me I can't find it! I've looked on the
   slackware 3.2 cdrom, the 3.1 cdrom, searched all through the net, but
    to no avail. As I am trying to set up an anonymous ftp site now, this
   would be invaluable... I'd feel much better reading it than 'chmod
   777'ing all over the place :)
   
   If anyone has seen this document, or knows where it is, please let me
   know. Or even if there is another source of this type of information,
   I would sure appreciate it sent to me at ajb@direct.ca
   
   Thanks a lot, and keep on Linuxing!
   
   alan
   
     _________________________________________________________________
                                      
   Date: Mon, 26 May 1997 13:21:20 +0800
   Subject: Tuning XFree86
   From: Soh Kam Yung, kysoh@ctlsg.creaf.com
   
   I've been reading Linux Gazette since day one and it has been great.
   Keep up the good work.
   
   I've been seeing comments and letters in the Gazette from people who
   are having trouble with their XFree86. Well, here's a tip for those
   not satisfied with the way their screen looks (offset to one side, too
   high/wide, etc.).
   
   While looking through the XFree86 web site for tips on how to tweak my
   XF86 configuration, I noticed a reference to a program called
    xvidtune. Not many people may have heard of it, but it is a program
    used to tune your video modes. Its features include:
    1. the ability to modify your graphics screen 'on-the-fly'. You can
        move the screen, stretch/compress it vertically or horizontally and
       see the results.
    2. it can generate a modeline of the current screen setting. Just
       copy it into the correct area of your XF86Config file and the next
       time you start up the XFree86 server, the screen will come up the
       way you like it.
       
    Just run xvidtune and have fun with it! But be careful: as with
    XFree86 in general, there is no guarantee that the program will not
    burn up your monitor by generating invalid settings. Fortunately, it
    has a quick escape (press 'r' to restore your previous screen
    settings).
   
   Regards, -- Soh Kam Yung
   
     _________________________________________________________________
                                      
   Date: Fri, May 30 1997 12:34:23
   Subject: Certification and training courses for Linux
   From: Harry Silver, hsilver@pyx.net
   
   I am currently on a mailing list for consultants for Red Hat Linux.
   One of my suggestions to that list is contained below. I truly hope as
   part of a broader international initiative, Linux International will
   pick up the ball on this one so as to ensure that Linux generically
   will survive. I truly hope that someone from your organization will
   follow up both with myself and with the Red Hat consulting mailing
   list as to a more generic Linux support effort in this area. All that
   would be required is gathering up the manuals from the older Unixware
   CNE course and 'porting' them to Linux and creating an HTMLized
   version. This along with online testing could easily generate a
   reasonable revenue stream for the generic Linux group involved.
   
   Respectfully,
   
   MY SUGGESTION: About two years ago, Novell still had Unixware before
   sending it over to the care of SCO. At the time Unix was under the
    stewardship of Novell, a Unixware CNE course was developed. Since Ray
    Noorda of Caldera, former CEO of Novell, is an avid supporter of
    Linux, as are the good folks at Red Hat and other distributions,
    rather than RE-INVENT the wheel, so to speak, wouldn't it make more
    sense to pattern certification AFTER the Unixware CNE courses by
    'porting' the course to Linux GENERICALLY?
   
   Harley Silver
   
     _________________________________________________________________
                                      
   Date: Fri, 24 May 1996 11:39:25 +0200
   Subject: Duplicating a Linux Installed HD
   From: Dietmar Kling, kling@tao.de
   
    Hello. I had duplicated my hard disk before you released the articles
    about it. A friend of mine, new to Linux, tried to do it too using
    your instructions. But we discovered, when he copied my root partition,
    that he couldn't compile anything on his computer afterwards. A bug in
    libc.so.5.2.18 prevented his old 8 MB machine from running make or gcc;
    it always aborted with an error. After updating libc.so.5.2.18 and
    running ldconfig, the problem was solved.
    
    We had a SuSE 4.0 installation.
   
   Dietmar
   
     _________________________________________________________________
                                      
   Date: Sat, 10 May 1997 16:09:29 +0200 (MET DST)
   Subject: Re: X Color Depth
   From: Roland Smith, rsmit06@ibm.net
   
    In response to Michael J. Hammel's 2-cent tip in issue #17: I disagree
    that a 16-bit display shows fewer colors than an 8-bit display.
    
    Both kinds of displays use a colormap. A color value is nothing more
    than an index into a color map, which is an array of red, green, blue
    triplets, each 8 bits. The number of colors that can be shown
    simultaneously depends on the graphics hardware.
    
    An 8-bit display has an eight-bit color value, so it can have at most
    256 different color values. The color map links these to 256 different
    colors which can be displayed simultaneously. Each of these 256 colors
    can be one of the 2^24 different colors possible with the 3*8 bits in
    each colormap entry (or color cell, as it is called).
    
    A 16-bit display has a sixteen-bit color value, which can have
    2^16=65536 different values. The colormap links these to 65536
    different, simultaneously visible, colors (out of 2^24 possible
    colors). (Actually it's a bit more complicated than this, but that's
    beside the point.)
    
    So both an 8-bit and a 16-bit display can show 2^24=16.7*10^6 colors.
    The difference lies in the number of colors they can show *at once*.
   
   Regards, Roland
   
     _________________________________________________________________
                                      
   Date: Fri, May 30 1997 13:24:35
   Subject: Using FTP as a shell-command with ftplib
   
   From: Walter Harms, Walter.Harms@Informatik.Uni-Oldenburg.DE ...
   
    Any drawbacks? Of course, for any ftp session you need a
    user/password. I copy into a public area using anonymous/email@;
    others will need to supply a password at login, which is not very
    useful for regular jobs, or you have to use some kind of public
    login. But I still think it's easier and better to use than the
    r-cmds.
   
   -- walter
   
     _________________________________________________________________
                                      
   Date: Mon, 12 May 1997 17:05:09 -0700
   Subject: RE: Using ftp Commands in Shellscript
   From: James Boorn, jboorn@optum.com
   
   I recommend you depend on .netrc for ftp usernames and passwords for
   automated ftp.
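
    A minimal ~/.netrc entry might look like this (the host, user, and
    password below are made-up placeholders; keep the file mode 600,
    since many ftp clients refuse to use a .netrc that is readable by
    others):

```
machine ftp.example.com
login myuser
password secret
```

    With that in place, "ftp ftp.example.com" logs in without prompting,
    which makes ftp usable from cron jobs and shell scripts.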
   
   James Boorn
   
     _________________________________________________________________
                                      
   Date: Thu, 29 May 1997 09:09:35 -0500
   Subject: X limitation to 8 Bit Color (Response to Gary Masters)
   From: Omegaman, omegam@COMMUNIQUE.NET
   
   I read your question in Linux Gazette regarding an X limitation to 8
   bit color when the system has more that 14 megs of RAM. Where did you
   find that information? I ask because my system has 24 megs of RAM, and
   I run 16 bit color all the time. One difference between our systems is
   that I am using a Diamond Stealth 64 video card.
   
   Gary,
   
   Just caught this letter in Linux Gazette. This limitation is specific
   to Cirrus Logic cards, particularly those on the ISA bus and some on
    VLB (i.e. old systems -- like mine). Since you're using a Diamond
    Stealth 64, you don't have this limitation.
   
    Full details are in the README.cirrus file contained in the XFree86
    documentation. Some Cirrus owners may be able to overcome this
    limitation. See http://xfree86.org.
   
     _________________________________________________________________
                                      
   Date: Fri, May 30 1997 8:31:25
   Subject: Response to Gary Masters
    From: Ivan Griffin, Ivan.Griffin@ul.ie
   
   From: Gary Masters gmasters@devcg.denver.co.us
   
   I read your question in Linux Gazette regarding an X limitation to 8
   bit color when the system has more than 14 megs of RAM. Where did you
   find that information? I ask because my system has 24 megs of RAM, and
   I run 16 bit color all the time. One difference between our systems is
   that I am using a Diamond Stealth 64 video card.
   
    XFree86 needs to be able to map video memory in linearly at the end
    of physical memory. However, ISA machines cannot support greater than
    16 MB in this fashion - so if you have 16 MB or more of RAM, you
    cannot run XFree86 in higher than 8-bit color.
   
   Ivan
   
     _________________________________________________________________
                                      
               Published in Linux Gazette Issue 18, June 1997
                                      
     _________________________________________________________________
                                      
      This page written and maintained by the Editor of Linux Gazette,
      gazette@ssc.com
       Copyright (c) 1997 Specialized Systems Consultants, Inc.
      
                              More 2 Cent Tips!
                                      
                                      
               Send Linux Tips and Tricks to gazette@ssc.com 
                                      
     _________________________________________________________________
                                      
  Contents:
  
     * A Fast and Simple Printing Tip
     * Grepping Files in a Directory Tree
     * ViRGE Chipset
     * Maintaining Multiple X Sessions
     * Automatic File Transfers
     * Setting Up Newsgroups
     * Color Application in X
     * X With 256 Colors
     * Video Cards on the S3/ViRGE
     * C Source With Line Numbers
     * ncftp Vs. ftplib
     * Domain & Dynamic IP Names
     * netcfg Tool
     * Putting Links to Your Dynamic IP
     * Hard Disk Duplication
     * Untar and Unzip
       
     _________________________________________________________________
                                      
  Monitoring a ftp Download.
  
   Date: Tue, 27 May 1997 09:57:20 -0400
    From: Bob Grabau bob_grabau@fmso.navy.mil
   
    Here is a tip for monitoring an ftp download. In another virtual
    console, enter the following script:

while :
do
clear
ls -l <filename that you are downloading>
sleep 1
done

   This virtual console can be behind (if you are using X) any other
   window and just showing a line of text. This will let you know if your
   download is done or stalled. This will let you do other things, like
   reading the Linux Gazette.
   
    When you type this in, you will get a > prompt after the first line,
    and this continues until you enter the last line.
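
    The same loop can also be typed on a single line. As a sketch, here
    it is bounded to three passes so it stops on its own (the file name
    is a stand-in; drop the counter and go back to "while :" for the
    endless original):

```shell
# Bounded one-line version of the loop above: list the file three
# times, one second apart, then return to the prompt.
i=0; while [ $i -lt 3 ]; do clear; ls -l /etc/hosts; sleep 1; i=$((i+1)); done
```

    Semicolons replace the newlines, so no > continuation prompts appear.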
   
   -- Bob Grabau
   
     _________________________________________________________________
                                      
  Logging In To X Tip
  
   Date: Mon, 26 May 1997 10:17:12 -0500 (CDT)
   From: Tom Barron barron@usit.net
   Xlogin.mini-howto
   
    Several people regularly use my Linux system at home (an
    assembled-from-components box containing a 133 MHz Pentium, 2 GB of
    disk, 32 MB of memory, running the Slackware distribution) -- my
   step-son Stephen, who's learning to program and likes using X, my
   younger step-son Michael, who likes the X screen-savers and games like
   Doom, my wife Karen, who prefers the generic terminalness of the
   un-X'd console, and myself -- I like to use X for doing software
   development work since it lets me see several processes on the screen
   at once. I also like to keep an X screen saver running when no-one is
   using the machine.
   
   I didn't want to run xdm (an X-based login manager), since Karen
   doesn't want to have to deal with X. She wants to be at the console
    when she logs in and not have to worry about where to click the
   mouse and such. But I wanted to have a simple way of getting into X
   when I login without having to start it up manually.
   
   Here's what I came up with:
   
     * In my .profile (my shell is bash), I put:
if [ "$DISPLAY" = "" ]; then

   cal > ~/.month
   xinit .Xsession > /dev/null 2>&1
   clear
   if [ ! -f .noexit ]; then
      exit
   fi

else

   export TTY=`tty`
   export TTY=`expr "$TTY" : "/dev/tty\(.*\)"`
   export PS1=" \n$ "
   export PATH=${PATH}:~/bin:.
   export EDITOR=emacs
   export WWW_HOME=file://localhost/home/tb/Lynx/lynx_bookmarks.html
   export DISPLAY

   alias cls="clear"
   alias dodo="$EDITOR ~/prj/dodo"
   alias e="$EDITOR"
   alias exit=". ~/bin/off"
   alias l="ls -l"
   alias lx="ls -x"
   alias minicom="minicom -m"
   alias pg=less
   alias pine="export DISPLAY=;'pine'"
   alias prj=". ~/bin/prj"
   alias profile="$EDITOR ~/.profile; . ~/.profile"

fi
   When I first login, on the console, $DISPLAY is not yet set, so the
       first branch of the if statement takes effect and we start up X.
       When X terminates, we'll clear the screen and, unless the file
       .noexit exists, logout. Running cal and storing the output in
        .month is in preparation for displaying a calendar in a window
       under X.
     * Once X comes up, $DISPLAY is set. My .Xsession file contains:

:
xsetroot -solid black
fvwm &
oclock -geometry 75x75-0+0 &
xload -geometry 100x75+580+0 &
emacs -geometry -0-0 &
xterm -geometry 22x8+790+0 -e less ~/.month &
color_xterm -font 7x14 -ls -geometry +5-0 &
exec color_xterm -font 7x14 -ls -geometry +5+30 \
   -T "Type 'exit' in this window to leave X"
   So when my color_xterms run, with -ls as an argument (which says to
       run a login shell), they run .profile again. Only this time
       $DISPLAY is set, so they process the else half of the if, getting
       the environment variables and aliases I normally expect.
       
     _________________________________________________________________
                                      
  xlock Tip
  
   Date: Mon, 26 May 1997 10:14:12 -0500 (CDT)
   From: Tom Barron barron@usit.net Xscreensaver.mini-howto
   
    My home setup is the same one described in the "Logging In To X" tip
    above: an assembled-from-components box running Slackware, shared by
    my wife, my two step-sons, and myself. I also like to keep an X
    screen saver running when no-one is using the machine.
   
   I didn't want to run xdm (an X-based login manager), since Karen
   doesn't want to have to deal with X. She wants to be at the console
    when she logs in and not have to worry about where to click the
   mouse and such. But I wanted to have a simple way of starting up the
   X-based screensaver xlock when I (or anyone) logged out to the console
   login.
   
   Here's what I did (as root):
   
     * I created a user called xlock. It has no password and its home
       directory is /usr/local/xlock. Its shell is bash.
     * In xlock's .profile, I put

if [ "$DISPLAY" = "" ]; then

   xinit .Xsession > /dev/null 2>&1
   clear
   exit

fi
     * In xlock's .Xsession, I put

:
exec xlock -nolock -mode random

    Now, anybody can log in as xlock and instantly bring up the X
    screen-saver. The "random" keyword tells it to select a pattern to
   display at random, changing it every so often. When a key is pressed
   or a mouse button clicked, the screensaver process exits, the X
   session is ended, and control returns to the console login prompt.
   
   In my next article, I show how I arranged to jump into X from the
   console login prompt just by logging in (i.e., without having to start
   X manually).
   
     _________________________________________________________________
                                      
  Hex Dump
  
   Date: Sat, 24 May 1997 00:29:20 -0400
   From: Joseph Hartmann joeh@arakis.sugar-river.net
   
   Hex Dump by Joseph L. Hartmann, Jr.
   
   This code is copyright under the GNU GPL by Joseph L. Hartmann, Jr.
   
   I have not been happy with Hex Dump. I am an old ex-DOS user, and am
   familiar with the HEX ... ASCII side-by-side presentation.
   
   Since I am studying awk and sed, I thought it would be an interesting
    exercise to write this type of dump.
   
    Here is a sample of what you may expect when you type the (script)
    command "jhex filename" at the shell:

0000000  46 69 6c 65 6e 61 6d 65  0000000 F i l e n a m e
0000008  3a 20 2f 6a 6f 65 2f 62  0000008 :   / j o e / b
0000010  6f 6f 6b 73 2f 52 45 41  0000010 o o k s / R E A
0000018  44 4d 45 0a 0a 62 6f 6f  0000018 D M E . . b o o
0000020  6b 2e 74 6f 2e 62 69 62  0000020 k . t o . b i b
0000028  6c 69 6f 66 69 6e 64 2e  0000028 l i o f i n d .
0000030  70 65 72 6c 20 69 73 20  0000030 p e r l   i s

   If you like it, read on....
The 0000000 is the hexadecimal address of the dump
46 is the hexadecimal value at 0000000
69 is the hexadecimal value at 0000001
6c is the hexadecimal value at 0000002
...and so on.

   To the right of the repeated address, "F i l e n a m e" is the 8 ascii
   equivalents to the hex codes you see on the left.
   
   I elected to dump 8 bytes in one row of screen output. The following
   software is required: hexdump, bash, less and gawk.
   
   gawk is the GNU/Linux version of awk.
   
   There are four files that I have installed in my /joe/scripts
   directory, a directory that is in my PATH environment.
   
    The four files are:
    
      * combine -- an executable script; you must "chmod +x combine"
      * jhex -- an executable script; you must "chmod +x jhex"
      * hexdump.dashx.format -- a data file holding the formatting
        information for the hex bytes
      * hexdump.perusal.format -- a data file holding the formatting
        information for the ascii bytes
   
   Here is the file jhex:
hexdump -f /joe/scripts/hexdump.dashx.format $1 > /tmp1.tmp
hexdump -f /joe/scripts/hexdump.perusal.format $1 > /tmp2.tmp
gawk -f /joe/scripts/combine /tmp1.tmp > /tmp3.tmp
less /tmp3.tmp
rm /tmp1.tmp
rm /tmp2.tmp
rm /tmp3.tmp

   Here is the file combine:
# this is /joe/scripts/combine -- it is invoked by /joe/scripts/jhex
{  getline < "/tmp1.tmp"
   printf("%s  ",$0)
   getline < "/tmp2.tmp"
   print
}

   Here is the file hexdump.dashx.format:
           "%07.7_ax  " 8/1 "%02x "  "\n"

   Here is the file hexdump.perusal.format:
           "%07.7_ax "  8/1  "%_p " "\n"

   I found the "sed & awk" book by Dale Dougherty helpful.
   
   I hope you find jhex useful. To make it useful for yourself, you will
   have to replace the "/joe/scripts" with the path of your choice. It
   must be a path that is in your PATH, so that the scripts can be
   executed from anyplace in the directory tree.
   
   A trivial note: do not remove the blank line from the
    hexdump.dashx.format and hexdump.perusal.format files: it will not work
   if you do!
   
    A second trivial note: when a file contains many characters all of
    the same kind, the line-by-line display will be abbreviated and the
    display will look similar to the example below:

0000820  75 65 6e 63 65 20 61 66  0000820 u e n c e   a f
0000828  74 65 72 20 74 68 65 20  0000828 t e r   t h e
0000830  0a 20 20 20 20 20 20 20  0000830 .
0000838  20 20 20 20 20 20 20 20  0000838
*  *
0000868  20 20 20 20 20 6c 61 73  0000868           l a s
0000870  74 20 72 65 63 6f 72 64  0000870 t   r e c o r d

   Instead of displaying *all* the 20's, you just get the
*  *  .

   I don't like this myself, but I have reached the end of my competence
   (and/or patience), and therefore, that's the way it is!
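
    As an aside, if your hexdump is a reasonably recent BSD-style one, it
    may already support a -C (canonical) flag that produces a one-command
    hex+ASCII view, 16 bytes per line, though without jhex's repeated
    address column:

```shell
# Canonical hex + ASCII dump, 16 bytes per line, if hexdump has -C:
echo 'Filename: /joe/books' | hexdump -C
```

    Like jhex, it collapses long runs of identical bytes to a single
    "*" line.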
   
     _________________________________________________________________
                                      
  A Fast and Simple Printing Tip
  
   Date: Fri, 23 May 1997 07:30:38 -0400
   From: Tim Bessell tbessell@buffnet.net
   
    I have been using Linux for about a year; as each day passes and my
    knowledge increases, my Win95 partitions decrease. This prompted me to
    buy a notebook, which of course is loaded with Windows. Currently these
    two machines are NOT networked :-( But that doesn't mean I can't print
    a document created in Word for Windows, Internet Explorer, etc.,
    without plugging my printer cable into the other machine.
    
    My solution is rather simple. If you haven't already, add a new
    printer in the Windows control panel, using the driver for the printer
    that is connected to your Linux box. Select "FILE" as the port you
    wish to print to and give it a name, e.g.: Print File (HP DeskJet 540).
    Now print your document to a floppy disk file, take it to the Linux
    machine, and issue a command similar to: cat filename > /dev/lp1. Your
   document will be printed with all the formatting that was done in
   Windows.
   
   Enjoy,
   Tim Bessell
   
     _________________________________________________________________
                                      
  Grepping Files in a Directory Tree
  
   Date: Wed, 21 May 1997 21:42:34
   From: Earl Mitchell earlm@Terayon.COM
   
    Ever wonder how you can grep certain files in a directory tree for a
    particular string? Here's an example:
grep foo `find . -name \*.c -print`

   This command will generate a list of all the .c files in the current
   working directory or any of its subdirectories then use this list of
   files for the grep command. The grep will then search those files for
   the string "foo" and output the filename and the line containing
   "foo".
   
    The only caveat here is that UNIX is configured to limit the maximum
    number of characters on a command line, and the "find" command may
    generate a list of files too large for the shell to digest when it
    tries to run the grep portion as a command line. Typically this limit
    is 1024 chars per command line.
   
   -earl
   
     _________________________________________________________________
                                      
  ViRGE Chipset
  
   Date: Wed, 30 Apr 1997 22:41:28
   From: Peter Amstutz amstpi@freenet.tlh.fl.us
   
   A couple suggestions to people with video cards based on the ViRGE
   Chipset...
    1. XFree 3.2 has a ViRGE server! I have heard a number of people
       complain about XFree's lack of ViRGE support. Yo GUYZ! That's
       because your wonderful Linux CD has XFree86 3.1.2 WHICH IS NOT THE
       MOST RECENT VERSION!
     2. There is a minor hack you can make to svgalib 1.12.10 to get it to
        recognize your nice S3-based card as actually being such. The
        S3/ViRGE chip is, in the words of some guy at C|Net, "basically a
        S3 Trio 64 with a 3d engine bolted on top." Unfortunately, it
        returns a card code totally different from the Trio64. With just a
        minor little bit of hacking, you too can do 1024x768x16bpp through
        svgalib. Get the source and untar it. Go into the main source
        directory, and with your favorite editor, open up s3.c (or maybe
        vga.c; it has been some time since I did this and I do not have
        the source in front of me now). Now, search for the nice little
        error message it gives you when it says something like "S3 chip
        0x(some hex number) not recognized." Above it there should be a
        switch()/case statement that figures out which card it is. Find
        the case statement that matches a Trio64. Insert a fall-through
        case statement that matches the code your card returns, so
        svgalib treats it as a Trio64! You're home free! Recompile,
        re-install the libraries, and now, what we've all been waiting
        for, test 640x480x256! 640x480x16bpp! 800x600x24bpp! YES!!!
       
   Note: this trick has not been authorized, recognized, or in any way
   endorsed, recommended, or even considered by the guy(s) who wrote
   svgalib in the first place. (The last version of svgalib is over a
   year old, so I don't expect there to be any new versions real soon.) It
   works for me, so I just wanted to share it with the Linux community
   that just might find it useful. Peter Amstutz
   
     _________________________________________________________________
                                      
  Maintaining Multiple X Sessions
  
   Date: Sun, 04 May 1997 21:02:10 +0200
   From: David Kastrup dak@neuroinformatik.ruhr-uni-bochum.de
   
   Suppose you have an X running, and want to start another one (perhaps
   for a different user).
   
   startx alone will complain.
   
   Writing
startx -- :1

   
   will work, however (if display 0 is already taken). Start another one
   with
startx -- :2

   
   if you want. You want that to have hicolor, and your Xserver would
   support it?
   
   Then start it rather with
startx -- -bpp 16 :2

   Of course, if no Xserver is running yet, you can get a non-default
   depth by just starting with
startx -- -bpp 16

   or
startx -- -bpp 8

   
   or whatever happens to be non-standard with you. -- David Kastrup
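   Once a second server is running, clients can be pointed at it
   explicitly through the DISPLAY variable (a general X convention,
   not something startx sets up for you; sketch assumes a second
   server was started with "startx -- :1"):

```shell
# Start a client on the second server (display :1).  A variable
# assignment in front of a command applies to that command only:
#   DISPLAY=:1 xterm &
# The scoping can be verified without an X server at all:
DISPLAY=:1 sh -c 'echo $DISPLAY'   # prints :1
echo "$DISPLAY"                    # still whatever it was before
```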
   
     _________________________________________________________________
                                      
  Automatic File Transfer
  
   Date: Sat, 3 May 1997 12:58:11 +0200 (MDT)
   From: Gregor Gerstmann gerstman@tfh-berlin.de
   
   Hi there, Here is a small tip concerning 'automatic' file transfer;
   see Linux Gazette Issue 17, May 1997. Everything is known stuff in
   Unix and Linux. To 'automate' file transfer for me means to
   minimize the load on the remote server as well as my own telephone
   costs - you pay for the time you spend deciding whether or not to
   get a particular file, for changing directories and for typing the
   names into the PC. The procedure is called with the address as
   parameter and generates a protocol.

#!/bin/bash
#
date > prot
#
ftp -v $1 >> prot
#
#
date >> prot
#

   Ftp now looks if a .netrc file exists; in this file I use macros
   written in advance and numbered consecutively:

...
machine ftp.ssc.com login anonymous password -gerstman@tfh-berlin.de
macdef T131
binary
prompt
cd ./pub/lg
pwd
dir . C131.2
get lg_issue17.tar.gz SSC17

macdef init
$T131
bye
...

   Now I first get the contents of several directories via dir .
   C131... and, to have some book-keeping, logically use the same
   numbers for the macros and the directories. The protocol shows
   whether I am really in the directory I wanted. Before the next
   session begins, the file C131... is used to edit the latest .netrc
   file, so the names will always be typed correctly. If you are
   downloading under DOS from your account, the shorter names are
   defined in the .netrc file. Everything is done beforehand with vi
   under Linux.
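   The reason the transfer then needs no typing at all is that ftp runs
   the macro named init automatically after logging in to that machine.
   A stripped-down .netrc showing just that mechanism might look like
   this (host, directory and file names are placeholders; note that a
   macdef definition is terminated by a blank line):

```
machine ftp.example.com login anonymous password -you@your.site
macdef init
binary
prompt
cd /pub/some/dir
get somefile.tar.gz
bye

```

   With that in place, "ftp ftp.example.com" performs the whole
   transfer unattended.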
   
   Dr.Werner Gerstmann
   
     _________________________________________________________________
                                      
  Setting Up Newsgroups
  
   Date: Mon, 05 May 1997 16:19:05 -0600
   From: "Michael J. Hammel" mjhammel@emass.com
   
   But I just can't seem to find any documentation explaining how to set
   up local newsgroups. smtpd and nntpd are running, but the manpages
   won't tell anything about how to set up ng's
   
   smtpd and nntpd are just transport agents. They could just as easily
   transport any sort of message files as they do mail or NetNews files.
   What you're looking for is the software which manages these files on
   your local system (if you want newsgroups available only locally then
   you need to have this software on your system). I used to use CNEWS
   for this. I believe there are some other packages, much newer than
   CNEWS, that might make it easier. Since I haven't used CNEWS in awhile
   I'm afraid I can't offer any more info than this.
   
   Michael J. Hammel
   
     _________________________________________________________________
                                      
  Color Applications in X
  
   Date: Tue, 06 May 1997 09:25:01 -0400 (EDT)
   From: Oliver Oberdorf oly@borg.harvard.edu
   
   Saw some X Window tips, so I thought I'd send this one along..
   
   I tend to use lots of color rich applications in X. After cranking up
   XEmacs, Gimp, etc., I find that I quickly run out of palette on my
   8-bit display. Most programs don't behave sensibly when I run out of
   colors - for example, CGoban comes up black and white and realaudio
   refuses to run at all (not enough colors to play sound, I suppose).
   
   I've found I can solve these problems by passing a "-cc 4" option to
   the X server. This tells it to pretend I have a bigger palette and to
   pass back closest matches to colors when necessary. I've never run out
   of colors since then.
   
   There are caveats: programs that check for a full colormap and install
   their own (color flashing) will automatically do so. This includes
   netscape and XForms programs (which I was running with private color
   maps anyway). My copy of LyriX makes the background black. Also, I
   tried Mosaic on a Sun and had some odd color effects.
   
   oly
   
     _________________________________________________________________
                                      
  X With 256 Colors
  
   Date: Tue, 06 May 1997 09:40:10 -0400 (EDT)
   From: Oliver Oberdorf oly@borg.harvard.edu
   
   I forgot to add that the -cc 4 can be used like this:
startx -- -cc 4

   
   (I use xdm, so I don't have to do it this way)
   
   sorry about that
   
   oly
   
     _________________________________________________________________
                                      
  Video Cards on the S3/ViRGE
  
   Date: Mon, 05 May 1997 20:44:13 -0400
   From: Peter Amstutz amstpi@freenet.tlh.fl.us
   
   A couple of suggestions for people with video cards based on the
   S3/ViRGE Chipset... (which includes many video cards that ship with
   new computers claiming to have 3D accelerated graphics. Don't
   believe it. The 3D graphics capability of all ViRGE-based chips
   sucks. They make better cheap 2D accelerators.)
    1. XFree 3.2 has a ViRGE server! I have heard a number of people
       complain about XFree's lack of ViRGE support. Yo GUYZ! That's
       because your wonderful Linux CD has XFree86 3.1.2 WHICH IS NOT THE
       MOST RECENT VERSION!
     2. There is a minor hack you can make to svgalib 1.12.10 to get it
        to recognize your nice S3-based card as actually being such.
        The S3/ViRGE chip is, in the words of some guy at C|Net,
        "basically a S3 Trio 64 with a 3d engine bolted on top." (as
        noted, the 3D engine is really slow) Unfortunately, it returns
        a card ID code totally different from the Trio64's. But, drum
        roll please, with just a little bit of hacking, you too can do
        1024x768x16bpp through svgalib! Just follow these E-Z steps:
       
      I. Get the source, untar it & everything.
     II. Go into the main source directory, and with your favorite
         editor (vim forever!), open up s3.c
    III. Now, search for the nice little error message "S3: Unknown
         chip id %02x\n" around line 1552. Above it there should be a
         switch()/case statement that figures out which card you have
         based on an ID code. Find the case statement that matches a
         Trio64. Insert a fall-through case statement that matches the
         code your card returns, so svgalib treats it as a Trio64!
         Like this: (starts at line 1537 of s3.c)
            case 0x11E0:
                s3_chiptype = S3_TRIO64;
                break;
becomes
            case 0x11E0:
            case 0x31E1:
                s3_chiptype = S3_TRIO64;
                break;

   
   Replace 0x31E1 with the appropriate ID if your card returns a
   different code.
   
   Save it! You're home free! Recompile, re-install libraries, and now,
   what we've all been waiting for, test some svga modes! 640x480x256!
   640x480x16bpp! 800x600x24bpp! YES!!!
   
   But wait! One thing to watch out for. First, make sure you reinstall
   it in the right place! Slackware puts libvga.a in /usr/lib/, so make
   sure that is the file you replace. Another thing: programs compiled
   with svgalib statically linked in will have to be rebuilt with the
   new library; otherwise they will just go along in their brain-dead
   fashion, blithely unaware that your card is not being used to
   nearly its full potential.
   
   Note: this hack has not been authorized, recognized, or in any way
   endorsed, recommended, or even considered by the guy(s) who wrote
   svgalib. The last version of svgalib is over a year old, so I don't
   expect there to be any new versions real soon. It works for me, so I
   just wanted to share it with the Linux community that just might find
   it useful. This has only been tested on my machine, using a Diamond
   Stealth 3D 2000, so if you have a different ViRGE-based card and you
   have problems you're on your own.
   
   No, there are no Linux drivers that use ViRGE "accelerated 3D"
   features. It sucks, I know (then again, the 3D performance of ViRGE
   chips is so bad you're probably not missing much)
   
   Peter Amstutz
   
     _________________________________________________________________
                                      
  C Source with Line Numbers
  
   Date: 5 May 1997
   From: joeh@sugar-river.net
   
   I wanted to print out a c source with line numbers. Here is one way to
   do it:
   
   Assuming you are using bash, install the following function in your
   .bashrc file.
jnl () {
        : > /tmp.tmp                  # start with an empty file
        for args                      # loop over all arguments
        do
          nl -ba "$args" >> /tmp.tmp  # append each file, numbered
        done
        lpr /tmp.tmp
      }

   "nl" is a textutils utility that numbers the lines of a file.
   
   "-ba" makes sure *all* the lines (even the empty lines) get numbered.
   
   /tmp.tmp is my true "garbage" temporary file, hence I write over it,
   and send it to the line printer.
   
   For example to print out a file "kbd.c", with line numbers:
jnl kbd.c

   There are probably 20 different methods of accomplishing the same
   thing, but when you don't even have *one* of them in your bag of
   tricks, it can be a time-consuming detour.
   
   Note: I initially tried to name the function "nl", but this led to an
   infinite loop. Hence I named it jnl (for Joe's number lines).
   
   Best Regards,
   Joe Harmann
   
     _________________________________________________________________
                                      
  ncftp Vs. ftplib
  
   Date: Thu, 08 May 1997 13:30:04 -0700
   From: Igor Markov imarkov@math.ucla.edu
   
   Hi, I read your 2c tip in Linux gazette regarding ftplib.
   
   I am not sure why you recommend downloading ftpget, while another
   package, actually, a single program, which is available on many
   systems does various ftp services pretty well.
   
   I mean ncftp ("nikFTP"). It can do command line, it can work in the
   mode of the usual ftp (with the "old" or "smarter" interface) and it
   also does a full-screen mode showing the ETA during the transfer. It
   has filename and hostname completion and a bunch of other niceties,
   like remembering passwords if you ask it to.
   
   Try man ncftp on your system (be it Linux or Solaris) ... also,
   ncftp is available from every major Linux archive (including
   ftp.redhat.com where you can find the latest RPMs).
   
   Hope this helps, Igor
   
     _________________________________________________________________
                                      
  Domain and Dynamic IP Names
  
   Date: Thu, 08 May 1997 13:52:02 -0700
   From: Igor Markov imarkov@math.ucla.edu
   
   I have a dial-up with dynamic IP, and it has always been an
   inconvenience for me and my friends to learn my current IP address
   (I had an ftp script which put the address every 10 minutes into
   the ~/.plan file on my acct at UCLA; then one could get the address
   by fingering the account).
   
   However, recently I discovered a really cool project http://www.ml.org
   which
     * can give you a dynamic IP name, i.e. when your computer gets a new
       IP address, it needs to contact www.ml.org and update its record.
       Once their nameserver reloads its tables (once every 5-10mins!)
       your computer can be accessed by the name you selected when
       registered.
       For example, my Linux box has IP name math4.dyn.ml.org
       Caveat: if you are not online, the name can point to a random
       computer. In my case, those boxes are most often wooden (i.e.
       running Windoze ;-) so you would get "connection refused".
       In general, you need some kind of authentication scheme (e.g. if
       you telnet to my computer, it would say "Office on Rodeo Drive")
      * allows you to register a domain name for free (e.g. you can
        register an alternative name for your computer at work which
        has a constant IP)
      * offers nameserver support for free (if you need it)
       
   Isn't that cool ?
   
   Cheers, Igor
   
     _________________________________________________________________
                                      
  netcfg Tool
  
   Date: Sat, 10 May 1997 11:55:28 -0400
   From: Joseph Turian turian@idt.net
   
   I used Redhat 4.0's netcfg tool to install my PPP connection, but
   found that I could only use the Internet as root. I set the proper
   permissions on my scripts and the pppd (as stated in the PPP Howto and
   the Redhat PPP Tips documents), but I still could not use any Internet
   app from a user's account. I then noticed that a user account _could_
   access an IP number, but could not do a DNS lookup. It turns out that
   I merely had to chmod ugo+r /etc/resolv.conf
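   The fix is the one command above, run as root. Its effect can be
   seen safely on a scratch file (my demonstration, using a placeholder
   path instead of /etc/resolv.conf):

```shell
# On the real system: chmod ugo+r /etc/resolv.conf   (as root)
touch /tmp/resolv.demo && chmod 600 /tmp/resolv.demo
chmod ugo+r /tmp/resolv.demo
ls -l /tmp/resolv.demo     # group and other now have read permission
rm /tmp/resolv.demo
```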
   
     _________________________________________________________________
                                      
  Putting Links to Your Dynamic IP
  
   Date: Wed, 28 May 1997 13:24:45
   From: Nelson Tibbitt nelson@interpath.com
   
   Sometimes it might be useful to allow trusted friends to connect to
   your personal Linux box over the Internet. An easy way to do this is
   to put links to your IP address on a full-time web server, then give
   the URL to whomever. Why would you want to do that? Well, I do it so
   my sister can telnet to Magnon, my laptop, for a chat whenever I'm
   connected.
   
   However it might prove difficult if, like me, your ISP assigns your IP
   address dynamically. So I wrote a short script to take care of this...
   The script generates an html file containing my local IP address then
   uploads the file via ftp to a dedicated web server on which I have
   rented some web space. It runs every time a ppp connection is
   established, so the web page always contains my current IP, as well as
   the date/time I last connected.
   
   This is pretty easy to set up, and the result is way cool. Just give
   my sis (or anyone else I trust) the URL... then she can check to see
   if I'm online whenever she wants, using Netscape from her vax account
   at RIT. If I am connected, she can click to telnet in for a chat.
   
   Here's how it works....
     * determine local IP address
     * write an html file containing date/time and links to the IP
       address that has been assigned
     * upload the html file to a dedicated web server using ftp (and a
       .netrc file)
       
   To get ftp to work, I had to create a file named .netrc in my home
   directory with a line that contains the ftp login information for the
   remote server. My .netrc has one line that looks like this:
machine ftp.server.com login ftpusername password ftppassword

   For more information on the .netrc file and its format, try "man ftp".
   Chmod it 700 (chmod 700 .netrc) to prevent other users from reading
   the file. This isn't a big deal on my laptop, which is used primarily
   by yours truly. But it's a good idea anyway.
   
   Here's my script. There might be a better way to do all of this,
   however my script works pretty well. Still, I'm always interested in
   ways to improve my work, so if you have any suggestions or comments,
   feel free to send me an email.


#!/bin/sh
# *** This script relies on the user having a valid local .netrc ***
# *** file permitting automated ftp logins to the web server!!   ***
#
# Slightly modified version of:
# Nelson Tibbitt's insignificant bash script, 5-6-97
# nelson@interpath.com
#
# Here are variables for the customizing...
# Physical destination directory on the remote server
# (/usr/apache/htdocs/nelson/ is the httpd root directory at my
# virtual domain)
REMOTE_PLANDIR="/usr/apache/htdocs/nelson/LinuX/Magnon"
# Desired destination filename
REMOTE_PLANNAME="sonny.htm"
# Destination ftp server
# Given this and the above 2 variables, a user would find my IP
# address at http://dedicated.web.server/LinuX/Magnon/sonny.htm
REMOTE_SERVER="dedicated.web.server"
# Local (writable) temporary directory
TMPDIR="/usr/tmp"
# Title (and header) of the html file to be generated
HTMLHEAD="MAGNON"
# Existing image on remote server to place in html file..
# Of course, this variable isn't necessary, and may be commented out.
# If commented out, you'll want to edit the html file generation below
# to prevent an empty image from appearing in your web page.
HTMLIMAGE="/LinuX/Magnon/images/mobile_web.gif"
# Device used for ppp connection
PPP_DEV="ppp0"
# Local temporary files for the html file/ftp script generation
TFILE="myip.htm"
TSCPT="ftp.script"
# Used to determine local IP address on PPP_DEV
# There are several ways to get your IP; this was the first
# command-line method I came up with.  It works fine here.  Another
# method, posted in May 1997 LJ (and which looks much cleaner) is:
#  `/sbin/ifconfig | awk 'BEGIN { pppok = 0} \
#   /ppp.*/ { pppok = 1; next } \
#  {if (pppok == 1 ) {pppok = 0; print} }'\
#  | awk -F: '{print $2 }'| awk  '{print $1 }'`
GETMYIP=$(/sbin/ifconfig | grep -A 4 $PPP_DEV \
  | awk '/inet/ { print $2 } ' | sed -e s/addr://)
# Used to place date/time of last connection in the page
FORMATTED_DATE=$(date '+%B %-d, %I:%M %p')
#
#
# Now, do it!  First give PPP_DEV time to settle down...
sleep 5
echo "Current IP: $GETMYIP"

# Generate the html file...
# Edit this part to change the appearance of the web page.
rm -f $TMPDIR/$TFILE
echo "Writing $REMOTE_PLANNAME"
echo > $TMPDIR/$TFILE
echo "<html><head><title>$HTMLHEAD</title></head><center>"    >> $TMPDIR/$TFILE
echo "<body bgcolor=#ffffff><font size=+3>$HTMLHEAD</font>"   >> $TMPDIR/$TFILE
# Remove the <img> tag in the line below if you don't want an image
echo "<p><img src='$HTMLIMAGE' alt='image'><p>The last "      >> $TMPDIR/$TFILE
echo "time I connected was <b>$FORMATTED_DATE</b>, when the " >> $TMPDIR/$TFILE
echo "Net Gods dealt <b>$GETMYIP</b> to Magnon. <p><a href="  >> $TMPDIR/$TFILE
echo "http://$GETMYIP target=_top>http://$GETMYIP</a><p>"     >> $TMPDIR/$TFILE
echo "<a href=ftp://$GETMYIP target=_top>ftp://$GETMYIP</a>"  >> $TMPDIR/$TFILE
echo "<p><a href=telnet://$GETMYIP>telnet://$GETMYIP</a><br>" >> $TMPDIR/$TFILE
echo "(Telnet must be properly configured in your browser.)"  >> $TMPDIR/$TFILE
# Append a notice about the links..
echo "<p>The above links will only work while I'm connected." >> $TMPDIR/$TFILE
# Create an ftp script to upload the html file
echo "put $TMPDIR/$TFILE $REMOTE_PLANDIR/$REMOTE_PLANNAME" > $TMPDIR/$TSCPT
echo "quit" >> $TMPDIR/$TSCPT
# Run ftp using the above-generated ftp script
# (requires valid .netrc file for ftp login to work)
echo "Uploading $REMOTE_PLANNAME to $REMOTE_SERVER..."
ftp $REMOTE_SERVER < $TMPDIR/$TSCPT > /dev/null
# The unset statements are probably unnecessary,
# but make for a clean 'look and feel'
echo -n "Cleaning up... "
rm -f $TMPDIR/$TFILE ; rm -f $TMPDIR/$TSCPT
unset HTMLHEAD HTMLIMAGE REMOTE_SERVER REMOTE_PLANDIR REMOTE_PLANNAME
unset GETMYIP FORMATTED_DATE PPP_DEV TMPDIR TFILE TSCPT
echo "Done."
exit
   
     _________________________________________________________________
                                      
  Hard Disk Duplication
  
   Date: Tue, 27 May 1997 11:16:32
   From: Michael Jablecki mcablec@ucsd.edu
   
   Shockingly enough, there seems to be a DOS product out there that will
   happily make "image files" of entire hard disks and copy these image
   files onto blank hard disks in a sector-by-sector fashion. Boot
   sectors and partition tables should be transferred exactly. See:
   http://www.ingot.com for more details. Seagate (I think...) has also
   made a program that does the duplication in one step - transfers all
   of one hard disk to another identical disk. I'm not sure which of
   these products works with non-identical disks.
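   Under Linux itself, the closest analogue to such sector-by-sector
   duplication is dd (my addition, not part of the tip). The device
   names below are placeholders; the runnable part of the sketch works
   on an ordinary file so the byte-exact copy can be verified safely.

```shell
# Imaging a whole disk and restoring it to a second, identical disk:
#   dd if=/dev/hda of=disk.img bs=512     # disk -> image file
#   dd if=disk.img of=/dev/hdb bs=512     # image file -> second disk
# Byte-exact copying, demonstrated on a plain file:
printf 'boot sector and data' > /tmp/src.img
dd if=/tmp/src.img of=/tmp/dst.img bs=512 2>/dev/null
cmp /tmp/src.img /tmp/dst.img && echo identical
rm /tmp/src.img /tmp/dst.img
```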
   
   Hope this helps.
   
   Michael Jablecki
   
     _________________________________________________________________
                                      
  Untar and Unzip
  
   From: Paul
   
   Oh, here's a little tidbit of info to pass on; this has been
   bugging me for a while. Often when people send in tips 'n' tricks,
   it requires one to untar and unzip an archive. It is usually
   suggested that this be done in one of several cumbersome ways:
gzcat foo.tar.gz | tar xvf -

   or
    1. gunzip foo.tar.gz
    2. tar xvf foo.tar

   or some other multi-step method. There is a much easier,
   time-saving, space-saving method. The version of tar shipped with
   most distributions of Linux is from the FSF GNU project. These
   people recognized that most tar archives are gzipped and provided a
   'decompress' flag to tar. This is equivalent to the above methods:
tar zxvf foo.tar.gz

   This decompresses the tar.gz file on the fly and then untars it
   into the current directory, but it also leaves the original .tar.gz
   alone.

   However, one step I consider essential, and one that is usually
   never mentioned, is to look at what's in the tar archive prior to
   extracting it. You have no idea whether the archiver was kind
   enough to tar up the parent directory of the files, or if they just
   tarred up a few files. The netscape tar.gz is a classic example.
   When that's untarred, it dumps the contents into your current
   directory. Using:
gtar ztvf foo.tar.gz

   allows you to look at the contents of the archive prior to opening
   it up and potentially writing over files with the same name. At the
   very least, you will know what's going on and be able to make
   provisions for it before you mess something up.

   For those who are adventurous, (X)Emacs is capable of not only
   opening up and reading a tar.gz file, but actually editing and
   re-saving the contents as well. Think of the time/space savings in
   that! Seeya, Paul
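   The list-before-extract habit described above can be tried on a
   throwaway archive (paths below are placeholders for the demo):

```shell
mkdir -p /tmp/tardemo/pkg && echo hello > /tmp/tardemo/pkg/README
cd /tmp/tardemo
tar zcf foo.tar.gz pkg          # build a sample archive
tar ztvf foo.tar.gz             # inspect: listing shows pkg/README
tar zxvf foo.tar.gz             # extract, leaving foo.tar.gz intact
cd / && rm -r /tmp/tardemo
```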
   
     _________________________________________________________________
                                      
               Published in Linux Gazette Issue 18, June 1997
                                      
     _________________________________________________________________
                                      
   
     _________________________________________________________________
                                      
      This page maintained by the Editor of Linux Gazette, gazette@ssc.com
      Copyright © 1997 Specialized Systems Consultants, Inc.
      
    "Linux Gazette...making Linux just a little more fun!"
    
     _________________________________________________________________
                                      
                                 News Bytes
                                      
                                 Contents:
                                      
     * News in General
     * Software Announcements
       
     _________________________________________________________________
                                      
                              News in General
                                      
     _________________________________________________________________
                                      
  Atlanta Linux Showcase
  
   Linus Torvalds, the "Kernel-Kid" and creator of Linux, Jon "Maddog"
   Hall, Linux/Alpha team leader and inspiring Linux advocate, David
   Miller, the mind behind Linux/SPARC, and Phil Hughes, publisher of
   Linux Journal, and many more will speak at the upcoming Atlanta Linux
   Showcase.
   
   For more information on the Atlanta Linux Showcase and to reserve
   your seat today, please visit our web site at
   http://www.ale.org/showcase
   
     _________________________________________________________________
                                      
  Linux Speakers Bureau
  
   SSC is currently putting together a Linux Speaker's Bureau.
   http://www.ssc.com/linux/lsb.html
   
   The LSB is designed to become a collage of speakers specializing in
   Linux. Speakers who specialize in talks ranging from novice to
   advanced - technical or business are all welcome. The LSB will become
   an important tool for organizers of trade show talks, computer fairs
   and general meetings, so if you are interested in speaking at industry
   events, make sure to visit the LSB WWW page and register yourself as a
   speaker.
   
   We welcome your comments and suggestions.
   
     _________________________________________________________________
                                      
  The Linux System Administrator's Guide (SAG)
  
   The Linux System Administrator's Guide (SAG) is a book on system
   administration targeted at novices. Lars Wirzenius has been writing it
   for some years, and it shows. He has made an official HTML version,
   available at the SAG home page at:
   http://www.iki.fi/liw/linux/sag
   
   Take a Look!
   
     _________________________________________________________________
                                      
  Free CORBA 2 ORB For C++ Available
  
   The Olivetti and Oracle Research Laboratory has made available the
   first public release of omniORB (version 2.2.0). We also refer to this
   version as omniORB2.
   
   omniORB2 is copyright Olivetti & Oracle Research Laboratory. It is
   free software. The programs in omniORB2 are distributed under the GNU
   General Public Licence as published by the Free Software Foundation.
   The libraries in omniORB2 are distributed under the GNU Library
   General Public Licence.
   
   For more information take a look at http://www.orl.co.uk/omniORB.
   
   Source code and binary distributions are available from
   http://www.orl.co.uk/omniORB/omniORB.html
   
     _________________________________________________________________
                                      
  The Wurd Project
  
   The Wurd Project, an SGML Word Processor for the UNIX environment (and
   hopefully afterwards, Win32 and Mac) is currently looking for
   developers that are willing to participate in the project. Check out
   the site at: http://sunsite.unc.edu/paulc/wp
   
   Mailing list archives are available; the current source,
   documentation, programming tools and various other items can also
   be found at the above address.
   
     _________________________________________________________________
                                      
  Linus in Wonderland
  
   Check it out...
   
   Here's the online copy of Metro's article on Linus...
   http://www.metroactive.com/metro/cover/linux-9719.html
   
   Enjoy!
   
     _________________________________________________________________
                                      
                           Software Announcements
                                      
     _________________________________________________________________
                                      
  BlackMail 0.24
  
   Announcing BlackMail 0.24. This is a bug-fix release over the previous
   release, which was made public on April 29th.
   
   BlackMail is a mailer proxy that wraps around your existing mailer
   (preferably smail) and provides protection against spammers, mail
   forwarding, and the like.
   
   For those of you looking for a proxy, you may want to look into this.
   This is a tested product, and works very well. I am interested in
   getting this code incorporated into SMAIL, so if you are interested in
   doing this task, please feel free.
   
   You can download blackmail from ftp://ftp.bitgate.com. You can also
   view the web page at http://www.bitgate.com.
   
     _________________________________________________________________
                                      
  CDE--Common Desktop Environment for Linux
  
   Red Hat Software is proud to announce the arrival of Red Hat's TriTeal
   CDE for Linux. Red Hat Software, makers of the award-winning,
   technologically advanced Red Hat Linux operating system, and TriTeal
   Corporation, the industry leader in CDE technology, teamed up to bring
   you this robust, easy to use CDE for your Linux PC.
   
   Red Hat's TriTeal CDE for Linux provides users with a graphical
   environment to access both local and remote systems. It gives you
   icons, pull-down menus, and folders.
   
   Red Hat's TriTeal CDE for Linux is available in two versions. The
   Client Edition gives you everything you need to operate a complete
   licensed copy of the CDE desktop, including the Motif 1.2.5 shared
   libraries. The Developer's Edition allows you to perform all functions
   of the Client Edition, and also includes a complete integrated copy of
   OSF Motif version 1.2.5, providing a complete development environment
   with static and dynamically linked libraries, Motif Window Manager,
   and sample Motif Sources.
   
   CDE is an RPM-based product, and will install easily on Red Hat and
   other RPM-based Linux systems. We recommend using Red Hat Linux 4.2 to
   take full advantage of CDE features. For those who do not have Red Hat
   4.2, CDE includes several Linux packages that can be automatically
   installed to improve its stability.
   
   Order online at: http://www.redhat.com Or call 1-888-REDHAT1 or (919)
   572-6500.
   
     _________________________________________________________________
                                      
  TCFS 2.0.1
  
   Announcing release 2.0.1 of TCFS (Transparent Cryptographic File
   System) for Linux. TCFS is a cryptographic filesystem developed
   here at Universita' di Salerno (Italy). It operates like NFS but
   allows users to set a new flag, X, to make files secure
   (encrypted). The security engine is based on DES, RC5 and IDEA.
   
   The new release works in Linux kernel space and may be linked in as
   a kernel module. It is developed to work on Linux 2.0.x kernels.
   
   A mailing-list is available at tcfs-list@mikonos.dia.unisa.it.
   Documentation is available at http://mikonos.dia.unisa.it/tcfs. Here
   you can find instructions for installing TCFS and docs on how it
   works. Mirror sites are available at http://www.globenet.it and
   http://www.inopera.it/~ermmau.tcfs
   
     _________________________________________________________________
                                      
  Qddb 1.43p1
  
   Qddb 1.43p1 (patch 1) is now available
   
   Qddb is fast, powerful and flexible database software that runs on
   UNIX platforms, including Linux. Some of its features include:
     * Tcl/Tk programming interface
      * Easy to use: you can have a DB application completely up and
        running in about 5 minutes, using nxqddb.
     * CGI interface for quick and easy online
       databases/guestbooks/etc...
      * Fast and powerful searching capability
     * Report generator
     * Barcharts and graphs
     * Mass mailings with Email, letters and postcards
       
   Qddb-1.43p1 is the first patch release to 1.43. This patch fixes a few
   minor problems and a searching bug when using cached secondary
   searching.
   
   To download the patch file:
   ftp://ftp.hsdi.com/pub/qddb/sources/qddb-1.43p1.patch
   
   For more information on Qddb, visit the official Qddb home page:
   http://www.hsdi.com/qddb
   
     _________________________________________________________________
                                      
  Golgotha
  
   AUSTIN, TX- Crack dot Com, developers of the cult-hit Abuse and the
   anticipated 3D action/strategy title Golgotha, recently learned that
   Kevin Bowen, aka Fragmaster on irc and Planet Quake, has put up the
   first unofficial Golgotha web site.
   
   The new web site can be found at
   http://www.planetquake.com/grags/golgotha, and there is a link to the
   new site at http://crack.com/games/golgotha. Mr. Bowen's web site
   features new screenshots and music previously available only on irc.
   
   Golgotha is Crack dot Com's first $1M game and features a careful
   marriage of 3D and 2D gameplay in an action/strategy format featuring
   new rendering technology, frantic gameplay, and a strong storyline.
   For more information on Golgotha, visit Crack dot Com's web site at
   http://crack.com/games/golgotha.
   
   Crack dot Com is a small game development company located in Austin,
   Texas. The corporation was founded in 1996 by Dave Taylor, co-author
   of Doom and Quake, and Jonathan Clark, author of Abuse.
   
     _________________________________________________________________
                                      
  ImageMagick-3.8.5-elf.tgz
  
   ImageMagick-3.8.5-elf.tgz is now out.
   
    This version brings together a number of minor changes made to
    accommodate PerlMagick and lots of minor bug fixes, including
    multi-page TIFF decoding and PNG writing.
   
    ImageMagick (TM), version 3.8.5, is a package for display and
    interactive manipulation of images for the X Window System.
    ImageMagick also performs, via command-line programs, functions
    including:
     * Describe the format and characteristics of an image
     * Convert an image from one format to another
     * Transform an image or sequence of images
     * Read an image from an X server and output it as an image file
     * Animate a sequence of images
     * Combine one or more images to create new images
     * Create a composite image by combining several separate images
     * Segment an image based on the color histogram
     * Retrieve, list, or print files from a remote network site
       
    ImageMagick also supports the Drag-and-Drop protocol from the OffiX
    package and many of the more popular image formats including JPEG,
    MPEG, PNG, TIFF, Photo CD, etc. Check out:
    ftp://ftp.wizards.dupont.com/pub/ImageMagick/linux
   
     _________________________________________________________________
                                      
  Slackware 3.2 on CD-ROM
  
    Linux Systems Labs, The Linux Publishing Company, is pleased to
    announce Slackware 3.2 on CD-ROM. This CD contains Slackware 3.2 with
    39 security fixes and patches since the official Slackware 3.2
    release. The CD mirrors the Slackware ftp site as of April 26, 1997.
    It's a great way to get started with Linux or update the most popular
    Linux distribution.
   
   This version contains the 2.0.29 Linux kernel, plus recent versions of
   these (and other) software packages:
     * Kernel modules 2.0.29
     * PPP daemon 2.2.0f
     * Dynamic linker (ld.so) 1.8.10
     * GNU CC 2.7.2.1
     * Binutils 2.7.0.9
     * Linux C Library 5.4.23
     * Linux C++ Library 2.7.2.1
     * Termcap 2.0.8
     * Procps 1.01
     * Gpm 1.10
     * SysVinit 2.69
      * Shadow Password Suite 3.3.2 (with Linux patches)
      * Util-linux 2.6
       
   LSL price: $1.95
   
   Ordering Info: http://www.lsl.com
   
     _________________________________________________________________
                                      
  mtools
  
    A new release of mtools, a collection of utilities to access MS-DOS
    disks from Unix without mounting them, is now available.
   
   Mtools can currently be found at the following places:
   http://linux.wauug.org/pub/knaff/mtools
   http://www.club.innet.lu/~year3160/mtools
   ftp://prep.ai.mit.edu/pub/gnu
   
    Mtools-3.6 includes features such as 'mzip -e', which now only ejects
    Zip disks when they are not mounted, an mzip manpage, detection of
    bad passwords and more. Most GNU software is packed using the GNU `gzip'
   compression program. Source code is available on most sites
   distributing GNU software. For more information write to
   gnu@prep.ai.mit.edu
   or look at: http://www.gnu.ai.mit.edu/order/ftp.html
   
     _________________________________________________________________
                                      
  CM3
  
   CM3 version 4.1.1 is now available for Unix and Windows platforms:
   SunOS, Solaris, Windows NT/Intel, Windows 95, HP/UX, SGI IRIX,
   Linux/ELF on Intel, and Digital Unix on Alpha/AXP. For additional
   information, or to download an evaluation copy, contact Critical Mass,
   Inc. via the Internet at info@cmass.com or on the World Wide Web at
   http://www.cmass.com
   
   newsBot:
   Extracts exactly what you want from your news feed. Cuts down on
   "noise". Sophisticated search algorithms paired with numerous filters
   cut out messages with ALL CAPS, too many $ signs, threads which won't
   die, wild cross posts and endless discussions why a Mac is superior to
   a Chicken, and why it isn't. newsBot is at:
   http://www.dsb.com/mkt/newsbot.html
   
   mailBot:
    Provides identical functionality but reads mailing lists and e-zines
    instead of news groups. Both are aimed at responsible marketers and
    information managers. They *do not* extract email addresses and cannot
    be mis-used for bulk mailings. mailBot is at:
   http://www.dsb.com/mkt/mail.bot.html
   
   siteSee:
   A search engine running on your web server and using the very same
   search technology: a very fast implementation of Boyer Moore. siteSee
   differs from other search engines in that it does not require creation
   and maintenance of large index files. It also becomes an integrated
   part of your site design. You have full control over page layout.
   siteSee is located at: http://www.dsb.com/publish/seitesee.html
   
     _________________________________________________________________
                                      
  linkCheck
  
   linkCheck:
   A hypertext link checker, used to keep your site up to date. Its
   client-server implementation allows you to virtually saturate your
   comms link without overloading your server. LinkCheck is fast at
   reading and parsing HTML files and builds even large deduplicated
   lists of 10,000 or more cross links faster than interpreted languages
   take to load. linkCheck is at:
   http://www.dsb.com/maintain/linkckeck.html
   
   All products require Linux, SunOS or Solaris. And all are sold as "age
   ware": a free trial license allows full testing. When the license
   expires, the products "age", forget some of their skills, but they
   still retain about 80% of their functionality.
   
   A GUI text editor named "Red" is available for Linux. The editor has a
   full graphical interface, supports mouse and key commands, and is easy
   to use.
   
   These are some of Red's features that might be interesting:
     * Graphical interface
     * Full mouse and key support
     * 40 step undo (and redo)
     * User-definable key bindings
     * Automatic backup creation
     * Cut/paste exchange with other X Windows applications
     * On-line function list, help and manual
       
   It can be downloaded free in binary form or with full source code.
   ftp://ftp.cs.su.oz.au/mik/red
   Also, take a look at the web site at:
   http://www.cs.su.oz.au/~mik/red-manual/red-main-page.html
   
   The web site also includes a full Manual - have a look if you are
   interested.
   
     _________________________________________________________________
                                      
  Emacspeak-97++
  
   Announcing Emacspeak-97++ (The Internet PlusPack). Based on
   InterActive Accessibility technology, Emacspeak-97++ provides a
   powerful Internet ready audio desktop that integrates Internet
   technologies including Web surfing and messaging into all aspects of
   the electronic desktop.
   
   Major Enhancements in this release include:
     * Support for WWW ACSS (Aural Cascading Style Sheets)
     * Audio formatted output for rich text
     * Enhanced support for browsing tables
     * Support for speaking commonly used ISO Latin characters
     * Speech support for the Emacs widget libraries
     * Support for SGML mode
     * Emacspeak now has an automatically generated users manual thanks
       to Jim Van Zandt.
       
   Emacspeak-97++ can be downloaded from:
   http://cs.cornell.edu/home/raman/emacspeak
   ftp://ftp.cs.cornell.edu/pub/raman/emacspeak
   
     _________________________________________________________________
                                      
               Published in Linux Gazette Issue 18, May 1997
                                      
     _________________________________________________________________
                                      
     _________________________________________________________________
                                      
      This page written and maintained by the Editor of Linux Gazette,
      gazette@ssc.com
      Copyright  1997 Specialized Systems Consultants, Inc.
      
    "Linux Gazette...making Linux just a little more fun!"
    
     _________________________________________________________________
                                      
                               The Answer Guy
                                      
                                      
                   By James T. Dennis, jimd@starshine.org
          Starshine Technical Services, http://www.starshine.org/
                                      
     _________________________________________________________________
                                      
  Contents:
  
     * Networking Problems
     * Fetchmail
     * Procmail
      * Tcl/Tk Dependencies
     * /var/log/messages
     * OS Showdown
     * Adding Linux to a DEC XLT-366
     * Configuration Problems of a Soundcard
     * Procmail Idea and Question
     * UUCP/Linux on Caldera
     * ActiveX For Linux
     * What Packages Do I Need?
     * Users And Mounted Disks
     * [q] Map Left Arrow to Backspace
     * Adding Programs to Pull Down Menus
     * Linux and NT
     * pcmcia 28.8 Modems and Linux 1.2.13 Internet Servers
       
     _________________________________________________________________
                                      
   Tcl/Tk Dependencies
  
   From: David E. Stern, lptsua@i/wasjomgtpm/edu
   
   The end goal: to install FileRunner, I simply MUST have it! :-)
   
    My intermediate goal is to install Tcl/Tk 7.6/4.2, because FileRunner
    needs these to install, and I only have 7.5/4.1. However, when I try
    to upgrade Tcl/Tk, other apps rely on the older tcl/tk libraries, at
    least that's what the messages allude to:
        libtcl7.5.so is needed by some-app
        libtk4.1.so is needed by some-app

   (where some-app is python, expect, blt, ical, tclx, tix, tk,
   tkstep,...)
   
    I have enough experience to know that apps may break if I upgrade the
    libraries they depend on. I've tried updating some of those other
    apps, but I run into further and circular dependencies -- like a cat
    chasing its tail.
   
   In your opinion, what is the preferred method of handling this
   scenario? I must have FileRunner, but not at the expense of other
   apps. 
   
    It sounds like you're relying too heavily on RPMs. If you can't
    afford to risk breaking your current stuff, and you "must" have the
    upgrade, you'll have to do some things beyond what the RPM system
    seems to do.
   
   One method would be to grab the sources (SRPM or tarball) and manually
   compile the new TCL and tk into /usr/local (possibly with some changes
   to their library default paths, etc). Now you'll probably need to grab
   the FileRunner sources and compile that to force it to use the
   /usr/local/wish or /usr/local/tclsh (which, in turn, will use the
   /usr/local/lib/tk if you've compiled it all right).
   
   Another approach is to set up a separate environment (separate disk, a
   large subtree of an existing disk -- into which you chroot, or a
   separate system entirely) and test the upgrade path where it won't
   inconvenience you by failing. A similar approach is to do a backup,
   test your upgrade plan -- (if the upgrade fails, restore the backup).
   
    This is a big problem in all computing environments (and far worse in
    DOS, Windows, and NT systems than in most multi-user operating
    systems). At least with Unix you have the option of installing a
    "playpen" (accessing it with the chroot call -- or by completely
    rebooting on another partition if you like).
   
    Complex interdependencies are unavoidable unless you require that
    every application be statically linked and completely self-sufficient
    (without even allowing their configuration files to be separate). So
    this will remain an aspect of system administration where experience
    and creativity are called for (and a good backup may be the only thing
    between you and major inconvenience). -- Jim
   
     _________________________________________________________________
                                      
  Networking Problems
  
   From: Bill Johnson, b_johnson@cel.co.chatham.ga.us
   
   I have two networking problems which may be related. I'm using a
   dial-up (by chat) ppp connection.
   
    1) pppd will not execute for anyone without root privilege, even
    though its permissions are set rw for group and other. 
    
    I presume you mean that its *x* (execute) bit is set. Its *rw* bits
    should be disabled -- the *w* bit ESPECIALLY.
   
    If you really want pppd to be started by users (non-root) you should
    write a small C "wrapper" program that executes pppd after doing a
    proper set of suid (seteuid) calls and sanity checks. You might be
    O.K. with the latest suidperl (though there have been buffer overflows
    with some versions of that).
   
   Note that the file must be marked SUID with the chmod command in order
   for it to be permitted to use the seteuid call (unless ROOT is running
   it, of course).
   
   Regardless of the method you use to accomplish your SUID of pppd (even
   if you just set the pppd binary itself to SUID):
   
    I suggest you pick or make a group (in /etc/group) and make the pppd
    wrapper group-executable, SUID (root owned), and completely
    NON-ACCESSIBLE to "other" (and make sure to add just the "trusted"
    users to the group).
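    As a concrete sketch of that layout, the commands below use a scratch
    file in place of the real wrapper binary, and the group name
    "pppusers" is invented for the example; on a live system you would
    run groupadd/chown as root against the actual wrapper.

```shell
# Illustrative only: a scratch file stands in for the compiled pppd
# wrapper, and "pppusers" is a hypothetical group name.
wrapper=$(mktemp)            # stand-in for the real wrapper binary
# On the real system, as root, you would first do something like:
#   groupadd pppusers
#   chown root:pppusers /path/to/pppd-wrapper
chmod 4750 "$wrapper"        # SUID for owner, r-x for group, nothing for "other"
stat -c '%a' "$wrapper"      # prints 4750
```

    The leading 4 in 4750 is the SUID bit; the final 0 is what keeps
    "other" users from running (or even reading) the wrapper.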
   
    'sudo' (University of Colorado, home of Evi Nemeth) is a generalized
    package for providing access to privileged programs. You might
    consider grabbing it and installing it.
   
    I'd really suggest diald -- which will dynamically bring the link up
    and down as needed. Thus your users will just try to access their
    target -- wait a long time for dialing, negotiation, etc (just like
    pppd only a little faster) and away you go (until your connection is
    idle long enough to count as a "timeout" for diald).
   
   2) http works, and mail works, and telnet works, but ftp does not
   work. I can connect, login, poke around, and all that. But when I try
   to get a file, it opens the file for writing on my machine and then
   just sits there. No data received, ever. Happens with Netscape, ftp,
   ncftp, consistently, at all sites. Even if user is root. Nothing is
   recorded in messages or in ppp-log. /etc/protocols, /etc/services and
   all that seems to be set up correctly. Any suggestions? 
   
    Can you dial into a shell account and do a kermit or zmodem transfer?
    What does 'stty -a < /dev/modem' say? Make sure you have an eight-bit
    clean session. Do you have 16550 (high speed) UARTs?
   
   Do you see any graphics when you're using HTTP? (that would suggest
   that binary vs. text is not the problem).
   
   -- Jim
   
     _________________________________________________________________
                                      
  Fetchmail
  
   From: Zia Khan, khanz@foxvalley.net
   
    I have a question regarding fetchmail. I've been successful at using
    it to send and receive mail from my ISP via a connection to their POP3
    server. There is a slight problem though. The mail that I send out has
    in its From: field my local login and local hostname, e.g.
    ruine@clocktower.net, when it should be my real email address,
    khanz@foxvalley.net. Those who receive my message receive a
    non-existent email address to reply to. Is there any way of modifying
    this behavior? I've been investigating sendmail with hopes it may have
    a means of making this change, to little success. 
   
    Technically this has nothing to do with fetchmail or POP. 'fetchmail'
    just *RECEIVES* your mail -- POP is just the protocol for storing and
    picking up your mail. All of your outgoing mail is handled by a
    different process.
   
    Sendmail has a "masquerade" feature and an "all_masquerade" feature
    which will tell it to override the host/domain portions of the header
    addresses when it sends your mail. That's why my mail shows up as
    "jimd@starshine.org" rather than "jimd@antares.starshine.org."
   
   The easy way to configure modern copies of sendmail is to use the M4
   macro package that comes with it. You should be able to find a file in
   /usr/lib/sendmail-cf/cf/
   
   Mine looks something like:
divert(-1)
include(`../m4/cf.m4')
VERSIONID(`@(#)antares.uucp.mc  .9 (JTD) 8/11/95')
OSTYPE(`linux')

FEATURE(nodns)
FEATURE(nocanonify)
FEATURE(local_procmail)
FEATURE(allmasquerade)
FEATURE(always_add_domain)
FEATURE(masquerade_envelope)

MAILER(local)
MAILER(smtp)

MASQUERADE_AS(starshine.org)
define(`RELAY_HOST', a2i)
define(`SMART_HOST', a2i)
define(`PSEUDONYMS', starshine|antares|antares.starshine.org|starshine.org)

   (I've removed all the UUCP stuff that doesn't apply to you at all).
   
   Note: This will NOT help with the user name -- just the host and
   domain name. You should probably just send all of your outgoing mail
   from an account name that matches your account name at your provider.
   There are other ways to do it -- but this is the easiest.
   
    Another approach would require that your sendmail "trust" your account
    (with a define line to add your login ID as one which is "trusted" to
    "forge" its own "From" lines in sendmail headers). Then you'd adjust
    your mail reader to reflect your provider's hostname and ID rather
    than your local one. (The details of this vary from one mailer to
    another -- and I won't give the gory details here.)
   
    Although I said that this is not a fetchmail problem -- I'd look in
    the fetchmail docs for suggestions. I'd also read (or re-read) the
    latest version of the E-Mail HOW-TO.
   
   -- Jim
   
     _________________________________________________________________
                                      
  Procmail
  
   Justin Mark Tweedie, linda@zanet.co.za
   
    Our users do not have valid command shells in the /etc/passwd file
    (they have /etc/ppp/ppp.sh). I would like the users to use procmail to
    process each user's mail, but .forward returns an error saying the
    user does not have a valid shell.
   
   The .forward file has the following entry
|IFS=' '&&exec /usr/local/bin/procmail -f-||exit 75 #justin

   How can I make this work ???
   
   Cheers Justin 
   
    I suspect that it's actually 'sendmail' that's issuing the complaint.
   
   Add the ppp.sh to your /etc/shells file. procmail will still use
   /bin/sh for processing the recipes in the .procmailrc file.
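    The valid-shell test can be sketched like this, against a temporary
    copy so nothing real is touched (the existing entries are assumptions
    for the demo):

```shell
# sendmail consults /etc/shells to decide whether a login shell is
# "valid" before honoring a .forward file; listing the PPP pseudo-shell
# there satisfies the check.
shells=$(mktemp)                              # stand-in for /etc/shells
printf '%s\n' /bin/sh /bin/bash > "$shells"   # typical existing entries
grep -qx '/etc/ppp/ppp.sh' "$shells" || echo '/etc/ppp/ppp.sh' >> "$shells"
grep -qx '/etc/ppp/ppp.sh' "$shells" && echo "ppp.sh passes the check"
rm -f "$shells"
```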
   
   Another method would be to use procmail as your local delivery agent.
   In your sendmail "mc" (m4 configuration file) you'd use the following:
FEATURE(local_procmail)

    (and make sure that your copy of procmail is in a place where sendmail
    can find it -- either by using symlinks or by adding

define(`PROCMAIL_PATH', /usr/local/your/path/to/procmail)

    to your mc file). Then you don't have to mess with .forward files at
    all. 'sendmail' will hand all local mail to procmail, which will look
    for a .procmailrc file.
   
    Another question to ask is whether you want to use your ppp.sh as a
    login shell at all. If you want people to log in and be given an
    automatic PPP connection, I'd look at some of the cool features of
    mgetty (which I haven't used yet -- but have seen in the docs).
   
    These allow you to define certain patterns that will be caught by
    'mgetty' when it prompts for a login name -- so that something like
    Pusername will call .../ppplogin while Uusername will log in with
    'uucico', etc.
   
    If you want to limit your customers solely to PPP services and POP
    (with procmail) then you probably can't do it in any truly secure or
    reasonable way. Since the .procmailrc can call on arbitrary external
    programs, a user with a valid password and account can access other
    services on the system. Also, the ftp protocol can be subverted to
    provide arbitrary interactive access -- unless it is run in a
    'chroot' environment -- one which would make the process of updating
    the user's .procmailrc and any other .forward or configuration files
    a hassle.
   
    It can be done -- but it ultimately is more of a hassle than it's
    worth. So if you want to securely bar your customers from access to
    interactive services and arbitrary commands, you'll want to look at a
    more detailed plan than I could write up here.
   
   -- Jim
   
     _________________________________________________________________
                                      
  /var/log/messages
  
   From: Mike West, mwest@netpath.net
   
   Hi Jim, This may seem like a silly question, but I've been unable to
   find any HOW-TOs or suggestions on how to do it right. My question is,
   how should I purge my /var/log/messages file? I know this file
   continues to grow. What would be the recommended way to purge it each
   month? Also, are there any other log files that are growing that I
   might need to know about? Thanks in advance Jim. 
   
   I'm sorry to have dropped the ball on your message. Usually when I
   don't answer a LG question right away it's because I have to go do
   some research. In this case it was that I knew exactly what I wanted
   to say -- which would be "read my 'Log Management' article in the next
   issue of LG"
   
    However, I haven't finished the article yet. I have finished the code.
   
   Basically the quick answer is:
                rm /var/log/messages
                kill -HUP $(cat /var/run/syslog.pid)

   (on systems that are configured to conform to the FSSTND and putting a
   syslog.pid file in /var/run).
   
    The HUP signal sent to the syslogd process tells it to close and
    re-open its files. This is necessary because of the way that
   Unix handles open files. "unlinking" a file (removing the directory
   entry for it) is only a small part of actually removing it. Remember
   that real information about a file (size, location on the device,
   ownership, permissions, and all three date/time stamps for access,
   creation, and modification) is stored in the "inode." This is a
   unique, system maintained data structure. One of the fields in the
    inode is a "reference" or "link" count. If the name that you supplied
    to 'rm' was the only "hard link" to the file then the reference count
    reaches zero. So the filesystem driver will clear the inode and return
   all the blocks that were assigned to that file to the "free list" --
   IF THE FILE WASN'T OPEN BY ANY PROCESS!
   
   If there is any open file descriptor for the file -- then the file is
   maintained -- with no links (no name). This is because it could be
   critically bad to remove a file out from under a process with no
   warning.
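    This is easy to see in a quick shell experiment: remove a file while
    a descriptor still holds it open, and the data remains readable until
    the descriptor is closed.

```shell
# Unlinking removes the name; the inode (and its data) live on as long
# as some process still has the file open.
f=$(mktemp)
echo "still here" > "$f"
exec 3< "$f"            # hold an open read descriptor on the file
rm "$f"                 # the directory entry is gone...
[ -e "$f" ] || echo "name gone"
cat <&3                 # ...but descriptor 3 still reads "still here"
exec 3<&-               # closing the last descriptor finally frees the blocks
```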
   
    So, many daemons interpret a "hang-up" signal (sent via the command
    'kill -HUP') as a hint that they should "reinitialize" in some way.
    That usually means that they close all files, re-read any
    configuration or options files and re-open any files that they need
    for their work.
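    A toy version of that convention, with a background subshell standing
    in for a daemon (the marker file is just a visible stand-in for
    "re-opened my log files"):

```shell
# The subshell installs a HUP handler, then loops; 'kill -HUP' fires it.
marker=$(mktemp -u)                      # path only; created by the trap
( trap 'touch "$marker"' HUP
  while [ ! -e "$marker" ]; do sleep 0.1; done ) &
pid=$!
sleep 0.3                                # let the subshell install its trap
kill -HUP "$pid"                         # the same signal syslogd receives
wait "$pid"
[ -e "$marker" ] && echo "daemon reinitialized"
rm -f "$marker"
```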
   
   You can also do a
                
                cp /dev/null /var/log/messages

    ... and get away without doing the 'kill -HUP'.
   
    I don't really know why this doesn't get syslog confused -- since its
    offset into the file is all wrong. Probably this generates a "holey"
    file -- which is a topic for some other day.
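    The "holey" (sparse) file effect is easy to demonstrate, assuming a
    filesystem that supports sparse files (most Linux filesystems do):

```shell
# Writing one byte far past the start leaves a "hole" that occupies no
# disk blocks, so the apparent size far exceeds the space actually used.
f=$(mktemp)
dd if=/dev/zero of="$f" bs=1 count=1 seek=1048576 2>/dev/null
stat -c '%s' "$f"         # apparent size: 1048577 bytes
du -k "$f" | cut -f1      # blocks actually used: far less than 1024 KB
rm -f "$f"
```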
   
    Another quick answer is: Use the 'logrotate' program from Red Hat.
    (That comes with their 4.1 distribution -- and is probably freely
    usable if you just want to fetch the RPM from their web site. If you
    don't use a distribution that supports RPMs you can get converters
    that translate .rpm files into tar or cpio files. You can also just
    use Midnight Commander to navigate through an RPM file as if it were
    a tar file or a directory).
   
   The long answer looks a little more like:
#! /bin/bash
## jtd: Rotate logs

## This is intended to run as a cron job, once per day
## it renames a variety of log files and then prunes the
## oldest.

cd /var/log
TODAY=$(date +%Y%m%d)   # YYYYMMDD convenient for sorting

function rotate {
        cp $1 OLD/${1}.$TODAY
        cp /dev/null $1
        }

rotate maillog
rotate messages
rotate secure
rotate spooler
rotate cron

( echo -n "Subject: Filtered Logs for:  " ; date "+%a %m/%d/%Y"
echo; echo; echo;
echo "Messages:"
/root/bin/filter.log /root/lib/messages.filter  OLD/messages.$TODAY

echo; echo; echo "Cron:"
/root/bin/filter.log /root/lib/cron.filter OLD/cron.$TODAY

echo; echo; echo "--"; echo "Your Log Messaging System"
echo; echo; echo ) | /usr/lib/sendmail -oi -oe  root
## End of rotate.logs

    That should be fairly self-explanatory except for the part at the end
    with the (....) | sendmail .... stuff. The parentheses here group the
    output from all of those commands into the pipe for sendmail -- so
    they provide a whole message for sendmail. (Otherwise only the last
    echo would go to sendmail and the rest would try to go to the tty of
    the process that ran this -- which (when cron runs the job) would
    generate a different -- much uglier -- piece of mail.)
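    The grouping trick is worth seeing in isolation (toy echoes here,
    nothing sendmail-specific):

```shell
# Parentheses run all the commands in one subshell, so a single pipe
# carries their combined output -- just as in the rotate.logs script.
( echo "Subject: demo"; echo; echo "body" ) | wc -l   # prints 3
# Without the grouping, only the command next to the pipe is piped:
echo "Subject: demo"; echo "body" | wc -l             # prints the Subject line, then 1
```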
   
   Now there is one line in the sendmail group that bears further
   explanation: /root/bin/filter.log /root/lib/messages.filter
   OLD/messages.$TODAY
   
   This is a script (filter.log) that I wrote -- it takes a data file
   (messages.filter) that I have created in little parts over several
   weeks and still have to update occasionally.
   
   Here's the filter.log script:
#!  /usr/bin/gawk -f
        # filter.log
        # by James T. Dennis

        # syntax filter.log patternfile  datafile [datafile2 .....]

        # purpose -- trim patterns, listed in the first filename
        # from a series of data files (such as /var/adm/messages)
        # the patterns in the patternfile should take the form
        # of undelimited (no '/foo/' slashes and no "foo" quotes)

        # Note:  you must use a '-' as the data file parameter if
        # you want to process stdin (use this as a filter in a pipe);
        # otherwise this script will not see any input from it!

ARGIND == 1  {
                # ugly hack.
        # allows first parameter to be specially used as the
        # pattern file and all others to be used as data to
        # be filtered; avoids need to use
        # gawk -v patterns=$filename ....  syntax.
        if ( $0 ~/^[ \t]*$/ ) { next }  # skip blank lines
                # also skip lines that start with hash
                # to allow comments in the patterns file.
        if ( $0 !~ /^\#/ ) { killpat[++i]=$0 }}

ARGIND > 1 {
        for( i in killpat ) {
                if($0 ~ killpat[i]) { next }}}

ARGIND > 1 {
        print FNR ": " $0 }

    That's about eight lines of gawk code. I hope the comments are clear
    enough. All this does is read one file full of patterns, and then use
    that set of patterns as a filter for all of the rest of the files that
    are fed through it.
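    For anyone without gawk (ARGIND is a gawk extension), the same
    pattern-file-then-data-files idea can be sketched with the portable
    FNR==NR idiom, which behaves like the ARGIND test when there are
    exactly two input files; the pattern and data files below are scratch
    files invented for the demo.

```shell
# FNR==NR is true only while the first file (the patterns) is being read.
patterns=$(mktemp); data=$(mktemp)
printf '%s\n' 'ftpd' > "$patterns"                  # drop lines matching this
printf '%s\n' 'ftpd session closed' 'disk error' > "$data"
awk 'FNR==NR { if ($0 !~ /^[ \t]*$/ && $0 !~ /^#/) pat[++n]=$0; next }
     { for (i=1; i<=n; i++) if ($0 ~ pat[i]) next; print FNR ": " $0 }' \
    "$patterns" "$data"                             # prints: 2: disk error
rm -f "$patterns" "$data"
```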
   
   Here's an excerpt from my ~root/lib/messages.filter file:
... ..? ..:..:.. antares ftpd\[[0-9]+\]: FTP session closed
... ..? ..:..:.. antares getty\[[0-9]+\]: exiting on TERM signal
... ..? ..:..:.. antares innd: .*
... ..? ..:..:.. antares kernel:[ \t]*
... ..? ..:..:.. antares kernel:   Type: .*

   Basically those first seventeen characters on each line match any
   date/time stamp -- the antares obviously matches my host name and the
   rest of each line matches items that might appear in my messages file
   that I don't care about.
   
    I use a lot of services on this machine. My filter file is only about
   100 lines long. This scheme trims my messages file (several thousand
   lines per day) down to about 20 or 30 lines of "different" stuff per
   day.
   
    Every once in a while I see a new pattern that I add to the patterns
    list.
   
    This isn't an ideal solution. It is unreasonable to expect of most new
    Linux users (who shouldn't "have to" learn this much about regular
    expressions to winnow the chaff from their messages file). However it
    is elegant (very few lines of code -- easy to understand exactly
    what's happening).
   
   I thought about using something like swatch or some other log
   management package -- but my concern was that these are looking for
   "interesting things" and throwing the rest away. Mine looks for
   "boring things" and whatever is left is what I see. To me anything
   that is "unexpected" is interesting (in my messages file) -- so I have
   to use a fundamentally different approach. I look at these messages
   files as a professional sysadmin. They may warn me about problems
   before my users notice them. (incidentally you can create a larger
   messages file that handles messages for many hosts -- if you are using
   remote syslogging for example).
   
   Most home users can just delete these files with abandon. They are
   handy diagnostics -- so I'd keep at least a few days worth of them
   around.
   
   -- Jim
   
     _________________________________________________________________
                                      
  OS Showdown
  
   From: William Macdonald will@merchant.clara.net
   Subject: OS showdown
   
   Hi, I was reading one of the British weekly computing papers this week
   and there was an article about a shoot out between Intranetware and
   NT. This was to take place on 20th May in the Guggenhiem museum in
   NYC. 
   
   Intranetware sounds interesting. Sadly I think it may be too little,
   too late in the corporate world. However, if Novell picks the right
   pricing strategy and niche they may be able to come back in from the
   bottom.
   
   I won't talk about NT -- except when someone is paying me for the
   discomfort.
   
    The task was to have a system offering an SQL server that could
    process 1 billion transactions in a day. This was supposed to be 10
    times what Visa requires and 4 times what a corporation like American
    Airlines needs. It was all about proving that these OSs could work
    reliably in a mission critical environment. 
   
    If I wanted to do a billion SQL transactions a day I'd probably look
    at a Sun Starfire running Solaris. The Sun Starfire has 64 SPARC
    (UltraSPARC?) processors running in parallel.
   
   Having a face off between NT and Netware (or "Intra" Netware as
   they've labelled their new release) in this category is really
   ignoring the "real" contenders in the field of SQL.
   
   Last I heard the world record for the largest database system was
   owned by Walmart and ran on Tandem mini-computers. However that was
   several years ago.
   
    I haven't seen the follow up article yet so I can't say what the
    result was. The paper was saying it was going to be a massive
    competition, with both the bosses there, etc. 
   
   Sounds like typical hype to me. Pick one or two companies that you
   think are close to you and pretend that your small group comprises the
   whole market.
   
    How would Linux fare in a competition like this?? The hardware
    resources were virtually unlimited. I think the NT box was a Compaq
    5000 (ProLiant??). Quad processors, 2 GB RAM, etc. 
   
    The OS really doesn't have too much to do with the SQL performance.
    The main job of the OS in running an SQL engine is to provide system
    and file services as fast as possible and stay the heck out of the way
    of the real work.
   
   The other issue is that the hardware makes a big difference. So a
    clever engineer could make a DOG of an OS still look like a charging
   stallion -- by stacking the hardware in his favor.
   
   If it was me -- I'd think about putting in a few large (9 Gig)
   "silicon disks." A silicon disk is really a bunch of RAM that's
   plugged into a special controller that makes it emulate a conventional
   IDE or SCSI hard drive. If you're Microsoft or Novell and you're
   serious about winning this (and other similar) face offs -- the half a
   million bucks you spend on the "silicon disks" may pay for itself in
   one showing.
   
   In answer to your question -- Linux, by itself, can't compete in this
   show -- it needs an SQL server. Postgres '95 is, from what I've seen
   and heard, much too lightweight to go up against MS SQL Server -- and
   probably no match for whatever Novell is using. mSQL is also pretty
   lightweight. Mind you P'gres '95 and mSQL are more than adequate for
   most businesses -- and have to offer a price performance ratio that's
   unbeatable (even after figuring in "hidden" and "cost of ownership"
   factors). I'm not sure if Beagle is stable enough to even run.
   
   So we have to ask:
   What other SQL packages are available for Linux?
    Pulling out my trusty _Linux_Journal_1997_Buyer's_Guide_ (and doing a
   Yahoo! search) I see:
     * Solid
     * Just Logic Technologies
     * YARD Software GmbH
       
    That's all that are listed in the Commercial-HOWTO. However -- here's
   few more:
     * Infoflex-- (which goes into my Lynx hall of shame list -- it was
       quite a challenge reading that without resorting to a GUI).
     * DBIX Information -- (SQL Server???)
     * InterSoft(Essential -- SQL Engine)
     * Byte Designs Home on the Internet (ISAM with ODBC/SQL gateways)
     * SQLGate User's Guide -- (Embedding SQL in HTML)
     * April-15-1995 DATAMATION: International -- Article on Linux
       
   Sadly the "big three" (Informix, Oracle, and Sybase) list nothing
   about Linux on their sites. I suspect they still consider themselves
   to be "too good" for us -- and they are undoubtedly tangled in deep
    licensing agreements with SCO, Sun, HP, and other big money
   institutions. So they probably view us as a "lesser threat" --
   (compared to the 800 lb gorilla in Redmond). Nonetheless -- it doesn't
   look like they are willing to talk about Linux on their web pages.
   
   I'd also like to take this opportunity to lament the poor organization
    and layout of these three sites. These are among the largest database
    software companies in the world -- and they can't manage to create a
    simple, INFORMATIVE web site. Too much "hype" and not enough "text."
   
   (My father joked: "Oh! you meant 'hypertext' -- I thought it was 'hype
   or text'" -- Obviously too many companies hear it the same way and
   choose the first option of a mutually exclusive pair).
   
   -- Jim
   
     _________________________________________________________________
                                      
  Adding Linux to a DEC XLT-366
  
   From: Alex Pikus of WEBeXpress alex@webexpress.net
   
   I have a DEC XLT-366 with NTS4.0 and I would like to add Linux to it.
   I have been running Linux on an i386 for a while. I have created 3
   floppies:
    1. Linload.exe and MILO (from DEC site)
    2. Linux kernel 2.0.25
    3. RAM disk
       
    I have upgraded AlphaBIOS to v5.24 (latest from DEC) and added a Linux
   boot option that points to a:\ 
   
   You have me at a severe disadvantage. I'll be running Linux on an
   Alpha based system for the first time next week. So I'll have to try
   answering this blind.
   
   When I load MILO I get the "MILO>" prompt without any problem. When I
   do "show" or "boot ..." at the MILO>" I get the following result ...
   SCSI controller gets identified as NCR810 on IRQ 28 ... test1 runs and
   gets stuck "due to a lost interrupt" and the system hangs ... In
   WinNTS4.0 the NCR810 appears on IRQ 29. 
   
    My first instinct is to ask if the autoprobe code in Linux (Alpha) is
    broken. Can you use a set of command-line (MILO) parameters to pass
    information about your SCSI controller to your kernel? You could
    also see about getting someone else with an Alpha based system to
    compile a kernel for you -- and make sure that it has values in its
    scsi.h file that are appropriate to your system -- as well as ensuring
    that the correct drivers are built in.
   
    How can I make further progress here? 
   
   It's a tough question. Another thing I'd look at is to see if the
   Alpha system allows booting from a CD-ROM. Then I'd check out Red
   Hat's (or Craftworks') Linux for Alpha CD's -- asking each of them if
   they support this sort of boot.
   
   (I happened to discover that the Red Hat Linux 4.1 (Intel) CD-ROM was
   bootable when I was working with one system that had an Adaptec 2940
   controller where that was set as an option. This feature is also quite
   common on other Unix platforms such as SPARC and PA-RISC systems -- so
   it is a rather late addition to the PC world).
   
   -- Jim
   
     _________________________________________________________________
                                      
  Configuration Problems of a Soundcard
  
   From: Stuby Bernd, eacj1049@inuers17.inue.uni-stuttgart.de
   
    Hello there, First I have to mention that my Soundcard (MAD16 Pro from
    Shuttle Sound System with an OPTi 82C929 chipset) works right under
    Windows. I tried to get my Soundcard configured under Linux
    2.0.25 with the same parameters as under Windows, but as I was booting
    the newly compiled Kernel the Soundcard whistled and caused terrible
    noise. The same happened when I compiled the driver as a module and
   installed it in the kernel. In the 'README.cards' file the problem
   coming up just with this Soundcard is mentioned (something like line 3
   mixer channel). I don't know what to do with this information and how
    to change the sounddriver to get it working right. Maybe there's
   somebody who knows how to solve this problem or where I can find more
   information.
   
   With best regards Bernd 
   
   Sigh. I've never used a sound card in my machine. I have a couple of
   them floating around -- and will eventually do that -- but for now
    I'll just have to depend on "the basics."
   
   Did you check the Hardware-HOWTO? I see the MAD16 and this chipset
   listed there. That's encouraging. How about the Soundcard-HOWTO?
   Unfortunately this has no obvious reference to your problem. I'd
   suggest browsing through it in detail. Is your card a PnP (plug and
   "pray")? I see notes about that being a potential source of problems.
    I also noticed a question about "noise" being "picked up" by the sound
    card:
    http://sunsite.unc.edu/LDP/HOWTO/Sound-HOWTO-6.html#ss6.23 That might
    not match your problem but it's worth looking at.
   
   Did you double check for IRQ and DMA conflicts? The thing I hate about
   PC sound cards is that most of them use IRQ's and DMA channels. Under
    DOS/Windows you used to be able to be fairly sloppy about IRQ's. When
    IRQ conflicts caused trouble, the symptoms (like system lockups)
    tended to get lost in the noise of other problems (like other lockups
    and mysterious intermittent failures). Under Linux these problems
   usually rear their ugly heads and have nowhere to hide.
   
   Have you contacted the manufacturer of the card? I see a Windows '95
   driver. No technical notes on their sound cards -- and no mention of
   anything other than Windows on their web site (that I could find).
   That would appear to typify the "we only do Windows" attitude of so
   many PC peripherals manufacturers. I've blind copied their support
   staff on this -- so they have the option to respond.
   
   If this is a new purchase -- and you can't resolve the issue any other
   way -- I'd work with your retailer or the manufacturer to get a refund
   or exchange this with hardware that meets your needs. An interesting
   side note. While searching through Alta Vista on Yahoo! I found a page
   that described itself as The Linux Ultra Sound Project. Perhaps that
   will help you choose your next PC sound system (if it comes to that).
   
   -- Jim
   
     _________________________________________________________________
                                      
  Procmail Idea and Question
  
   From: Larry Snyder, larrys@lexis-nexis.com
   
   Just re-read your excellent article on procmail in the May LJ. (And
   yes, I've read both man pages :-). What I want to try is:
    1. Ignore the header completely
    2. Scan the body for
[*emov* *nstruction*]
   or
remove@*
    3. /dev/null anything that passes that test
       
   This should be a MUCH cheaper (in cpu cycles) way of implementing a
   spam filter than reading the header then going through all the
   possible domains that might be applicable. Most of the headers are
   forged in your average spam anyway....
   
   Not my idea, but it sounds good to me. What do you think, and how
   would I code a body scan in the rc? 
   
   I think it's a terrible idea.
   
   The code would be simple -- but the patterns you suggest are not very
   specific.
   
   Here's the syntax (tested):
                :0 B
                * (\[.*remove.*instruction.*\]|\[.*remove@.*\])
                /dev/null

   ... note the capital "B" specifies that the recipe applies to the
   "Body" of the message -- the line that starts with an asterisk is the
   only conditional (pattern) the parentheses enclose/group the regular
   expression (regex) around the "pipe" character. The pipe character
   means "or" in egrep regex syntax. Thus (foo|bar) means "'foo' or
   'bar'"
   
   The square brackets are a special character in regexes (where they
   enclose "classes" of characters). Since you appeared to want to match
   the literal characters -- i.e. you wanted your phrases to be enclosed
   in square brackets -- I've had to "escape" them in my pattern -- so
   they are treated literally and not taken as delimiters.
   
   The * (asterisk) character in the regex means "zero or more of the
   preceding element" and the . (dot or period) means "any single
   character" -- so the pair of them taken together means "any optional
   characters" If you use a pattern line like:
                * foo*l

   ... it can match fool fooool and fooooolk and even fol but not forl or
   foorl. The egrep man page is a pre-requisite to any meaningful
   procmail work. Also O'Reilly has an entire book (albeit a small one)
   on regular expressions.
   
   The gist of what I'm trying to convey is that .* is needed in regex'es
   -- even though you might use just * in shell or DOS "globbing" (the
   way that a shell matches filenames to "wildcards" is called "globbing"
   -- and generally does NOT use regular expressions -- despite some
   similarities in the meta characters used by each).
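
    A quick illustration of that distinction (my example, not part of the
    original recipe), using grep -E for egrep-style regexes:

```shell
# Regex "zero or more" vs. shell globbing.  In a regex, 'foo*l' means:
# f, o, then zero or more additional o's, then l -- so it matches fol
# and foooool, but not forl.
printf 'fool\nfoooool\nfol\nforl\n' | grep -E 'foo*l'

# To match "anything in between" a regex needs '.*', where a shell
# glob would use a bare '*':
printf 'remove me per instructions\n' | grep -E 'remove.*instructions'
```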
   
    Note also that the * token at the beginning of this line is a procmail
   thing. It just identifies this as being a "condition" line. Lines in
   procmail recipes usually start with a token like a : (colon), a *
    (asterisk), a | (pipe) or a ! (bang or exclamation point) -- any that
   don't may consist of a folder name (either a file or a directory) or a
    shell variable assignment (which are the lines with = (equal signs)
    somewhere on them).
   
    In other words the * (star) at the beginning of that line is NOT part
   of the expression -- it's a token that tells the procmail processor
   that the rest of the line is a regex.
   
   Personally I found that confusing when I first started with procmail.
   
   Back to your original question:
   
    I'm very hesitant to blindly throw mail away. I'd consider filing spam
    in a special folder which is reviewed only in a cursory fashion. That
   would go something like this:
                :0 B:
                * (\[.*remove.*instruction.*\]|\[.*remove@.*\])
                prob.spam

   Note that I've added a trailing : (colon) to the start of the recipe.
   This whole :x FLAGS business is a throwback to an early procmail which
   required each recipe to specify the number of patterns that followed
   the start of a recipe. Later :0 came to mean "I didn't count them --
    look at the first character of each line for a token." This means that
    procmail will scan forward through the patterns and -- when one
    matches -- it will execute ONE command line at the end of the recipe
    (variable assignments don't count).
   
   I'm sure none of that made any sense. So :0 starts a recipe, the
   subsequent * ... lines provide a list of patterns, and each recipe
   ends with a folder name, a pipe, or a forward (a ! -- bang thingee).
   The : at the *END* of the :0 B line is a signal that this recipe
    should use locking -- so that two pieces of spam don't end up
   interlaced (smashed together) in your "prob.spam" mail folder. I
   usually use MH folders (which are directories in which each message
   takes up a single file -- with a number for a filename). That doesn't
   require locking -- you'd specify it with a folder like:
                :0
                * ^TO.*tag
                linux.gazette/.

   ... (notice the "/." (slash, dot) characters at the end of this).
   
   Also note that folder names don't use a path. procmail defaults to
   using Mail (like elm and pine). You can set the MAILDIR variable to
   over-ride that -- mine is set to $HOME/mh. To write to /dev/null
   (where you should NOT attempt to lock the file!) you must use a full
   path (I suppose you could make a symlink named "null" in your MAILDIR
   or even do a mknod but....). When writing procmail scripts just think
   of $MAILDIR as your "current" directory (not really but...) and either
   use names directly under it (no leading slashes or dot/slash pairs) or
   use a full path.
   
   The better answer (if you really want to filter mail that looks like
   spam) is to write an auto-responder. This should say something like:
   
   The mail you've sent to foo has been trapped by a filtering system. To
   get past the filter you must add the following line as the first line
   in the body of your message: ...... ... Your original message follows:
   ......
   
   ... using this should minimize your risks. Spammers rely on volume --
   no spammer will look through thousands of replies like this and
   manually send messages with the requisite "pass-through" or "bypass"
   directive to all of them. It's just not worth it. At the same time
   your friends and business associates probably won't mind pasting and
    resending (be sure to use a response format that "keeps" the body --
    since your correspondents may get irritated if they have to dig up
    their original message for you).
   
    Here's where we can work the averages against the spammer. Spammers
    use mass mailings to shove their message into our view -- we can each
    configure our systems to require unique (relatively insecure -- but
    unique) "pass codes" and to reject "suspicious" mail that lacks them.
    Getting the "pass codes" for thousands of accounts -- and using them
    before they are changed -- is not a task that can be automated easily
    (so long as we each use different explanations and formatting in our
    "bypass" instructions).
   
   More drastic approaches are:
     * Require that all incoming mail be PGP, PEM or S/MIME signed -- and
       that the signatories signature be on your mail keyring.
       (Enhancements would allow anyone to add themselves to your mail
        keyring if they got their signature "counter signed" by anyone on
       one of your other keyrings).
     * (Return any unsigned mail with a message of explanation).
     * Test all incoming mail against a list of associates and friends --
       accept anything from them. Test all remaining mail against a list
        of known spammers -- reject those with an error message. Respond to
       all remaining mail to explain your anti-spam policy -- and provide
       "bypass" instuctions (so they can add themselves to your accept
       list).
     * Compare the "mail" and "envelope" addresses (the From: and From_
       (space) header lines). Reject any that are inconsistent.
     * Upgrade to a recent sendmail and configure the "reverse lookup"
       and the "rejection mailer table" features (which I haven't done
       yet -- so I know NOTHING about).
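
    One of those checks -- comparing the "envelope" (From_) address with
    the "From:" header -- can be sketched in a few lines of shell. The
    message and both addresses below are invented for illustration; a
    real filter would do this inside procmail (or with formail):

```shell
# Sketch: flag a message whose mbox envelope line ("From ...") and
# "From:" header carry different addresses.
msg='From spammer@bulk.example Mon May  5 10:00:00 1997
From: friend@trusted.example
Subject: hello

body text'

# Envelope sender: the first word after "From " on line 1.
envelope=$(printf '%s\n' "$msg" | sed -n '1s/^From \([^ ]*\).*/\1/p')
# Header sender: the address on the "From:" line.
header=$(printf '%s\n' "$msg" | sed -n 's/^From: *//p' | head -n 1)

if [ "$envelope" != "$header" ]; then
    echo "suspicious: envelope=$envelope header=$header"
fi
```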
       
   I hope some of these ideas help.
   
   Here is a copy of one of my autoresponders for your convenience:
:0
* < 1000
* !^FROM_DAEMON
* !^X-Loop:[    ]*info@starshine.org
* ^Subject:[    ]*(procmail|mailbot)
| ((formail -rk -A "Precedence: junk" \
-A "X-Loop: info@starshine.org" ; \
echo "Mail received on:" `date`)  \
| $HOME/insert.doc -v file=$DOC/procmail.tutorial ) | $SENDMAIL -t -oi -oe

   I realize this looks ugly. The first condition is to respond only to
   requests that are under 1K in size. (An earlier recipe directs larger
    messages to me). The next two try to prevent responses to mail lists
   and things like "Postmaster@..." (to prevent some forms of "ringing")
   and check against the "eXtended" (custom) header that most procmail
   scripts use to identify mail loops. The next one matches subjects of
   "procmail" or "mailbot."
   
    If all of those conditions are met then the message is piped to a
   complex command (spread over four lines -- it has the trailing
   "backslash" at the end of each of those -- to force procmail to treat
   it all as a single logical line:
   
   This command basically breaks down like so:
                        (( formail -rk ...

    ... the two parentheses have to do with how the data passes through the
   shell's pipes. Each set allows me to group the output from a series of
   commands into each of my pipes.
   
    .... the formail command creates a mail header; the -r means to make
   this a "reply" and the -k means to "keep" the body. The two -A
   parameters are "adding" a couple of header lines. Those are enclosed
   in quotes because they contain spaces.
   
   ... the echo command adds a "timestamp" to when I received the mail.
   The `date` (backtick "date") is a common shell "macro expansion"
   construct -- Korn shell and others allow one to use the $(command)
   syntax to accomplish the same thing.
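
    The two spellings really are interchangeable -- a quick illustration
    (mine, not from the recipe above):

```shell
# Backticks and $(...) are two spellings of the same command
# substitution; $(...) nests more cleanly and is easier to read.
old_style="Mail received on: `echo Mon`"
new_style="Mail received on: $(echo Mon)"
[ "$old_style" = "$new_style" ] && echo "same result"
```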
   
   Now we close the inner group -- so formail's output -- and the echo's
   output are fed into my little awk script: insert.doc. This just takes
   a parameter (the -v file=) and scans its input for a blank line. After
    the first blank line insert.doc prints the contents of "file." Finally
    it just prints all of the rest of its input.
   
   Here's a copy of insert.doc:
#! /usr/bin/gawk -f
/^[ \t]*$/ && !INSERTED { print; system("cat " file ); INSERTED=1}
1

   ... that's just three lines: the pattern matches any line with nothing
   or just whitespace on it. INSERTED is a variable that I'm using as a
    flag. When those two conditions are met (a blank line is found *and*
   the variable INSERTED has not yet been set to anything) -- we print a
   blank line, call the system() function to cat the contents of a file
   -- whose name is stored in the 'file' variable, and we set the
   INSERTED flag. The '1' line is just an "unconditional true" (to awk).
   It is thus a pattern that matches any input -- since no action is
   specified (there's nothing in braces on that line) awk takes the
   default action -- it prints the input.
   
   In awk the two lines:
        1

   ... and
                {print}

   ... are basically the same. They both match every line of input that
   reaches them and they both just print that and continue.
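
    You can verify that equivalence directly (illustration only, not part
    of insert.doc):

```shell
# A bare '1' pattern with no action and an explicit '{print}' action
# produce identical output -- both pass every input line through.
a=$(printf 'one\ntwo\n' | awk '1')
b=$(printf 'one\ntwo\n' | awk '{print}')
[ "$a" = "$b" ] && echo "equivalent"
```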
   
   ... Back to our ugly procmail recipe. 'insert.doc' has now "inserted"
   the contents of a doc file between formail's header and the body of
   the message that was "kept." So we combine all of that and pipe it
   into the local copy of sendmail. procmail thoughtfully presets the
   variable $SENDMAIL -- so we can use it to make our scripts (recipes)
   more portable (otherwise they would break when written on a system
   with /usr/lib/sendmail and moved to a system that uses
   /opt/local/new/allman/sendmail (or some silly thing like that)).
   
   The switches on this sendmail command are:
     * -t (take the header from STDIN)
     * -oi (option: ignore lines that contain just a dot)
     * -oe (option: errors generate mail)
       
   ... I'll leave it as an exercise to the reader to look those up in the
   O'Reilly "bat" book (the "official" Sendmail reference).
   
   There are probably more elegant ways to do the insertion. However it
   is a little messy that our header and our "kept" body are combined in
   formail's output. If we had a simple shell syntax for handling
   multiple file streams (bash has this feature -- but I said *simple*)
   then it would be nice to change formail to write the header to one
   stream and the body to another. However we also want to avoid creating
   temp files (and all the hassles associated with cleaning up after
   them). So -- this is the shortest and least resource intensive that
   I've come up with.
   
   So that's my extended tutorial on procmail.
   
   I'd like to thank Stephen R. van den Berg (SRvdB) (creator of
   procmail), Eric Allman (creator of sendmail), and Alan Stebbens (an
   active contributor to the procmail mailing list -- and someone who's
   written some nice extensions to SmartList).
   
   Alan Stebbens' web pages on mail handling can be found at:
   http://reality.sgi.com/aks/mail
   
   -- Jim
   
     _________________________________________________________________
                                      
  UUCP/Linux on Caldera
  
   From: David Cook, david_cook@VNET.IBM.COM
   
   We have spoken before on this issue over the caldera-users list (which
   I dropped because of too much crap). I recently gave up on Caldera's
   ability to support/move forward and acquired redhat 4.1.
   
    All works well, except I cannot get uucico & cu to properly share the
    modem under control of uugetty. Other comm programs like minicom and
    seyon have no problem with it.
   
   Both uucico and cu connect to the port and tell me that they cannot
   change the flow control !? and exit.
   
   If I kill uugetty, both uucico and cu work perfectly.
   
   In your discussion on the caldera newsgroup of Nov 2/96 you don't go
   into the details of your inbound connection, but you mention "mgetty"
   as opposed to uugetty.
   
   What works/why doesn't mine?
   What are pros/cons of mgetty?
   
    By the way, I agree wholeheartedly with your rationale for UUCP.
    Nobody else seems to appreciate the need for multiple peer connections
    and the inherent security concerns with bringing up an unattended TCP
    connection with an ISP.
   
   Dave Cook, IBM Global Solutions. 
   
   The two most likely problems are: lock files or permissions
   
   There are three factors that may cause problems with lock files:
   location, name, and format.
   
   For lock files to work you must use the same device names for all
   access to a particular device -- i.e. if you use a symlink named
   'modem' to access your modem with *getty -- then you must use the same
   symlink for your cu, uucico, pppd, minicom, kermit, seyon, etc. (or
   you must find some way to force them to map the device name to a
   properly named LCK..* file).
   
   You must also configure each of these utilities to look for their lock
   files in the same location -- /var/lock/ under Red Hat. This
   configuration option may need to be done at compile time for some
   packages (mgetty) or it might be possible to over-ride it with
   configuration directives (Taylor UUCP) or even command line options.
   
    The other thing that all modem-using packages have to agree on is the
    format of the lock file. This is normally the PID number of the process
   that creates the lock. It can be in "text" (human readable) or
   "binary" form.
   
   Some packages never use the contents of the lock file -- its mere
   existence is sufficient. However most Linux/Unix packages that use
   device lock files will verify the validity of the lock file by reading
   the contents and checking the process status of whatever PID they read
   therefrom. If there is "no such process" -- they assume that it is a
   "stale" lock file and remove it.
   
   I currently have all of my packages use text format and the /dev/modem
   symlink to /dev/ttyS1 (thus if I move my modem to /dev/ttyS2 or
   whatever -- say while migrating everything to a new machine -- all I
   have to change is the one symlink). My lock files are stored in
   /var/lock/
   
   Permissions are another issue that have to be co-ordinated among all
   of the packages that must share a modem. One approach is to allow
    everyone write access to the modem. This, naturally, is a security
    hole large enough to steer an aircraft carrier through.
   
   The most common approach is to make the /dev/ node owned by uucp.uucp
   or by root.uucp and group writable. Then we make all of the programs
   that access it SGID or SUID (uucp).
   
   Here are the permissions I currently have set:
$ ls -ald `which uucico` `which cu`  /dev/modem /dev/ttyS* /var/lock
-r-sr-s---   1 uucp     uucp       /usr/bin/cu
-r-sr-s---   1 uucp     uucp       /usr/sbin/uucico
lrwxrwxrwx   1 uucp     uucp       /dev/modem -> /dev/ttyS1
crw-rw----   1 root     uucp       /dev/ttyS0
crw-rw-r--   1 root     uucp       /dev/ttyS1
crw-------   1 root     tty        /dev/ttyS2
crw-rw----   1 root     uucp       /dev/ttyS3
drwxrwxr-x   6 root     uucp       /var/lock
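
    The "r-sr-s---" string on 'cu' above decodes to octal mode 6550.
    Here's a throwaway demonstration on a scratch file (the real targets
    are the binaries and device nodes listed, which take root to change):

```shell
# "-r-sr-s---" is octal 6550: SUID + SGID, read/execute for owner and
# group, nothing for "other."  Shown on a temporary file.
f=$(mktemp)
chmod 6550 "$f"
ls -l "$f" | cut -c1-10     # prints -r-sr-s---
rm -f "$f"
```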

   On the next installation I do I'll probably experiment with tightening
   these up a little more. For example I might try setting the sticky bit
   on the /var/lock directory (forcing all file removals to be by the
   owner or root). That might prevent some programs from removing stale
   lock files (they would have to be SUID uucp rather than merely SGID
   uucp).
   
   'cu' and 'uucico' are both SUID and SGID because they need access to
   configuration files in which passwords are stored. Those are mode 400
   -- so a bug in minicom or kermit won't be enough to read the
   /etc/uucp/call file (for example). uucico is started by root run cron
   jobs and sometimes from a root owned shell at the console. cu is
   called via wrapper script by members of a modem group.
   
   Things like pppd, diald, and mgetty are always exec'd by root (or SUID
   'root' wrappers). mgetty is started by init and diald and pppd need to
   be able to set routing table entries (which requires root). So they
   don't need to be SUID anything. (If you want some users to be able to
   execute pppd you can make it SUID or you can write a simple SUID
   wrapper or SUID perl script. I favor perl on my home system and I make
   the resulting script inaccessible (unexecutable) by "other". At
   customer sites with multi-user systems I recommend C programs as
   wrappers -- a conservative approach that's been re-justified by recent
   announcements of new buffer overflows in sperl 5.003).
   
   Oddly enough ttyS2 is the null modem that runs into the living room. I
   do a substantial portion of my writing while sitting in my easy chair
   watching CNN and SF (Babylon 5, Deep Space 9, Voyager that stuff).
   
    Permissions are a particularly ugly portion of Unix since we rightly
    don't trust SUID things (with all of the buffer overflows, race
    conditions between stat() and open() calls, and complex parsing
    trickery -- ways to trick embedded system(), popen() and other calls
    that open a shell behind the programmer's back -- which are vulnerable
    to the full range of IFS, SHELL, alias, and LD_* attacks).
   
   However I'm not sure that the upcoming Linux implementation of ACL's
   will help with this. I really need to read more about the planned
    approach. If it follows the MLS (multi-layer security) model of DEC
    and other commercial Unix implementations -- then using them makes the
    system largely unusable for general-purpose computing (i.e. -- casting
    such systems solely as file servers).
   
    From what I've read some of the problem is inherent in basing access
    primarily on ID and "group membership" (really an extension of
   "identity"). For a long time I racked my brains to try to dream up
   alternative access control models -- and the only other one I've heard
   of is the "capabilities" of KeyKOS, Multics, and the newer Eros
   project.
   
    Oh well -- we'll see. One nice thing about having the Linux and GNU
    projects consolidating so much source code in such a small number of
    places is that it may just be possible to make fundamental changes to
    the OS design and "fix" enough different packages to allow some of
    those changes to "take" (attain a critical mass).
   
   -- Jim
   
     _________________________________________________________________
                                      
  ActiveX for Linux
  
   To: John D. Messina, messina@bellatlantic.net
   
   I was recently at the AIIM trade show in New York. There was nothing
   for Linux there, but I happened to wander over to the cyber cafe that
   was set up. I happened to be reading last month's Linux Gazette when a
   Microsoft employee walked up behind me. He was excited to find someone
   who was knowledgeable about Linux - he wanted to get a copy for
   himself. 
   
   I presume that you're directing this to the "Linux Gazette Answer
   Guy."
   
   Anyway, we got to talking and he told me that Linux was getting so
   popular that Microsoft had decided to port ActiveX to Linux. Do you
   know if, in fact, this is true? If so, when might we see this port
   completed? 
   
   I have heard the same story from other Microsoft representatives (once
   at a Java SIG meeting where the MS group was showing off their J++
   package).
   
   This doesn't tell me whether or not the rumor is "true" -- but it does
   suggest that it is an "officially condoned leak." Even if I'd heard an
   estimated ship date (I heard this back in Nov. or Dec.) I wouldn't
   give it much credence.
   
   (That is not MS bashing by the way -- I consider ship dates from all
   software companies and groups -- even our own Linus and company -- to
   be fantasies).
   
    To be honest I didn't pursue the rumor. I asked the gentlemen I spoke
    to what ActiveX provides that CGI, SSI (server side includes), XSSI
   (extended server side includes), FastCGI, SafeTCL, Java and JavaScript
   don't. About the only feature they could think of is that it's from
   Microsoft. To be honest they tried valiantly to describe something --
   but I just didn't get it.
   
    So, your message has prompted me to ask this question again. Switching
    to another VC and firing up Lynx and my PPP line (really must get that
    ISDN configured one of these days) I surf on over to MS' web site.
   
    After a mildly amusing series of redirects (their site seems to be
    *all* .ASP (Active Server Pages?) files) I find myself at a
   reasonably readable index page. That's hopeful -- they don't qualify
   for my "Lynx Hall of Shame" nomination. I find the "Search" option and
   search on the single keyword "Linux."
   
   "No Documents Match Query"
   
   ... hmm. That would be *too* easy wouldn't it. So I search on ActiveX:
   
   "No Documents Match Query"
   
    ... uh-oh! I thought this "Search" feature would search massive lists
    of press releases and "KnowledgeBase" articles and return thousands of
   hits. Obviously MS and I are speaking radically different languages.
   
   Let's try Yahoo!
   
   So I try "+ActiveX +Linux."
   
   Even more startling was the related rumor -- that I heard at the same
   Java SIG meeting. The Microsoft reps there announced Microsoft's
   intention to port IE (Internet Explorer) to Unix. They didn't say
   which implementations of Unix would be the recipients of this dubious
   honor -- but suggested that Linux was under serious consideration.
   
   (We can guess that the others would include SCO, Solaris, Digital, and
   HP-UX. Some of MS' former bed partners (IBM's AIX) would likely be
   snubbed -- and more "obscure" OS' (like FreeBSD???), and "outmoded"
    OS' like SunOS are almost certain to be ignored).
   
   It appears that the plan is to port ActiveX to a few X86 Unix
   platforms -- and use that to support an IE port (I bet IE is in
    serious trouble without ActiveX.)
   
   They'll run the hype about this for about a year before shipping
   anything -- trying to convince people to wait a little longer before
   adopting any other technologies.
   
   "No! Joe! Don't start that project in Java -- wait a couple of months
   and those "Sun" and "Linux" users will be able to use the ActiveX
   version."
   
   Some Links on this:
     * PC WEEK: ActiveX moving to Unix; Netscape support lags
     * ActiveX--Zendetta
     * ANTENNA ActiveX Mini-HOWTO
     * This last one is amusing since it displays a footer at the end of
       every page:
       "This server is: Digital Multia VX40 - Running RedHat Linux"
     * Connected Place Ltd. Now here's one that meets my criteria for
       "Hall of Shame". It contained no text on the main index page --
       all icons. The only reference to Linux on the site seemed to be in
       the Keywords tag:
       <META Name="KEYWORDS" Content="....>
       (Which repeated every term about four times -- this tag was a half
       a screenful long). Unfortunately it showed up first in the hits
        list (first page in English that is -- the one French page that
        preceded it just had an "I've moved" notice -- or maybe it was a
        "You're a silly goat" message -- my French never was that good).
      * Jason's Programmer Corner ... which started with the words,
       "ActiveX Sucks!"
       ... and said nothing else on the matter. However, it doesn't make
       it into the Hall of Shame -- because the page is well organized,
       easily read -- only two "un-ALT'd" icons on several pages of
       information -- and has many good Linux and other links. Even the
       "hit counter" works in Lynx saying,
       "You are visitor number 253 since 8.4.97"
       
   Everybody who uses NetNews or E-Mail should read the little essay on
   "Good Subject Lines." A promising page which I didn't have time to
   properly peruse is
     * Sean Michael Mead's Computer Programming Links which had "ActiveX"
       in the Meta, Keywords tag -- but no obvious links to ActiveX
       content.
        There was a lot of good info on Java, Linux, HTML, Ada, TCL and
       many other topics. I wouldn't be surprised if there was something
       about ActiveX somewhere below this page.
       Suggestion: Sean -- Install Glimpse!
       (I've copied many of the owners/webmasters at the sites I'm
       referring to here).
      * ActiveX Resources, only had one reference to Linux. This noted
        that the "Liquid Reality Toolkit" is "a set of Java class
        libraries that gives you VRML functionality."
       Sounds interesting and wholly unrelated to ActiveX.
       
    Conclusion: Microsoft's mumblings to Linux users about porting IE and
    ActiveX to Linux are interesting. The mumbling is more interesting than
   any product they deliver is likely to be. I still don't know what
   ActiveX "does" well enough to understand what "supporting ActiveX
   under Linux" would mean.
   
   It seems that ActiveX is a method of calling OCX and DLL code. That
   would imply that *using* ActiveX controls on Linux would require
    support for OCX and DLL's -- which would essentially mean porting all
   of the Windows API to work under Linux.
   
    Now I have a lot of trouble believing that Microsoft will deliver
   *uncompromised* support for Windows applications under Linux or any
   other non-Microsoft OS.
   
    Can you imagine Bill Gates announcing that he's writing a
   multi-million dollar check to support the WINE project? If that
   happens I'd suggest we call in the Air Force with instructions to
   rescue the poor man from whatever UFO snatched him -- and get the FBI
   to arrest the imposter!
   
   What's amazing is that this little upstart collection of freeware has
   gotten popular enough that the largest software company in the world
   is paying any attention to it at all.
   
   Given Microsoft's history we have to assume that any announcement they
   make regarding Linux is carefully calculated to offer them some
   substantial benefit in their grand plan. That grand plan is to
   dominate the world of software -- to be *THE* software that controls
   everything (including your toaster and your telephone) (and
   everyone???).
   
   This doesn't mean that we should react antagonistically to these
   announcements. The best bet -- for everyone who must make development
   or purchasing plans for any computer equipment -- is to simply cut
   through as much of the hype as possible and ask: What are the BENEFITS
   of the package that is shipping NOW?
   
   Don't be swayed by people who talk about FEATURES (regardless of
    whether they are from Microsoft, the local used car lot, or
   anywhere else).
   
   The difference between BENEFITS and FEATURES is simply this --
   Benefits are relevant to you.
   
   The reason software publishers and marketeers in general push
   "features" is because they are engaged in MASS marketing. Exploring
   and understanding individual set of requirements is not feasible in
   MASS marketing.
   
   (Personally one of the features that I find to be a benefit in the
   Linux market is the lack of hype. I don't have to spend time
    translating marketese and advertising jargon into common English).
   
   I hope this answers your question. The short answers are:
   
   Is it true (that MS is porting ActiveX to *ix)?
   
   The rumor is widespread by their employees -- but there are no
   "official" announcements that can be found on their web site with
   their own search engine.
   
    When might we see it? Who knows. Let's stick with NOW.
   
   Finally let me ask this: What would you do with ActiveX support under
   Linux? Have you tried WABI? Does ActiveX work under Windows 3.1 and/or
   Windows 3.11? Would you try it under WABI?
   
   What are your requirements (or what is your wishlist)? (Perhaps the
    Linux programming community can meet your requirements and/or fulfill
   your wishes more directly).
   
     _________________________________________________________________
                                      
  What Packages Do I Need?
  
   From: buck, buck@athenet
   
   I just installed Redhat 4.1 and was not sure what packages that I
   really needed so I installed a lot just to be safe. The nice thing is
   that Redhat 4.1 has the package manager that I can use to safely
   remove items. Well seeing as how my installation was about 400 megs I
    really need to clean house here to reclaim space. Is it safe to remove
    the development packages and a lot of the networking stuff that I
    installed? And what about the shells and window managers that I don't
    use? I have Accelerated X so I know that I can get rid of a lot of
    the X stuff. I need my space back!
   
    Since you just installed this -- and haven't had much time to put a
    lot of new, unrecoverable data on it -- it should be "safe" to do just
    about anything to it. The worst that will happen if you trim out too
    much is that you'll have to re-install.
   
   I personally recommend the opposite approach. Install the absolute
   minimum you think is usable. Then *add* packages one at a time.
   
    I also strongly suggest creating a /etc/README file. Create it *right
    after* you reboot your machine following the install process. Make a
    dated note in there for each *system* level change you make to your
    system. (My rule of thumb is that anything I edited or installed
    as 'root' is a "system" level change).
   
   Most of my notes are in the form of comments near the top of any
   config files or scripts that support them. Typical notes in
   /etc/README would be like:
Sun Apr 13 15:32:00 PDT 1997: jimd

        Installed mgetty.  See comments in
        /usr/local/etc/mgetty/*.config.

Sun May  4 01:21:11 PDT 1997: jimd

        Downloaded 2.0.30 kernel.
        Unpacked it into /usr/local/src/linux-2.0.30
        and replaced the /usr/src/linux symlink
        accordingly.

        Picked *both* methods of TCP SYN
        cookies.  Also trying built-in kerneld;
        just about everything is loadable modules.
        Adaptec SCSI support has to be built-in
        though.

        Needed to change the rc files to do the
        mount of the DOS filesystem *after* rc.modules.

... etc.

    Notice that these are free form -- a date, and a login name (not
    root's id -- but whoever is actually doing the work as root). I
    maintain a README even on my home machines.
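
    Entries like those above could even be appended with a tiny shell
    helper. This is only a sketch -- the note function and the
    README_FILE variable are my own inventions, not anything standard:

```shell
# Sketch of a helper for appending dated, attributed entries to
# /etc/README.  note() and README_FILE are illustrative inventions.
README_FILE="${README_FILE:-/etc/README}"

note () {
    {
        # Date stamp plus whoever is really doing the work as root
        # (logname survives an 'su'; fall back to whoami).
        printf '%s: %s\n\n' "$(date)" "$(logname 2>/dev/null || whoami)"
        # One indented remark per argument, like the examples above.
        for remark in "$@"; do
            printf '\t%s\n' "$remark"
        done
        printf '\n'
    } >> "$README_FILE"
}
```

    A call like: note "Installed mgetty." "See comments in
    /usr/local/etc/mgetty/*.config." appends one dated, indented entry.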
   
   The goal is to keep notes that are good enough that I could rebuild my
   system with all the packages I currently use -- just using the README.
   It tells me what packages I installed and what order I installed them
   in. It notes what things seemed important to me at the time (like the
    note that trying to start a kernel whose root filesystem is on a SCSI
    disk requires that the kernel be compiled with that driver built-in --
    easy to overlook and time consuming to fix if you forget it).
   
    Sometimes I ask myself questions in the README -- like: "Why is rnews
   throttling with this error:..." (and an excerpt from my /var/log
   messages).
   
   This is handy if you later find that you need to correlate an anomaly
   on your system with some change made by your ISP -- or someone on your
   network.
   
   Of course you could succumb to the modern trend -- buy another disk
   drive. I like to keep plenty of those around. (I have about 62Mb of
   e-mail currently cached in my mh folders -- that's built up since I
   did a fresh install last August -- with a few megs of carry over from
   my previous installation).
   
   -- Jim
   
     _________________________________________________________________
                                      
  Users and Mounted Disks
  
    To: John E. (Ned) Patterson, jpatter@flanders.mit.edu
   
    As a college student on a limited budget, I am forced to compromise
    between Win95 and Linux. I use linux for just about everything, but
    need the office suite under Win95 since I can't afford to buy
    something for Linux. (Any recommendations you have for cheap
    alternatives would be appreciated, but that is not the point of the
   question.) 
   
    I presume you mean MS Office. (Caps matter a bit here). I personally
    have managed to get by without a copy of Office (Word or Excel) for
    some time. However I realize that many of us have to exchange
    documents with "less enlightened" individuals (like professors,
    employers and fellow students).
   
   So getting MS Office so you can handle .DOC and .XLS (and maybe
   PowerPoint) files is only a venial sin in the Church of Linux (say a
   few "Hail Tove's" and go in peace).
   
   As for alternatives: Applixware, StarOffice, CliqSuite, Corel
   Application Suite (in Java), Caldera's Internet Office Suite, and a
   few others are out there. Some of them can do some document
   conversions from (and to??) .DOC format.
   
   Those are all applications suites. For just spreadsheets you have
   Xess, Wingz and others.
   
   In addition there are many individual applications. Take a look at the
   Linux Journal Buyer's Guide Issue for a reasonably comprehensive list
    of commercial applications for Linux (and most of the free ones as
    well).
   
   Personally I use vi, emacs (in a vi emulation mode -- to run M-x
   shell, and mh-e), and sc (spreadsheet calculator).
   
   Recently I've started teaching myself TeX -- and I have high hopes for
   LyX though I haven't even seen it yet.
   
   Unfortunately there is no good solution to the problem of proprietary
   document formats. MS DOC and MS XLS files are like a stranglehold on
   corporate America. I can't really blame MS for this -- the competition
   (including the freeware community) didn't offer a sufficiently
    attractive alternative. So everyone seems to have stepped up to the
    gallows and stuck their own necks in the noose.
   
   "Owning" an ubiquitous data format is the fantasy of every commercial
    software company. Your customers will pass those documents around to
    their associates, vendors, even customers, and *expect* them to read
    them. Obviously MS is trying to leverage this by "integrating" their
   browser, mailer, spreadsheet, and word processors together with OLE,
   DSOM, ActiveX and anything else they can toss together.
   
   The idea is to blur everything together so that customers link
   spreadsheets and documents into their web pages and e-mail -- and the
   recipients are then forced to have the same software. Get a critical
   mass doing that and "everyone" (except a few fringe Unix weirdos like
   me) just *HAS* to belly up and buy the whole suite.
   
   This wouldn't be so bad -- but then MS has to keep revenues increasing
   (not just keep them flowing -- but keep them *increasing*). So we get
   upgrades. Each component of your software system has to be upgraded
   once every year or two -- and the upgrade *MUST* change some of the
   data (a one way conversion to the new format) -- which transparently
   makes your data inaccessible to anyone who's a version behind.
   
   Even that wouldn't be so bad. Except that MS also has its limits. It
   can't be on every platform (so you can't access that stuff from your
   SGI or your Sun or your HP 700 or your OS/400). Not that MS *couldn't*
   create applications for these platforms. However that might take away
   some of Intel's edge -- and MS can't *OWN* the whole OS architecture
   on your Sun, SGI, HP or AS/400.
   
   But enough of that diatribe. Let's just say -- I don't like
   proprietary file formats.
   
   I mount my Win95 partition under /mnt/Win95, and would like to have
   write permission enabled for only certain users, much like that which
    is possible using AFS. Recognizing that is not terribly feasible, I
    have resorted to requiring root to mount the partition manually, but
    want to be able to write to it as a random user, as long as it is
    mounted. The rw option for mount does not seem to cut the mustard,
    either. It allows write for root uid and gid, but not anyone else. Any
    suggestions?
   
   You can mount your Win95 system to be writable by a specific group.
   All you have to do is use the right options. Try something like:

mount -t umsdos -w -ogid=10,uid=0,umask=007 /dev/hda1 /mnt/c

    (note: you must use numeric GID and UID values here -- mount won't
    look them up by name!)
   
   This will allow anyone in group 10 (wheel on my system) to write to
   /mnt/c.
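
    The umask=007 in the command above works just like the shell's umask:
    every bit set in the mask is stripped from the full 0777. A quick
    sanity check of the arithmetic (mode_from_umask is just an
    illustrative throwaway, not a real utility):

```shell
# The umask mount option strips bits from the full rwxrwxrwx (0777),
# just like the shell's umask.  This throwaway function shows the
# resulting mode for a given mask:
mode_from_umask () {
    # The leading 0 forces octal interpretation in shell arithmetic.
    printf '%03o\n' "$(( 0777 & ~0$1 ))"
}

mode_from_umask 007    # 770: owner and group get everything, "other" nothing
mode_from_umask 022    # 755: the classic world-readable default
```

    So group 10 gets full access to everything under the mount point
    while "other" gets nothing at all.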
   
   There are a few oddities in all of this. I personally would prefer to
   see a version of 'mount' -- or an option to 'mount' that would mount
   the target with whatever permissions and modes the underlying mount
   point had at mount time. In other words, as an admin., I'd like to set
   the ownership and permissions on /mnt/c to something like joeshmo user
    with a mode of 1777 (sticky bit set). Then I'd use a command like:
                mount -o inherit /dev/hda1 /mnt/c

    Unfortunately I'm not enough of a coder to feel comfortable making
    this change (yet) and my e-mail with the current maintainer of the Linux
   mount (resulting from the last time I uttered this idea in public)
   suggests that it won't come from that source.
   
   (While we were at it I'd also add that it would be nice to have a
   mount -o asuser -- which would be like the user option in that it
   would allow any user (with access to the SUID mount program) to mount
   the filesystem. The difference would be that the resulting mount point
    would be owned by the user -- and the nodev, nosuid, etc. options
    would be enforced.)
   
   Getting back to your question:
   
    Another way to accomplish a similar effect (allowing some of your
    users to put files under your /mnt/Win95 directory) would be to
    create a /usr/Win95 directory -- allow people to write files into that
    and use a script to mirror that over to the /mnt/Win95 tree.
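
    A minimal sketch of such a mirror script (the function name and the
    paths are my own illustrative choices, not a standard tool):

```shell
# Hypothetical mirror: copy everything staged under a normal Linux
# directory (say /usr/Win95) onto the mounted Win95 partition.
# mirror_win95 and the example paths are illustrative only.
mirror_win95 () {
    stage="$1"      # e.g. /usr/Win95 -- where users may write freely
    target="$2"     # e.g. /mnt/Win95 -- the mounted DOS partition

    # "dir/." copies the *contents*, dot files included.  We skip
    # cp's -p/-a flags since a DOS filesystem can't keep Unix
    # ownership or permissions anyway.
    cp -r "$stage/." "$target/"
}

# mirror_win95 /usr/Win95 /mnt/Win95
```

    You'd run it by hand (or from root's crontab) after users stage
    their files.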
   
    (Personally I think the whole thing is pretty dangerous -- so using
    the -o gid=... option is the best bet).
   
   -- Jim
   
     _________________________________________________________________
                                      
  [q] Map Left Arrow to Backspace
  
   To: wenbing@statcan.ca
   
   I have a client who would like to use the left arrow key to backspace
   and erase characters to the left of the cursor. Is this possible? And
   how? Thanks for an answer.
   
   Read the Keyboard-HOWTO (section 5). The loadkeys and xmodmap man
   pages, and the Backspace-Mini-HOWTO are also related to this. It is
   possible to completely remap your keys in Linux and in X Windows. You
   can also set up keybindings that are specific to bash (using the built
   in bind command) and to bash and other programs that use the
   "readline" library using the .inputrc file.
   
   The Keyboard-HOWTO covers all of this.
   
   -- Jim
   
     _________________________________________________________________
                                      
  Adding Programs to the Pull Down Menus
  
   To: Ronald B. Simon, rbsimon@anet.bna.boeing.com
   
   I have written several utility programs that I use all the time. I
   would like to add them to either the Application or Utility "pull
   down" menu of the Start menu. Could you address this in your Linux
   Gazette article? 
   
   I assume you are referring to the menus for your X "Window Manager."
   
   Since you don't specify which window manager you're using (fvwm,
   fvwm95, twm, gwm, ctwm, mwm, olwm, TheNextLevel --- there are lots of
   wm's out there) -- I'll have to guess that you're using fvwm (which is
   the default) on most XFree86 systems. The fvwm95 (which is a
   modification of fvwm to provide a set of menus and behaviors that is
   visually similar to that of Windows '95) uses the same file/menu
   format (as far as I know).
   
   The way you customize the menus of almost any wm is to edit (possibly
   creating) an rc file. For fvwm that would be ~/.fvwmrc
   
   Here's an excerpt from mine (where I added the Wingz demo):
Popup "Apps"
        Exec    "Wingz"         exec /usr/local/bin/wingz &
        Nop     ""
        Exec    "Netscape"      exec netscape &
        Exec    "Mosaic"        exec Mosaic &
        Nop     ""
        Exec    "Elm"           exec xterm -e elm &
        Nop     ""
EndPopup

    You'd just add a line like:

        Exec    "Your App"      exec /path/to/your/app &

    ... to this.

    If you add a line like:

        Popup   "My Menu"       MyMenu

    ... and a whole section like:

Popup "MyMenu"
        Exec    "One App"       exec /where/ever/one.app &
        Exec    "Another Toy"   exec /my/bin/toy &
EndPopup

    ... you'll have created your own submenu. Most other Window Managers
    have similar features and man pages to describe them.
   
   -- Jim
   
     _________________________________________________________________
                                      
  Linux and NT
  
   To: Greg C. McNichol, greg_c_mcnichol@em.fcnbd.com
   
   I am new to LINUX (and NT 4.0 for that matter) and would like any and
   all information I can get my hands on regarding the dual-boot issue.
   Any help is appreciated. 
   
   More than you wanted to know about:
   
   Booting Multiple Operating Systems
   
   There are several mini-HOW-TO documents specifically covering
   different combinations of multi-boot. Here's some that can be found
   at: http://www.linuxresources.com//LDP/HOWTO/HOWTO-INDEX.html
      * Linux+DOS+Win95 mini-HOWTO
        How to use Linux and DOS and Windows95 together. Updated 10
        September 1996.
      * Linux+OS2+DOS mini-HOWTO
        How to use Linux and OS/2 and DOS together. Updated 20 May 1996.
      * Linux+DOS+Win95+OS2 mini-HOWTO
        How to use Linux and DOS and OS/2 and Win95 together. Updated 6
        March 1996.
      * Linux+Win95 mini-HOWTO
        How to use Linux and Windows95 together. Updated 25 June 1996.
     * Linux+WinNT mini-HOWTO
       How to use Linux and WindowsNT together. Updated 19 February 1997.
     * Linux+WinNT++ mini-HOWTO by Kurt Swendson
       How to use Linux and WindowsNT together, with NT preinstalled.
       Updated 21 December 1996.
       
    Personally I think the easiest approach to make Linux co-exist with
    any of the DOS derived OS' (Win '95, OS/2, or NT) is to use Hans
    Lermen's LOADLIN package. Available at "Sunsite":
   ftp://sunsite.unc.edu/pub/Linux/system/Linux-boot/lodlin16.tgz (85k)
   
   To use this -- start by installing a copy of DOS (or Win '95). Be sure
   to leave some disk space unused (from DOS/Win '95's perspective) -- I
   like to add whole disks devoted to Linux.
   
   Now install Linux on that 2nd, 3rd or nth hard drive -- or by adding
   Linux partitions to the unused portion of whichever hard drives you're
   already using. Be sure to configure Linux to 'mount' your DOS
   partition(s) (make them accessible as parts of the Unix/Linux
   directory structure). While installing be sure to answer "No" or
   "Skip" to any questions about "LILO" (Feel free to read the various
   HOW-TO's and FAQ's so you'll understand the issues better -- I'd have
   to give a rather complete tutorial on PC Architecture, BIOS boot
   sequence and disk partitioning to avoid oversimplifying this last
    item.)
   
   Once you're done with the Linux installation find and install a copy
   of LOADLIN.EXE. The LOADLIN package is a DOS program that loads a
   Linux kernel. It can be called from a DOS prompt (COMMAND.COM or
   4DOS.COM) or it can be used as a INSTALL directive in your CONFIG.SYS
   (which you'd use with any of the multi-boot features out there --
   including those that were built into DOS 6.x and later). After
   installation you'd boot into DOS (or into the so-called "Safe-Mode"
   for Windows '95) and call LOADLIN with a batch file like:

                C:
                CD \LINUX
                LOADLIN.EXE RH2013.KRN root=/dev/hda2 .....

   (Note the value of your root= parameter must correspond to the Linux
   device node for the drive and partition on which you've installed
   Linux. This example shows the second partition on the first IDE hard
   drive. The first partition on the second IDE drive would be /dev/hdb1
   and the first "logical" partition within an extended partition of your
   fourth SCSI hard drive would be /dev/sdd5. The PC Architecture
   specifies room for 4 partitions per drive. Exactly one of those (per
   drive) may be an "extended" partition. An extended partition may have
    an arbitrary number of "logical" drives. The Linux nomenclature for
    logical drives always starts at 5 since 1 through 4 are reserved for
    the "real" partitions).
   
   The root= parameter may not be necessary in some cases since the
   kernel has a default which was compiled into it -- and which might
    have been changed with the rdev command. rdev is a command that
    "patches" a Linux kernel with a pointer to its "root device."
   
   This whole concept of the "root device" or "root filesystem" being
   different than the location of your kernel may be confusing at first.
    Linux (and to a degree other forms of Unix) doesn't care where you put
    your kernel. You can put it on a floppy. That floppy can be formatted
   with a DOS, Minix or ext2 filesystem -- or can be just a "raw" kernel
   image. You can put your kernel on ANY DOS filesystem so long as
   LOADLIN can access it.
   
    LOADLIN and LILO are "boot loaders" -- they copy the kernel into RAM
    and execute it. Since normal DOS (with no memory managers loaded --
    programs like EMM, QEMM, and Windows itself) has no memory protection
    mechanisms, it is possible to load an operating system from a DOS
    prompt. This is, indeed, how the Netware 3.x "Network Operating
    System" (NOS) has always been loaded (with a "kernel" image named
    SERVER.EXE). It is also how one loads TSX-32 (a vaguely VMS-like
    operating system for 386 and later PC's).
   
    In my example RH2013.KRN is the name of a kernel file. Linux doesn't
    care what you name its kernel file. I use the convention of naming
    mine LNXvwyy.KRN -- where v is the major version number, w is the
    minor version and yy is the build. (LNX is for a "general use" kernel
    that I build myself, RH is a kernel I got from a RedHat CD, YGG would
    be from a Yggdrasil CD, etc).
   
    One advantage of using LOADLIN over LILO is that you can have as many
    kernels as your disk space allows. You can have them arranged in
    complex hierarchies. You can have as many batch files passing as many
    different combinations of kernel parameters as you like. LILO is
    limited to 16 "stanzas" in its /etc/lilo.conf file.
   
    The other advantage of LOADLIN over LILO is that it is less scary and
    easier to understand for new users. To them Linux is just a DOS program
   that you have to reboot to get out of. It doesn't involve any of that
   mysterious "master boot record" stuff like a computer virus.
   
   A final advantage of LOADLIN over LILO is that LOADLIN does not
   require that the root file system be located on a "BIOS accessible"
   device. That's a confusing statement -- because I just tossed in a
   whole new concept. The common system BIOS for virtually ALL PC's can
    only see one or two IDE hard drives (technically ST-506 or compatible
    -- with a WD1003 (???) or register compatible controller -- however
    ST-506 (the old MFM and RLL drives) haven't been in use on PC's since
    the XT). To "see" a 3rd or 4th hard drive -- or any SCSI hard drive the
   system requires additional software or firmware (or an "enhanced
   BIOS"). There is a dizzying array of considerations in this -- which
   have almost as many exceptions. So to get an idea of what is "BIOS"
   accessible you should just take a DOS boot floppy -- with no
   CONFIG.SYS at all -- and boot off of it. Any drive that you can't see
   is not BIOS accessible.
   
   Clearly for the vast majority of us this is not a problem. For the
   system I'm on -- with two IDE drives, two internal SCSI drives, one
   internal CD reader, an external SCSI hard drive, a magneto optical
   drive, a 4 tape DAT autochanger and a new CD-Writer (which also
   doubles as a CD reader, of course) -- with all of that it makes a
   difference.
   
   Incidentally this is not an "either/or" proposition. I have LILO
    installed on this system -- and I have LOADLIN as well. LILO can't
    boot my main installation (which is on the SCSI drives). But it can
    boot a second minimal root installation -- or my DOS or OS/2
    partitions.
   
   (I'm not sure the OS/2 partition is still there -- I might have
   replaced that with a FreeBSD partition at some point).
   
   Anyway -- once you have DOS and Linux happy -- you can install NT with
   whatever "dual boot" option it supports. NT is far less flexible about
   how it boots. So far as I know there is no way to boot into DOS and
   simply run NT.
   
   It should be noted that loading an OS from DOS (such as we've
   described with LOADLIN, or with FreeBSD's FBSDBOOT.EXE or TSX-32's
    RUNTSX.EXE) is a ONE WAY TRIP! You load them from a DOS prompt -- but
    DOS is completely removed from memory and there is no way to exit back
    to it. To get back to DOS you must reboot. This isn't a new experience
    for DOS users. There have been many games, BBS packages and other
    pieces of software that had no "exit" feature.
   
   (In the case of Netware there is an option to return to DOS -- but it
    is common to use an AUTOEXEC.NCF (NetWare command file) that issues
   the Netware command REMOVE DOS to free up the memory that's reserved
   for this purpose).
   
   In any event those mini-HOWTO's should get you going. The rest of this
   is just background info.
   
   -- Jim
   
     _________________________________________________________________
                                      
  pcmcia 28.8 Modems and Linux 1.2.13 Internet Servers
  
   To: Brian Justice
   
   I was browsing the web and noticed your web page on Linux. I am not
    familiar with Linux but have an ISP who uses the software on their
   server.
   
   I was wondering if anyone at your organization knew of any problems
   with 
   
   I'm the only one at my organization -- Starshine is a sole
   proprietorship.
   
   Pentium notebooks with 28.8 modems connecting to Linux 1.2.13 internet
   servers that would do the following:
     * drop connection at 28.8 after connected for several minutes
     * have trouble on the initial connection or reconnection
       
   It sounds like you're saying that the Pentium Notebook is running some
   other OS -- like Windows or DOS and that it is using a PCMCIA modem to
   dial into another system (with unspecified modem and other hardware --
   but which happens to run Linux).
   
   If that's the case then you're troubleshooting the wrong end of the
   connection.
   
   First identify which system is having the problem -- use the Pentium
   with the "piecemeal" (PCMCIA) modem to call a BBS or other ISP at
   28.8. Try several.
   
    Does your Pentium system have problems with all or most of them?
   
   If so then it is quite likely a problem with the combination of your
   Pentium, your OS, and your piecemeal modem.
   
   Try booting the Pentium off of a plain boring copy of DOS (with
   nothing but the PCMCIA drivers loaded). Repeat the other experiments.
   Does it still fail on all or most of them?
   
   If so then it is probably the PCMCIA drivers.
   
   Regular desktop 28.8 modems seem to work fine. I have a few 14.4
   PCMCIA modems that seem to work fine.
   
   Would incorrect settings cause this? Or could this be a program glitch
   that doesn't support these 28.8 modems due to the low level of the
    release? I noticed there are higher versions of Linux out there.
   
   "incorrect settings" is a pretty vague term. Yes. The settings on your
   hardware *AND THEIRS* and the settings in your software *AND THEIRS*
   has to be right. Yes. The symptoms of incorrect settings (in the
   server hardware, the modem hardware, the OS/driver software or the
   applications software *AT EITHER END OF THE CONNECTION* could cause
   sufficiently sporadic handshaking that one or the other modem in a
   connection "gives up" and hangs up on the other.
   
   The BIG question is "Have you heard of any 28.8 PCMCIA modem problems
   with Linux internet servers? " If so, could you drop me a few lines so
   I can talk this over with my ISP. If not, do you know of any other
   sites or places I can check for info about this subject. 
   
   I've heard of problems with every type of modem for every type of
   operating system running on every platform. None of them has been
   specific to PCMCIA modems with Linux. I've operated a couple of large
   BBS' (over 100 lines on one and about 50 on the other) and worked
   with a number of corporate modem pools and remote access servers.
   
   I don't understand why your ISP would want a note from me before
   talking to you.
   
   It sounds like you're asking me to say: "Oh yeah! He shouldn't be
   running Linux there!" ... or to say: "1.2.13! That fool -- he needs to
   upgrade to 2.0.30!" ... so you can then refer this "expert" opinion to
   some support jockey at your ISP.
   
   Now if you mean that your ISP is running Linux 1.2.13 on a Pentium
   laptop with PCMCIA modems -- and using that as a server for his
   internet customers -- I'd venture to say that this is pretty
   ludicrous.
   
   If you were running Linux on your laptop and having problems with your
   PCMCIA modem I wouldn't be terribly surprised. PCMCIA seems to be an
   unruly specification -- and the designers of PCMCIA equipment seem to
   have enough trouble in their (often unsuccessful) attempts to support
   simple DOS and Windows users. The programmers that contribute drivers
   for Linux often have to work with incomplete or nonexistent
   specifications for things like video cards and chipsets -- and PCMCIA
   cards of any sort.
   
   I mostly avoid PCMCIA -- it is a spec that is ill-suited to any sort
   of peripheral other than *MEMORY CARDS* (which is, after all, what the
   letters MC stand for in this unpronounceable stream of gibberish that
   I dubbed "piecemeal" a few years ago).
   
   Any help would be appreciated. 
   
   I could provide much better suggestions if I had more information
   about the setup. I could even provide real troubleshooting for my
   usual fees.
   
   However, if the problem really is specific to your connections with
   your ISP (if these same 28.8 "piecemeal" modems work fine with say --
   your Cubix RAS server or your favorite neighborhood BBS), then you
   should probably work with them to resolve it (or consider changing
   ISP's).
   
   As a side note: Most ISP's use terminal servers on their modem banks.
   This means that they have their modems plugged into a device that's
   similar to a router (and usually made by a company that makes
   routers). That device controls the modems and converts each incoming
   session into an rlogin or "8-bit clean" telnet session on one or
   more ethernet segments.
   
   Their Unix or other "internet servers" don't have any direct
   connections to any of the normal modems. (Sometimes a sysadmin will
   connect a modem directly to the serial ports of one or more of these
   systems -- for administrative access so they can call on a special
   number and bypass the terminal servers, routers, etc).
   
   It's possible that the problem is purely between the two brands of
   modems involved. Modern modems are complex devices (essentially
   dedicated microcomputers) with substantial amounts of code in their
   firmware. Also the modem business sports cutthroat competition -- with
   great pressure to add "enhancements," a lot of fingerpointing, and
   *NO* incentive to share common code bases for interoperability's sake.
   So slight ambiguities in protocol specification lead to sporadic and
   chronic problems. Finally we're talking about analog to digital
   conversion at each end of the phone line. The phone companies have
   *NO* incentive to provide good clean (noise free) phone lines to you
   and your ISP. They make a lot more money on leased lines -- and get
   very little complaint for "voice grade" line quality.
   
   The problem is that none of us should have been using modems for the
   last decade. We should have all had digital signals coming into our
   homes a long time ago. The various phone companies (each a monopoly in
   its region -- and all stemming from a small set of monopolies) have
   never had any incentive to implement this, every incentive NOT to
   (since they can charge a couple grand for installation and several
   hundred per month on the few T1's they do sell -- and they'll never
   approach that with digital lines to the home). They do, however, have
   plenty of money to make their concerns heard in regulatory bodies
   throughout the government. So they cry "who's going to pay for it?" so
   loudly and so continuously that no one can hear the answer of the
   American people. Our answer should be "You (monopolies) will pay for
   it -- since we (the people) provided you with a legal monopoly and the
   funds to build OUR copper infrastructure" (but that answer will never
   be heard).
   
   If you really want to read much more eloquent and much better
   researched tirades and diatribes on this topic -- subscribe to
   Boardwatch magazine and read Jack Rickard (the editor) -- who mixes
   this message with new information about communications technology
   every month.
   
   -- Jim
   
     _________________________________________________________________
                                      
                     Copyright  1997, James T. Dennis
            Published in Issue 18 of the Linux Gazette June 1997
                                      
     _________________________________________________________________
                                      
   [ TABLE OF CONTENTS ] [ FRONT PAGE ] Back Next 
   
    "Linux Gazette...making Linux just a little more fun!"
    
     _________________________________________________________________
                                      
                         bash String Manipulations
                                      
                     By Jim Dennis, jimd@starshine.org
                                      
     _________________________________________________________________
                                      
   The bash shell has many features that are sufficiently obscure that
   you almost never see them used. One of the problems is that the man
   page offers no examples.
   
   Here I'm going to show how to use some of these features to do the
   sorts of simple string manipulations that are commonly needed on file
   and path names.
   
Background

   In traditional Bourne shell programming you might see references to
   the basename and dirname commands. These perform simple string
   manipulations on their arguments. You'll also see many uses of sed and
   awk or perl -e to perform simple string manipulations.
   
   Often these machinations are necessary to perform on lists of filenames
   and paths. There are many specialized programs that are conventionally
   included with Unix to perform these sorts of utility functions: tr,
   cut, paste, and join. Given a filename like
   /home/myplace/a.data.directory/a.filename.txt which we'll call $f you
   could use commands like:
   
        dirname $f
        basename $f
        basename $f .txt
        
   ... to see output like:
   
        /home/myplace/a.data.directory
        a.filename.txt
        a.filename 

   Notice that the GNU version of basename takes an optional parameter.
   This is handy for specifying a filename "extension" like .tar.gz which
   will be stripped off of the output. Note that basename and dirname
   don't verify that these parameters are valid filenames or paths. They
   simply perform string operations on a single argument. You
   shouldn't use wild cards with them -- since dirname takes exactly one
   argument (and complains if given more) and basename takes one argument
   and an optional one which is not a filename.
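   For instance, the optional suffix argument can strip off a multi-part
   extension like .tar.gz in one step (the filename here is made up for
   illustration):

```shell
# basename strips the directory part, then the given suffix
basename /tmp/backups/etc-backup.tar.gz .tar.gz   # prints: etc-backup
```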
   
   Despite their simplicity these two commands are used frequently in
   shell programming because most shells don't have any built-in string
   handling functions -- and we frequently need to refer to just the
   directory or just the file name parts of a given full file
   specification.
   
   Usually these commands are used within the "back tick" shell operators
   like TARGETDIR=`dirname $1`. The "back tick" operators are equivalent
   to the $(...) construct. This latter construct is valid in Korn shell
   and bash -- and I find it easier to read (since I don't have to squint
   at my screen wondering which direction the "tick" is slanted).
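   Side by side, the two forms look like this (the filename is just an
   example):

```shell
f=/home/myplace/a.data.directory/a.filename.txt
TARGETDIR=`dirname $f`        # traditional "back tick" form
TARGETDIR=$(dirname "$f")     # equivalent $(...) form -- and it nests cleanly
echo "$TARGETDIR"             # prints: /home/myplace/a.data.directory
```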
   
A Better Way

   Although the basename and dirname commands embody the "small is
   beautiful" spirit of Unix -- they may push the envelope towards the
   "too simple to be worth a separate program" end of simplicity.
   
   Naturally you can call on sed, awk, TCL or perl for more flexible and
   complete string handling. However this can be overkill -- and a little
   ungainly.
   
   So, bash (which long ago abandoned the "small is beautiful" principle
   and went the way of emacs) has some built in syntactical candy for
   doing these operations. Since bash is the default shell on Linux
   systems then there is no reason not to use these features when writing
   scripts for Linux.
   
    If you're concerned about portability to other shells and systems --
       you may want to stick with dirname, basename, and sed
       
The bash Man Page

   The bash man page is huge. It contains a complete reference to the
   "readline" libraries and how to write a .inputrc file (which I think
   should all go in a separate man page) -- and a run down of all the csh
   "history" or bang! operators (which I think should be replaced with a
   simple statement like: "Most of the bang! tricks that work in csh work
   the same way in bash").
   
   However, buried in there is a section on Parameter Substitution which
   tells us that $foo is really a shorthand for ${foo} which is really
   the simplest case of several ${foo:operators} and similar constructs.
   
   Are you confused, yet?
   
   Here's where a few examples would have helped. To understand the man
   page I simply experimented with the echo command and several shell
   variables. This is what it all means:
    Given:
         foo=/tmp/my.dir/filename.tar.gz
       
   We can use these expressions:
       
        path=${foo%/*}
                To get: /tmp/my.dir (like dirname)
                
        file=${foo##*/}
                To get: filename.tar.gz (like basename)
                
        base=${file%%.*}
                To get: filename
                
        ext=${file#*.}
                To get: tar.gz
                
   Note that the last two depend on the assignment made in the second one
       
   Here we notice two different "operators" being used inside the
   parameters (curly braces). Those are the # and the % operators. We
   also see them used as single characters and in pairs. This gives us
   four combinations for trimming patterns off the beginning or end of a
   string:
   
   ${variable%pattern}
          Trim the shortest match from the end
          
   ${variable##pattern}
          Trim the longest match from the beginning
          
   ${variable%%pattern}
          Trim the longest match from the end
          
   ${variable#pattern}
          Trim the shortest match from the beginning
          
   It's important to understand that these use shell "globbing" rather
   than "regular expressions" to match these patterns. Naturally a simple
   string like "txt" will match sequences of exactly those three
   characters in that sequence -- so the difference between "shortest"
   and "longest" only applies if you are using a shell wild card in your
   pattern.
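   The shortest/longest distinction is easiest to see with a variable
   that contains more than one "." (the value here is just an example):

```shell
v=filename.tar.gz
echo "${v%.*}"     # filename.tar  (shortest match of ".*" trimmed from end)
echo "${v%%.*}"    # filename      (longest match trimmed from end)
echo "${v#*.}"     # tar.gz        (shortest match of "*." trimmed from front)
echo "${v##*.}"    # gz            (longest match trimmed from front)
```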
   
   A simple example of using these operators comes in the common question
   of copying or renaming all the *.txt to change the .txt to .bak (in
   MS-DOS' COMMAND.COM that would be REN *.TXT *.BAK).
   
   This is complicated in Unix/Linux because of a fundamental difference
   in the programming API's. In most Unix shells the expansion of a wild
   card pattern into a list of filenames (called "globbing") is done by
   the shell -- before the command is executed. Thus the command normally
   sees a list of filenames (like "foo.txt bar.txt etc.txt") where DOS
   (COMMAND.COM) hands external programs a pattern like *.TXT.
   
   Under Unix shells, if a pattern doesn't match any filenames the
   parameter is usually left on the command line literally. Under bash
   this is a user-settable option. In fact, under bash you can disable
   shell "globbing" if you like -- there's a simple option to do this.
   It's almost never used -- because commands like mv, and cp won't work
   properly if their arguments are passed to them in this manner.
   
   However here's a way to accomplish a similar result:
   
     for i in *.txt; do cp $i ${i%.txt}.bak; done
     
   ... obviously this is more typing. If you tried to create a shell
   function or alias for it -- you have to figure out how to pass the
   parameters. Certainly the following seems simple enough:
   
     function cp-pattern { for i in $1; do cp $i ${i%$1}$2; done ; }
     
   ... but that doesn't work like most Unix users would expect. You'd
   have to pass this command a pair of specially chosen, and quoted
   arguments like:
   
     cp-pattern '*.txt' .bak
     
   ... note how the second pattern has no wild cards and how the first is
   quoted to prevent any shell globbing. That's fine for something you
   might just use yourself -- if you remember to quote it right. It's
   easy enough to add a check for the number of arguments and to ensure
   that there is at least one file that exists in the $1 pattern. However
   it becomes much harder to make this command reasonably safe and
   robust. Inevitably it becomes less "unix-like" and thus more difficult
   to use with other Unix tools.
   
   I generally just take a whole different approach. Rather than trying
   to use cp to make a backup of each file under a slightly changed name
   I might just make a directory (usually using the date and my login ID
   as a template) and use a simple cp command to copy all my target files
   into the new directory.
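   A minimal sketch of that approach -- the directory-name template and
   the *.txt pattern are just assumptions for illustration:

```shell
# copy all target files into a dated backup directory instead of
# renaming each one (the name template is just one possibility)
dir=backup.$(date +%Y%m%d).${LOGNAME:-unknown}
mkdir "$dir"
cp *.txt "$dir"
```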
   
   Another interesting thing we can do with these "parameter expansion"
   features is to iterate over a list of components in a single variable.
   
   For example, you might want to do something to traverse over every
   directory listed in your path -- perhaps to verify that everything
   listed therein is really a directory and is accessible to you.
   
   Here's a command that will echo each directory named on your path on
   its own line:
   
     p=$PATH; d=""; until [ "$p" = "$d" ]; do d=${p%%:*}; p=${p#*:}; echo $d; done
     
   ... obviously you can replace the echo $d part of this command with
   anything you like.
   
   Another case might be where you'd want to traverse a list of
   directories that were all part of a path. Here's a command pair that
   echos each directory from the root down to the "current working
   directory":
   
     p=$(pwd); d=""; until [ "$p" = "$d" ]; do p=${p#*/}; d=${p%%/*}; echo $d; done
     
   ... here we've reversed the assignments to p and d so that we skip the
   root directory itself -- which must be "special cased" since it
   appears to be a "null" entry if we do it the other way. The same
   problem would have occurred in the previous example -- if the value
   assigned to $PATH had started with a ":" character.
   
   Of course, it's important to realize that this is not the only, or
   necessarily the best, method to parse a line or value into separate
   fields. Here's an example that uses the old IFS variable (the
   "inter-field separator" in the Bourne and Korn shells as well as bash)
   to parse each line of /etc/passwd and extract just two fields:
   
                cat /etc/passwd | ( \
                        IFS=: ; while read lognam pw id gp fname home sh; \
                                do echo $home \"$fname\"; done \
                                )
                        
   Here we see the parentheses used to isolate the contents in a subshell
   -- such that the assignment to IFS doesn't affect our current shell.
   Setting the IFS to a "colon" tells the shell to treat that character
   as the separator between "words" -- instead of the usual "whitespace"
   that's assigned to it. For this particular function it's very
   important that IFS consist solely of that character -- usually it is
   set to "space," "tab," and "newline."
   
   After that we see a typical while read loop -- where we read values
   from each line of input (from /etc/passwd) into seven variables per
   line. This allows us to use any of these fields that we need from
   within the loop. Here we are just using the echo command -- as we have
   in the other examples.
   
   My point here has been to show how we can do quite a bit of string
   parsing and manipulation directly within bash -- which will allow our
   shell scripts to run faster with less overhead and may be easier than
   some of the more complex sorts of pipes and command substitutions one
   might have to employ to pass data to the various external commands and
   return the results.
   
   Many people might ask: Why not simply do it all in perl? I won't
   dignify that with a response. Part of the beauty of Unix is that each
   user has many options about how they choose to program something. Well
   written scripts and programs interoperate regardless of what
   particular scripting or programming facility was used to create them.
   Issue the command file /usr/bin/* on your system and you may be
   surprised at how many Bourne and C shell scripts there are in there.
   
   In conclusion I'll just provide a sampler of some other bash parameter
   expansions:
   
   ${parameter:-word}
          Provide a default if parameter is unset or null.
          Example:
          
         echo ${1:-"default"}
            
   Note: this would have to be used from within a function or shell
          script -- the point is to show that some of the parameter
          substitutions can be used with shell numbered arguments. In
          this case the string "default" would be returned if the
          function or script was called with no $1 (or if all of the
          arguments had been shifted out of existence).
          
   ${parameter:=word}
          Assign a value to parameter if it was previously unset or null.
          
   Example:
          
         echo ${HOME:="/home/.nohome"}
            
          
   ${parameter:?word}
          Generate an error if parameter is unset or null by printing
          word to stderr.
          
   Example:
          
         : ${TMP:?"Error: Must have a valid Temp Variable Set"}
            
          
   This one just uses the shell "null command" (the : command) to
   evaluate the expression. If the variable doesn't exist or has a null
   value -- this will print the string to the standard error file handle
   and exit the script with a return code of one.
   
   Oddly enough -- while it is easy to redirect the standard error of
   processes under bash -- there doesn't seem to be an easy portable way
   to explicitly generate message or redirect output to stderr. The best
   method I've come up with is to use the /proc/ filesystem (process
   table) like so:
        function error { echo "$*" > /proc/self/fd/2 ; }
       
   ... self is always a set of entries that refers to the current process
   -- and self/fd/ is a directory full of the currently open file
   descriptors. Under Unix and DOS every process is given the following
   pre-opened file descriptors: stdin, stdout, and stderr.
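   That said, Bourne-derived shells also honor file-descriptor
   duplication with >&2, which sends output to stderr without depending
   on a mounted /proc:

```shell
# send a message to stderr using fd duplication (works in sh, ksh, and bash)
error() { echo "$*" >&2 ; }

error "disk is full"    # message goes to stderr, not stdout
```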
   
   ${parameter:+word}
          Alternative value. Example: ${TMP:+"/mnt/tmp"}
          uses /mnt/tmp instead of the value of $TMP, but expands to
          nothing if TMP was unset.
          This is a weird one that I can't ever see myself using. But it
          is a logical complement to the ${var:-value} we saw above.
          
   ${#variable}
          Return the length of the variable in characters.
          Example:
          
         echo The length of your PATH is ${#PATH}
            
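   Putting a few of these expansions together in one hypothetical demo
   (the greet function and the variable values are made up for
   illustration):

```shell
unset TMP
greet() { echo "Hello, ${1:-stranger}" ; }   # hypothetical demo function

greet                       # prints: Hello, stranger
greet Jim                   # prints: Hello, Jim
echo "${TMP:=/tmp/work}"    # TMP was unset, so it is assigned and printed
echo "${TMP:+alternate}"    # TMP is now set, so the alternate value prints
echo "${#TMP}"              # length of "/tmp/work" -> 9
```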
          
     _________________________________________________________________
                                      
                        Copyright  1997, Jim Dennis
           Published in Issue 18 of the Linux Gazette, June 1997
                                      
     _________________________________________________________________
                                      
   
     _________________________________________________________________
                                      
    "Linux Gazette...making Linux just a little more fun!"
    
     _________________________________________________________________
                                      
 Brave GNU World: Towards A Bioregional, Community-based Linux Support Net
                                      
                      By Michael Stutz, stutz@dsl.org
                                      
     _________________________________________________________________
                                      
   I believe there's strong potential now for the growing LUG phenomenon
   to intertwingle with both the Linux Documentation Project and the
   Linux support network of the c.o.l.* newsgroups and create the next
   "level" of support for Linux. The net result of this would be a
   self-documenting, technical support, training and social network on an
   Internet-wide scale (perhaps some would say that's what it already is
   -- then I mean it would be the same only exponentially better). Right
   now, I see a lot of work (documentation, debugging, support) being
   duplicated. If these efforts could be combined (LUG + LDP + c.o.l.*),
   it would eliminate a lot of this excess work; the net result would
   be greater than the sum of its parts, a synergy.
   
   Many LUGs give demos and post the notes on their web servers. That
   information is diffused across many obscure sites, but bringing these
   efforts together with the LDP folks, I wonder if a new breed of HOWTOs
   (DEMOs?) could be created; a common indexing scheme could have a list
   of all demos or tutorials ever given at any LUG, both searchable and
   listed by subject or other criteria.
   
   And while the c.o.l.* newsgroups are invaluable for a great many
   things, sometimes local help is preferable. With the right
   organization, community-based LUGs could be the first stop for a Linux
   user's questions and problems, with an easy forwarding mechanism to go
   up a chain to be broadcast to the next larger bioregion, then
   continent-wide and finally, if the question is still not answered,
   world-wide.
   
   By not duplicating the work, we'll be freeing up our time to develop
   even more things than the current rate, plus the increased support
   net, replete with documentation and local support, will allow for a
   greater user base. More ideas could be implemented to strengthen this
   base, such as "adopt-a-newbie" programs. For instance, there's a guy
   in town named Rockie who's in this rock band called Craw; I once saw
   in a zine he published that he was starting a volunteer initiative to
   collect old donated computers, refurbish them, and make them
   available to musicians who otherwise wouldn't be able to use
   computers. Why not take that a step further and make them Linux boxes?
   Not only would you get a non-corporate, rock-solid OS, but you'd have
   an instant support network in your own town. This kind of
   community-based approach seems the best way to "grow" GNU/Linux at
   this stage.
   
   This community-based LUG network would be capable of handling any and
   all GNU/Linux support, including the recently-discussed Red Hat
   Support Initiative, as well as Debian support, Slackware support, etc.
   It's above and beyond any single "distribution" and in the interest of
   the entire Linux community.
   
   I think the key to all this is planning. It need not happen all at
   once. It's happening already, with local LUGs making SQL databases of
   LUG users' special interests and/or problems, and their own
   bioregional versions of the Consultants-HOWTO, etc. What is needed
   most of all is a formal protocol, a set of outlines and guidelines,
   that all LUGs, when ready, can initiate -- from technical details such
   as "What format to develop the database?" to everything else. It need
   not be centralized -- like the rest of Linux, it will probably come
   together from all points in the network -- but our base is large
   enough now that taking a look at the various Linux efforts from a
   biological and geographical community-based standpoint, and
   re-coordinating from there, is something that only makes sense.
   
   Copyright (C) 1997 Michael Stutz; this information is free; it may be
   redistributed and/or modified under the terms of the GNU General
   Public License, either Version 2 of the License, or (at your
   preference) any later version, and as long as this sentence remains.
   
     _________________________________________________________________
                                      
                      Copyright  1997, Michael Stutz
           Published in Issue 18 of the Linux Gazette, June 1997
                                      
     _________________________________________________________________
                                      
   
     _________________________________________________________________
                                      
    "Linux Gazette...making Linux just a little more fun!"
    
     _________________________________________________________________
                                      
                   Building Your Linux Computer Yourself
                                      
                    By Josh Turial, josht@janeshouse.com
                                      
     _________________________________________________________________
                                      
   I've been in the habit for years of building my own PCs, partly for
   the cost savings, partly because I'm a geek, and partly (mostly),
   because I've found the best way to tune a system exactly to my liking
   is to pick only and exactly the parts that I need. Once I discovered
   Linux a couple of years ago, I had the perfect match for my hobby.
   I'll lay out on these pages what I've learned by trial and error, what
   to look for in a DIY computer, and how to best mix-and-match according
   to your desires and budget.
   
   For starters, the key to building your own system is to find the best
   sources for parts. Computer Shopper is probably the DIY bible, crammed
   with mail-order ads from companies selling parts. I prefer the
   face-to-face purchase, myself. Most of my buying takes place at the
   ubiquitous "computer flea markets" that take place every month or so
   in most major metropolitan areas. In Greater Boston (my stomping
   grounds), there are two major shows put on: KGP and Northern. These
   are held in halls around the metro area, and there's one every few
   weeks within driving distance. Typically, many vendors attend all the
   shows in a given area.
   
   Most vendors are pretty reliable in my area (your mileage may vary),
   and are usually willing to play the deal game. This is where your
   objectives come into play.
   
   Fortunately, Linux isn't too picky about the hardware it runs on--just
   about any old CPU will suffice. The major areas of concern are in
   deciding whether or not to use IDE or SCSI drives and what type of
   video card to install. Assuming that you will use a standard Linux
   distribution, the screaming video card that plays Doom at warp speed
   under DOS may not be supported by XFree86. For instance, the
   immensely popular Trident 9440 VGA chipset only recently became
   supported by X, though it shipped with Windows 95 and OS/2 drivers.
   Anyhow, in making these decisions, I have a simple checklist:
     * Will the system only run Linux, or will you dual-boot another OS?
     * Are you going to power-use the system?
     * Will you connect to the Internet over a network, or will you use a
       modem and dial-up?
       
   The answers to these questions should help determine what you need to
   purchase. First off, let's cover processor type/speed and RAM. Linux
   is somewhat more efficient in its consumption of system resources than
   DOS (or pretty much any other Intel OS), so you may not necessarily
   need the screaming Pentium 200 that you need for your Windows 95
   system. In the Pentium class processors, currently the 100 and 133 MHz
   Pentiums are the best values in bang-for-the-buck specs. Both chips
   are well under $200, and the 100 MHz processor is close to $100. I
   tend to suggest those processors that operate on a 66 MHz motherboard
   bus clock (like the above two chips--the P166 and P200 are also in
   that category). Generally speaking, the faster clock speed of the
   Pentium 120 and 150 are offset by the slower 60 MHz bus and higher
   price. A good PCI motherboard to accompany the chip costs about $100
   to $150. Stick with boards that use the Intel chipset for safest
   results, though I have had good luck with other vendors.
   
   If you don't need to go Pentium class, there are some bargains out
   there. AMD makes some very good 486 chips, running at up to 120 MHz.
   This is about equivalent in horsepower to the original Pentiums, but
   without the math errors. The most recent system I built uses a hybrid
   motherboard (one older VL-bus slot, 4 PCI slots), and has an AMD
   5x86-133 chip. This processor is kind of a cross between a 486 and a
   Pentium, and competes very well with the Pentium Overdrive upgrades
   that Intel sells to 486 owners. The 5x86's performance is roughly on a
   par with a Pentium-90, and motherboard/processor combined cost roughly
   $100 (as opposed to about $150 for the Overdrive itself).
   
   Basically, you can factor out the price/performance scale like this:
   
   Processor         Bus               Performance                    Price
   486 (66-120 MHz)  VL bus            low-decent                     $75-$100
   5x86              VL, PCI, or both  low-end Pentium                $100-$120
   Pentium 100       PCI only          Good for multiple OS           $200-$250
   Pentium 133       PCI only          Fast Linux, games'll rock      $300-$350
   Pentium 166       PCI only          Wow, that's fast!              $475-$550
   Pentium 200       PCI only          Ahead ludicrous speed, cap'n!  $700+
   Pentium Pro       PCI only          If you need it, buy it built...
   
   When you buy the motherboard, there is another factor that has
   recently become worth considering: what form factor do you use? Newer
   Pentium and Pentium Pro-based motherboards are often available in the
   ATX form factor. The board is easier to service, and the cases are
   easier to take apart. ATX boards and cases are a little tougher to
   find, but there is no real cost difference between ATX and the
   traditional Baby-AT form factor, so you may wish to consider the ATX
   alternative at purchase time.
   
   If you buy the motherboard and case from the same vendor, often they
   will mount it in the case for you. If you do it yourself, be careful to
   make sure that the power supply is properly connected, both to the
   motherboard and to the power switch. Power supplies have two keyed
   connectors attaching them to the motherboard. It is difficuly, but not
   impossible, to wire them wrong (I have a friend who did), so make sure
   the black wires on the power leads are touching on the inside: ADD
   DIAGRAM HERE
   
   The motherboard also should be connected to the case with at least two
   spacers that screw down in addition to all the plastic posts that will
    be in the case kit. This ensures that cards fit properly, and keeps
   the board stable.
   
   Besides the processor/motherboard combination, there are other
   performance issues, of course. RAM is finally cheap enough that you
   should buy at least 16 MB worth (about $100 at current street prices).
   Linux will run OK in 8 MB (and even 4 MB is OK for text-based work),
   but why scrimp there when it costs so little to do it right? If you
   buy from a show vendor, make sure they test it in front of you. Any
   reputable vendor has their own RAM tester. Generally, there is no real
   price difference between conventional fast-page RAM and the slightly
   faster EDO variety, but make sure your motherboard uses the type of
   RAM you're buying. Most better motherboards will happily auto-detect
   the type of RAM you use and configure themselves correctly. But you
   can't mix, so make sure you only install one type, whatever that is.
   Newer Pentium chipsets support the newer SDRAM, which promises even
   more speed. I have not yet tried it in a system, so I cannot tell you
   whether or not that is so. Buy 32 MB if you can afford it--you won't
   regret it.
   
   There's also the IDE-SCSI decision. IDE interfaces are built into most
   modern motherboards, so it costs nothing extra. And IDE hard drives
   are a little cheaper, and IDE CD-ROMs are fast, cheap (under $80 for a
   4x drive), and easy to set up. But the controllers only support four
   devices total (two ports, with two devices each), and each IDE channel
   is only as fast as the slowest device on it (meaning you really can
   only have two hard drives, and the CD-ROM has to go on channel 2). And
   modern multitasking OSs like Linux can't get their best performance
   out of IDE. But it's cheap and easy. SCSI is higher performance, and
   has none of IDE's restrictions (up to 7 devices per controller, no
   transfer rate limit beyond the adapter's), but the controller will set
   you back $70 (for a basic Adaptec 1522) to $200 (a PCI controller)
   plus. The drives don't cost much more, and you can only get the
   highest performance drives in SCSI versions. SCSI CD-ROM drives are a
   little harder to find, but the basic 4x drive will only cost you about
   $125. And SCSI tape drives (you were planning to back up your data,
    weren't you?) are much easier to install and operate than their
   non-SCSI counterparts (faster, too). I'd say the decision is one to be
   made after you've priced the rest of the system out. If you can afford
   it, SCSI will make for a better system in the long run.
   
   The video card decision is also an important one. The critical part of
   this decision is picking a card that uses a chipset (the actual brains
    of the card) which is supported by XFree86, the standard X Windows
    implementation shipped with most Linux distributions. A few
    distributions (Caldera, Red Hat) ship with commercial X
    implementations that have a little more flexibility in video
    support. I find S3-based video cards to be the
   most universally supported--the S3 driver in XFree86 is very solid and
   works even with most of the generic, no-name video cards on the
   market. The S3 cards generally have a large (about 1.5" x 1.5") chip
   with the S3 brand name prominently displayed on it. Diamond and Number
   Nine make extensive use of S3 chips in their video card lines, to name
   a couple of brands. Among other SVGA chipset makers, Cirrus and
   Trident are also well-supported. Only the latest X versions include
   support for the popular Trident 9440 chips, so be careful before
   buying a video card with that chipset. XFree86 includes a very
   complete readme with the status of support for most video
   cards/chipsets, so consult it if you have any questions.
   
   Your sound card (if you want one) is a relatively simple decision. The
    SoundBlaster 16 is the de facto standard for sound cards, and is
    supported by virtually all software. Some motherboards even include the
   SB16 chipset on them. If at all possible, buy your card in a jumpered
   version, rather than the SoundBlaster 16 Plug-and-Play that is popular
   today. Most vendors have jumpered versions available. There are also
   SB16-compatible cards out on the market, and they are definitely worth
   considering. Expect to pay around $80 for your sound card.
   
   Possibly the choice that'll get me in the most trouble is the Ethernet
   card selection (if your system is going on a LAN). A Novell NE2000
   clone is the cheapest choice you can make (the clones cost around
   $20), but most clones will hang the machine at boot time if the kernel
   is probing for other Ethernet card brands when the NE2000 is set to
   its default address of 300h. The solution is to either boot from a
   kernel with no network support (then recompile the kernel without the
   unneeded drivers), or to move the address of the NE2000 to another
   location. I've used 320h without problems to avoid this hang.
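    Once the card is moved, you can also tell the kernel exactly where
    it lives with an ether= boot argument, so the risky probing is
    skipped entirely. A sketch only: the IRQ value (10) here is a
    placeholder for illustration, not a recommendation.

```shell
# Boot argument syntax: ether=IRQ,BASE_ADDRESS,NAME
# Typed at the LILO "boot:" prompt, or placed on an append= line in
# /etc/lilo.conf. Substitute your card's actual IRQ for the 10 below.
linux ether=10,0x320,eth0
```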
   
   But the best way around the problem is to use a major-brand card. I
   currently rely on 3Com's EtherLink III series cards (the 3C5x9), which
   are universally supported, and software-configurable (from DOS, so
   keep a DOS floppy around). It's available in ISA or PCI versions, ISA
   being cheaper. This card costs around $90 from most vendors. I know
   that's more expensive than some motherboards, but it's a worthwhile
   investment.
   
   If you are using dial-up access to the Internet instead (or just want
   a modem anyways), you can approach buying a modem with two
   alternatives. If your motherboard has built-in serial ports (almost
   all the non-VL bus boards do), then you could buy an external modem. I
   prefer them to internal modems, since the possibility of setting an
    address incorrectly is then gone, and you can always tell if it is
   working from the status lights on the front of the modem. Internal
   modems generally cost a little less, but there's a greater risk of
   accidentally creating an address or interrupt conflict in the process
   of installing it. An additional problem is that many modems sold now
   are plug-and-play compatible. Unless you're already running Windows
   95, P&P is a scourge on the Intel computing world (Macs do P&P in a
   fashion that actually works). Because most Intel-based OSs need to
   know the interrupt and memory location of peripherals at boot time,
    any inadvertent change caused by a P&P device can adversely impact the
   boot process. Linux can find many devices regardless (SCSI
   controllers, most Ethernet cards), but serial ports and sound devices
   are hard-mapped to interrupts at the OS level. So try to make sure
   that any such devices can be operated in a non-P&P mode, or in the
   case of modems, buy an external one if possible to avoid the situation
   entirely.
   
   Remember, there are really two bottom-line reasons to build your Linux
   box yourself. One is to save money (and I hope I've shown you how to
   do that), but the real main reason is to have fun. Computing is a fun
   hobby, and building the system yourself can be a fun throwback to the
   early days when a computer was bought as a bag of parts and a
   schematic. I've been building machines like this for several years,
    and never had trouble--not to mention that I've gotten away with
    bringing in a lot of stuff under my wife's nose by buying it a part
    at a time! (Oops, the secret's out.) So, for your next computer,
    give homebrewing a whirl. It's easier than you think, and what better
   companion for a free, home-brewed OS than a cheap, home-brewed PC?
   
     _________________________________________________________________
                                      
                       Copyright  1997, Josh Turiel
           Published in Issue 18 of the Linux Gazette, June 1997
                                      
     _________________________________________________________________
                                      
   [ TABLE OF CONTENTS ] [ FRONT PAGE ] Back Next 
   
     _________________________________________________________________
                                      
    "Linux Gazette...making Linux just a little more fun!"
    
     _________________________________________________________________
                                      
                    Cleaning Up Your /tmp...The Safe Way
                                      
                       By Guy Geens, ggeens@iname.com
                                      
     _________________________________________________________________
                                      
  Introduction
  
    Removing temporary files left over in your /tmp directory is not as
    easy as it looks. At least, not on a multi-user system that's
    connected to a network.
   
   If you do it the wrong way, you can leave your system open to attacks
   that could compromise your system's integrity.
   
  What's eating my disk space?
  
   So, you have your Linux box set up. Finally, you have installed
   everything you want, and you can have some fun! But wait. Free disk
   space is slowly going down.
   
    So, you start looking at where this disk space is going. Basically,
    you will find the following disk hogs:
   
      * Formatted man pages in /var/catman;
     * The /tmp and /var/tmp hierarchies.
       
   Of course, there are others, but in this article, I'll concentrate on
   these three, because you normally don't lose data when you erase the
   contents. At the most, you will have to wait while the files are
   regenerated.
   
  The quick and dirty solution
  
   Digging through a few man pages, you come up with something like this:
   
    find /var/catman -type f -atime +7 -print | xargs -- rm -f --
   
   This will remove all formatted man pages that have not been read for 7
   days. The find command makes a list of these, and sends them to the
   xargs. xargs puts these files on the command line, and calls rm -f to
   delete them. The double dashes are there so that any files starting
   with a minus will not be misinterpreted as options.
   
   (Actually, in this case, find prints out full path names, which are
    guaranteed to start with a /. But it's better to be safe than sorry.)
   
   This will work fine, and you can place this in your crontab file or
   one of your start-up scripts.
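    For example, a root crontab entry along these lines would run the
    cleanup nightly. The 4:15 AM schedule is my own illustration, not
    the article's.

```shell
# crontab fields: minute hour day-of-month month day-of-week command
15 4 * * * find /var/catman -type f -atime +7 -print | xargs -- rm -f --
```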
   
   Note that I used /var/catman in the previous example. You might be
   thinking ``So, why not use it for /tmp?'' There is a good reason for
   this. Let me start by elaborating on the difference between
    /var/catman and /tmp directories. (The situation for /var/tmp is the
    same as for /tmp, so you can replace all instances of /tmp with
    /var/tmp in the following text.)
   
    Why /var/catman is easy
    
   If you look at the files in /var/catman, you will notice that all the
   files are owned by the same user (normally man). This user is also the
   only one who has write permissions on the directories. That is because
    the only program that ever writes to this directory tree is man.
   Let's look at /usr/bin/man:
   
-rwsr-sr-x 1 man man 29716 Apr 8 22:14 /usr/bin/man*

   (Notice the two letters `s' in the first column.)
   
   The program is running setuid man, i.e., it takes the identity and
   privileges of this `user'. (It also takes the group privileges, but
   that is not really important in our discussion.) man is not a real
   user: nobody will ever log in with this identity. Therefore, man (the
   program) can write to directories a normal user cannot write to.
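    You can reproduce that `s' flag on a scratch file if you like; this
    is only a permissions demonstration, nothing man itself needs.

```shell
# chmod u+s sets the setuid bit; ls -l then shows 's' where the owner's
# execute 'x' would normally be, just as in the /usr/bin/man listing.
f=$(mktemp)
chmod 0755 "$f"   # plain -rwxr-xr-x
chmod u+s "$f"    # now -rwsr-xr-x
ls -l "$f"
rm -f "$f"
```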
   
   Because you know all files in the directory tree are generated by one
   program, it is easy to maintain.
   
    And now /tmp
    
   In /tmp, we have a totally different situation. First of all, the file
   permissions:
   
drwxrwxrwt 10 root root 3072 May 18 21:09 /tmp/

   We can see that everyone can write to this directory: everyone can
   create, rename or delete files and directories here.
   
   There is one limitation: the `sticky bit' is switched on. (Notice the
   t at the end of the first column.) This means a user can only delete
    or rename files owned by himself. This prevents users pestering each
    other by removing one another's temporary files.
   
   If you were to use the simple script above, there are security risks
   involved. Let me repeat the simple one-line script from above:
   
find /tmp -type f -atime +7 -print | xargs -- rm -f --

   Suppose there is a file /tmp/dir/file, and it is older than 7 days.
   
   By the time find passes this filename to xargs, the directory might
   have been renamed to something else, and there might even be another
   directory /tmp/dir.
   
   (And then I didn't even mention the possibility of embedded newlines.
   But that can be easily fixed by using -print0 instead of -print.)
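    With GNU find and xargs, the newline-safe form of the one-liner
    looks like this. Note it fixes only the filename-parsing issue; the
    rename race described below still applies.

```shell
# -print0 terminates each filename with a NUL byte instead of a newline,
# and xargs -0 splits its input on NUL, so a filename containing spaces
# or newlines can never be misparsed into two arguments.
find /var/catman -type f -atime +7 -print0 | xargs -0 -- rm -f --
```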
   
    All this could lead to a wrong file being deleted, either
   intentionally or by accident. By clever use of symbolic links, an
   attacker can exploit this weakness to delete some important system
   files.
   
   For an in-depth discussion of the problem, see the Bugtraq mailing
   list archives. (Thread ``[linux-security] Things NOT to put in root's
   crontab'').
   
    This problem is inherently linked with find's algorithm: there can
    be a long time between the moment when find generates a filename
    internally and when it is passed on to the next program. This is
    because find recurses into subdirectories before it tests the files
    in a particular directory.
   
    So how do we get around this?
    
   A first idea might be:
   
   find ... -exec rm {} \;
   
   but unfortunately, this suffers from the same problem, as the `exec'
   clause passes on the full pathname.
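    Modern GNU versions of find (newer than what the author would have
    had) do offer another way out: -execdir runs the command from inside
    the file's own directory and passes only ./name, so no full pathname
    is ever handed over. This is an alternative technique, not the
    author's script.

```shell
# -execdir chdirs into each file's directory before running rm, and
# substitutes './filename' for {}. A renamed parent directory no longer
# matters, because the deletion never uses the full path.
find /tmp -type f -atime +3 -execdir rm -f -- {} +
```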
   
    In order to solve the problem, I wrote this perl script, which I
    named cleantmp.
   
   I will explain how it works, and why it is safer than the
   aforementioned scripts.
   
    First, I indicate that I'm using the File::Find module. After this
    statement, I can call the &find subroutine.
   
use File::Find;

    Then I chroot to /tmp. This changes the root directory for the
    script to /tmp, and makes sure the script can't access any files
    outside of this hierarchy.
   
   Perl only allows a chroot when the user is root. I'm checking for this
   case, to facilitate testing.
   
# Security measure: chroot to /tmp

$tmpdir = '/tmp/';

chdir ($tmpdir) || die "$tmpdir not accessible: $!";

if (chroot($tmpdir)) { # chroot() fails when not run by root

($prefix = $tmpdir) =~ s,/+$,,;

$root = '/';

$test = 0;

} else {

# Not run by root - test only

$prefix = '';

$root = $tmpdir;

$test = 1;

}

   Then we come to these lines:
   
&find(\&do_files, $root);

&find(\&do_dirs, $root);

    Here, I let the find subroutine recurse through all the subdirectories of
   /tmp. The functions do_files and do_dirs are called for each file
   found. There are two passes over the directory tree: one for files,
   and one for directories.
   
   Now we have the function do_files.
   
sub do_files {

(($dev,$ino,$mode,$nlink,$uid,$gid) = lstat($_)) &&

(-f _ || -l _ ) &&

(int(-A _) > 3) &&

! /^\.X.*lock$/ &&

&removefile ($_) && push @list, $File::Find::name;
}

    Basically, this is the output of the find2perl program, with a few
    changes.
   
   This routine is called with $_ set to the filename under inspection,
   and the current directory is the one in which it resides. Now let's
   see what it does. (In case you don't know perl: the && operator
   short-circuits, just like in C.)
   
    1. The first line gets the file's parameters from the kernel;
    2. If that succeeds, we check if it is a regular file or a symbolic
       link (as opposed to a directory or a special file);
    3. Then, we test if the file is old enough to be deleted (older than
       3 days);
     4. The fourth line makes sure X's lockfiles (of the form
        /tmp/.X0-lock) are not removed;
    5. The last line will remove the file, and keep a listing of all
       deleted files.
       
   The removefile subroutine merely tests if the $test flag is set, and
   if not, deletes the file.
   
   The do_dirs subroutine is very similar to this one, and I won't go
   into the details.
   
    A few remarks
    
   I use the access time to determine the file's age. The reason for this
   is simple. I sometimes unpack archives into my /tmp directory. When it
   creates files, tar gives them the date they had in the archive as the
    modification time. In one of my earlier scripts, I tested on the
    mtime. But then one day I was looking through an unpacked archive at
    the same time cron started to clean up. (Hey?? Where did my files
    go?)
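    The mtime/atime distinction is easy to demonstrate: GNU touch can
    back-date the modification time only, much as tar does when it
    extracts files.

```shell
# Give a fresh file an mtime 30 days in the past while leaving its
# atime at "now" -- the situation right after unpacking an old archive.
f=$(mktemp)
touch -m -d '30 days ago' "$f"
stat -c 'mtime: %y' "$f"   # a month old
stat -c 'atime: %x' "$f"   # just now
# An mtime-based cleanup would delete this file immediately; an
# atime-based one leaves it alone until it goes unread long enough.
rm -f "$f"
```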
   
   As I said before, the script checks for some special files (and also
   directories in do_dirs). This is because they are important for the
   system. If you have a separate /tmp partition, and have quota
   installed on it, you should also check for quota's support files -
   quota.user and quota.group.
   
   The script also generates a list of all deleted files and directories.
    If you don't want this output, redirect it to /dev/null.
   
  Why this is safe
  
   The main difference with the find constructions I have shown before is
   this: the file to be deleted is not referenced by its full pathname.
    If the directory is renamed while the script is scanning it, this
    doesn't have any effect: the script won't notice, and will still
    delete the right files.
   
    I have been thinking about weaknesses, and I couldn't find any. Now
   I'm giving this to you for inspection. I'm convinced that there are no
   hidden security risks, but if you do find one, let me know.
   
     _________________________________________________________________
                                      
                        Copyright  1997, Guy Geens
           Published in Issue 18 of the Linux Gazette, June 1997
                                      
     _________________________________________________________________
                                      
   [ TABLE OF CONTENTS ] [ FRONT PAGE ] Back Next 
   
     _________________________________________________________________
                                      
    "Linux Gazette...making Linux just a little more fun!"
    
     _________________________________________________________________
                                      
                           Clueless at the Prompt
                                       
                       By Mike List, troll@net-link.net
                                      
    Welcome to installment 5 of Clueless at the Prompt: a new column for new
    users.
    
     _________________________________________________________________
                                      
    Getting Serious
    
   If you've been experimenting with linux, reading all the docs you can
   get your hands on, downloading software to try, and generally cleaning
    up after the inevitable ill-advised rm as root, you are probably
   starting to get enough confidence in linux to use it to do more than
   browse the internet. After all, why use Gates when you can jump the
   fences? This month I'm going to discuss some strategies for damage
   control, and how you can safely upgrade without losing valuable files
   and configurations, as well as some more general scouting around the
   filesystem.
     _________________________________________________________________
                                      
    Partitions as Safety Devices
    
    If you have your entire linux installation on a single partition,
    you could be putting your files and accumulated data in jeopardy as
    well as making the business of upgrading more difficult.
   
   I understand that some distributions, notably Debian, are capable of
   upgrading any part of the system's component software without a full
   install, but I'm running Slackware, and it's generally recommended
   that when certain key system components are upgraded, a full reinstall
   is the safest way to avoid conflicts between old and new parts. What
   to do when the time comes can be much simpler if you have installed at
    least your /home directory on a separate partition.
   
   When you do a fresh install you are asked to describe mount points for
   your partitions. You are also asked if you want to format those
   partitions. If your /home directory doesn't contain much in the way of
   system files you can opt to skip formatting it, thereby reducing the
   chance that you'll have to use your backup to recover lost files in
    those directories. No, I'm not suggesting that you don't have to
    back up your /home or other personal files, since there is no
    reliable undelete for linux that I'm aware of at this time. However,
    if you are just experimenting with linux, using a separate OS to do
    your important work, and it's located on another disk, you may not
    feel too compelled to back up much in the way of linux files. Sooner
    or later though, if you are committed (or ought to be :) ) enough to
    linux to drop the other system, you WILL want to rethink that
    omission.
     _________________________________________________________________
                                      
    Formatting Floppies
    
   When you format a floppy disk in MSDOS you do several operations in
    one fell swoop. You erase files, line up the tracks, sectors, etc.,
    and install an MSDOS-compatible filesystem. Another thing to recognize is
   that MS mounts the floppy drive as a device, while in linux the device
   is mounted as a part of the filesystem, to a specific directory.
   
   There is a suite of utilities called mtools that can be used to create
   DOS formatted floppies, as well as some other MS specific operations,
   but I haven't had a lot of fun with it. I use the standard utilities
    instead. Here is how I format a floppy disk:

     fdformat /dev/fd0xxx

   where xxx is the full device name. My floppy drive is /dev/fd0u1440
    but your mileage may vary. Try ls'ing your /dev directory to see. I
    installed from floppies, so I'm not really sure about CD-ROM
    installation, but I took note of the drive specified when installing
    the system. When the drive finishes formatting, you can type:

     mkfs -t msdos /dev/fd0xxxx

   once again if necessary adding any specifiers. Your disk should be
   formatted.
   
     _________________________________________________________________
                                      
    Writing to your Floppy Disk
    
    You are probably sitting there with a newly msdos-formatted floppy
    disk, wondering how to write to it. If you use mtools, you are on
    your own, but don't feel bad: you will save some steps (i.e.,
    mounting and umounting the floppy drive before and after writing).
    It seems that I always fail to remember some option when I try to
    use mtools, though, so I don't use them. I type:

     mount -t msdos /dev/fd0xxxx /mnt

   you can specify another mount point besides /mnt if you would like,
   perhaps a different mount point for each filesystem type that you
   might want to use, ext2, or minix for example, but if you or people
   that you work with use MS the msdos format might be the best, at least
   for now.
   
   You can put an entry in your /etc/fstab that specifies the mount point
   for your floppy drive, with a line that looks something like:

     /dev/fd0         /mnt      msdos       rw,user,noauto  0   0

   This particular line will keep the floppy drive from mounting on
   bootup (noauto), and allow users to mount the drive. You should take
   the time to alert your users that they MUST mount and umount /dev/fd0
   each time they change a disk, otherwise they will not get a correct ls
   when they try to read from the mount point. Assuming that this line is
   added to the /etc/fstab file the correct command for mounting the
   drive is:

     mount /dev/fd0

    which will automatically choose /mnt as the mount point. To read from
   the drive, the present working directory must be changed by:

     cd /mnt

    after which the contents of the disk can be read or written to.
    Linux is capable of reading several filesystem types, so msdos is a
    pretty good first choice, since you can share files with DOS users.
   
    Anyway, assuming you didn't get any error messages, you are ready to
    copy a file to the disk using the cp command:

     cp anyfile.type /mnt

    Assuming that /mnt is the mount point that you specified in the mount
    command, you should have copied the file to your floppy disk. Try:

     ls /mnt

    you should see the file you just cp'ed. If not, you should retry the
   mount command, but if you didn't get any error messages when you tried
    to mount the drive, you should be OK. To verify that you did write
    to the floppy instead of the /mnt directory (there is a difference:
    if no drive is mounted, it's just a directory), you can:

     umount /dev/fd0xxxx

   and then try:
     ls /mnt

   upon which you should get a shell prompt. If you get the file name
   that you tried to copy to floppy, merely rm it and try the whole
   routine again. If you find this confusing, read up on mtools by:

    info mtools

    You may like what you see; give them a try. As I said, I haven't had
    much luck with them, but basically the mformat command should do the
    above-mentioned format tasks in one pass. Mcopy should likewise copy
   the named file to the floppy without the need to separately mount the
   drive.
     _________________________________________________________________
                                      
    Other Filesystems
    
    There are several filesystems, as mentioned above, that can be read
    by linux: minix, ext2, ext, xiafs, vfat, msdos (I'm still a little
    bit foggy on the difference between the last two). Still others can
    be read with the use of applications, amiga for instance. That's why
    it makes sense to split up what is a single-step process in DOS.
     _________________________________________________________________
                                      
    Humbly acknowledging...
    
    I got a lot of mail regarding the locate command, which I'm woefully
    guilty of spreading misinformation about. The real poop is that
    locate reads a database built by another command, updatedb, which
    can be run at any time. By default it is run in the wee hours of the
    morning from the system crontab, which is where I got the idea to
    leave the computer on overnight.
     _________________________________________________________________
                                      
    Next Time- Let me know what you would like to see in here and I'll
    try to oblige; just e-mail me (troll@net-link.net) and ask.
    Otherwise I'll just write about what gave me trouble and how I got
    past it.
   
   TTYL, Mike List
   
     _________________________________________________________________
                                      
                        Copyright  1997, Mike List
           Published in Issue 18 of the Linux Gazette, June 1997
                                      
     _________________________________________________________________
                                      
   [ TABLE OF CONTENTS ] [ FRONT PAGE ] Back Next 
   
     _________________________________________________________________
                                      
    "Linux Gazette...making Linux just a little more fun!"
    
     _________________________________________________________________
                                      
         DiskHog: Using Perl and the WWW to Track System Disk Usage
                                      
                    By Ivan Griffin, Ivan.Griffin@ul.ie
                                      
     _________________________________________________________________
                                      
   An irksome job that most system administrators have to perform at some
   stage or other is the implementation of a disk quota policy. Being a
   maintainer of quite a few machines (mostly Linux and Solaris, but also
   including AIX) without system enforced quotas, I needed an automatic
   way of tracking disk quotas. To this end, I created a Perl script to
    regularly check users' disk usage, and compile a list of the largest
   hoggers of disk space. Hopefully, in this way, I can politely
   intimidate people into reducing the size of their home directories
   when they get ridiculously large.
   
   The du command summarises disk usage for a given directory hierarchy.
    When run in each user's home directory, it can report how much disk
   space the user is occupying. At first, I had written a shell script to
   run du on a number of user directories, with an awk back-end to
   provide nice formatting of the output. This proved difficult to
    maintain if new users were added to the system. Users' home
    directories were unfortunately located in different places on each
    operating system.
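    The per-account measurement itself is just du run in a loop; a
    minimal sketch of that stage, with /home/* standing in for the real
    locations (which, as noted, differed per system):

```shell
# Print a one-line kilobyte total for each directory under /home --
# roughly what the article's script spawns du to collect per user.
for d in /home/*/; do
    du -sk "$d"
done
```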
   
   Perl provided a convenient method of rewriting the shell / awk scripts
   into a single executable, which not only provided more power and
   flexibility but also ran faster! Perl's integration of standard Unix
   system calls and C library functions (such as getpwnam() and
    getgrnam()) makes it perfectly suited to tasks like this. Rather than
   provide a tutorial on the Perl language, in this article I will
   describe how I used Perl as a solution to my particular need. The
   complete source code to the Perl script is shown in listing 1.
   
   The first thing I did was to make a list of the locations in which
    users' home directories resided, and isolate this into a Perl array.
   For each sub-directory in the directories listed in this array, a disk
   usage summary was required. This was implemented by using the Perl
   system command to spawn off a process running du.
   
   The du output was redirected to a temporary file. The temporary file
   was named using the common $$ syntax, which is replaced at run time by
   the PID of the executing process. This guaranteed that multiple
   invocations of my disk usage script (while unlikely) would not clobber
    each other's temporary working data.
   
   All the sub-directories were named after the user who owned the
   account. This assumption made life a bit easier in writing the Perl
   script, because I could skip users such as root, bin, etc.
   
    I now had, in my temporary file, a listing of disk usage and
    username, one pair per line of the file. I wanted to split these up
    into an associative hash of users and disk usage, with users as the
   index key. I also wanted to keep a running total of the entire disk
   usage, and also the number of users. Once Perl had parsed all this
   information from the temporary file, I could delete it.
   
   I decided the Perl script would dump its output as an HTML formatted
   page. This allowed me great flexibility in presentation, and also
   permitted the information to be available over the local intranet -
   quite useful when dealing with multiple heterogeneous environments.
   
   Next I had to work out what information I needed to present. Obviously
   the date when the script had run was important, and a sorted table
   listing disk usage from largest to smallest was essential. Printing
   the GCOS information field from the password file allowed me to
   view both real names and usernames. I also decided it might be nice
   to provide a hypertext link to the user's homepage, if one existed.
   So extracting their official home directory from the password file,
   and appending the standard user web directory name (typically
   public_html or WWW), allowed this.
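
   The password-file lookup described above can be sketched with Perl's
   built-in getpwnam; the user "root" and the index.html check below are
   purely illustrative:

```perl
#!/usr/bin/perl
use strict;
use warnings;

# getpwnam in list context returns the passwd fields; index 6 is the
# GCOS (real name) field and index 7 is the home directory.
# "root" is used here only because it exists on virtually every system.
my ($login, $gcos, $home) = (getpwnam("root"))[0, 6, 7];

print "login: $login\n";
print "gcos:  $gcos\n";
print "home:  $home\n";

# Check the usual homepage locations, as the article suggests:
foreach my $dir ("public_html", "WWW") {
    print "homepage found under $dir\n" if -r "$home/$dir/index.html";
}
```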
   
   Sorting in Perl usually involves the use of the spaceship operator
   (<=>). The sort function sorts a list and returns the sorted list
   value. It comes in many forms, but the form used in the code is:
   

sort sub_name list

   where sub_name is a Perl subroutine. sub_name is called during
   element comparisons, and it must return an integer less than, equal
   to, or greater than zero, depending on the desired order of the list
   elements. sub_name may also be replaced with an inline block of Perl
   code.
   
   Typically sorting numerically ascending takes the form:
   

@NewList = sort { $a <=> $b } @List;

   whereas sorting numerically descending takes the form:
   

@NewList = sort { $b <=> $a } @List;
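
   The same descending comparison drives the script's user ranking: the
   hash keys are sorted by their values, with $b before $a flipping the
   order. A small self-contained sketch (the usage figures are
   invented):

```perl
#!/usr/bin/perl
use strict;
use warnings;

# Sort the *keys* of a hash by their *values*, numerically descending.
my %Usage = (alice => 120_000, bob => 45_000, carol => 310_000);

foreach my $user (sort { $Usage{$b} <=> $Usage{$a} } keys %Usage) {
    print "$user\t$Usage{$user}\n";   # carol, alice, bob in that order
}
```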

   I decided to make the page a bit flashier by adding a few of those
   omnipresent coloured ball GIFs. Green indicates that the user is
   within allowed limits. Orange indicates that the user is in a danger
   buffer zone - no man's land, from which they are dangerously close to
   the red zone. The red ball indicates a user is over quota, and,
   depending on the severity, multiple red balls may be awarded to
   really greedy, anti-social users.
   
   Finally, I plagued all the web search engines until I found a
   suitable GIF image of a piglet, which I included at the top of the
   page.
   
   The only job left was to schedule the script to run nightly as a
   cron job. It needed to be run as root in order to accurately assess
   the disk usage of each user - otherwise directory permissions could
   give false results. To edit root's cron entries (called a crontab),
   first ensure you have the environment variable VISUAL (or EDITOR)
   set to your favourite editor. Then type
   

crontab -e

   Add the line from listing 2 to any existing crontab entries. The
   format of crontab entries is straightforward. The first five fields
   are integers, specifying the minute (0-59), hour (0-23), day of the
   month (1-31), month of the year (1-12) and day of the week (0-6,
   0=Sunday). The use of an asterisk as a wild-card to match all values
   is permitted, as is specifying a list of elements separated by
   commas, or a range specified by start and end (separated by a
   minus). The sixth field is the program to be scheduled.
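
   A few examples may make the five fields concrete. The first line is
   the actual entry from listing 2; the other two commands and their
   paths are hypothetical:

```
# min hour day-of-month month day-of-week  command
0     0    *            *     *     /home/sysadm/ivan/public_html/diskHog.pl  # midnight, daily
30    6    *            *     1-5   /usr/local/bin/backup.sh      # 06:30, Monday to Friday
15    2    1,15         *     *     /usr/local/bin/rotate-logs    # 02:15 on the 1st and 15th
```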
   
   A script of this size (with multiple invocations of du) takes some
   time to process. As a result, it is perfectly suited for scheduling
   under cron - I have it set to run once a day on most machines
   (generally during the night, when user activity is low). I believe
   this script shows the potential of using Perl, Cron and the WWW to
   report system statistics. Another variant of it I have coded performs
   an analysis of web server log files. This script has served me well
   for many months, and I am confident it will serve other sysadmins too.
   
     _________________________________________________________________
                                      

#!/usr/local/bin/perl -Tw

# $Id: issue18.txt,v 1.1.1.1 1997/09/14 15:01:47 schwarz Exp $
#
# Listing 1:
# SCRIPT:       diskHog
# AUTHOR:       Ivan Griffin (ivan.griffin@ul.ie)
# DATE:         14 April 1996
#
# REVISION HISTORY:
#   06 Mar 1996 Original version (written using Bourne shell and Awk)
#   14 Apr 1996 Perl rewrite
#   01 Aug 1996 Found piggie image on the web, added second red ball
#   02 Aug 1996 Added third red ball
#   20 Feb 1997 Moved piggie image :-)

#
# outlaw barewords and set up the paranoid stuff
#
use strict 'subs';
use English;

$ENV{'PATH'} = '/bin:/usr/bin:/usr/ucb'; # ucb for Solaris dudes
$ENV{'IFS'} = '';

#
# some initial values and script defines
#
$NumUsers = 0;
$Total = 0;
$Position = 0;

$RED_ZONE3 = 300000;
$RED_ZONE2 = 200000;
$RED_ZONE = 100000;
$ORANGE_ZONE = 50000;

$CRITICAL = 2500000;
$DANGER   = 2200000;

$TmpFile = "/var/tmp/foo$$";
$HtmlFile = '>/home/sysadm/ivan/public_html/diskHog.html';
$PerlWebHome = "diskHog.pl";

$HtmlDir = "WWW";
$HtmlIndexFile = "$HtmlDir/index.html";
$Login = " ";
$HomeDir=" ";
$Gcos = "A user";

@AccountDirs = ( "/home/users", "/home/sysadm" );
@KeyList = ();
@TmpList = ();

chop ($Machine = `/bin/hostname`);
# chop ($Machine = `/usr/ucb/hostname`); # for Solaris


#
# Explicit sort subroutine
#
sub by_disk_usage
{
    $Foo{$b} <=> $Foo{$a};  # sort integers in numerically descending order
}


#
# get disk usage for each user and total usage
#
sub get_disk_usage
{
    foreach $Directory (@AccountDirs)
    {
        chdir $Directory or die "Could not cd to $Directory\n";
        # system "du -k -s * >> $TmpFile"; # for Solaris
        system "du -s * >> $TmpFile";
    }

    open(FILEIN, "<$TmpFile") or die "Could not open $TmpFile\n";

    while (<FILEIN>)
    {
        chop;
        ($DiskUsage, $Key) = split(' ', $_);

        if (defined($Foo{$Key}))
        {
            $Foo{$Key} += $DiskUsage;
        }
        else
        {
            $Foo{$Key} = $DiskUsage;

            push @KeyList, $Key;
        };

        $NumUsers ++;
        $Total += $DiskUsage;
    };

    close(FILEIN);
    unlink $TmpFile;
}


#
# for each user with a public_html directory, ensure that it is
# executable (and a directory) and that the index.html file is readable
#
sub user_and_homepage
{
    $User = $_[0];

    ($Login, $_, $_, $_, $_, $_, $Gcos, $HomeDir, $_) = getpwnam($User)
        or return "$User";

    if ( -r "$HomeDir/$HtmlIndexFile" )
    {
        return "$Gcos <a href=\"/~$Login\">($User)</a>";
    }
    else
    {
        return "$Gcos ($User)";
    };
}

#
# generate HTML code for the disk usage file
#
sub html_preamble
{
    $CurrentDate = localtime;

    open(HTMLOUT, $HtmlFile) or die "Could not open $HtmlFile\n";
    printf HTMLOUT <<"EOF";
<!DOCTYPE HTML PUBLIC "-//IETF//DTD HTML 3.0//EN">

<!--
  -- Automatically generated HTML
  -- from $PROGRAM_NAME script
  --
  -- Last run: $CurrentDate
  -->

<html>
<head>
<title>
Disk Hog Top $NumUsers on $Machine
</title>
</head>

<body bgcolor="#e0e0e0">
<h1 align=center>Disk Hog Top $NumUsers on $Machine</h1>

<div align=center>
<table>
<tr>
    <td valign=middle><img src="images/piggie.gif" alt="[PIGGIE!]"></td>
    <td valign=middle><em>This is a <a href=$PerlWebHome>Perl</a>
        script which runs<br>
        automatically every night</em><br></td>
</tr>
</table>

<p>
<b>Last run started</b>: $StartDate<br>
<b>Last run finished</b>: $CurrentDate
</p>

<p>
<table border=2>
<tr>
<th>Status</th>
<td>
EOF

    if ($Total > $CRITICAL)
    {
        print HTMLOUT "CRITICAL!!! - Reduce Disk Usage NOW!";
    }
    elsif (($Total <= $CRITICAL) && ($Total > $DANGER))
    {
        print HTMLOUT "Danger - Delete unnecessary Files";
    }
    else
    {
        print HTMLOUT "Safe";
    }


    printf HTMLOUT <<"EOF";
</td>
</tr>
</table>
</P>

<hr size=4>

<table border=2 width=70%%>
    <tr>
        <th colspan=2>Chart Posn.</th>
        <th>Username</th>
        <th>Disk Usage</th>
    </tr>

EOF
}

#
#
#
sub html_note_time
{
    $StartDate = localtime;
}



#
# for each user, categorize and display their usage statistics
#
sub dump_user_stats
{
    foreach $Key (sort by_disk_usage @KeyList)
    {
        $Position ++;

        print HTMLOUT <<"EOF";
    <tr>\n
        <td align=center>
EOF

        #
        # colour code disk usage
        #
        if ($Foo{$Key} > $RED_ZONE)
        {
            if ($Foo{$Key} > $RED_ZONE3)
            {
                print HTMLOUT "        <img src=images/ball.red.gif>\n";
            }

            if ($Foo{$Key} > $RED_ZONE2)
            {
                print HTMLOUT "        <img src=images/ball.red.gif>\n";
            }

            print HTMLOUT "        <img src=images/ball.red.gif></td>\n";
        }
        elsif (($Foo{$Key} <= $RED_ZONE) && ($Foo{$Key} > $ORANGE_ZONE))
        {
            print HTMLOUT "        <img src=images/ball.orange.gif></td>\n";
        }
        else
        {
            print HTMLOUT "        <img src=images/ball.green.gif></td>\n";
        }

        print HTMLOUT <<"EOF";

        <td align=center>$Position</td>
EOF

        print HTMLOUT "        <td align=center>";
        print HTMLOUT &user_and_homepage($Key);
        print HTMLOUT "</td>\n";

        print HTMLOUT <<"EOF";
        <td align=center>$Foo{$Key} KB</td>
    </tr>

EOF
    };
}

#
# end HTML code
#
sub html_postamble
{
    print HTMLOUT <<"EOF";
    <tr>
        <th></th>
        <th align=left colspan=2>Total:</th>
        <th>$Total</th>
    </tr>
</table>

</div>

<hr size=4>
<a href="/">[$Machine Home Page]</a>

</body>
</html>
EOF


    close HTMLOUT ;

#
# ownership hack
#
    $Uid = getpwnam("ivan");
    $Gid = getgrnam("users");

    chown $Uid, $Gid, $HtmlFile;
}


#
# main()
#

&html_note_time;
&get_disk_usage;
&html_preamble;
&dump_user_stats;
&html_postamble;

# all done!

                    Listing 1. diskHog.pl script source.
                           _____________________
                                      

0 0 * * * /home/sysadm/ivan/public_html/diskHog.pl

                      Listing 2. root's crontab entry.
                           _____________________
                                      
                         Figure 1. diskHog output.
                           _____________________
                                      
     _________________________________________________________________
                                      
                       Copyright © 1997, Ivan Griffin
           Published in Issue 18 of the Linux Gazette, June 1997
                                      
     _________________________________________________________________
                                      
   
     _________________________________________________________________
                                      
    "Linux Gazette...making Linux just a little more fun!"
    
     _________________________________________________________________
                                      
                       dosemu & MIDI: A User's Report
                                      
                    By Dave Phillips, dlphilp@bright.net
                                      
     _________________________________________________________________
                                      
   First, the necessary version info:
     * Linux kernel 2.0.29
     * dosemu 0.66.1
     * Sound Driver 3.5.4
       
   And then there's the hardware:
     * AMD 486/120
     * MediaVision Pro Audio Spectrum 16 (PAS16) soundcard w. MIDI
       interface adapter
     * Music Quest MQX32M MIDI interface
     * two Yamaha TX802 synthesizers
       
   dosemu is an MS-DOS emulator for Linux. The on-line manual describes
   it as
   
     "...a user-level program which uses certain special features of the
     Linux kernel and the 80386 processor to run MS-DOS in what we in
     the biz call a DOS box. The DOS box, a combination of hardware and
     software trickery, has these capabilities:
     * the ability to virtualize all input/output and processor control
       instructions
     * the ability to support the word size and addressing modes of the
       iAPX86 processor family's real mode, while still running within
       the full protected mode environment
     * the ability to trap all DOS and BIOS system calls and emulate such
       calls as are necessary for proper operation and good performance
     * the ability to simulate a hardware environment over which DOS
       programs are accustomed to having control.
     * the ability to provide MS-DOS services through native Linux
       services; for example, dosemu can provide a virtual hard disk
       drive which is actually a Linux directory hierarchy.
       
     The hardware component of the DOS box is the 80386's virtual-8086
     mode, the real mode capability described above. The software
     component is dosemu."
     
   I installed version 0.66.1 because I read that it supported MIDI, and
   I was curious to find whether I would be able to run my favorite DOS
   MIDI sequencer, Sequencer Plus Gold from Voyetra. Installation
   proceeded successfully, and after some initial fumbling (and a lot of
   help from the Linux newsgroups), I was running some DOS programs under
   Linux.
   
   However, the MIDI implementation eluded me. I followed the directions
   given in the dosemu package: they are simple enough, basically setting
   up a link to /dev/sequencer. But since Sequencer Plus requires a
   Voyetra API driver, I ran into trouble: the VAPI drivers wouldn't
   load.
   
   I tried to use the VAPIMV (Voyetra API for Media Vision) drivers, but
   they complained that MVSOUND.SYS wasn't loaded. These drivers are
   specific to the PAS16 soundcard, so I was puzzled that they couldn't
   detect MVSOUND.SYS (which was indeed successfully loaded by
   config.sys). I also tried using the SAPI drivers, Voyetra's API for
   the SoundBlaster: the PAS16 has a SB emulation mode which I had
   enabled in MVSOUND.SYS, but those drivers wouldn't load, again
   complaining that MVSOUND.SYS wasn't installed. VAPIMQX, the driver for
   the MQX32M, refused to recognize any hardware but a true MQX. Checking
   the Linux sound driver status with 'cat /dev/sndstat' reported my MQX
   as installed, but complete support for the sound driver (OSS/Free) has
   yet to be added to dosemu.
   
   Since MVSOUND.SYS was indeed installed (I checked it in dosemu using
   MSD, the Microsoft Diagnostics program), and since the MIDI interface
   on the soundcard was activated, I began to wonder whether that
   interface could be used. I tested the DOS MIDI programming environment
   RAVEL, which is "hardwired" internally to only an MPU-401 MIDI
   interface: to my surprise and satisfaction, the soundcard's MIDI
   interface worked, and I now had a DOS MIDI program working under
   Linux.
   
   Following that line of action, I figured that the Voyetra native MPU
   driver just might load. I tried VAPIMPU: it failed, saying it couldn't
   find the interrupt. I added the command-line flag /IRQ:7 and the
   driver loaded. I now had a Voyetra MIDI interface device driver
   loaded, but would Sequencer Plus Gold run?
   
   Not only does Sequencer Plus run, I am also able to use Voyetra's
   Sideman D/TX patch editor/librarian for my TX802s. And I can run
   RAVEL, adding a wonderful MIDI programming language to my Linux music
   & sound arsenal.
   
   All is not perfect: RAVEL suffers the occasional stuck note, and the
   timing will burp while running Seq+ in xdos, particularly when the
   mouse is moved. The mouse is problematic with Seq+ in xdos anyway,
   sometimes locking cursor movement. Since my configuration for the
   dosemu console mode doesn't support the mouse, that problem doesn't
   arise there. Switching to another console is possible; this is
   especially useful if and when dosemu crashes. Also, programs using VGA
   "high" graphics will crash, but I must admit that I have barely begun
   to tweak the video subsystem for dosemu. It may eventually be possible
   to run Sound Globs, Drummer, and perhaps even M/pc, but for now it
   seems that only the most straightforward DOS MIDI programs will load
   and run without major problems.
   
   And there is a much greater problem: only version 1.26 of the VAPIMPU
   driver appears to work properly. A more recent version (1.51) will not
   load, even with the address and interrupt specified at the
   command-line. However, Rutger Nijlunsing has mentioned that he is
   working on an OSS/Free driver for dosemu which would likely permit
   full use of my MQX interface card. When that arrives I may be able to
   utilize advanced features of Seq+ such as multiport MIDI (for 32 MIDI
   channels) and SMPTE time-code.
   
   [Since writing the above text, I have tweaked /etc/dosemu.conf for
   better performance in both X and console modes. Setting hogthreshold
   to 0 seems to improve playback stability. I have yet to fix the
   problem with the mouse in xdos, but it isn't much of a real
   problem.]
   
   Linux is free, dosemu is free, RAVEL is free. My DOS MIDI software
   can't be run in a DOS box under Win95 with my hardware: it can be done,
   but I'd have to buy another soundcard. Linux will run its DOS
   emulator, with MIDI and sound support, from an X window or from a
   virtual console (I have six to choose from). If I want to run
   Sequencer Plus in DOS itself, I have to either drop out of Win95
   altogether (DOS mode) or not boot into Win95 at all. With Win95 I get
   one or the other; with Linux, I get the best of all possible worlds.
   
     _________________________________________________________________
                                      
                               Dave Phillips
                                      
             Some Interesting Sound & Music Software For Linux
                                      
     _________________________________________________________________
                                      
                       Copyright © 1997, Dave Phillips
           Published in Issue 18 of the Linux Gazette, June 1997
                                      
     _________________________________________________________________
                                      
   
     _________________________________________________________________
                                      
    "Linux Gazette...making Linux just a little more fun!"
    
     _________________________________________________________________
                                      
   Welcome to the Graphics Muse
   © 1997 by mjh
     _________________________________________________________________
                                      
    muse:
    1. v; to become absorbed in thought
    2. n; [ fr. Any of the nine sister goddesses of learning and the arts
       in Greek Mythology ]: a source of inspiration
       
   Welcome to the Graphics Muse! Why a "muse"? Well, except for the
   sisters aspect, the above definitions are pretty much the way I'd
   describe my own interest in computer graphics: it keeps me deep in
   thought and it is a daily source of inspiration.
   
                   [Graphics Mews] [Musings] [Resources]
   This column is dedicated to the use, creation, distribution, and
   discussion of computer graphics tools for Linux systems. This month
   I'll finally get around to the article on HF-Lab, John Beale's
   wonderful tool for creating 3D Heightfields. I've been meaning to do
   this for the past few months. I made sure I made time for it this
   month.
   
   The other article from me this month is a quick update on the 3D
   modellers that are available for Linux. I didn't really do a
   comparative review; it's more of a "this is what's available, and
   this is where to find them". A full comparative review is beyond the
   scope of this column. Perhaps I'll do one for the Linux Journal
   sometime in the future.
   
   I had planned to do a preview of the Gimp 1.0 release, which is
   coming out very soon. However, I'll be doing a full article on the
   Gimp for the November graphics issue of the Linux Journal and decided
   to postpone the introduction I had planned for the Muse. Around the
   time I decided to postpone my preview, Larry Ayers contacted me to
   see if I was still doing my Gimp article for the Muse. He had planned
   on doing one on the latest version but didn't want to clash with my
   article. I told him to feel free to do his, since I wasn't doing one.
   He has graciously offered to place the preview here in the Muse, and
   it appears under the "More Musings..." section.
                                      
                               Graphics Mews
                                      
   Disclaimer: Before I get too far into this I should note that any of
   the news items I post in this section are just that - news. Either I
   happened to run across them via some mailing list I was on, via some
   Usenet newsgroup, or via email from someone. I'm not necessarily
   endorsing these products (some of which may be commercial), I'm just
   letting you know I'd heard about them in the past month.
                                      
                                      
                                  Zgv v2.8
                                      
           Zgv is a graphic file viewer for VGA and SVGA displays which
      supports most popular formats. (It uses svgalib.) It provides a
      graphic-mode file selector to select file(s) to view, and allows
     panning and fit-to-screen methods of viewing, slideshows, scaling,
                                    etc.
                                      
   Nothing massively special about this release, really, but some of the
         new features are useful, and there is an important bugfix.
    New features added
     * PCX support. (But 24-bit PCXs aren't supported.)
     * Much faster generation of JPEG thumbnails, thanks to Costa
       Sapuntzakis.
     * Optionally ditch the logo to get a proper, full-screen selector,
       with `f' or `z', or with `fullsel on' in config file.
     * Thumbnail files can be viewed like other images, and thumbnail
       files are their own thumbnails - this means you can browse
       thumbnail directories even if you don't have the images they
       represent.
     * `-T' option, to echo tagged files on exit.
       
    Bugfixes
     * Thumbnail create/update for read-only media and DOS filesystems
       fixed. It previously created all of them each time rather than
       only doing those necessary.
     * Fixed problem with uncleared display when switching from zoom mode
       to scaling up.
     * The switching-from-X etc. now works with kernel 2.0.x. Previously
       it hung. (It should still work with 1.2.x, too.)
     * Now resets to blocking input even when ^C'ed.
     * Various documentation `bugs' fixed, e.g. the `c' and `n' keys
       weren't previously listed.
       
    Other changes
     * ANSIfied the code. This caught a couple of (as it turned out)
       innocuous bugs. (Fortuitously, they had no ill effect in
       practice.)
     * Updated PNG support to work with libpng 0.81 (and, hopefully, any
       later versions).
     * Sped up viewing in 15/16-bit modes a little.
     * Incorporated Adam Radulovic's patch to v2.7 allowing more files in
       the directory and reducing memory usage.
       
                         Zgv can be found either in
                   sunsite.unc.edu:/pub/Linux/Incoming or
              sunsite.unc.edu/pub/Linux/apps/graphics/viewers.
     The files of interest are zgv2.8-src.tar.gz and zgv2.8-bin.tar.gz.
                                      
    Editor's Note: I don't normally include packages that aren't
    X-based, but the number of announcements for this month was
    relatively small so I thought I'd go ahead and include this one. I
    don't plan on making it a practice, however.
                                      
                 Attention: OpenGL and Direct3D programmers
                                      
           Mark Kilgard, author of OpenGL Programming for the X Window
              System, posted the following announcement on the
   comp.graphics.api.opengl newsgroup. I thought it might be of interest
                      to at least a few of my readers.
                                      
     The URL below explains a fast and effective technique for applying
   texture mapped text onto 3D surfaces. The full source code for a tool
      to generate texture font files (.txf files) and an API for easy
           rendering of the .txf files using OpenGL is provided.
                                      
      For a full explanation of the technique including sample images
                showing how the technique works, please see:
                      http://reality.sgi.com/mjk_asd/
                            tips/TexFont/TexFont.html
                                      
    Direct3D programmers are invited to see how easy and powerful OpenGL
   programming is. In fact, the technique demonstrated is not immediately
    usable on Direct3D because it uses intensity textures (I believe not
      in Direct3D), polygon offset, and requires alpha testing, alpha
    blending, and texture modulation (not required to be implemented by
      Direct3D). I mean this to be a constructive demonstration of the
                    technical inadequacies of Direct3D.
                                      
     I hope you find the supplied source code, texture font generation
         utility, sample .txf files, and explanation quite useful.
                                      
      Note: for those that aren't aware of it, Direct3D is Microsoft's
      answer to OpenGL. Despite their original support of OpenGL, they
      apparently decided to go with a different 3D standard, one they
      invented (I think). Anyway, the discussion on
      comp.graphics.api.opengl of late has been focused on which of the
      two technologies is a better solution.
                                      
               Epson PhotoPC and PhotoPC 500 digital cameras
                                      
          Epson PhotoPC and PhotoPC 500 are digital still cameras. They
      are shipped with Windows and Mac based software to download the
       pictures and control the camera parameters over a serial port.
                                      
    Eugene Crosser wrote a C library and a command-line tool to perform
                       the same tasks under UNIX. See
                                      
                        ftp://ftp.average.org/pub/photopc/
                                      
         MD5(photopc-1.0.tar.gz)= 9f286cb3b1bf29d08f0eddf2613f02c9
                                      
    Eugene Crosser; 2:5020/230@fidonet; http://www.average.org/~crosser/
                                      
                                      
                             ImageMagick V3.8.5
                                      
   Alexander Zimmerman has released a new version of ImageMagick. The
   announcement, posted to comp.os.linux.announce, reads as follows:
                                      
     I just uploaded to sunsite.unc.edu
     
     ImageMagick-3.8.5-elf.lsm
     ImageMagick-3.8.5-elf.tgz
     
      This is the newest version of my binary distribution of
      ImageMagick. It will move to the places listed in the LSM-entry at
      the end of this message. Please remember to get the package
      libIMPlugIn-1.1 too, to make it work.
     
      This version brings together a number of minor changes made to
      accommodate PerlMagick and lots of minor bug fixes including
      multi-page TIFF decoding and writing PNG.
     
      ImageMagick (TM), version 3.8.5, is a package for display and
      interactive manipulation of images for the X Window System.
      ImageMagick performs these functions, among others, also available
      as command line programs:
     * Describe the format and characteristics of an image
     * Convert an image from one format to another
     * Transform an image or sequence of images
     * Read an image from an X server and output it as an image file
     * Animate a sequence of images
     * Combine one or more images to create new images
     * Create a composite image by combining several separate images
     * Segment an image based on the color histogram
     * Retrieve, list, or print files from a remote network site
       
      ImageMagick also supports the Drag-and-Drop protocol from the
      OffiX package and many of the more popular image formats including
      JPEG, MPEG, PNG, TIFF, Photo CD, etc.
     Primary-site: ftp.wizards.dupont.com /pub/ImageMagick/linux
     986k ImageMagick-i486-linux-ELF.tar.gz
     884k PlugIn-i486-linux-ELF.tar.gz
     Alternate-site: sunsite.unc.edu /pub/Linux/apps/graphics/viewers/X
     986k ImageMagick-3.8.5-elf.tgz
     1k ImageMagick-3.8.5-elf.lsm
     sunsite.unc.edu /pub/Linux/libs/graphics
     884k libIMPlugIn-1.1-elf.tgz
     1k libIMPlugIn-1.1-elf.lsm
     Alternate-site: ftp.forwiss.uni-passau.de
     /pub/linux/local/ImageMagick
     986k ImageMagick-3.8.5-elf.tgz
     1k ImageMagick-3.8.5-elf.lsm
     884k libIMPlugIn-1.1-elf.tgz
     1k libIMPlugIn-1.1-elf.lsm
     
                            VARKON Version 1.15A
                                      
            VARKON is a high level development tool for parametric CAD
     and engineering applications developed by Microform, Sweden. 1.15A
        includes new parametric functions for creation and editing of
             sculptured surfaces and rendering based on OpenGL.
                                      
     Version 1.15A of the free version for Linux is now available for
     download at:
      http://www.microform.se
     
                     Shared library version of xv 3.10a
                                      
           xv-3.10a-shared is the familiar image viewer program with all
      current patches modified to use the shared libraries provided by
                                   libgr.
                                      
     xv-3.10a-shared is available from ftp.ctd.comsat.com:/pub.
     libgr-2.0.12.tar.gz is available from
     ftp.ctd.comsat.com:/pub/linux/ELF.
     
 t1lib-0.2-beta - A Library for generating Bitmaps from Adobe Type 1 Fonts
                                      
                 t1lib is a library for generating character- and
      string-glyphs from Adobe Type 1 fonts under UNIX. t1lib uses most
           of the code of the X11 rasterizer donated by IBM to the
         X11-project. But some disadvantages of the rasterizer being
      included in X11 have been eliminated. Here are the main features:
     * t1lib is completely independent of X11 (although the program
       provided for testing the library needs X11)
     * fonts are made known to the library by means of a font database
        file at runtime
     * searchpaths for all types of input files are configured by means
       of a configuration file at runtime
     * characters are rastered as they are needed
     * characters and complete strings may be rastered by a simple
       function call
     * when rastering strings, pairwise kerning information from
       .afm-files may optionally be taken into account
     * an interface to ligature-information of afm-files is provided
     * rotation is supported at any angle
     * there's limited support for extending and slanting fonts
     * new encoding vectors may be loaded at runtime and fonts may be
       reencoded using these encoding vectors
     * antialiasing is implemented using three gray-levels between black
       and white
     * a logfile may be used for logging runtime error-, warning- and
       other messages
     * an interactive test program called "xglyph" is included in the
        distribution. This program lets you test all of the features of
        the library. It requires X11.
       
      Author: Rainer Menzner ( rmz@neuroinformatik.ruhr-uni-bochum.de)
                                      
     You can get t1lib by anonymous ftp at:
     ftp://ftp.neuroinformatik.ruhr-uni-bochum.de/
         pub/software/t1lib/t1lib-0.2-beta.tar.gz
     
     An overview on t1lib including some screenshots of xglyph can be
     found at:
     http://www.neuroinformatik.ruhr-uni-bochum.de/
         ini/PEOPLE/rmz/t1lib.html
     
              Freetype Project - The Free TrueType Font Engine
                              Alpha Release 4
                                      
             The FreeType library is a free and portable TrueType font
        rendering engine. This package, known as 'Alpha Release 4' or
         'AR4', contains the engine's source code and documentation.
                                      
     In this release you'll find:
     * better portability of the C code than in the previous release.
     * font smoothing, a.k.a. gray-level rendering. Just like Win95, only
       the diagonals and curves are smoothed, while the vertical and
       horizontal stems are kept intact.
     * support for all character mappings, as well as glyph indexing and
       translation functions (incomplete).
     * full-featured TrueType bytecode interpreter!! The engine is now
       able to hint the glyphs, thus producing an excellent result at
       small sizes. We now match the quality of the bitmaps generated by
       Windows and the Mac! Check the 'view' test program for a
       demonstration.
     * loading of composite glyphs. It is now possible to load and
       display composite glyphs with the 'zoom' test program. However,
       composite glyph hinting is not implemented yet due to the great
       incompleteness of the available TrueType specifications.
       
     Also, some design changes have been made to allow the support of
     the following features, though they're not completely implemented
     yet:
     * multiple opened font instances
     * thread-safe library build
     * re-entrant library build
     * and of course, still more bug fixes ;-)
       
     Source is provided in two programming languages: C and Pascal, with
     some common documentation and several test programs. The Pascal
     source code has been successfully compiled and run with Borland
     Pascal 7 and fPrint's Virtual Pascal on DOS and OS/2 respectively.
     The C source code has been successfully compiled and run on various
     platforms including DOS, OS/2, Amiga, Linux and several other
     variants of UNIX. It is written in ANSI C and should be very easily
     ported to any platform. Though development of the library is mainly
     performed on OS/2 and Linux, the library does not contain
     system-specific code. However, this package contains some graphics
     drivers used by the test programs for display purposes on DOS,
     OS/2, Amiga and X11.
     
     Finally, the FreeType Alpha Release 4 is released for informative
     and demonstration purposes only. The authors provide it 'as is',
     with no warranty.
     
     The file freetype-AR4.tar.gz (about 290K) is available now at
     ftp://sunsite.unc.edu/pub/Linux/X11/fonts or at the FTP site in:
     ftp://ftp.physiol.med.tu-muenchen.de/pub/freetype
     
     Web page:
     http://www.physiol.med.tu-muenchen.de/~robert/freetype.html
     The home site of the FreeType project is
     ftp://ftp.physiol.med.tu-muenchen.de/pub/freetype
     There is also a mailing list:
     freetype@lists.tu-muenchen.de
     Send the usual subscription commands to:
     majordomo@lists.tu-muenchen.de
     
     Copyright 1996 David Turner
     Copyright 1997 Robert Wilhelm
     Werner Lemberg
     
                               Did You Know?
                                      
      ...the Portal web site for xanim has closed down. The new primary
               sites are: http://xanim.va.pubnix.com/home.html
              http://smurfland.cit.buffalo.edu/xanim/home.html
              http://www.tm.informatik.uni-frankfurt.de/xanim/
                  The latest revision of xanim is 2.70.6.4.
                                      
     I got the following message from a reader. Feel free to contact him
     with your comments. I have no association with this project.

       I'm currently working on an application to do image processing
       and Computer Vision tasks. At this stage of development, I would
       like to know what the community expects from such a product, so
       if you would like to see the status of the work, please visit:
       http://www-vision.deis.unibo.it/~cverond/cvw
       Especially see the "sample" section, where you can see some of
       the application's functionality at work, and leave me feedback.
       Thanks for your help. Cristiano Verondini cverondini@deis.unibo.it
     
     Q and A
     
     Q: Can someone point me to a good spot to download some software to
     make a good height map? 
     
     A: I'd suggest you try either John Beale's hflab available at:
     http://shell3.ba.best.com/~beale/ Look under sources. You will find
     executables for Unix and source code for other systems. It is
     pretty good at manipulating and creating heightfields and is great
     at making heightfields made in a paint program more realistic.
           For the ultimate in realism use dem2pov by Bill Kirby, also
     available at John Beale's web site to convert DEM files to TGA
     heightfields. You can get DEM files through my DEM mapping project
     at http://www.sn.no/~svalstad/hf/dem.html or directly from
     ftp://edcftp.cr.usgs.gov/pub/data/DEM/250/
           As for your next question about what the pixel values of
     heightfields mean, there are three different situations:
    1. High quality heightfields use a 24bit TGA or PNG file to store 16
       bit values with the most significant byte in the red component,
       the least significant byte in the green component and the blue
       component empty.
    2. 8bit GIF files store a colour index where the colour with index
       number 0 becomes the lowest part of the heightfield and the colour
       with index number 255 becomes the highest part.
    3. 8bit greyscale GIF files; the darkest colours become the lowest
       part of the heightfield and the lightest colours become the
       highest part.
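The 16-bit packing in case 1 can be sketched in a few lines of Python. This is just an illustration of the byte split described above; the function names are mine, not from any particular heightfield tool:

```python
def height_to_rgb(height):
    """Pack a 16-bit height (0..65535) into (red, green, blue)."""
    if not 0 <= height <= 0xFFFF:
        raise ValueError("height must fit in 16 bits")
    red = height >> 8        # most significant byte in the red component
    green = height & 0xFF    # least significant byte in the green component
    blue = 0                 # blue component left empty
    return red, green, blue

def rgb_to_height(red, green, blue=0):
    """Recover the 16-bit height from the red/green pair."""
    return (red << 8) | green

# Example: a height of 1000 units splits into red=3, green=232
r, g, b = height_to_rgb(1000)
assert (r, g, b) == (3, 232, 0)
assert rgb_to_height(r, g) == 1000
```

This gives 65536 distinct height levels instead of the 256 available in an 8-bit greyscale image, which is why the article calls these "high quality" heightfields.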
       
     From Stig M. Valstad via the IRTC-L mailing list
     svalstad@sn.no
     http://www.sn.no/~svalstad
     
     Q: Sorry to pester you but I've read your minihowto on graphics in
     Linux and I still haven't found what I'm looking for. Is there a
     tool that will convert a collection of TGA files to one MPEG file
     in Linux? 
     
     A: I don't know of any off hand, but check the following pages.
     They might have pointers to tools that could help.
     
   http://sunsite.unc.edu/pub/multimedia/animation/mpeg/berkeley-mirror/
     http://xanim.va.pubnix.com/home.html (this is Xanim's home page).
                                      
      You probably have to convert your TGA's to another format first,
     then encode them with mpeg_encode (which can be found at the first
                             site listed above).
                                      
     Q: Where can I find some MPEG play/encode tools? 
     
     A:
     http://sunsite.unc.edu/pub/multimedia/animation/mpeg/berkeley-mirror/
     
     Q: Where can I find free textures on the net in BMP, GIF, JPEG, and
     PNG formats? 
     
     A: Try looking at:
           http://axem2.simplenet.com/heading.htm
     
     These are the textures I've started using in my OpenGL demos. They
     are very professional. There are excellent brick and stone wall
     textures. If you are doing a lot of modeling of walls and floors
     and roads, the web site offers a CD-ROM with many more textures.
     
     Generally, I load them into "xv" (an X image viewer utility) and
     resample them with highest-quality filtering to be on even powers
     of two and then save them as a TIFF file. I just wish they were
     already at powers of two so I didn't have to resample.
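The resampling step described above amounts to picking the power of two nearest each texture dimension. A small sketch of that calculation; the helper name is my own, not part of xv or libtiff:

```python
def nearest_power_of_two(n):
    """Return the power of two closest to n (ties round up)."""
    if n < 1:
        raise ValueError("size must be positive")
    lower = 1
    while lower * 2 <= n:
        lower *= 2               # largest power of two <= n
    upper = lower * 2            # smallest power of two > n
    return lower if (n - lower) < (upper - n) else upper

# A 640x480 texture would be resampled to 512x512
print(nearest_power_of_two(640), nearest_power_of_two(480))
```

Textures were resampled this way because early OpenGL implementations required power-of-two texture dimensions.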
     
     Then, I use Sam Leffler's very nice libtiff library to read them
     into my demo. I've got some example code of loading TIFF images as
     textures at:
           http://reality.sgi.com/mjk_asd/tiff_and_opengl.html
     
     From: Mark Kilgard <mjk@fangio.asd.sgi.com>, author of OpenGL
     Programming for the X Window System, via the
     comp.graphics.api.opengl newsgroup.
     
     Q: Why can't I feed the RIB files exported by AMAPI directly into
     BMRT? 
     
     A: According to shem@warehouse.net: Thomas Burge from Apple who has
     both the NT and Apple versions of AMAPI explained to me what the
     situation is - AMAPI only exports RIB entity files; you need to add
     a fair chunk of data before a RIB WorldBegin statement to get the
     camera in the right place and facing the right way. As it was, no
     lights were enabled and my camera was positioned underneath the
     object, facing down! There is also a Z-axis negation problem in
     AMAPI, which this gentleman pointed out to me, and he gave me the
     RIB instructions to compensate for it.
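In other words, an exported entity file needs camera, light, and world statements wrapped around it before a renderer can use it. A wrapper might look roughly like the following sketch; the filenames, light settings, and the exact transform are illustrative guesses on my part, not the actual instructions referred to above:

```
# Hypothetical wrapper around an exported entity RIB.
Display "out.tiff" "file" "rgb"
Projection "perspective" "fov" [45]
Translate 0 0 5            # back the camera away from the origin
Scale 1 1 -1               # flip the Z axis to compensate for the
                           # negation problem described above
LightSource "distantlight" 1 "intensity" [1.0]
WorldBegin
  ReadArchive "model.rib"  # the entity file exported by the modeller
WorldEnd
```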
     
     Q: Is there an OpenGL tutorial on-line? The sample code at the
     OpenGl WWW center seems pretty advanced to me. 
     
     A: There are many OpenGL tutorials on the net. Try looking at:
     http://reality.sgi.com/mjk_asd/opengl-links.html
     
     Some other good ones are:
     * OpenGL overview -
       http://www.sgi.com/Technology/openGL/paper.design/opengl.html
     * OpenGL with Visual C++ -
       http://www.iftech.com/oltc/opengl/opengl0.stm
     * OpenGL and X, an intro -
       http://www.sgi.com/Technology/openGL/mjk.intro/intro.html
       
     From Mark Kilgard
     
     Q: So, like, is anyone really reading this column? 
     
     A: I have no idea. Is anyone out there?
     
                                  Musings
                                      
                            3D Modellers Update
                                      
         Recently there has been a minor explosion of 3D modellers. Most
   of the modellers I found the first time out are still around although
    some are either no longer being developed or the developers have not
    released a new version in some time. Since I haven't really covered
   the range of modellers in this column since I started back in November
      1996, I decided it was time I provided a brief overview of what's
                      available and where to get them.
            The first thing to do is give a listing of what tools are
     available. The following is the list of modellers I currently know
                       about, in no particular order:
                                      
     * AC3D
     * SCED/SCEDA
     * Midnight Modeller
     * AMAPI
     * Bentley Microstation 95
       
     * Aero
     * Leo3D
     * MindsEye
     * 3DOM
       
    There is also the possibility that bCAD is available for Linux as a
   commercial port, but I don't have proof of this yet. Their web site is
    very limited as to contact information so I wasn't able to send them
   email to find out for certain. The web pages at 3DSite for bCAD do not
    list any Unix ports for bCAD, although they appear to have a command
                          line renderer for Unix.
            There are also a couple of others that I'm not sure how to
    classify, but the modelling capabilities are not as obvious so I'll
    deal with them in a future update (especially if they contact me with
                        details on their products).
          All of these use graphical, point-and-click style interfaces.
      There are other modellers that use programming languages but no
     graphical interface, such as POV-Ray, Megahedron and BMRT (via its
      RenderMan support). Those tools are not covered by this discussion.
            The list of modellers can be broken into three categories:
       stable, under development, and commercial. The stable category
   includes AC3D, SCED/SCEDA, and Midnight Modeller. Commercial modellers
    are the AMAPI and Megahedron packages, and Bentley Microstation. The
    latter is actually free for non-commercial unsupported use, or $500
     with support. Below are short descriptions of the packages, their
   current or best known status and contact information. The packages in
                    the table are listed alphabetically.
                                      
                           Product and description
      (each entry lists Imports, Exports, Availability, and Contact)
                                      
        3DOM - Very early development. I haven't tried this one yet.
     Imports: Unknown. Exports: Unknown. Availability: Freeware.
     Contact: http://www.cs.kuleuven.ac.be/cwis/research/graphics/3DOM/
                                      
   AC3D - OpenGL based vertex modeller with multiple, editable views plus
     a 3D view. Includes ability to move, rotate, resize, position, and
   extrude objects. Objects can be named and hidden. Includes support for
     2D (line (both polygons and polylines), circle, rectangle, ellipse,
       disk) and 3D (box, sphere, cylinder and mesh). Fairly nice 3D
    graphical interface that looks like Motif but doesn't require Motif
                                 libraries.
     Imports: DXF, Lightwave, Triangle, and vector formatted object files.
     Exports: RenderMan, POV-Ray 2.2, VRML, Massive, DVS, Dive and
     Triangle formatted object files.
     Availability: Shareware.
     Contact: http://www.comp.lancs.ac.uk/computing/users/andy/ac3dlinux.html
                                      
   Aero - The following is taken from the documentation that accompanies
                                the package:
                                      
     AERO is a tool for editing and simulating scenes with rigid body
     systems. You can use the built-in 4-view editor to create a virtual
     scene consisting of spheres, cuboids, cylinders, planes and fix
     points. You can link these objects with rods, springs, dampers and
     bolt joints and you can connect forces to the objects. Then you can
     begin the simulation and everything starts moving according to the
     laws of physics (gravitation, friction, collisions). The simulation
     can be viewed as animated wire frame graphics. In addition you can
     use POV-Ray to render photo-realistic animation sequences.
     
   This package requires the FSF Widget library, which I don't have. The
    last time I tried to compile that library it didn't work for me, but
   maybe the build process works better now. Anyway, I haven't seen this
                            modeller in action.
     Imports: Proprietary ASCII text format. Exports: POV-Ray.
     Contact: http://www.informatik.uni-stuttgart.de/ipvr/bv/aero/
     ftp://ftp.informatik.uni-stuttgart.de/pub/AERO
                                      
   AMAPI - Fairly sophisticated, including support for NURBS and a macro
    language. Interface is quite unique for X applications, probably based
   on OpenGL. The version available from Sunsite doesn't work quite right
   on my system. Some windows don't get drawn unless a refresh is forced
     and the method for doing a refresh is kind of trial-and-error. The
   trial version of 2.11 has the same problem. Perhaps this is a problem
    with the OpenGL they use, although a check with ldd doesn't show any
       dependencies on OpenGL. I wish this worked. I really like the
                                 interface.
                                      
   Yonowat, the maker of AMAPI, has a trial version, 2.11, available for
    download from their web site. They are also porting another of their
    products AMAPI Studio 3.0, a more advanced modeling tool, to Linux.
         The web site doesn't mention when it might be ready, but the
              description on the pages looks *very* interesting.
     Imports: DXF, 3DS R3 and R4, IGES, Illustrator, Text, and its own
     proprietary format.
     Exports: DXF, CADRender, Text, AMAPI, 3DS R3 and R4, Ray Dream
     Designer, Lightwave, 3DGF, Truespace V2.0, Caliray, POV 3.0, IGES,
     Explore, VRML, STL, Illustrator, RIB.
     Availability: Shareware - $25US; $99US will get you a 200 page
     printed manual. Personal use copies for Linux are free for a year,
     but commercial, government, and institutional users must register
     their copies.
                                      
      Leo3D - The following is taken from the README file in the Leo3D
                               distribution:
                                      
     Leo 3D is a real time 3D modelling application which enables you to
     create realistic 3D scenes using different rendering applications
     (such as Povray or BMRT for example). It also exports VRML files.
     
     What distinguishes Leo 3D from most other modelling applications is
     that all object transformations are done directly in the viewing
     window (no need for three separate x, y, and z windows). For
     example, to move an object, all you need to do is grab and drag
     (with the mouse) one of the 'blue dots' which corresponds to the 2D
     Plane for which you wish to move the object. Scaling and rotation
     is done in the same way with the yellow and magenta dots
     respectively.
     
   This modeller has a very cool interface based on OpenGL, GLUT, TCL and
      Tix. I had problems with it when trying to load files, but just
     creating and shading a few objects was quite easy and rather fun,
    actually. This modeller certainly has some of the most potential of
     the non-commercial modellers that I've seen. However, it still has
                 some work to do to fix a few obvious bugs.
     Imports: DXF. Exports: POV-Ray, RenderMan, VRML 1.0, JPEG.
     Availability: Shareware - $25US.
     Contact: ftp://s2k-ftp.cs.berkeley.edu/pub/personal/mallekai/leo3d.html
     (Yes, that's an ftp site with an HTML page.)
                                      
    Bentley Microstation 95 and MasterPiece - Commercial computer-aided
   design product for drafting, design, visualization, analysis, database
   management, and modeling with a long history on MS, Mac and other Unix
     platforms. Includes programming support with a BASIC language and
    linkages to various commercial databases such as Oracle and Informix.
    The product seems quite sophisticated based on their web pages, but
    I've never seen it in action. I have seen a number of texts at local
   bookstores relating to the MS products, so I have a feeling the Linux
     ports should be quite interesting. Bentley's product line is quite
     large. This looks like the place to go for a commercial modeller,
   although I'm not certain if they'll sell their educational products to
     the general public or not. If anyone finds out please let me know.
     Note that the Linux ports have not been released (to my knowledge -
                    I'm going by what's on the web pages).
     Imports: DXF, DWG and IGES. Exports: Unknown.
     Availability: Commercial, primarily targeted at educational
     markets; however, they appear open to public distributions and
     ports of their other packages if enough interest is shown by the
     Linux community.
     Contact: http://www.bentley.com/ema/academic/aclinux.htm
     http://www.bentley.com/ema/academic/academic.htm
                                      
    Midnight Modeller - A direct port of the DOS version to Linux. The X
      interface looks and acts just like the DOS version. On an 8 bit
    display the colors are horrid, but it's not so bad on 24 bit displays.
    It seems to have a problem seeing all the directories in the current
                    directory when trying to open files.
                                      
     The DOS version is being ported to Windows but it doesn't appear a
   port of this version will be coming for Linux. The original Linux-port
      author says he's still interested in doing bug fixes but doesn't
            expect to be doing any further feature enhancement.
     Imports: DXF, Raw. Exports: DXF, Raw. Availability: Freeware.
     Contact: ftp://ftp.infomagic.com/pub/mirrors/.mirror1/
         sunsite/apps/graphics/rays/pov/mnm-linux-pl2.static.ELF.gz
     Author: Michael Lamertz <mlamertz@odars.de>
                                      
   MindsEye - A new modeller in very early development which is based on
   both OpenGL/MesaGL and QT. Is designed to allow plug-ins. The project
     has a mailing list for developers and other interested parties and
       appears to have more detailed design specifications than most
       "community developed" projects of this nature. It's been a while
    coming, but the modeller is starting to take shape. Last I looked they
     were beginning to work on adding autoconf to the build environment,
      which is a very good thing to do early in a project like this one.
     Imports: DXF, others planned. Exports: Unknown.
     Availability: GNU GPL.
     Contact: http://www.ptf.hro.nl/free-d/ - web site
     ftp.cs.umn.edu:/users/mein/mindseye/ - source code
                                      
    SCED/SCEDA - The following is taken from the README file in the SCED
                               distribution:
                                      
     Sced is a program for creating 3d scenes, then exporting them to a
     wide variety of rendering programs. Programs supported are: POVray,
     Rayshade, any VRML browser, anything that reads Pixar's RIB format,
     and Radiance. Plus a couple of local formats, for me.
     
     Sced uses constraints to allow for the accurate placement of
     objects, and provides a maintenance system for keeping these
     constraints satisfied as the scene is modified.
     
   This is a very sophisticated modeller, but the Athena interface makes
   it look less powerful than it is. I used this modeller for many of the
      scenes I created when I first started into 3D and still like its
   constraint system better than what is available in AC3D (which doesn't
    really have constraints in the same sense, I don't think). SCED's biggest
    limitation is its lack of support for importing various 3D formats.
                                      
    SCEDA is a port of SCED that allows for keyframed animation. Objects
    are given initial and ending positions and the modeller creates the
       frames that will fill in the spaces between these two points.
     Imports: Proprietary scene format and OFF (wireframe format).
     Exports: POV 3.0, Radiance, RenderMan, VRML 1.0.
     Availability: Freeware (GPL'd).
     Contact: http://http.cs.berkeley.edu/~schenney/sced/sced.html
     ftp://ftp.cs.su.oz.au/stephen/sced
     ftp://ftp.povray.org/pub/pov/modellers/sced
                                      
                                  HF-Lab 
                                      
         Height fields are convenient tools for representing terrain data
       that are supported directly by POV-Ray and through the use of
        displacement maps or patch meshes in BMRT. With POV-Ray and
   displacement maps in BMRT, a 2D image is used to specify the height of
   a point based on the color and/or intensity level for the point in the
    2D image. The renderer uses this image, mapped over a 3D surface, to
     create mountains, valleys, plateaus and other geographic features.
        Creating a representative 2D image is the trick to realistic
   landscapes. HF-Lab, an X based interactive tool written by John Beale,
     is an easy to use and extremely useful tool for creating these 2D
                                  images.
          Once you have retrieved the source, built it (instructions are
    included and the build process is fairly straightforward, although it
       could probably benefit from the use of imake or autoconf), and
     installed it, you're ready to go. HF-Lab is a command line oriented
    tool that provides its own shell from which commands can be entered.
                      To start HF-Lab using BASH type
                                      
                     % export HFLHELP=$HOME/hf/hf-lab.hlp
                                    % hlx
                                      
                              and in csh type
                                      
                     % setenv HFLHELP $HOME/hf/hf-lab.hlp
                                    % hlx
                                      
    Note that the path you use for the HFLHELP environment variable depends
   on where you installed the hf-lab.hlp file from the distribution. The
    build process does not provide a method for installing this file for
     you so you'll need to be sure to move the file to the appropriate
      directory by hand. You definitely want to make sure this file is
   properly installed since the online help features in HF-Lab are quite
                                   nice.
         The first thing you notice is the shell prompt. From the prompt
    you type in one or more commands that manipulate the current height
    field (there can be more than one, each of which occupies a place on
     the stack). We've started by using the online help feature. Typing
    help by itself brings up the list of available commands, categorized
   by type. Typing help <command> (without the brackets, of course) gets
   you help on a particular command. In Figure 1 the help for the crater
                             command is shown.
           Now let's look at the available features. John writes in the
                 documentation that accompanies the source:
                                      
     HF-Lab commands fall into several categories: those for generating
     heightfields (HFs), combining or transforming them, and viewing
     them are the three most important. Then there are other
     'housekeeping' commands to move HFs around on the internal stack,
     load and save them on the disk, and set various internal variables.
     
      Generating HFs is done with one of gforge, random, constant, and
     zero. The first of these, gforge, is the most interesting as it will
      create fractal-based fields. Random creates a field based on noise
    patterns (lots of spikes, perhaps usable as grass blades up close in a
    rendered scene) while constant and zero create level planes. Zero is
           just a special case of constant where the height value is 0.
          Each HF that is generated gets placed on the stack. The stack is
     empty to start. Running one of the HF generation commands will add
    an HF to the top of the stack. By default there are 4 slots in the
    stack that can be filled, but this number can be changed using the
    set stacksize command. The HFs on the stack can be popped, swapped,
    duplicated, and named, and the whole stack can be rotated. Rotation
    can also be limited to the first 3 HFs on the stack.
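The stack behaviour described above can be modelled roughly as follows. This is a toy Python sketch of the idea, not HF-Lab's actual implementation (HF-Lab is a C program, and what happens when a full stack is pushed is my guess):

```python
class HFStack:
    """Toy model of HF-Lab's heightfield stack."""

    def __init__(self, stacksize=4):     # default of 4 slots, per the text
        self.stacksize = stacksize
        self.slots = []                  # top of stack is slots[0]

    def push(self, hf):                  # e.g. the result of gforge/random
        if len(self.slots) == self.stacksize:
            self.slots.pop()             # assume the bottom entry falls off
        self.slots.insert(0, hf)

    def pop(self):
        return self.slots.pop(0)

    def swap(self):                      # exchange the top two HFs
        self.slots[0], self.slots[1] = self.slots[1], self.slots[0]

    def duplicate(self):                 # copy the top HF
        self.push(self.slots[0])

    def rotate(self, depth=None):        # rotate whole stack, or top `depth`
        d = depth or len(self.slots)
        self.slots[:d] = self.slots[1:d] + self.slots[:1]

# Two generated HFs; swap them, then pop the new top
stack = HFStack()
stack.push("gforge-A")
stack.push("gforge-B")
stack.swap()
assert stack.pop() == "gforge-A"
```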
             The normal process for creating an HF usually includes the
                               following steps:
    1. Generate one or two HFs with gforge
    2. Manipulate the HFs with the crater or pow commands.
    3. View the HF in 3D.
    4. Manipulate some more.
    5. Check it again.
    6. Continue, ad infinitum.
       
   Manipulating a HF can be done in several ways. First, there are a set
   of commands to operate on a single HF, the One HF-Operators. A few of
    the more interesting of these are the pow, zedge, crater, fillbasin,
   and flow commands. Zedge flattens the edges of the HF (remember that a
   HF is really just a 3D representation of a 2D image, and those images
    are rectangular). Crater adds circular craters to the HF of various
                       radii and depths. Fillbasin and
     flow can be used together to etch out river valleys. There are
      examples, erosion1.scr and erosion2.jpg, in the distribution
                           which show this.

     More Musings...
      * Gimp 1.0 - Larry Ayers provides a preview of the newest version
        of the Unix world's answer to Adobe Photoshop.
           There are two ways to view the images you create with HF-Lab
     from within the application. One is to view the 2D greyscale image
   that will be saved to file. Viewing the 2D image is done with the show
    command. The other method is a representative rendering of the HF
    in 3D, so that you'll get a better idea of what the final rendering
   will be with POV or BMRT. Viewing the 3D images is done in a secondary
      shell (although it is also possible to simply ask that shell to
    display the image and return immediately to the command shell - this
    is probably what you'll do once you've gotten more experienced with
    HF-Lab). The view command enters the user into the 3D viewing shell.
    From here you can set the level of detail to show, the position of a
     lightsource or the camera's eye, lighten, darken, tile and change the
     scale of the display. To exit the secondary shell you simply type
                                   quit.
          HF-Lab supports a number of different file formats for reading
   and writing: PNG, GIF, POT, TGA, PGM, MAT, OCT, and RAW. Most of these
    formats have special purposes, but for use with POV-Ray and BMRT you
   should save files in TGA format. POV-Ray can use this format directly,
    but for use with BMRT you will need to convert the TGA image to TIFF
     format. Using TGA allows you to save the image information without
   data loss and conversion from TGA to TIFF is relatively easy using XV,
                          NetPBM, or ImageMagick.
          Since creating a reasonably realistic HF can be a long session
   of trial and error you may find it useful to use the builtin scripting
   capability. John provides a very good set of sample scripts along with
    the source. A quick glance at one of these, erosion1.scr, shows that
   multiple commands can be run at a time. This is also possible from the
    HF> prompt, so you can try these commands one at a time to see what
      effect each has. Once you have a rough guess as to the process you
     need to create the scene you want, you should place this in a script
         and then edit the script to get the detail level desired.
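A script in this style might read like the sketch below. This is my own example assembled only from commands named in this article, not one of John's samples, and the exact argument syntax may differ from real HF-Lab usage:

```
# hypothetical HF-Lab script
gforge 400 2.2        # generate a 400x400 fractal heightfield
zedge                 # flatten the edges of the HF
crater 10             # add some craters (argument syntax may differ)
hpfilter 0.095 30     # high-pass filter to sharpen small features
show                  # view the resulting 2D greyscale image
```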
               HF-Lab creates its images through the use of lots of
   mathematical tricks that are far beyond the scope of this column. I'd
      love to say I understand all of them, but I only have a limited
   understanding of fractals and their use in creating terrain maps and I
   have no real understanding of Fast Fourier Transforms or Inverse Fast
   Fourier Transforms. These latter two are methods of filtering a HF in
      order to smooth or sharpen features. Filters include a high pass
      filter (hpfilter), low pass filter (lpfilter), band pass filter
       (bpfilter) and band reject filter (brfilter). Although I don't
   understand the math behind them, I was able to use a High Pass Filter
      to take a simple gforge-created HF and turn it into a very nice
   heightfield that simulates a leathery surface. This HF was created in
                              only two steps:
    1. gforge 400 2.2
    2. hpfilter 0.095 30
       
    So you can see how powerful this tool can be. Using height fields in
      BMRT, or as bump maps in POV, can produce some very interesting
                                 textures!
             There are many other features of HF-Lab which I have not
       covered. And in truth, I really didn't give much detail on the
   features I did discuss. John gives much better descriptions of some of
     the features in the README file that accompanies the source and I
    highly recommend you read this file while you experiment with HF-Lab
   for the first few times. He has gone to great lengths to provide very
      useful online help and sample scripts. The interface may not be
        point-and-click, but it certainly is not difficult to learn.
            When I first came across John Beale and HF-Lab I was quite
   impressed with its ease of use for creating interesting landscapes. I
    haven't really used it much since the early days of my 3D rendering
       lifetime, but since writing this article I've rediscovered how
    powerful this tool can be. Originally I viewed it only as a tool for
    creating landscapes, i.e. as a tool for modelling a world. Now I see
   how it can be used to create surface features of all kinds that can be
   used as textures and not just models. I think I'll be making more use
                        of this tool in the future.
                                      
                                 Resources
       The following links are just starting points for finding more
     information about computer graphics and multimedia in general for
     Linux systems. If you have some application-specific information for
     me, I'll add it to my other pages, or you can contact the maintainer
    of some other web site. I'll consider adding other general references
    here, but application- or site-specific information needs to go into
      one of the following general references rather than be listed here.
                                      
                         Linux Graphics mini-Howto 
                          Unix Graphics Utilities 
                           Linux Multimedia Page 
                                      
   Some of the Mailing Lists and Newsgroups I keep an eye on and where I
                 get a lot of the information in this column:
                                      
              The Gimp User and Gimp Developer Mailing Lists.
                         The IRTC-L discussion list
                     comp.graphics.rendering.raytracing
                     comp.graphics.rendering.renderman
                          comp.graphics.api.opengl
                           comp.os.linux.announce
                                      
                             Future Directions
                                      
                                Next month:
     * BMRT Part 3: Advanced Topics or a short tutorial on writing an
       OpenGL application. I'm currently working on a little Motif/OpenGL
       application which I plan on using to create models for use with
       BMRT. I'd like to finish it before I return to BMRT, but I have
       promised the third part on BMRT for July. I'm not sure which I'll
       get to, especially since I also have an article for the Linux
       Journal due July 1st.
     * ..and who knows what else
       
                                      
                 Let me know what you'd like to hear about!
                                      
     _________________________________________________________________
                                      
                    Copyright  1997, Michael J. Hammel
           Published in Issue 18 of the Linux Gazette, June 1997
                                      
     _________________________________________________________________
                                      
                                      
     _________________________________________________________________
                                      
                                   Musings
                            1997 Michael J. Hammel
                                      
                  Figure 1: HF-Lab command line interface
                                      
     Figure 2: HF produced from erosion1.scr Figure 3: HF produced from
                                erosion2.scr
                                      
                                      
     Figure 4: leathery surface, which I created completely by accident
                                      
                                      
                         1997 by Michael J. Hammel
                                      
           "Linux Gazette...making Linux just a little more fun!"
                                      
     _________________________________________________________________
                                      
                             GIMP 1.00 Preview
                                      
                   By Larry Ayers, layers@vax2.rainis.net
                                      
     _________________________________________________________________
                                      
                                Introduction
                                      
        Allow me to state up front that I'm not a computer graphics
   professional (or even much of an amateur!) and I've never used any of
      the common commercial tools such as Photoshop. Thus it's not too
      surprising that my efforts to use version 0.54 of the Gimp, the
    GNU-licensed image-editing tool developed by Spencer Kimball and Peter
        Mattis, were often frustrating. But one day I happened upon the
      developer's directory of the Gimp FTP site and saw there a beta
    release, version 0.99.9. This sounded awfully close to version 1.00,
                      so I thought I'd give it a try.
                                      
     At first it absolutely refused to compile. After downloading this
     large archive, I wasn't about to give up, and after several false
   starts I found that if I compiled each subdirectory first, followed by
     installation of the various libs and running ldconfig to let ld.so
   know about them, the main Makefile in the top directory would compile
   without errors. The Motif libs aren't needed with this release, as the
       new Gimp ToolKit (GTK) has been implemented as a replacement.
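    The sequence that finally worked looks roughly like this; the
    subdirectory names are illustrative only, so check the actual layout
    of the 0.99.9 source tree before trying it:

```
%-> (cd gtk+ && make && make install)
%-> (cd libgimp && make && make install)
%-> ldconfig
%-> make
```

    The point is simply that each library gets built, installed, and
    registered with ld.so before the top-level Makefile runs.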
                                      
    An analogy occurred to me as I gradually discovered how complicated
    and powerful this application is. It's the XEmacs of image editors!
     The plug-ins and scripts are like Emacs LISP extensions and modes,
    both in their relationship with the parent application and in their
     origin: contributed by a worldwide community of users and developers.
                                      
   This release does have a few problems. Occasionally it will crash, but
   politely; i.e. it doesn't kill the X-server or freeze the system. The
   benefits of this release far outweigh these occasional inconveniences,
                      especially for a rank beginner.
                                      
                             Structural Changes
                                      
   Image editing is a notorious consumer of memory. This new version has
      a method of minimizing memory usage called tile-based
     memory management. This allows the Gimp to work with images larger
      than can be held in physical memory. Disk space is heavily used
               instead, so make sure you have plenty of swap!
                                      
    A new file format specific to the Gimp, *.xcf, allows an image to be
       saved with its separate layers, channels, and tiles intact. In
   ordinary image formats all such information disappears when the image
   is saved. This would be ideal if an image had to be changed at a later
         date, allowing effective resumption of an editing session.
                                      
    An extension is like a plug-in but is not called from or associated
     with a specific image; the first of these is described in the next
                                  section.
                                      
                                 Script Fu
                                      
   The Gimp now has a built-in scripting language, based on Scheme, which
    bears some resemblance to LISP. An extension called Script Fu (which
      can be started from the Gimp menubar) can read these scripts and
     perform a series of image manipulations on user-specified text or
    images, using user-selected fonts and colors. What this means for a
    beginner like myself is that a complicated series of Gimp procedures
   (which would probably take me a day to laboriously figure out) is now
    automated. A collection of these scripts is installed along with the
    other Gimp files, and more are periodically released by skilled Gimp
    users. Many of the scripts facilitate the creation of text logos and
                   titles suitable for use in web pages.
                                      
               Here is a screenshot of the Script Fu window:
                                      
                              Script Fu Window
                                      
   As you can see, entry-boxes are available for filling in. Most scripts
    have default entries, and scripts will certainly fail if the default
                   font is not available on your system.
                                      
   This script-processing ability should greatly expand the popularity of
    the Gimp. I showed Script-Fu to my teenage kids and they took to it
    like ducks to water, whereas before they had been intimidated by the
    Gimp's complexity and deeply nested menus. A little easy success can
                  give enough impetus to explore further.
                                      
                                  Plug-Ins
                                      
    I believe that among the most important factors contributing to the
      success and continuing development of the Gimp are the built-in
    "hooks" allowing third-party plug-in modules to add capabilities to
    the program. The GTK ends up doing all of the mundane tasks such as
    creating windows and their components; all a plug-in needs to do is
       manipulate graphics data. One result is that the plug-ins are
          surprisingly small considering what they can accomplish.
                                      
    One reason the release of Gimp version 1.00 has been delayed is that
    the plug-ins which had been written for version 0.54 won't work with
   version 1.00 (or any of the recent betas). This was partly due to the
   switch from Motif to the GTK, and partly to the new memory-management
      scheme. The plug-in developers have been busily modifying their
    modules and the great majority have been successfully ported. Since
      the release of 0.99.9 several interesting new plug-ins have been
                            released, including:
     * IFSCompose, by Owen Taylor, is a utility for the interactive
       creation of Iterated Function System fractals, which can then be
       included in an image. See my review of Xlockmore in this issue for
       a brief description of this fractal type.
     * CML Explorer, by Shuji Narazaki, creates Coupled Map Lattice
        images; these model how complex systems change over time, and the
        results can be striking patterns. This is a complex plug-in with
       many parameters to tweak. The best way to get an idea of what it
       can do is to download parameter files from this site.
     * Whirl and Pinch is a merging of two older plug-ins (you guessed it
        -- Whirl and Pinch!). Federico Mena Quintero is the author, as
       well as being one of the Gimp developers.
     * FP, or FilterPack, is a useful utility for adjusting the
       color-balance of an image in a variety of ways, with thumbnail
       images showing the results of changes as you make them. It was
       written by Pavel Greenfield; his page here explains and
       illustrates its usage.
       
     As well as these and other new plug-ins, many of the old ones were
    enhanced in the process of adapting them to the new release. Several
    now have real-time preview windows, in which the results of changes
                    can be seen without committing them.
                                      
                                 Tutorials
                                      
    The Gimp has never had much documentation included with the archive.
     This will eventually be remedied; the Gimp Documentation Project,
        analogous to the Linux Documentation Project, will be making
   documentation freely available. Until the fruits of that project begin
      to appear there are some excellent tutorials, written by various
     charitable Gimp users and developers and available on the WWW. The
     Gimp Tutorials Collection is a site which has links to many of the
   tutorials out there. The tutorials situation is in flux at the moment,
    as some are specific to Gimp 0.54 while others are intended for the
                                newer betas.
                                      
     A site which has helped me get started is Jens Lautenbacher's Home
       Page. His tutorials are very lucid and easy to follow, and are
   specific to version 0.99.9. This site is also an inspiring example of
          how the Gimp can contribute to the design of a web-page.
                                      
                             News and Compendia
                                      
    If you'd like to keep up with the rapidly evolving Gimp scene, these
   links are among the best I've found and can serve as starting points.
     * Archived messages from the three Gimp mailing lists; new plug-ins
       are announced here and source patches are posted.
      * Federico Mena Quintero's Gimp page is full of links, tips, and
       news.
     * The Gazette's own Michael J. Hammel has a series of Gimp pages
       containing information, tips and tutorials.
     * Zachary Beane maintains this oft-updated Gimp news page; there is
       quite a bit of other good Gimp-related stuff at his site.
     * And of course the official Gimp home page!
       
     _________________________________________________________________
                                      
                       Copyright  1997, Larry Ayers
           Published in Issue 18 of the Linux Gazette, June 1997
                                      
     _________________________________________________________________
                                      
                                      
     _________________________________________________________________
                                      
           "Linux Gazette...making Linux just a little more fun!"
                                      
     _________________________________________________________________
                                      
                                    BOMB
                                      
                       An Interactive Image Generator
                                      
                   By Larry Ayers, layers@vax2.rainis.net
                                      
                                      
                                Introduction
                                      
      Last month I wrote about Cthugha, a sound-to-image converter and
     display engine. Bomb is another image-generating program, but the
    sound component is subsidiary. The images produced have an entirely
      different character than those produced by Cthugha. Rather than
       working with and displaying audio data, bomb uses a variety of
    algorithms to generate images. Most of these are one form or another
   of artificial life (John Conway's Life is the most familiar of these),
     while some others are fractal, reaction-diffusion, or IFS-related.
                                      
    Bomb is a console Svgalib program, with no X11 version at this time.
                                      
                               Bomb's Images
                                      
    The output of bomb has a distinctive character, due in large part to
     the color palettes used by the program, which are contained in the
       file cmap-data. The images have a naturalistic, painting-like
   character, with earth-tones predominating. The reason for this is that
     Scott Draves generated the palettes using his program image2cmap,
   which extracts a representative 256-color palette from an image file.
    Scott used a variety of scanned photographs as input. The result is
    that bomb is strongly marked by Scott Draves' esthetic preferences.
                                      
      The format of the cmap-data file is ascii text, with an example
                  palette's first lines looking like this:
                                      
                            (comment leafy-face)
                                   (cmap
             (42 37 33) (31 23 25) (23 19 22) (20 20 24) [etc]
                                      
    This is similar to the format of the palette files used by Fractint
     and Cthugha; it probably wouldn't be too difficult to convert one
                            format to the other.
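    Since cmap-data is plain text, a shell one-liner gets most of the way
    there. This sketch (the file names are invented, and the palette lines
    are the ones quoted above) pulls each parenthesized triple out of the
    file and emits one "R G B" line per color, roughly the flat layout
    Fractint's .map palettes use:

```shell
# A few palette lines in bomb's format stand in for a real cmap-data file:
printf '(comment leafy-face)\n(cmap\n(42 37 33) (31 23 25) (23 19 22) (20 20 24)\n' > cmap-data
# Pull out each "(R G B)" triple and print the numbers one triple per line:
grep -o '([0-9 ]*)' cmap-data | tr -d '()' > leafy-face.map
cat leafy-face.map
```

    Going the other direction would just mean wrapping each line in
    parentheses again.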
                                      
     The images are displayed full-screen, at 320x200 resolution. This
   gives them a somewhat chunky, pixel-ish appearance, and also seems to
      contribute to the painting-like quality. Many of the screens are
       reminiscent of a magnified view of microorganisms; there is an
                   illusion of opaque, non-human purpose.
                                      
     Here are a pair of sample bomb screens. The program has a built-in
            capture facility with the images saved as ppm files.
                                      
                               Bomb Screen #1
                                      
                               Bomb Screen #2
     _________________________________________________________________
                                      
                      Compilation and/or Installation
                                      
    The bomb archive file is rather large, over two megabytes; installed
   the bomb directory occupies nearly four and one-half mb., which seems
   like a lot for a relatively small program. Most of this space is taken
   up by the suck subdirectory. Suck contains about 200 TIFF image files.
     Some of the bomb modes use these images as seeds. The program will
   work fine without these images, so if you're short of disk space they
    could be deleted; another approach is to weed through the images and
   retain just a few favorites. If examined with an image viewer the TIFF
    files can be seen to be mostly basic, small black-and-white images,
    including large heavily-serifed single letters and logo-like images
     from a variety of cultures. When used as a seed, the image appears
     nearly full-screen but is eventually "eaten" by the pullulating AI
                   organisms until it is unrecognizable.
                                      
    Another subdirectory, called dribble, is where your screen-captures
     end up. Each captured PPM image takes up 197 kb., so it is wise to
    check the directory from time to time and weed through the captures.
                                      
   Bomb is rather picky about the versions of the required JPEG and TIFF
    libs on your system; they must be compatible with each other in some
      mysterious way. Initially I couldn't get it to run at all, but a
       reinstallation of the two graphics lib packages (from the same
   distribution CD, so that theoretically they would be compatible) cured
     this. Oddly enough my previous TIFF and JPEG libs, though updated
   independently of each other, worked with other programs which required
          them. Another argument for staying with a distribution!
                                      
    A binary is included in the distribution; the source is there if for
     some reason the binary won't work, or if you'd like to modify it.
                                      
   This program is one of those which is designed to be run from its own
      directory; in other words, you can't just move the executable to a
    pathed directory and leave the datafiles somewhere else. The easiest
   way to install it is to unarchive the package right where you want it
   to stay. Then when you want to run bomb, cd to its directory and start
                               it from there.
                                      
                          Controlling the Display
                                      
    You can get by using bomb just knowing that the spacebar randomizes
   all parameters and control-c quits. I found it worthwhile to print out
     the section of the readme file which details the various keyboard
               commands, as nearly every key does something.
                                      
   A different mode of keyboard control is enabled by pressing one of the
   first four number keys. Scott calls this the "mood organ", and when in
    this mode subtle parameters of the currently active display-type can
      be changed. In this state the entire keyboard changes parameters
   within the current mode; it's completely remapped, and can be returned
                to the default mode by pressing the "1" key.
                                      
   Left to its own devices, bomb periodically randomizes its parameters.
    Some combinations of color-map and algorithm are more appealing than
    others, so that if it seems stuck in a type of image you don't like,
   just press the spacebar and give it a fresh start. Another approach is
    to key in some preferred parameters; the display will still randomly
            change but will remain within the category selected.
                                      
      Bomb is the sort of program I like to set running when I'm doing
   something else within sight of the computer; if something interesting
    appears some tweaking will often nudge the program along a fruitful
                                  channel.
                                      
                           Obtaining the Archive
                                      
      The current version of bomb (version 1.14) can be obtained from
                  Sunsite or from the Bomb Home FTP site.
                                      
                  Is There Any Real Use For Such Programs?
                                      
   Aside from the obvious real-time entertainment value, programs such as
     bomb, cthugha, and xlockmore can serve as grist for the Gimp, the
   incredible (but difficult to learn) GNU image-processing tool. Lately
   I've been fascinated by the 0.99.9 developer's version of the Gimp. In
       this release an image can be saved as a *.pat file, which is a
     Gimp-specific image format used most often as flood-fill material.
   There is a "Patterns" window which, when invoked, shows thumbnails of
    all of the *.pat files in the Gimp pattern directory, including new
   ones you've dropped in. These are available for flood-fill if, in the
    "Tool Options" dialog, patterns rather than color has been checked.
     (Don't ask how long it took me to discover this!) Many of the bomb
     modes will produce tileable images, which makes them particularly
    useful as background fill material. The tricky aspect of this (as is
   true with any animated image generator) is capturing the screen at the
   right time. All too often the perfect image fleetingly appears (on its
           way to /dev/null) and is gone before you can save it.
                                      
     _________________________________________________________________
                                      
                       Copyright  1997, Larry Ayers
           Published in Issue 18 of the Linux Gazette, June 1997
                                      
     _________________________________________________________________
                                      
                                      
     _________________________________________________________________
                                      
           "Linux Gazette...making Linux just a little more fun!"
                                      
     _________________________________________________________________
                                      
                     E2compr Disk Compression For Linux
                                      
                               by Larry Ayers
                                      
     _________________________________________________________________
                                      
    OS/2 used to be my main operating system, and there are still a few
   OS/2 applications which I miss. One of them is Zipstream, a commercial
     product from the Australian firm Carbon Based Software. Zipstream
   enables a partition to be mirrored to another drive letter; all files
   on the mirrored virtual partition are transparently decompressed when
    accessed and recompressed when they are closed. The compression and
   decompression are background processes, executed in a separate thread
      during idle processor time. Zipstream increased the system load
   somewhat, but the benefits more than adequately compensated for this.
     I had a complete OS/2 Emacs installation which only occupied about
                        four and one-half megabytes!
                                      
   A few weeks ago I was wandering down an aleatory path of WWW links and
     came across the e2compr home page . This looked interesting: a new
    method of transparent, on-the-fly disk compression implemented as a
   kernel-level modification of the ext2 filesystem. Available from that
    page are kernel patches both for Linux 2.0.xx and 2.1.xx kernels. I
      thought it might be worth investigating so I downloaded a set of
   patches, while I thought about how I may be just a little too trusting
         of software from unknown sources halfway across the world.
                                      
   The set of patches turned out to be quite complete, even going so far
     as to add a choice to the kernel configuration dialog. As well as
       patches for source files in /usr/src/linux/fs/ext2, three new
      subdirectories are added, one for each of the three compression
   algorithms supported. The patched kernel source compiled here without
     any problems. Also available from the above web-page is a patched
    version of e2fsprogs-1.06 which is needed to take full advantage of
   e2compr. If you have already upgraded to e2fsprogs-1.07 (as I had) the
     patched executables (e2fsck, chattr, and lsattr) seem to coexist well
              with the remainder of the e2fsprogs-1.07 files.
     _________________________________________________________________
                                      
                                  Origins
                                      
   Not surprisingly, a small hard-drive was what led Antoine Dumesnil de
   Maricourt to think about finding a method of automatically compressing
     and decompressing files. He was having trouble fitting all of the
    Linux tools he needed on the 240 mb. disk of a laptop machine, which
    led to a search for Linux software which could mitigate his plight.
                                      
      He found several methods implemented for Linux, but they all had
      limitations. Either they would only work on data-files (such as
   zlibc), or only on executables (such as tcx). He did find one package,
   DouBle, which would do what he needed, but it had one unacceptable (to
   Antoine at least) characteristic. DouBle transparently compresses and
         decompresses files, but it also compresses ext2 filesystem
    administrative data, which could lead to loss of files if a damaged
            filesystem ever had to be repaired or reconstructed.
                                      
    Monsieur de Maricourt, after some study of the extended-2 filesystem
    code, ended up by writing the first versions of the e2compr patches.
     The package is currently maintained by Peter Moulder, for both the
                        2.0.x and the 2.1.x kernels.
                                      
                           Usage and Performance
                                      
    E2compr is almost too transparent. After rebooting with the patched
    kernel, the first thing I wanted to do, of course, was to compress some
      nonessential files and see what would happen. Using the modified
   chattr command, chattr +c * will set the new compression flag on every
   file in the current directory. Oddly enough, though, running ls -l on
    the directory afterwards shows the same file sizes! I found that the
    only way to tell how much disk space has been saved is to run du on
   the directory both before and after the compression attribute has been
      toggled. This is because ls reports a file's logical length, which
     compression leaves unchanged, while du counts the disk blocks actually
       allocated. If you just want to see if a file or directory has
      been compressed, running the patched lsattr on it will result in
                            something like this:


%-> lsattr libso312.so
--c---- 32 gzip9     libso312.so

    The "c" among the attribute flags shows that the file is compressed,
    "gzip9" is the compression algorithm used, and "32" is the blocksize. If a
    file hasn't been compressed the output will just be a row of dashes.
                                      
   E2compr will work recursively as well, which is nice for deeply nested
                directory hierarchies. Running the command:


%->chattr -R +c  /directory/*

         will compress everything beneath the specified directory.
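
    Since ls -l keeps reporting the original sizes, du is the tool for
    measuring the savings from a recursive run like that one. A sketch,
    with a placeholder directory:

```
%-> du -sk /usr/doc/html
%-> chattr -R +c /usr/doc/html/*
%-> du -sk /usr/doc/html
```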
                                      
         If an empty directory is compressed with chattr, all files
        subsequently written in the directory will be automatically
                                compressed.
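
    One convenient way to exploit that, sketched here with invented
    paths, is to flag a fresh directory before filling it:

```
%-> mkdir /usr/local/staroffice
%-> chattr +c /usr/local/staroffice
%-> cp -r StarOffice/* /usr/local/staroffice/
```

    Everything copied in afterwards is written to disk compressed.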
                                      
      Though the default compression algorithm is chosen during kernel
     configuration, the other two can still be specified on the command
   line. I chose gzip, only because I was familiar with it and had never
    had problems. The other two algorithms, lzrw3a and lzv1, are faster
   but don't compress quite as well. A table in the package's README file
   shows results of a series of tests comparing performance of the three
                                algorithms.
                                      
     I haven't found the delay caused by decompression of accessed files
     to be too noticeable or onerous. One disadvantage in using e2compr is
     that file fragmentation will increase somewhat; Peter Moulder (the
       current maintainer) recommends against using any sort of disk
             defragmenting utility in conjunction with e2compr.
                                      
       I have to admit that, although e2compr has caused no problems
     whatsoever for me and has freed up quite a bit of disk space, I've
   avoided compressing the most important and hard-to-replace files. The
     documentation specifically mentions the kernel image (vmlinuz) and
                    swap files as files not to compress.
                                      
    It's ideal for those software packages which might not be used very
     often but are nice to have available. An example is the StarOffice
    suite, which I every now and then attempt to figure out; handicapped
   by lack of documentation, I'm usually frustrated. I'd like to keep it
     around, as it was a long download and maybe docs will sometime be
   available. E2compr halved its size, which makes it easier to decide to
                                   keep.
                                      
       Another use of e2compr is compression of those bulky but handy
   directories full of HTML documentation which are more and more common
     these days. They don't lend themselves to file-by-file compression
     with gzip; even though Netscape will load and display gzipped HTML
   files, links to other files will no longer work with the .gz suffix on
                             all of the files.
                                      
                                  Warning!
                                      
   E2compr is still dubbed an alpha version by its maintainer, though few
      problems have been reported. I wouldn't recommend attempting to
      install it if you aren't comfortable compiling kernels and, most
                     important, reading documentation!
                                      
     _________________________________________________________________
                                      
                       Copyright  1997, Larry Ayers
           Published in Issue 18 of the Linux Gazette, June 1997
                                      
     _________________________________________________________________
                                      
              [ TABLE OF CONTENTS ] [ FRONT PAGE ] Back Next 
                                      
     _________________________________________________________________
                                      
           "Linux Gazette...making Linux just a little more fun!"
                                      
     _________________________________________________________________
                                      
                                 Xlockmore
                                      
                   By Larry Ayers, layers@vax2.rainis.net
                                      
     _________________________________________________________________
                                      
                                Introduction
                                      
   Several years ago, in the dark backward and abysm of (computing) time,
    Patrick J. Naughton collected several screen hacks and released them
     to other Unix users as a package called Xlock. A screen hack is a
    clever bit of programming which will display a changing image to the
   computer screen. People felt vaguely guilty about wasting time writing
    these little programs and gazing at the hypnotic, often geometrical
   patterns which they produced, and thus the concept of the screensaver
   was born. The rationale was that if a screen statically displayed text
      (or whatever) for a long period of time, a faint imprint of the
   display would "burn in" and would thereafter be faintly visible on the
     monitor screen. This actually did happen with early monitors, but
    modern monitors are nearly impervious to the phenomenon (i.e., it would
     take months). Nonetheless, the screensaver has survived, which is
       evidence that its appeal ranges beyond the merely prudent and
                                 practical.
                                      
    David A. Bagley has become the current maintainer of Xlock, which is
     now known as Xlockmore, due to the many new modes included in the
                                  package.
                                      
                                 Evolution
                                      
   Xlockmore can be thought of as a museum of screen hacks. The old Xlock
   modes are all still included, and some of them (at least to this jaded
   observer) aren't particularly impressive. On the other hand, there is
    a certain haiku-like charm to some of the older modes. The pyro
     mode, for example, manages to convey something of the appeal of a
     fireworks display with nothing more than parabolically arcing dots
            which explode just over the peak of the trajectory.
                                      
    Over the years as computers have become more powerful the complexity
        of the added modes has increased. Some of the newer ones are
            CPU-intensive and need a fast processor to run well.
                                      
    David Bagley must be receiving contributed modes and bugfixes quite
    often, as he releases a new version every couple of months. Some of
      the newer modes are amazing to behold and take full advantage of
                         modern graphics hardware.
                                      
                                OpenGL Modes
                                      
    I'm sure most of you have seen some of the OpenGL screensavers which
    many Win95 and NT users run. Even though many of them advertise one
      product or another, they tend to be visually compelling, with a
      three-dimensional and shaded appearance. In the latest Xlockmore
    package the option is offered to compile in several flashy new modes
                    based on the Mesa OpenGL libraries.
                                      
   Gears is an impressive Mesa mode: nicely shaded gears turning against
                 each other while the group slowly rotates.
                                      
                              Gears screenshot
     _________________________________________________________________
                                      
     The Pipes mode, displaying a self-building network of 3D pipes, is
      also OpenGL-dependent. Marcelo F. Vianna came up with this one.
       Luckily most Linux distributions these days have prebuilt Mesa
                            packages available.
                                      
                              Pipes screenshot
     _________________________________________________________________
                                      
   Ed Mackey contributed the Superquadrics mode, which displays esoteric
        mathematical solids morphing from one to another. He also is
            responsible for porting the Gears mode to Xlockmore.
     _________________________________________________________________
                                      
                             Mathematical Modes
                                      
      Jeremie Petit, a French programmer, has written one of the most
   intriguing "starfield" modes I've ever seen. It's called Bouboule, and
     if you can imagine an ellipsoidal aggregation of stars... I really
   can't describe this one well, and a screenshot wouldn't do it justice.
    Its appeal is in part due to the stately movement of the star-cloud,
    somehow reminiscent of a carnival Tilt-A-Whirl ride in slow motion.
                                      
   Another excellent mode which doesn't show well in a screenshot is Ifs.
   If you have never seen Iterated Functions Systems images (Fractint and
    Dick Oliver's Fractal Graphics program display them well) this mode
   would be a good introduction. IFS fractals seem to have two poles: at
   one extreme they are severely geometrical (Sierpinski's pyramid comes
   to mind) and at the other, organic-looking forms which resemble ferns,
      shells, and foliage predominate. The Ifs mode induces a cloud of
    particles to fluidly mutate between various of these IFS forms. The
       result (at least to my mathematically-inclined eyes) is often
                                spectacular.
                                      
      The upcoming Gimp version 1.0 will include a nicely-implemented
    plug-in called IFS-Explorer, which enables the creation of IFS forms
                         in an interactive fashion.
                                      
   Massimino Pascal, another Frenchman, wrote Ifs, and as if that wasn't
   enough, he has contributed another math-oriented mode called Strange.
   This one recruits the ubiquitous cloud of particles and convinces them
    to display mutating strange attractors. They are strange to behold,
       diaphanous sheets and ribbons of interstellar dust (or is that
     subatomic dust?) twisting and folding into marvellously intricate
                   structures which almost look familiar.
                                      
    The eminent British physicist Roger Penrose invented (discovered?) a
   peculiar method of tiling a plane in a non-repeating manner many years
    ago. The Penrose tiling (as it came to be known) was popularized by
     several articles by Martin Gardner in his Mathematical Recreations
     column, which appeared in Scientific American magazine in the late
      sixties and seventies. The tessellation or tiling is based on a
    rhombus with angles of 72 and 108 degrees. The resulting pattern at
     first glance seems symmetrical, but looking closer you will notice
     that it varies from region to region. Timo Korvola wrote the xlock
    mode, and it can render two of the several variations of the tiling.
                                      
    An aside: recently Roger Penrose noticed the Penrose tiling embossed
       into the surface of a roll of toilet paper, of all things. He
       previously had patented the pattern, thinking that it might be
      profitably implemented in a puzzle game, so now he has sued the
     manufacturer. It'll be an interesting and novel trial, I imagine.
                                      
                           Sample Penrose Window
     _________________________________________________________________
                                      
     Another mathematical mode, very spare but elegant and pleasing to
     regard, is Caleb Cullen's Lisa mode. This one displays an animated
            Lissajous loop which bends and writhes in a remarkably
   three-dimensional manner. As with so many of these modes, a still shot
                       doesn't really do it justice.
                                      
                                Lisa Window
     _________________________________________________________________
                                      
      The modes I've described are just a sampling of newer ones; the
      Xlockmore package contains many others, and more are continually
                                   added.
                                      
                               Configuration
                                      
    Xlockmore is included with most Linux distributions and tends to be
      taken for granted; the default configuration files for Fvwm and
      Afterstep (which most users use as templates for customization)
    include root-menu items for several of the older modes. I'd like to
    encourage anyone who has used Xlockmore to take the time to download
    the current version (4.02 as I write this), not only for the newer
    screensaving modes, but also because compiling it from source allows
    you to easily tailor Xlockmore to your tastes.
                                      
    Here is the procedure I follow when compiling an Xlockmore release:
      first I'll try to compile it "as is", just running the configure
    script and then compiling it. If by chance it can't find, say, your
      X11 or Xpm libs, you may have to point the Makefile in the right
                 direction by editing in the correct paths.
                                      
    If you are unfamiliar with Xlockmore, now is a good time to try out
    all of the modes. The quickest way to run through all of them is to
      run Xlock from an xterm window, with the following command line:
                                      
                    xlock -inwindow -mode [name of mode]
                                      
    A window will open up with the mode displayed. Dismiss it with a
    left-mouse-button click, press the up-arrow key to redisplay the command,
     and edit the command for the next mode. Keep track of the ones you
     would rather not keep, perhaps in a small editor window. There are
   three files which need to be edited: the Makefile, mode.c, and mode.h.
    Just edit out references to the unwanted modes (you can grep for the
    mode names to find the line numbers). Recompile, and you will have a
    smaller executable with only your selected modes included. You also
   will now be able to run xlock with the -fullrandom switch, which will
      display a random mode selected from the ones you chose to keep.
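grep can supply those line numbers directly with its -n switch. The two-line file created below is only a made-up stand-in for the real mode.c (the ModeHook declarations are hypothetical), but the command is the same one you would run in the source tree:

```shell
#!/bin/sh
# grep -n prints matching lines prefixed with their line numbers --
# exactly what you need before editing a mode's references out.
# sample-mode.c is a tiny hypothetical stand-in for the real mode.c.
cat > sample-mode.c <<'EOF'
extern ModeHook init_gears;
extern ModeHook init_pyro;
EOF
grep -n gears sample-mode.c   # prints: 1:extern ModeHook init_gears;
rm -f sample-mode.c
```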
                                      
      Something to consider -- since at this point you have a compiled
   source tree there on your hard disk, you might want to take a look at
   the source files for some of the modes. In general, the *.c files for
     the various modes are unusually well commented. If you are curious
    about the origin or author of a mode, you'll find it in the source.
       There are often parameters that can be changed, if you like to
      experiment, and some files can be altered to suit your processor
    speed. A few modes even have entire commented-out sections which can
   be uncommented and thus enabled. It may not work, but if you save the
   original xlock executable before you start fooling with the source you
    can always revert to it. An advantage of keeping a built source tree
         while experimenting is that if you modify a single C file,
   recompilation is quick as only the modified file is recompiled. After
     all, one of the oft-touted virtues of Linux (and free software in
    general) is that source is available. Why not take advantage of the
                                   fact?
                                      
                                Availability
                                      
    The source archive for Xlockmore-4.02 can be obtained from ftp.x.org
                              or from Sunsite.
                                      
     _________________________________________________________________
                                      
                       Copyright  1997, Larry Ayers
           Published in Issue 18 of the Linux Gazette, June 1997
                                      
     _________________________________________________________________
                                      
              [ TABLE OF CONTENTS ] [ FRONT PAGE ] Back Next 
                                      
     _________________________________________________________________
                                      
           "Linux Gazette...making Linux just a little more fun!"
                                      
     _________________________________________________________________
                                      
   SSC is expanding Matt Welsh's Linux Installation & Getting Started by
   adding chapters about each of the major distributions. Each chapter is
    being written by a different author in the Linux community. Here's a
        sneak preview--the Red Hat chapter by Henry Pierce.--editor
                                      
     _________________________________________________________________
                                      
                               Red Hat Linux
                                      
                  By Henry Pierce, hmp@boris.infomagic.com
                                      
     _________________________________________________________________
                                      
                                 Contents:
                                      
     * Getting Started With Red Hat
     * Obtaining Red Hat Linux
     * Planning Your Installation
     * A Note About Upgrading Red Hat Linux
     * Choosing Your Installation Method
     * Creating the Installation Floppy Kit
     * Setting Up Your Installation Media
     * Recommendations
     * Using FIPS
     * Installing Red Hat Linux
     * Walking Through the rest of the Installation
     * Understanding the LILO Prompt
     * Logging in the First Time
     * Shutting Down Linux
       
     _________________________________________________________________
                                      
    The Red Hat distribution is an ever-growing and popular commercial
    distribution from Red Hat Software, Inc. Even though it is a
    "commercial" distribution, sold under the Official Red Hat Linux label
    directly by Red Hat Software Inc., it may also be downloaded from the
    Internet or purchased from third party CD-ROM vendors (see Appendix B).
                                      
     Much of Red Hat's growing popularity is due to its Red Hat Package
    Management Technology (RPM) which not only simplifies installation,
    but software management as well. This, in fact, is one of the goals of
   the Red Hat distribution: to reduce the system administration burdens
   of obtaining, fixing and installing new packages so that Linux may be
   used to get some real work done. RPM provides software as discrete and
   logical packages. For example, the Emacs editor binary executable file
     is bundled together in a single package with the supporting files
    required for configuration of the editor and the extension of basic
                               functionality.
                                      
    The version of Red Hat described here is version 4.0/4.1, released
    October 1996/December 1996. Earlier versions of Red Hat differ somewhat
    in their installation procedures from the version described here, while
    installation of later versions should be very similar to the
    information given here. This document focuses on Intel-based
    installation of Red Hat Linux; however, many aspects of installing the
    Alpha and Sparc versions of Red Hat are similar to the Intel procedure
    outlined here.
                                      
                        Getting Started With Red Hat
                                      
   The process of installing or upgrading Red Hat Linux requires backing
   up the existing operating system, obtaining the Red Hat distribution,
      planning your installation, preparing the hard disk, making the
     appropriate installation diskettes, going through the installation
    program and, finally, rebooting your system with the newly installed
    operating system. Those who currently have Red Hat Linux 2.0 or higher
    installed may upgrade by following the same process outlined here,
    except that you should choose "UPGRADE" instead of "INSTALL" when
    prompted by the installation program.
                                      
                          Obtaining Red Hat Linux
                                      
    There are only two ways of obtaining the Red Hat Linux Distribution:
    on CD-ROM from Red Hat Software, Inc. or another 3rd party CD-ROM
    distributor, or via FTP from ftp://ftp.redhat.com/pub/redhat or any
    one of the frequently less busy Red Hat mirror sites. No matter how
    Red Hat Linux is obtained, you should read the Red Hat Errata, which
    contains a list of known problems for the release you install. You can
    obtain the current errata via http://www.redhat.com/errata or by
    sending email to errata@redhat.com. If you obtained Red Hat Linux from
    a 3rd party CD-ROM distributor (such as InfoMagic, Inc.), note that
    they often delay releasing their CD-ROM kit for 2 weeks to a month or
    more after a major new release of Linux so they can include the
    inevitable bug fixes and updates that follow on the CD-ROM, saving you
    the trouble of downloading them.
                                      
                         Planning Your Installation
                                      
    The importance of planning an installation of Linux cannot be
    overstated. The success or failure of installing or upgrading Linux is
    directly related to how well you know your hardware and understand how
    Linux should be installed on the target computer. This section
    outlines basic installation planning and considers common mistakes and
    oversights that prevent the successful installation of Linux. This is
    also true for people upgrading Red Hat Linux version 2.0 or higher to
    version 4.X. In either case, it cannot be stressed enough that you
    should back up your existing system before going further. If something
    goes wrong when you have not backed up your system and an existing
    operating system is lost, your data is lost with it. So if it is worth
    saving, back up your system before continuing. I now get off my soap
    box.
                                      
                            What Is RPM Anyway?
                                      
    Before we begin, it is worth taking a moment to discuss Red Hat
    Package Management (RPM) Technology, as it is the core of installing
    and maintaining Red Hat Linux: it simplifies the planning of an
    installation and provides Red Hat Linux's ability to upgrade from an
    older version of Red Hat Linux to a current one.
                                      
    Traditionally, software under Linux and Unix systems has been
    distributed as a series of package.tar, package.tgz, package.tar.gz,
    or package.tar.Z files. These often required the system administrator
    who installs the packages to configure the package for the target
    system, install the auxiliary and documentation files separately, and
    set up any configuration files by hand. If the package requires
    another supporting package that isn't installed, you won't know a
    package is missing until you try to use the new one. And the more
    add-on packages installed, the harder it is to keep track of them. If
    you want to remove or upgrade such a package, you have to remember
    where all the files for the package are, and remove them. And if you
    are upgrading a package and forget a pesky configuration file, the
    upgraded package may not work correctly. In summary, the traditional
    method of distributing software provides no centralized management
    system for installing or upgrading software packages, which is crucial
    to easing the administrative burdens of managing the system.
                                      
    RPM, in contrast, is designed to manage software packages by defining
    how a package is built and collecting information about the package
    and its installation process during the package's build process. This
    allows RPM to create an organized packet of data in the header of a
    package.rpm that can be added to an organized database describing
    where the package belongs, what supporting packages are required, and
    whether those required packages are installed. These are, in fact, the
    design goals of RPM: the ability to upgrade an individual component or
    the entire system without re-installing, while preserving the
    configuration files for the system/package; the ability to query the
    RPM database to find the location of files, packages or other relevant
    package information; the ability to perform package verification, to
    make sure packages are installed properly or can be installed at all;
    and keeping source packages "pristine" (providing the package author's
    original source with second-party patches kept separate) so that
    porting issues can be tracked. Because RPM does this management for
    you, you can install, upgrade, or remove a package with a single
    command line in text mode or a few clicks of the mouse in the X Window
    Package Management Tool. Simple examples of using RPM from the command
    line are:
rpm --install package.rpm     -- installs the package

rpm --upgrade package.rpm     -- upgrades the package

rpm --erase package           -- removes/erases the package
                                      
    There are many more complicated things RPM can do such as querying a
    package to find out if it is installed, what version the package is,
    or query an uninstalled package for information. In essence, it does
   almost everything a package management tool should do. And Red Hat has
                       GPL'd this innovative system.
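Those query capabilities live behind rpm's -q family of switches (the short equivalents of --query). Here is a sketch of the common forms, prefixed with echo so it is harmless to run on a machine without RPM; drop the echo to issue the queries for real, and note that the package and file names are only examples:

```shell
#!/bin/sh
# Common rpm query forms.  The echo prefix turns each line into a
# harmless illustration; remove it to run the queries for real.
echo rpm -q bash                       # is bash installed, and which version-build?
echo rpm -qi bash                      # full information on the installed package
echo rpm -qpi bash-1.14.7-2.i386.rpm   # information from an uninstalled .rpm file
echo rpm -qf /bin/bash                 # which installed package owns this file?
```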
                                      
                         Anatomy of An RPM Package
                                      
    Essentially, RPM works by maintaining a central database of installed
    packages, each package's files, and its version. A properly built
    package.rpm has all of the following characteristics: its name
    identifies the package, the version of the package, the build revision
    of the package, and the architecture the package is intended for, with
    the extension "rpm" identifying it as an rpm-based package. Take, for
    example, bash-1.14.7-1.i386.rpm. The name itself contains a lot of
    useful information: the package is "bash", the Bourne Again Shell; it
    is version 1.14.7; it is build 1 of the current version for Red Hat;
    it was built for an Intel or compatible 386 or higher CPU; and, of
    course, it is in "rpm" format. So, if you see a package called
    bash-1.14.7-2.i386.rpm, you know it is a second build of bash v1.14.7,
    probably contains fixes for problems with build 1, and is obviously
    more current. And while the internal organization of an *.rpm is
    beyond the scope of this discussion, a properly built package contains
    an executable file, the configuration files (if any), the
    documentation (at least man pages for the package), any miscellaneous
    files related to the package, a record of where the package's files
    should be installed, and a record of any required packages. Upon
    successful installation of a package.rpm, information about the
    package is registered in the RPM database. A more thorough discussion
    of RPM may be found in the RPM-HOWTO available from:
    http://www.redhat.com/support/docs/rpm/RPM-HOWTO/RPM-HOWTO.html
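The fields in such a name can be pulled apart mechanically, too. The sketch below is purely a string-handling illustration using standard shell parameter expansion; rpm itself can report the same fields for a real package file:

```shell
#!/bin/sh
# Split name-version-release.arch.rpm into its four fields using
# POSIX parameter expansion.  The file name is just the example
# discussed above; no real package is needed.
f=bash-1.14.7-2.i386.rpm

base=${f%.rpm}       # strip the .rpm extension -> bash-1.14.7-2.i386
arch=${base##*.}     # text after the last dot  -> i386
base=${base%.*}      # bash-1.14.7-2
release=${base##*-}  # text after the last dash -> 2
base=${base%-*}      # bash-1.14.7
version=${base##*-}  # 1.14.7
name=${base%-*}      # bash

echo "name=$name version=$version release=$release arch=$arch"
```

Running it prints name=bash version=1.14.7 release=2 arch=i386, matching the reading of the name given above.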
                                      
                    A Note About Upgrading Red Hat Linux
                                      
    From the discussion above, you should have the sense that RPM is a
    powerful tool, so powerful in fact, that Red Hat Linux is one of the
    few Linux and Unix distributions that can truly claim to upgrade from
    an old release to a current release. If you are planning to upgrade,
    you should know that only upgrades from version 2.0 of Red Hat Linux
    and onward are supported, due to major changes in Linux's binary
    format. Otherwise, upgrades can be performed using the same methods as
    installation: CD-ROM, NFS, FTP and a hard drive. As of Red Hat Linux
    v4.0, the upgrade option is incorporated into the Boot Diskette
    instead of being a separate program. For example, if you upgraded in
    the past from v2.1 to v3.0.3 and now want to upgrade to version 4.0,
    you will need to create the Boot Diskette (instead of looking for an
    upgrade script) just like those installing Red Hat 4.X from scratch.
    The upgrade, however, will not reformat your partitions nor delete
    your configuration files.
                                      
                             Know Your Hardware
                                      
    Given the scope and variety of hardware, it is not surprising that
    many people become confused. However, taking a little time to collect
    the following information will save much frustration, and the time
    frustration costs, when things don't install or work correctly:
     * Any existing operating systems on the target system and the hard
       drives on which they are installed.
     * Hard drive: interface type; the hard drive settings; the number of
       cylinders, heads, and sectors. The main consideration is whether
       your hard drive uses a SCSI or an IDE interface. If it is SCSI,
       you should know the SCSI ID of the drive for its settings. If it
       is an IDE drive, you should know if the drive(s) are on the
       primary or secondary IDE controller and which drives are set to
       "master" or "slave". The settings are crucial in determining
       whether LILO (LInux LOader) should be used to manage the booting
       of your operating system(s).
     * SCSI adaptor: You should know the make and model. This is useful
       in troubleshooting if you have a supported card that is not
       detected.
     * Memory: amount of installed RAM. Used to consider the amount of
       swap space.
     * Network Card: You should know the make and model.
     * CD-ROM: If you are installing from CD-ROM, you must know its make
       and model and settings as you would for a hard drive.
     * Mouse: You need to know if you have a PS/2, serial or bus mouse.
       You also need to know what protocol it uses. This is necessary for
       both the character based mouse server and for configuration of the
       X Window System (if you choose to install it).
     * Video Card: If you want to run the X Window System, you must know
       the brand and model of your card to configure the system to run X.
     * Monitor: If you want to run the X Window System, you must know
       the allowable vertical and horizontal synchronization frequencies
       for X to work.
       
     Again, taking the time to list the above information before going
    further will save you time and frustration and make the installation
    both easier and smoother. If your system didn't come with literature
    detailing the above parameters for your hardware, you should consult
    with your system vendor or the manufacturer of the equipment. Other
    useful information to have, if you are going to be on a network, is
    the set of TCP/IP networking settings for your system (check with your
    system administrator for these if you don't already know them).
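If you already have access to any running Linux system (a friend's machine, or one booted from a rescue disk), the kernel itself will report a few of these details through the /proc filesystem. Two harmless, read-only commands, assuming a 2.0-era or later kernel:

```shell
#!/bin/sh
# Read-only peeks at hardware information the kernel publishes in
# /proc.  Nothing on the system is modified.
cat /proc/cpuinfo   # processor make and model
cat /proc/meminfo   # installed RAM -- useful when sizing swap space
```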
                                      
                     Choosing Your Installation Method
                                      
    Red Hat Linux may be installed or upgraded via CD-ROM, FTP, NFS or
    from an existing hard drive partition. Neither installation nor
    upgrading is supported from floppy diskettes containing Red Hat
    packages. Which supported method you choose depends on your needs,
    available equipment, availability of Red Hat Linux, and time. For
    example, if you are a network administrator who needs to update or
    install 16 Linux boxes over the weekend, an NFS install is generally
    the most prudent way. If you have a Red Hat CD-ROM for your personal
    machine, then a CD-ROM install is in order, or a hard drive install if
    your CD-ROM drive isn't supported. If you don't have the CD-ROM and
    simply want to try Red Hat out and have a couple of hours to spare,
    then an FTP/hard drive install is a reasonable choice with a 28.8kbps
    modem or faster connection to the Internet. No matter which method you
    choose, the installation of Red Hat is similar in all cases. To begin,
    everyone needs to have the following files available and should create
    the Installation Floppy Kit described below to install Red Hat.
                                      
                    Creating the Installation Floppy Kit
                                      
       To create the Installation Floppy Kit, you need to obtain the
                                 following:
    1. The Red Hat Boot diskette, boot.img which is available via:
       ftp://ftp.redhat.com/pub/redhat/current/i386/images/boot.img or in
       the
\images
   directory on a properly laid out Red Hat CD-ROM. Obviously, this is
       required for all installation methodologies.
    2. The Red Hat Supplemental Diskette, supp.img, which is available
       via: ftp://ftp.redhat.com/pub/redhat/current/i386/images/supp.img
       or in the
\images
    directory on a properly laid out Red Hat CD-ROM. This diskette is
        required if your method of install is not CD-ROM based, or if
        you need PCMCIA support for any devices (such as a CD-ROM on a
        laptop) to install properly. This diskette can also be used with
        the Boot Diskette as an emergency start disk for an installed
        system.
     3. The program RAWRITE.EXE which is available via:
        ftp://ftp.redhat.com/pub/redhat/current/i386/dosutils/rawrite.exe
        or in the
\DOS
    directory on a properly laid out Red Hat CD-ROM. This program is run
        from an existing DOS or Windows 95 system to create usable
        diskettes from the boot.img and supp.img files described above.
        If you have an existing Linux/Unix system, the
dd
    command can be used instead. This is described later in the document.
     4. DOS and Windows 95 users installing Red Hat Linux for the first
        time on a machine that will have Linux installed as a second
        operating system should also obtain:
        ftp://ftp.redhat.com/pub/redhat/dos/fips11.zip and unzip it into
C:\FIPS
    if you need to free space on your hard drive. This utility can
        non-destructively shrink an existing DOS 16-bit FAT partition
        (please see Using FIPS for compatibility notes). The archive will
        unpack into the program files FIPS.EXE and RESTORB.EXE, which are
        to be placed on the emergency boot disk made below. You should
        also read FIPS.DOC (part of the package fips11.zip) for
        information on using FIPS not covered in this document.
     5. An Emergency Boot Diskette must be created for the existing
        operating system on the target machine if Linux will be installed
        as a second operating system. This diskette should contain basic
        tools for troubleshooting. For example, a DOS or Windows 95
        emergency boot diskette should include a copy of FDISK.EXE,
        SCANDISK.EXE (or CHKDSK.EXE), DEFRAG.EXE and RESTORB.EXE as a
        minimum. This diskette is also used to back up an existing
        partition table for those who will use FIPS.EXE to
        non-destructively shrink existing partitions. By backing up the
        partition table, you can restore it with RESTORB.EXE if the need
        arises.
       
                Creating the Boot and Supplemental Diskettes
                                      
    A note about creating the Boot and Supplemental Diskettes: if you are
                re-formatting existing diskettes, DO NOT use
format /s A:

                      to format the diskettes; just use
format A:

    . The diskette images need the entire capacity of the diskette, and
                                    the
/s

    switch seems to prevent the diskette images from being properly copied
    to the floppies. For the emergency diskette below, you will of course
                          want to use the /s switch.
                                      
     One blank DOS formatted floppy is needed to create the Boot Diskette
     and one blank DOS formatted diskette is needed for the Supplemental
     Diskette. This diskette set is used for both installing and upgrading
     Red Hat Linux. Starting with Red Hat 4.0, a "one boot diskette fits
    all" strategy is employed to install or upgrade Red Hat Linux from the
    CD-ROM, FTP, NFS or Hard Drive medium. Other distributions (and older
    RHS distributions) require you to match a boot image to your hardware;
    RHS v4.0 and higher do not. The Boot Diskette is made from the file
                      "boot.img" and is located in the
\images

         directory on the Red Hat CD-ROM or can be downloaded from:
   ftp://ftp.redhat.com/pub/redhat/current/i386/images/boot.img or one of
   Red Hat's mirror sites. If you are installing to a laptop with PCMCIA
   hardware, or from a Hard Drive, NFS or FTP you will need to create the
    Supplemental Diskette made from the file "supp.img" which is located
                                   in the
\images

         directory on the Red Hat CD-ROM or can be downloaded from:
        ftp://ftp.redhat.com/pub/redhat/current/i386/images/supp.img
                     or one of Red Hat's mirror sites.
                                      
     The Boot Diskette image contains the bootable kernel and the module
     support for most combinations of hardware, and the Supplemental
    Diskette contains additional tools for non CD-ROM installs. You should
    make the Supplemental Diskette even if you are installing from CD-ROM
      because the Boot and Supplemental Diskettes can be used as an
    emergency boot system if something should go wrong with the install,
    or with your system after it is installed, and allow you to examine
                                 the system.
                                      
     NOTE: some will notice that boot.img and supp.img are 1.47MB, which
      is larger than 1.44MB. Remember that the unformatted capacity of a
     1.44MB diskette is really 1.47MB and that boot.img and supp.img are
      exact byte-for-byte images of a floppy diskette. They will fit,
                        using one of the tools below:
                                      
        Using RAWRITE to Create the Boot and Supplemental Diskettes
                                      
                                 The utility
RAWRITE.EXE

       may be used from DOS, Windows 95 or OS/2 to create the Boot and
                           Supplemental Diskettes.
RAWRITE

                             can be found in the
\DOSUTILS

       directory on the Red Hat CD-ROM, or it can be downloaded from:
     ftp://ftp.redhat.com/pub/redhat/current/i386/dosutils/rawrite.exe or
        one of Red Hat's mirror sites. Once you have obtained it, copy
RAWRITE.EXE

                                    to the
C:\DOS

                                      or
C:\WINDOWS

      directory (or another system directory in the command path), which
                                will place the
RAWRITE

    utility in your command path. For example, from the CD-ROM (presuming
                             it is the D: drive):
D:\DOSUTILS> copy RAWRITE.EXE C:\WINDOWS

         Once rawrite has been copied to a system directory (such as
C:\DOS

                                      or
C:\WINDOWS

    ), change to the images directory on the CD-ROM or to the directory
     you copied boot.img and supp.img to, and do the following to create
                              the Boot Diskette:
C:\> D:
D:\> cd \images
D:\images> rawrite
Enter disk image source file name: boot.img
Enter target diskette drive: a:
Please insert a formatted disk into drive A: and press -Enter-:

    Once rawrite is done creating the Boot Diskette, remove the diskette
   from the floppy drive and label it "Red Hat Boot Diskette". Remember,
     Red Hat Linux 4.X uses a "one boot disk fits all" strategy so you
    don't have to worry about matching a boot image to your hardware as
                 earlier distributions of Red Hat required.
                                      
   To create the Supplemental Diskette, follow the instructions above but
   substitute "supp.img" for "boot.img". Remember to label this diskette
                      "Red Hat Supplemental Diskette".
                                      
                        Using dd Under Linux or Unix
                                      
       If you are creating the Boot and Supplemental Diskettes from an
     existing Linux or Unix box, make sure it has a 1.44MB 3.5" floppy
     drive available and you know how your system refers to the floppy
    device. If you don't know how the system accesses the floppy device,
     ask your system administrator. For Linux, Floppy Drive A: is called
    /dev/fd0 and Floppy Drive B: is called /dev/fd1. To create the
      diskettes under Linux, `cd` to the system directory containing the
     boot.img and supp.img image files, insert a blank formatted diskette
                           and type the following:
dd if=boot.img of=/dev/fd0

     to make the Boot Diskette. Once dd is done, remove the diskette from
     the floppy drive, label it "Red Hat Boot Diskette" and set it aside.
               Then insert a second formatted diskette and type
dd if=supp.img of=/dev/fd0

    . Once dd is done, remove the diskette from the floppy drive, label it
              "Red Hat Supplemental Diskette" and set it aside.
                                      
                    Creating an Emergency Boot Diskette
                                      
        If you are installing Linux to a machine that has an existing
      operating system, make sure you create an emergency start diskette
      with useful diagnostic and recovery tools. Exactly how you create
      such a diskette varies from operating system to operating system.
     However, MS-DOS 6.X and Windows 95 will be covered here and should
              give you some ideas for other operating systems.
                                      
      Windows 95 users should press "Start---Settings---Control Panel---
       Add/Remove Programs" and select the "Startup Disk" tab. Insert a
    blank, DOS formatted disk and press "Create Disk". When Windows 95 is
     done, you will have a boot diskette for Windows 95 containing useful
        tools such as FDISK.EXE, SCANDISK.EXE and DEFRAG.EXE. Once the
                    diskette is created, you need to copy
C:\FIPS\RESTORB.EXE

      (obtained and unpacked above) to the Windows 95 Boot Diskette you
    made. When you are done, remove the diskette and label it "Windows 95
            Emergency Boot Diskette and Partition Table Back Up".
                                      
   MS-DOS 6.X users need to place a blank MS-DOS formatted diskette into
    floppy drive A: and do the following to create their emergency boot
                                 diskette:
C:\> format A:
C:\> copy C:\DOS\FDISK.EXE A:\
C:\> copy C:\DOS\SCANDISK.EXE A:\
C:\> copy C:\DOS\DEFRAG.EXE A:\
C:\> copy C:\DOS\SYS.COM A:\
C:\> copy C:\FIPS\RESTORB.EXE A:\

     Once you are done creating the diskette, remove it from the floppy
     drive and label it "MS-DOS Emergency Boot disk and Partition Table
                                 Back Up".
                                      
                         You are ready to continue!
                                      
                     Setting Up Your Installation Media
                                      
    Once you have created the Installation Floppy Kit, you should ensure
       your installation method is properly set up for using the Red Hat
        installation diskettes. For CD-ROM, NFS, FTP and Hard Drive
          installation methods, the medium must have the directory
\RedHat

                  on the "top level" with the directories
\base

                                    and
\RPMS

                                underneath:
RedHat
   |----> \RPMS (contains the binary .rpm packages to be installed)
   |----> \base (contains a base system and files for setting up the hard drive)

     CD-ROMs will, of course, have additional directories, but the key
                 directories needed for the installation are
\RedHat

                    on the top level of the CD-ROM with
\base

                                    and
\RPMS

     underneath on third party CD-ROMs. Obviously, Red Hat Software will
       ensure their Official Red Hat Linux CD-ROM has the proper
    directory structure. So, if you are installing from CD-ROM, you may go
      to Preparing Your System for Installation. For the other types of
      installs, read the section appropriate for your installation
                                   medium:
                                      
                     Setting Up for an NFS Installation
                                      
     For NFS installs, you will either need a Red Hat CD-ROM on a machine
        (such as an existing Linux box) that can support and export an
    ISO-9660 file system with Rockridge Extensions, or you need to mirror
    one of the Red Hat distributions with the directory tree organized as
     indicated above and, of course, the proper files in each directory.
                               The directory
\RedHat

     then needs to be exported to the appropriate machines on the network
      that are to have Red Hat Linux installed or upgraded. This machine
     must be on an Ethernet; you cannot do an NFS install via a dialup
                                    link.
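As a sketch of what the export might look like on a Linux NFS server (the mount point and the host pattern below are assumptions for illustration, not from the original text), a line in /etc/exports could read:

```
# /etc/exports on the NFS server -- hypothetical path and hosts
/mnt/cdrom/RedHat   *.example.com(ro)
```

After editing /etc/exports you need to re-export (for example by restarting the NFS daemons); consult your server's exports documentation for the exact procedure on your system.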
                                      
                  Setting Up For a Hard Drive Installation
                                      
                    Hard Drive installs need to have the
\RedHat

   directory created relative to the root directory of the partition (it
       doesn't matter which partition) that will contain the Red Hat
   distribution obtained either from CD-ROM or an FTP site. For example,
                  on the primary DOS partition the path to
\RedHat

                                 should be
C:\RedHat

       . On a DOS 16-bit FAT file system, it does not matter that the
package.rpm

            names get truncated. All you need to do is make sure
\RedHat\base

           contains the base files from a CD-ROM or FTP site and
\RedHat\RPMS

                              contains all the
package.rpm

     files from the CD-ROM or FTP site. Then you can install or upgrade
     from that partition. If you have an existing Linux partition not
    needed for an installation or upgrade, you can set it up as outlined
                            here as well and use it.
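On an existing Linux system, laying out such a tree can be sketched as follows. The helper name and the /mnt/dos and /cdrom paths are assumptions; substitute the mount point of your own partition and CD-ROM.

```shell
# Sketch: create the RedHat tree on the partition that will hold
# the distribution.
make_redhat_tree() {
    # $1 = mount point of the partition that will hold the tree
    mkdir -p "$1/RedHat/base" "$1/RedHat/RPMS"
}

# Hypothetical usage -- /mnt/dos and /cdrom are example mount points:
#   make_redhat_tree /mnt/dos
#   cp /cdrom/RedHat/base/*      /mnt/dos/RedHat/base/
#   cp /cdrom/RedHat/RPMS/*.rpm  /mnt/dos/RedHat/RPMS/
```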
                                      
     TIP: NFS and Hard Drive installs can provide more flexibility in the
     packages available to install. NFS and Hard Drive installs/upgrades
     imply that you can be selective about which packages are placed in
    the RPMS directory. For example, if you only want a text based system,
    then the X-based packages may be excluded. Also, if there are updates
    for the Red Hat system you wish to install, they may be placed in the
     RPMS directory in place of the distribution's original packages. The
     only caveat for customizing the available packages for installing or
     upgrading Red Hat Linux is that package dependencies must be met.
     That is, if package A needs package B to be installed, both packages
     must be present to meet the interdependencies. This may, however,
     take a little experimenting to ensure all package dependencies are
     met. For more information, please see "Customizing Your NFS or Hard
                       Drive Installation" below.
                                      
                             FTP Installations
                                      
    For FTP installs over the Internet, all you need is the IP address of
     your nearest FTP server and the root directory path for the Red Hat
     Linux system you wish to install. If you don't know the nearest FTP
     site, consult with your system administrator or your ISP. If you are
       intending to do an FTP install over a low bandwidth connection
       (defined as anything slower than a 128K ISDN link), it is highly
    recommended that you FTP the files to a hard drive with an existing
     DOS partition and then do the hard drive install described in this
      chapter. The total size of the binary packages available in the
/RedHat/RPMS

     directory is currently around 170MB, which will take many hours to
     install. If anything goes wrong with the installation, such as the
     link going down, you will have to start over again. If you FTP the
     files first and set up your hard drive for installing Linux, it is
    then less work and less frustrating to recover from a failed install.
                You don't even have to download all the files in
/RedHat/RPMS

       to successfully install a minimal system that can grow with your
    needs. Please see Customizing Your NFS or Hard Drive Installation for
                                   details.
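To see why a direct FTP install over a modem is painful, here is a rough back-of-the-envelope calculation. The 170MB figure is from the text above; the line speed is idealized and protocol overhead and retransmissions are ignored, so real transfers take longer.

```shell
# Idealized transfer time for ~170MB over a 28.8kbps modem.
mb=170
bps=28800
secs=$(( mb * 1024 * 1024 * 8 / bps ))
echo "about $(( secs / 3600 )) hours"
```

Roughly half a day of continuous connect time, which is why downloading to a local partition first (and resuming only the failed files) is the safer route.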
                                      
              Customizing Your NFS or Hard Drive Installation
                                      
        One of the interesting things you can do with Red Hat Linux is
     customize the install process. However, this is not for the faint of
      heart. Only those already familiar with Red Hat Linux or Linux in
     general should attempt customizing the install. As of Red Hat v4.X,
                                     the
/RedHat/RPMS

    directory contains approximately 170MB of rpm files. RPM compresses
    these packages, and you can assume a package will need an average of
              2-3MB of hard drive space for every 1MB of
package .rpm

               available for installation. For example, if the
package .rpm

      is 6MB in size, you will need between 12 and 18MB of free space to
      install the package. If you know what software you want and don't
        want, much of the software provided will have no value for the
     installation, and for low bandwidth connections, it is not feasible
    to download the entire tree. With this in mind, an installation can be
                   customized to remove unwanted software.
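The 2-3x rule of thumb above can be turned into a quick estimate. The helper name below is illustrative, not part of the installer.

```shell
# Estimate installed size from the 2-3x rule of thumb described above.
estimate_space() {
    # $1 = size of the package .rpm in MB
    echo "a ${1}MB package needs roughly $(( $1 * 2 ))-$(( $1 * 3 ))MB installed"
}
```

For the 6MB example in the text, estimate_space 6 reports roughly 12-18MB, matching the figure given above.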
                                      
     Customizing the packages to install is an advantage and is possible
      for the following types of installs: FTP, NFS and Hard Drive.
     CD-ROMs cannot be written to (but you can copy the files to the hard
     drive and do a hard drive install with the customized package list).
    FTP and NFS installs can only be customized if you have administrator
     access to the server(s) on your network or your system administrator
      is willing to work with you. The following installation situations
     make customizing the installation desirable: obtaining Red Hat Linux
       via FTP over a low bandwidth connection, or designing a suite of
     software to be used by all installations on a network of Red Hat
                                Linux boxes.
                                      
              To customize the installation, you must obtain the
/base/comps

        file, which will provide you with the list of packages a full
       install would normally have. Then the packages you actually want
                              to install from
/base/comps

                      need to be downloaded. Then, the
/base/comps

    file needs to be edited to reflect the packages you obtained and are
    going to install. (NOTE: if you have local package .rpms, you can add
                       them to the comps file as well.)
                                      
                        Understanding the COMPS file
                                      
               The Red Hat installation program uses the file
/RedHat/base/comps

   (the file here is an example from RHS v4.0) to determine what packages
                            are available in the
/RedHat/RPMS

   directory for each category to be installed. The file is organized by
       category and each category contains a list of packages Red Hat
     believes are the minimum required for that section. NOTE: only the
package

                           part of a package's name
package-version-build.rpm

    is listed in the file. This means the comps file is generally usable
    from one version of Red Hat to the next. A section in this file has
                               the structure:
number category
package
...
end

    That is: a number identifying the category, the category name, a list
    of the package names in the category, and the tag "end" to mark the
                            end of the category.
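A hypothetical fragment in that format might read as follows; the leading number and the package names here are illustrative, not a verbatim excerpt from a real comps file:

```
1 Base
setup
filesystem
basesystem
bash
end
```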
                                      
       Without exception, everyone needs all of the software packages
     listed in the Base section of the file. The other sections, though,
     generally can be customized or eliminated to suit a particular need.
      For example, there are three types of Networked Stations: "plain",
     Management, and Dialup. An examination of these sections shows that
    many of the software packages are listed in all three categories, but
       some software packages are specific to the category. If you are
    creating a Dialup Networked Station, then you can safely eliminate the
      "Plain" and "Management" sections and any software unique to those
     categories. Conversely, if you only need basic networking capability
      for a networked workstation, the other sections can be eliminated
       from the file, as well as the software unique to each of those
      sections. All you need do is make sure you have all the software
       packages listed in that category. If you have some local custom
    packages (those not provided by Red Hat Software), you should add them
    to an existing category that is appropriate rather than creating a new
                                  category.
                                      
    Because the list of packages in each category only contains the name
                    of the package, i.e., not the entire
package-name-version-build.rpm

     , you can substitute any updates Red Hat has made available in the
updates

       directory on: ftp://ftp.redhat.com/pub/redhat/current/updates
   or one of Red Hat's mirror sites for the original package found in the
                          distribution's original
/RedHat/RPMS

      directory. This means the installation program is relatively
     version insensitive. The only caveat here is that package
     dependencies must be met. When an rpm'd package is built, RPM itself
     tries to determine what packages must be installed for another
     package to work (the rpm developer also has direct control of this
     as well---he can add dependencies that rpm might not ordinarily
     detect). This is where a little experimentation or research may be
     needed. For example, one way to determine package dependencies (if
     you have user access to your NFS server on an existing Red Hat Linux
     box) is to telnet or log in to it, or if you have the CD-ROM, mount
                             it and cd to the
RedHat/RPMS

           directory and query a package for its dependencies:
[root@happy RPMS] rpm -q -p -R bash-1.14.7-1.i386.rpm
libc.so.5
libtermcap.so.2

       The "-q" puts RPM in query mode, the "-p" tells RPM to query an
        uninstalled package and the "-R" tells RPM to list the target
    package's required dependencies. In this example, we see libc.so.5 and
     libtermcap.so.2 are required. Since libc and termcap are part of the
     base of required software (as is bash, really), you must ensure the
    libc and libtermcap packages (the dependency packages) are present to
    be able to install bash (the target). Overall, as long as you get all
     of the Base packages installed, you will be able to boot the system
       when the Installation Program completes. This means you can add
     additional packages to Red Hat as required, even if the Installation
    Program reports a package failed to install because dependencies were
     not met. The following table describes the categories of software
                                  found in
/base/comps

                               of Red Hat v4.0:
                                      
  Category                        Required?           Comments
  Base                            Yes                 Should not be customized.
  C Development                   Highly Recommended  Needed as the minimal
                                                      system to compile a kernel
  Development Libs                Highly Recommended  Needed as the minimal
                                                      system to compile a kernel
  C++ Development                 Optional            C++ Development
  Networked Workstation           Recommended;        Whether you are on an
                                  required for other  Ethernet or going to use
                                  network software    dialup networking, you
                                                      need this package suite.
                                                      You shouldn't customize
                                                      this.
  Anonymous FTP/Gopher Server     Optional            If your Linux box is
                                                      going to serve files via
                                                      FTP or Gopher
  Web Server                      Optional            Useful for local web
                                                      development; required if
                                                      you serve web pages.
  Network Management Workstation  Optional            Additional tools useful
                                                      for dialup as well as
                                                      Ethernet networks
  Dialup Workstation              Recommended         Required if you are going
                                                      to use dialup networking
  Game Machine                    Optional            Need I say more? Fortunes
                                                      are required for humor :-)
  Multimedia Machine              Optional            If you have supported
                                                      hardware
  X Window System                 Optional            If you want to run X
  X Multimedia Support            Optional            If you have supported
                                                      hardware
  TeX Document Formatting         Optional            Customize as needed
  Emacs                           Recommended         The One True Editing
                                                      Environment
  Emacs with X                    Recommended         Requires X
  DOS/Windows Connectivity        Optional            Huh?
  Extra Documentation             Required            Man pages should ALWAYS
                                                      be installed; other
                                                      packages optional.
                                      
                              Recommendations
                                      
     It is difficult to determine exactly what any one installation will
       require. However, someone installing via FTP should get the Base
     system and the Dialup Networked Station packages and install these.
     Then additional software can be obtained and added as the need
     arises. Of course, if you want to do C programming, you should get
        the relevant packages and edit the comps file appropriately.
                                      
      One last caveat: if you encounter a package during the install that
       requires another package you don't have available, or you make a
     mistake in the comps file, you can generally finish the install and
        have a bootable working system. You can correct the problem by
      manually adding the failed packages and their dependencies later.
      Overall, get the entire Base system and the Networked Station
     packages installed and you can add anything you need or want later.
                                      
                      Preparing Your System to Install
                                      
   Before continuing, if you have an existing operating system, and have
    not yet backed up your data, you must back it up now. While most of
     the time installing Linux will not result in the loss of data, the
   possibility exists, and the only way to guarantee a recovery in such a
                catastrophic event is to back up your data.
                                      
      At this point, with the information collected above and an
    installation method decided upon, preparing your system should offer
       no obstacles. Essentially, you need to make sure you have free and
    unpartitioned space on one of the system's hard drives. (NOTE: there
     is a file system type known as UMSDOS that some distributions use as
     an optional way to install Linux onto an existing DOS file system;
     Red Hat Linux does not support this type of installation.) If you
      are installing on a system that will only have Linux and does not
      currently have an operating system installed, then you are set to
    partition your hard drive and can go to the next section. If you have
      an existing operating system, such as DOS/Windows 3.1, Windows 95,
    OS/2 or another operating system, then things are a bit more complex.
     The following should help determine what you need to do to free hard
                                 drive space:
      * DOS or Windows 95 using DOS 16-bit FAT: You may use the utility
        FIPS.EXE, part of the Installation Floppy Kit described above,
        which will allow you to non-destructively split a single DOS
        16-bit file allocation table (FAT) partition into two or more
        DOS 16-bit FAT partitions. The new, empty partitions can be
        deleted, creating free space to be used for Linux partitions.
        See FIPS.EXE below. If you have a CD-ROM containing Red Hat,
        there should be a directory called
\dosutils
   containing a copy of FIPS.EXE. Otherwise, the FIPS package can be
       downloaded from: ftp://ftp.redhat.com/pub/redhat/dos/fips11.zip
       or one of Red Hat's mirror sites.
       NOTE: Microsoft has introduced a new 32-bit FAT system with recent
       Windows 95 releases. This 32-bit FAT system cannot be shrunk by
       the current version of FIPS.EXE. In Windows 95, if you check under
       My Computer | Control Panel | System and your Windows 95 kernel
       version ends in a "B", Windows 95 is likely to be using a 32-bit
       FAT.
     * OS/2, Windows NT, DOS 32-bit FAT and Other Users: You will need to
       either back up existing partitions and delete them, or if using a
       single partition, delete the partition and re-install the
       operating system into a smaller partition, leaving free space to
       be used for Linux partitions.
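For those backing up the partition table before repartitioning, the Linux-side equivalent of what RESTORB saves can be sketched as follows. The helper name and /dev/hda are assumptions (under Linux, /dev/hda is the first IDE disk); RESTORB itself is the DOS tool shipped with FIPS.

```shell
# Sketch: copy the first sector of a disk, which holds the master
# boot record and the partition table, to a backup file.
backup_mbr() {
    # $1 = disk device (e.g. /dev/hda), $2 = backup file
    dd if="$1" of="$2" bs=512 count=1 2>/dev/null
}
# Hypothetical usage:  backup_mbr /dev/hda mbr-backup.bin
```

Keep the backup file on a diskette, not on the disk whose table it describes.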
       
                   Planning to Partition the Hard Drive
                                      
                  Linux has its own version of the program
fdisk

    used to create native Linux and swap partitions. The details of its
    use are described later in this guide. However, the concepts of how
    to partition your hard drive are important now, so reasonable
    decisions can be made on how much free space to make available on
                       the target system, and how.
                                      
      One way of installing Linux is to use two partitions---one for the
     operating system and one for the swap file in the free space on your
         hard disk. However, this is not an ideal way for Linux to be
    installed. While some hardware configurations may only allow this type
       of organization, the recommended method is to use a minimum of
                      four partitions for Linux: one for
/

                      (the "root" partition), one for
/var

                                 , one for
/home

   and one for swap. Unlike logical DOS drives which are assigned a drive
       letter, Linux partitions are "glued" together into one virtual
    directory tree. This scheme takes advantage of how Linux operates in
    the real world. Essentially, each file system reflects the life time
                        of a file: the files on the
/

          partition have the longest "time to live" because they are
     infrequently updated and often last as long as the operating system
                       itself does on the hardware; the
/home

    partition represents medium file life times that can be measured in
                   weeks or days, such as user documents;
/var

     represents files with the shortest life time (such as log files),
   measured in minutes or even seconds. This type of setup also suggests
     a backup strategy: the root file system only needs to be backed up
    when a new program is added or configuration files are changed. The
/home

       partition can be put on some sensible full/incremental back up
   schedule while /var never needs to be backed up, with the exception of
/var/spool/mail

        . A more thorough discussion of this can be found in Kristian
     Koehntopp's Partition mini-HOWTO and Stein Gjoen's Multiple Disks
                             Layout mini-HOWTO.
                                      
    A PC can have either a maximum of four primary partitions, or three
    primary partitions and one extended partition which can contain many
    "logical" drives. One model for understanding this is Russian
    stacking dolls: containers within containers, but each container is
    a discrete doll. A partition describes a container within the master
    container of the hard drive, and an operating system does not leave
    the confines of its container. A normal PC hard drive can have up to
    four primary containers (primary partitions), or three primary
    containers and one extended container (extended partition) that
    holds logical containers (logical drives/partitions). This means you
    can have one primary
     partition for DOS/Windows, one primary partition for the root file
    system, one primary partition for a swap partition, and one Extended
                partition containing one logical drive for
/var

                         and one logical drive for
/home

    (as well as other "optionally" defined partitions). However, Linux
    can have, and it is often prudent to have, more partitions than
    those outlined here. Due to some design limitations in the typical
    PC BIOS, there are restrictions on how partitions can be set up and
    still be boot partitions.
                                      
    Overall, when the PC was originally designed 15 years ago, IBM's
    designers didn't think a PC would ever have a 1 GIG drive. As a
    result, a PC BIOS is limited to a 10-bit value for describing part
    of a hard drive's initial geometry: the cylinder count, one of the
    values used in calculating the location of a piece of data on a
    hard disk. A 10-bit number is sufficient to describe the numbers 0
    through 1023 in decimal notation. A drive with 1024 cylinders, 16
    heads and 63 sectors per track is approximately 504MB. This is
    important for two primary reasons: most boot loaders have to depend
    on the BIOS to get a drive's initial geometry when calculating the
    beginning of a partition, and the average drive size on the market
    these days is 1.2 GIG, which contains 2,000+ cylinders. Luckily,
    most newer systems (usually those with a BIOS designed in 1994 or
    later) have a BIOS that supports Logical Block Addressing (LBA).
    LBA mode is a means of supporting large hard drives by halving (or
    quartering) the number of cylinders and doubling (or quadrupling)
    the number of heads. This allows for the proper calculation of
    drive geometry while working within the constraints of the BIOS. So
    a drive with 2048 cylinders, 16 heads and 63 sectors per track
    will, under LBA mode, appear to have 1024 cylinders, 32 heads, and
    63 sectors per track. Now, we can potentially use any primary
    partition as a boot partition.
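   The geometry arithmetic above can be checked with a few lines of
   Python (the function name is ours; 512 bytes per sector is the
   standard PC sector size):

```python
def chs_capacity_mb(cylinders, heads, sectors):
    """Capacity in MB (1 MB = 2**20 bytes) for a CHS geometry."""
    return cylinders * heads * sectors * 512 / 2**20

# The BIOS 10-bit cylinder limit: 1024 x 16 x 63 is 504MB.
print(round(chs_capacity_mb(1024, 16, 63)))   # 504

# LBA translation halves the cylinders and doubles the heads, so
# the capacity stays the same but every cylinder number is <1024.
native = chs_capacity_mb(2048, 16, 63)
lba    = chs_capacity_mb(1024, 32, 63)
print(native == lba)                          # True
```

   The same doubling trick extends to quadrupling the heads for even
   larger drives, as the text notes.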
                                      
    Now, with all this theory and practical advice, it is time to
    provide some examples of how it can be put together; the first
    example is an 850MB drive with LBA mode enabled, which might be
    divided:

Partition       File System Type        Use         Size
/dev/hda1       MS-DOS                  DOS/Win95   400MB
/dev/hda2       Linux Native (ext2)     /           325MB
/dev/hda3       Linux Swap              Swap         32MB
/dev/hda4       Extended                N/A          93MB
/dev/hda5       Linux Native (ext2)     /var         40MB
/dev/hda6       Linux Native (ext2)     /home        53MB

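   As a quick sanity check, the sizes in the table can be added up in
   Python (the dictionaries below are just the table restated, not
   anything the installer produces):

```python
# The example 850MB layout from the table above.
primaries = {
    "/dev/hda1 (DOS/Win95)": 400,
    "/dev/hda2 (/)":         325,
    "/dev/hda3 (swap)":       32,
    "/dev/hda4 (extended)":   93,   # container for hda5 and hda6
}
logicals = {
    "/dev/hda5 (/var)":  40,
    "/dev/hda6 (/home)": 53,
}

# The primary and extended entries account for the whole drive,
# and the logical drives exactly fill the extended partition.
print(sum(primaries.values()))   # 850
print(sum(logicals.values()))    # 93
```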
    This table might be useful for a machine used by a single person.
    There are a couple of things to note here. First, the labeling of
    partitions by Linux:
/dev

    is the Linux directory where "device files" are kept (these are not
    device drivers, though they are related to them); user programs use
    these files to identify devices. The next part,
hda

   , means "hard disk A" used to designate "Fixed Disk 1" as it is called
     under DOS. But it also means that the drive is an IDE drive. SCSI
                              drives would use
sda

                      for "SCSI Disk A. The whole line
/dev/hda1

    means the 1st partition on hard disk A. As for the sizes being
    used here, they are a little arbitrary, but follow these
    guidelines: a decision was made to use half of the drive for DOS
    or Windows 95 and roughly half for Linux. So, 400MB was allocated
    for DOS, on the presumption that this is enough for those needs.
                                    The
/

    root file system is 325MB, which is enough for the base Linux
    system (usually about 50MB), programming tools such as C, C++,
    perl, python and editors such as vi and EMACS, as well as the X
    Window System and some additional space for extra useful packages
    you might find in the future. If you do not plan to run X, you can
    subtract 150MB from this total. The swap partition size is
    determined by multiplying the physical RAM installed in our
    virtual machine by two (it has 16MB of RAM installed). If you are
    tight on space or have less than 16MB of RAM, you should have at
    least a 16MB swap partition. In any case, you must have a swap
    partition defined. 40MB is used for
/var

    which includes enough space for log files and email handling for
    one or two people, and 53MB for
/home

           provides plenty of space for a user or two to work in.
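   The swap-sizing rule described above can be sketched as a small
   Python function (the function name and flag are ours, not part of
   any Red Hat tool):

```python
def swap_size_mb(ram_mb, tight_on_space=False):
    """Recommended swap size: 2 x RAM with a 16MB floor; when
    space is tight and at least 16MB RAM is installed, 1 x RAM
    is the minimum recommended."""
    if tight_on_space and ram_mb >= 16:
        return ram_mb
    return max(2 * ram_mb, 16)

print(swap_size_mb(16))   # 32, as on the example machine
print(swap_size_mb(8))    # 16, the floor for small-RAM systems
```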
                                      
                     How Much Space Do You Really Need?
                                      
    By now, an installation method has been chosen and an approach to
    partitioning for Linux has been discussed. But how much space do
    you really need? The answer is: "It depends." To decide how much
    space is needed, review the goal(s) you are installing Linux for,
    because they have a direct bearing on the space needed to meet
    them. If you install everything, you will need about 550MB for all
    the binary packages and supporting files. This does not include
    swap space or space for your own files. When these are factored
    in, a minimum of 650MB or more is needed. If your goal is more
    modest, such as having a text-only system with the C compiler, the
    kernel source tree, EMACS, and basic Internet dialup support, then
    125 to 150MB of hard drive space is sufficient. If your plans are
    more demanding, such as having a web development platform with X,
    then the 450MB or so described in the model above should be
    enough. If you are planning to start an ISP or commercial web
    site, then 2 or more GIGs of hard drive space may be needed,
    depending on the scope of services being offered. The overall rule
    of thumb: having too much real estate is a good thing; not having
    enough is bad. To help you decide how much space is enough, here
    are some basic formulas/values for different needs:
                                      
Use of Partition     Recommended Size    Comments
Swap                 2 x Physical RAM    If less than 16MB of RAM is
                                         installed, 16MB is a must. If
                                         space is tight and 16MB RAM is
                                         installed, 1 x Physical RAM is
                                         the minimum recommended.
Root system, no X    100 - 200MB         Depends on tools such as
                                         compilers, etc., needed
Root system, with X  250 - 350MB         Depends on tools such as
                                         compilers, etc., needed
/home                5MB - Infinite      Depends on single or multiple
                                         users and their needs
/var                 5MB - Infinite      Depends on news feeds, # of
                                         users, etc.
/usr/local           25 - 200MB          Used for programs not in RPM
                                         format or to be kept separate
                                         from the rest of Red Hat
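   Under the assumptions of the table above, a rough disk budget can
   be sketched in Python (the function, its defaults, and the range
   midpoints are ours, for illustration only):

```python
def disk_budget_mb(ram_mb, with_x=True, home_mb=50, var_mb=25,
                   usr_local_mb=0):
    """Rough total of the table's recommendations, in MB."""
    root = 300 if with_x else 150   # midpoints of the table's ranges
    swap = max(2 * ram_mb, 16)      # the table's swap rule
    return root + swap + home_mb + var_mb + usr_local_mb

# A 16MB machine running X with a modest /home and /var:
print(disk_budget_mb(16))          # 407
```

   The 407MB result lines up with the "450MB or so" figure given for
   a desktop-with-X system in the previous section.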
                                      
                                 Using FIPS
                                      
   Many people installing Linux have one hard drive with a single DOS or
   Windows 95 partition already using the entire hard drive, or they may
   have two drives with 1 DOS or Windows 95 partition per drive. FIPS is
   a utility that can non-destructively shrink a 16-bit DOS FAT in use by
    DOS 3.X or higher and many implementations of Windows 95. (NOTE: if
   you are using revision "B" of the Windows 95 kernel, you may be using
         FAT32 which FIPS currently cannot shrink.) If you meet the
   requirements above, then you can shrink an existing primary partition
     on any drive. NOTE: FIPS cannot shrink logical drives or extended
    partitions. If you have Red Hat on CD-ROM, the utility should be in
                                    the
\dosutils

    directory on the CD-ROM. If you have downloaded Red Hat Linux, you
    should also download the FIPS package, available from:
               ftp://ftp.redhat.com/pub/redhat/dos/fips11.zip
    or one of the many Red Hat mirror sites. You should also read
    FIPS.DOC, included with this package, for details on FIPS operation.
                                      
   A few caveats about using FIPS: As a reminder, you should back up your
    existing data before using it. While it is rare for FIPS to damage a
   partition, it can happen, and backing up your data is the only way to
   recover from such a catastrophe. FIPS can only be used on primary DOS
       16-bit FAT partitions. It cannot be used on any other types of
     partitions, nor can FIPS be used on Extended partitions or Logical
   drives. It can only split primary partitions. Before running FIPS, you
    must run SCANDISK to make sure any problems with your partition are
     fixed. Then you must run DEFRAG to place all the used space at the
   beginning of the drive and all the free space at the end of the drive.
    FIPS will split an existing primary partition into two primary DOS
    16-bit FAT partitions: one containing your original installation
    of DOS/Windows 95, and one empty, unformatted DOS 16-bit FAT
    partition that needs to be deleted using the DOS or Windows 95
    fdisk program.
              The following steps outline how to use FIPS.EXE:
    1. Copy
FIPS.EXE
   to
C:\WINDOWS
   or
C:\DOS
   . This will place
FIPS.EXE
   in your command path.
    2. Create or use the bootable DOS or Windows 95 emergency disk
       described in the Installation Floppy kit above and place the
       program
RESTORB.EXE
   on the disk if you have not already done so. FIPS gives you the
       ability to back up your existing partition table, allowing you to
       return your system to its previous state using
RESTORB.EXE
   .
    3. Run
scandisk
   and
defrag
   (included with DOS 6.X and higher). This makes sure there are no
       errors on your hard drive and places all the free space at the end
       of the drive.
    4. Make sure you are in DOS mode (i.e., not running Windows 3.X or
       Windows 95).
    5. Type
fips
   . An introductory message will appear and you will be prompted for
       the hard drive on which to operate (if you have more than one).
       Most people will choose "1" for the first hard disk to shrink.
    6. After confirming that you wish to continue, you will be asked to
       make a backup copy of your existing boot and root sectors on the
       bootable disk made above. This will allow you to restore the hard
       drive if needed.
    7. FIPS will ask if all the free space on your existing partition
       should be used to create a second partition, with an initial
       partition table if you accept the defaults. If this isn't
       acceptable, say "no" and then use the up and down arrow keys to
       adjust the amount of space used for the second partition. Once you
       are happy with the division, hit Enter to stop editing. If the
       sizes with the new partition table are acceptable, choose "c" to
       continue. If not, choose "r" to re-edit the table.
    8. One last chance is given to quit FIPS without making changes or
       writing out the new partition table. If you are happy, write it
       out!
    9. Once FIPS is done, re-boot your computer to have FIPS changes take
       effect.
   10. Next, use DOS's
fdisk
   to delete the second DOS partition. This will leave unallocated
       space to be used later by Linux's version of fdisk to create
       Linux native and Linux swap partitions.
       
   With the appropriate things done in this section for installing Linux,
                you are now ready to Install Red Hat Linux!
                                      
                          Installing Red Hat Linux
                                      
    By now, you should have created an Installation Floppy Kit,
    prepared your hard drive, and have your installation media ready
    for the install. The details of the installation follow; however,
    you first begin by booting your system and configuring the install
    program to install from your selected medium. Once this is done,
    the installation proceeds with the same steps for everyone. At
    this point, begin by booting your computer with the diskette
    labeled "Boot Diskette".
                                      
                       Using Your Installation Media
                                      
    As the boot diskette starts up, the kernel will attempt to detect
    any hardware for which the boot diskette has drivers compiled
    directly into it. Once booting is complete, a message asking if
    you have a color screen appears (if you do, select OK). Next comes
    the Red Hat Introduction Screen welcoming you to Red Hat Linux.
    Choose OK to continue. The next question asks if you need PCMCIA
    support, which you do if you are installing to a laptop; say yes
    and insert the Supplemental Diskette when prompted. Once PCMCIA
    support is enabled (if needed), you will be presented with a
    screen asking what type of installation method you will be using.
    Follow the instructions for the installation method you've chosen,
    described in the following sections.
                                      
                           Installing From CD-ROM
                                      
       If installing from CD-ROM, you should choose "Local CD-ROM" by
    highlighting it from the list of installation types. Once you choose
    "Local CD-ROM" and click "OK", you will be asked if you have a SCSI,
   IDE/ATAPI or Proprietary CD-ROM that you wish to install from. This is
    where some of the hardware research pays off: if you have a recently
    made 4X or faster CD-ROM drive that was bundled with a Sound Blaster
     or other sound card, you most likely have an IDE/ATAPI type drive.
            This is one of the most confusing issues facing you.
                                      
    If you choose SCSI, you will be asked what kind of SCSI card you
    have and be presented with a list. Scroll down the list until you
    find your SCSI card. Once you have chosen it, you will be asked if
    you wish to AUTOPROBE for it or SPECIFY OPTIONS. Most people
    should choose "AUTOPROBE", which will cause the setup to scan for
    your SCSI card and enable SCSI support for your card when found.
                                      
     Once the Installation Program has successfully located the Red Hat
       CD-ROM, you should proceed to "Walking Through the Rest of the
                               Installation."
                                      
                       Installing From The Hard Drive
                                      
    If you are installing from a hard drive, then highlight this
    option and choose "OK". If you have not already chosen PCMCIA
    support, you will be prompted to insert the Supplemental Diskette.
                                      
                             Installing via NFS
                                      
    If you are installing via NFS, then highlight this option and
    choose "OK". You will next be asked to choose which Ethernet card
    is installed in the target machine so the Installation Program may
    load the correct Ethernet driver. Highlight the appropriate card
    from the list and then select "OK", allowing the Installation
    Program to AUTOPROBE for your card. However, if your machine
    hangs, you will need to press
Ctrl-Alt-Delete

      to reboot the system. Most of the time, when this happens, it is
     because the probing "touches" a non-Ethernet card. If this should
      happen, try again and choose "SPECIFY OPTIONS" and give data
                        about your card in the form of:
ether=IRQ,IO_PORT,eth0

   This will instruct the probe to look at the location specified by
                                the values
IRQ

                                    and
IO_PORT

    for an Ethernet card. For example, if your Ethernet card is
        configured for IRQ 11 and IO_PORT 0x300, you would specify:
ether=11,0x300,eth0
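   The field order of the ether= override can be made clear with a
   small Python helper (the function name is ours, for illustration
   only; the parameter format is the one documented above):

```python
def ether_override(irq, io_port, device="eth0"):
    """Format the kernel's ether=IRQ,IO_PORT,device override."""
    return f"ether={irq},{io_port:#x},{device}"

print(ether_override(11, 0x300))   # ether=11,0x300,eth0
```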

    Once your card has been successfully found, you will be prompted
    for TCP/IP information about your machine and the NFS server with
    the Linux installation. First, you will be asked to provide the
    target machine's IP Address, Netmask, Default Gateway, and Primary
    Name Server. For example:

IP Address:          192.113.181.21
Netmask:             255.255.255.0
Default Gateway:     192.113.181.1
Primary Nameserver:  192.113.181.2

    Once you press OK, you will be prompted for the target machine's
    domain name and host name. For example, if your domain name is
    infomagic.com and host name is vador, enter:

Domainname:               infomagic.com
Host name:                vador.infomagic.com
Secondary nameserver IP:  Enter if needed
Tertiary nameserver IP:   Enter if needed

    The last screen will prompt you for the NFS server and the
    exported directory containing the Red Hat distribution. For
    example, if your NFS server is redhat.infomagic.com, enter:

NFS Server name:    redhat.infomagic.com
Red Hat Directory:  /pub/mirrors/linux/RedHat

    Again, if you do not know these values, check with your system
    administrator. Once you have entered the values, choose "OK" to
    continue. If the Installation Program reports an error locating
    the Red Hat distribution, make sure you have the correct values
    filled in above and that your network administrator has given the
    above target machine information export permission.
                                      
                             Installing via FTP
                                      
    An FTP install is very similar to the NFS install outlined above.
    You will be prompted for the Ethernet card and your machine's
    TCP/IP information. However, you will be asked for the FTP site
    name and Red Hat directory on the Red Hat mirror site instead of
    NFS server information. There is one caveat about performing an
    FTP install: find the closest and least busy FTP site to your
    location. If you don't know how to do this, check with your
    network administrator.
                                      
    TIP: If your hardware isn't detected, you may need to provide an
    override for the hardware to enable it properly. You may also want
          to check: http://www.redhat.com/pub/redhat/updates/images
       to see if Red Hat has updated boot diskettes for your hardware.
                                      
                Walking Through the rest of the Installation
                                      
    1. Next, you will be asked if you are installing to a New System or
       Upgrading RedHat 2.0 or higher. If you are upgrading, you will not
       be offered the chance to partition your hard drive or configure
       anything with your system except LILO. Press either INSTALL or
       UPGRADE to continue.
     2. If you are upgrading, you will be asked for the root partition
        of your existing Red Hat system. Highlight the appropriate
        partition of your existing Red Hat system and press "OK". If
        you are installing for the first time, you will need to
        partition your hard disk using the free space determined
        above. The following discussion is an example based on
        Planning to Partition the Hard Drive. If you do not have any
        free space on your hard disk to create partitions and are
        using a 16-bit FAT such as that used by DOS or most Windows 95
        installations, please review the Using FIPS section of this
        document. To use fdisk, highlight the disk you wish to
        partition from the list presented to you by the Installation
        Program. You will be dropped from the "graphic" screen and
        presented with a black and white screen with the following
        prompt:
Command (m for help):
        This rather mysterious prompt is the command prompt of Linux's
        fdisk. If you press `m`, you will get a list of commands with
        a short definition of what each does. However, the most useful
        one to start with is "p", which prints your existing partition
        table on the screen. If you have existing partition(s) on the
        drive, they will be displayed. Make sure you can create at
        least one 25-50MB partition that starts before cylinder 1024
        and ends on or before cylinder 1023, as LILO requires this
        location to be able to boot the root partition, which in turn
        allows the kernel to take over your system (the kernel is not
        restricted in the way LILO is). Once the kernel boots your
        system, it queries the hardware directly and ignores the BIOS.
        To create a primary root partition of 50MB according to our
        example above, enter "n". First, you will be asked for a
        partition number between one and four. Our example in Planning
        to Partition the Hard Drive suggests two. You will be asked if
        the partition is to be primary or extended; enter `p` for
        primary. Next you are asked to enter the beginning cylinder,
        which should be the first available cylinder from the range
        given. After you hit enter, you will be asked for the ending
        cylinder. Since we want to make this partition 50MB, you can
        enter +50M and fdisk will calculate the nearest ending
        cylinder for a space of about 50MB. Once you have done this,
        enter the "p" command so you can make sure the new partition
        ends on or before cylinder 1023. If it doesn't, use the "d"
        command to delete partition two and try again, this time
        entering +40M for the new primary partition, and check again
        with the "p" command. Keep doing this until you get a root
        partition below cylinder 1024. Overall, if you cannot create a
        root partition of at least +25M below cylinder 1024, then you
        will either need to free more space below cylinder 1024 or not
        use LILO.
        Next, according to our example, you will want to create a swap
        partition that is 2 x physical RAM installed. Creating a swap
        partition requires two steps: first, use the "n" command to
        create a primary partition (three in the example), following
        the instructions above, except enter a value of +(2 x physical
        RAM)M. For the swap and other partitions, we don't care what
        their beginning and ending cylinders are, because they are not
        crucial for LILO to work correctly---only the root partition
        is. Once you have created the Linux native partition to be
        used as the swap partition, you need to use the "t" command to
        change the partition ID to type "82" when prompted. This
        changes the partition ID so Linux will recognize it as a swap
        partition. When you have successfully done this, the "p"
        command will report that you have a native Linux partition and
        a Linux swap partition.
        Now, since we need two more partitions, but the hard drive in
        a PC can only support four primary partitions and three
        primary partitions have been used, we need to create an
        Extended partition occupying the rest of the drive, which will
        allow the creation of logical drives within the extended
        partition. This time, to create the Extended partition with
        the "n" command, enter four for the partition number and
        choose "e" when prompted to create an Extended partition. When
        asked for the beginning cylinder, use the first one available,
        and for the last cylinder, enter the last available cylinder.
        You are now ready to create Logical drives for
/var
   and
/home
   according to our example.
       To create a logical drive of 40MB to be used as
/var
   , enter "n" to create a partition. Because there is no longer a choice
       of Primary or Extended, you are not prompted for this information
       but instead asked if this is to be partition five.
       Once you have completed this, you will be asked for the starting
       cylinder which should be the first available cylinder. For the
       ending cylinder, enter +40M for the size as the size was entered
       above. For the
/home
   partition, you may have a choice. If your drive is larger than the
        850MB suggested in the example, you can enter +53M as
        indicated above and use the extra space for partitions such as
/var/spool/mail
   and
/usr/local
   . Otherwise, just use the last available cylinder to define
/home
   . Once you are done creating partitions, you can use the "p"
        command to print the partition table one last time to review
        it. Nothing is actually modified until you use the "w" command
        to write the partition table out to the hard disk. If you
        decide not to modify the partition table at this time, choose
        "q" to quit without modifying it. NOTE: When creating Logical
        partitions, you must reboot the system in order for the
        Logical Partitions to be usable. Simply go through the options
        as you did up to being asked to partition your drive; however,
        say no the second time around and proceed to the next step.
     3. Once you have created the necessary Linux Native and Linux
        Swap partitions (you are required to have one swap partition),
        the swap partition is initialized. You will then be asked
        which partition(s) you intend to install Linux to (if
        upgrading, simply indicate your existing root partition). You
        must configure and choose one partition as the root partition:
        highlight the root partition. Then (unless you are upgrading)
        you will be presented with a table of other available
        partitions. Choose the appropriate partitions and "EDIT" to
        indicate which partitions will be used for which directories.
        If you have more than one partition for the Linux
        installation, now is the time to designate them as well.
     4. Next is the Software Package Selection. First, a list of
        software categories to install is presented, followed by a
        chance to customize which software packages from each category
        are to be installed. If you have not installed Red Hat or
        another distribution of Linux before, simply choose the
        categories of software you wish to install and let the setup
        program install the defaults for each category. If you find
        you need a package that wasn't installed originally, you can
        always install it easily later. While the software is
        installing, you will see a progress indicator and should get a
        cup or two of coffee. Installation can take anywhere from
        thirty minutes to an hour or so, depending on software choices
        and hardware configuration.
     5. After the software installation is done, you will be asked to
        configure your mouse. Again, choose what is appropriate for
        your hardware.
     6. Next is the X Window System configuration. We recommend you
        wait until after you boot your system for the first time to
        configure X. If something goes wrong with the X configuration,
        you may need to start the installation procedure from the
        beginning, as the Installation Program isn't able to recover.
    7. If you do not have an Ethernet Card, DO NOT configure your network
       at this time. If you do have a network card and didn't configure
       it earlier, you should configure it now. Configuring for a dialup
       network should be done after the installation is complete.
    8. Next, you need to configure the system clock. UTC is a good choice
       if you are on a network and want daylight savings time handled
       properly. Local Time is good if the computer is a stand-alone
       machine.
    9. If you do not have a US Keyboard, you will need to configure for
       the country keyboard you have at this time.
   10. You will now be prompted for the system password for the root
       account. Write it down and don't forget it as it is a non-trivial
       matter to recover the password and you will need it to access the
       system when you first reboot.
    11. Finally, you will be asked to configure LILO. If you have not
        installed a root partition that begins and ends between
        cylinders 0-1023, DO NOT INSTALL LILO! If, when you reboot the
        system for the first time, LILO does not allow you to boot
        your system correctly, use the Emergency DOS/Windows 95 boot
        diskette and type the following at the A:\> prompt:
fdisk /mbr
        This will allow your system to boot into an existing DOS or
        Windows 95 system as it did before LILO was installed. You can
        then use the Red Hat Boot Diskette from v4.1 with the
        following parameters at the boot: prompt to boot your system
        on the hard drive:
boot: rescue root=/dev/???? ro load_ramdisk=0
   
        Where
????
   is the root partition, such as hda2 in the example used in this
        document.
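   The cylinder arithmetic behind fdisk's "+50M" and the 1024-cylinder
   limit discussed in step 2 can be sketched in Python (geometry is
   the LBA example from earlier; the helper is ours, not fdisk's
   actual code):

```python
import math

# LBA-translated example geometry: 1024 cylinders, 32 heads,
# 63 sectors per track, 512 bytes per sector.
HEADS, SECTORS, BYTES_PER_SECTOR = 32, 63, 512
CYL_BYTES = HEADS * SECTORS * BYTES_PER_SECTOR   # ~1MB per cylinder

def cylinders_for(size_mb):
    """Smallest whole number of cylinders holding size_mb MB."""
    return math.ceil(size_mb * 2**20 / CYL_BYTES)

# A +50M root partition needs only ~51 cylinders, so it fits
# comfortably below the cylinder-1024 limit LILO cares about.
print(cylinders_for(50))   # 51
```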
       
   Once the installation procedure is completed, you are ready to reboot
                         your system and use Linux!
                                      
                          After Installing Red Hat
                                      
    Now that you have installed Linux and are booting your system for
    the first time, there are some useful things to know about using
    your system, such as Understanding the LILO prompt, Logging In for
    the First Time, and Using RPM.
                                      
                       Understanding the LILO prompt
                                      
    If you have installed LILO to manage one or more operating systems,
                the following useful things should be known:
                                      
    When you power on or reboot the system, you get the "LILO" prompt,
    which you have hopefully configured with a 30-second or so delay
    before it boots the system. When LILO appears on the screen and you
    do nothing, the default operating system will boot once the time-out
    period expires. However, from LILO you can control several aspects of
    how Linux boots, or tell LILO to boot an alternative operating
    system. If you wish to override the default behavior of LILO,
    pressing the
Shift

    key at the appearance of LILO will cause a "boot:" prompt to appear.
                                  Pressing
Tab

     at this prompt will produce a list of available operating systems:
LILO boot:
dos linux
boot:

     This tells us that "dos" is the default operating system that will
    boot if nothing is typed; to boot Linux, type "linux" (without the
    quotes). However, LILO also lets you pass parameters to the Linux
    kernel that override the kernel's default behavior. For example, you
    may have been experimenting with the start-up configuration files and
    done something that prevents the system from coming up properly, so
    you want to boot the system up to the point just before it reads the
         configuration files. The override for this is "single":
boot: linux single

                                      
    will boot the system into single-user mode so you can take corrective
     action. This is also useful if your system won't come all the way up
                 to the login: prompt for some other reason.
                                      
                       Logging In for the First Time
                                      
   Now that you are faced with the "login:" prompt for the first time,
   you may be wondering how to get into the system. At this point on a
   newly installed system, there is only one account to log in to: the
   administrative account "root". This account is used to manage your
   system, doing such things as configuring the system, adding and
   removing users, adding and removing software, and so on. To log in to
   the account, type "root" (without the quotes) at the login: prompt and
   hit enter. You will then be prompted for the password you entered
   during setup. Enter that password at the password: prompt. The system
   prompt
[root@localhost]#

      will appear once you have successfully negotiated the login. The
   system prompt tells you two things: you are logged in as "root" and in
      this case your machine is called "localhost". If you named your
    machine during the installation process, then your machine name will
   appear instead of "localhost". Now that you are logged in, you can use
                              such commands as
ls

                               to list files,
cd

                          to change directory, and
more

   to look at the contents of ASCII text files. The root account also has
                           its own home directory,
/root

   . A home directory is where a valid system account places you in the
    file system hierarchy once you have successfully logged in. Some Unix
                                 systems use
/

     instead, so don't be fooled if you don't see any files if you type
             "ls"; there aren't any in the root home directory.
                                      
                          Creating A User Account
                                      
    One of the first things you should do on a newly installed system is
    to create a regular user account for yourself and plan on using the
   root account only for administrative functions. Why is this important?
   Because if you make a critical error while manipulating files as root,
   you can damage the system. Another reason is that programs run from
   the root account have unlimited access to system resources. If a
   poorly written program is run from the root account, it may do
   unexpected things to the system (a program run as root has root's
   access, while a program run as a regular user has restricted access),
   which can also damage it. To create a user account, you will
                               want to use the
adduser

                                    and
passwd

                                 commands:
[root@bacchus]# adduser hmp
Looking for first available UID...501
Looking for first available GID...501
Adding login: hmp...done.
Creating home directory: /home/hmp...done
Creating mailbox: /var/spool/mail/hmp...done
Don't forget to set the password.
[root@bacchus]# passwd hmp
New password: new_passwd
New password (again): new_passwd
Password Changed.
passwd: all authentication tokens updated successfully

    The new account is now created and ready to use. Other things that
     may need to be done as root are configuring the X Window System,
    configuring dialup services, and configuring printer services. These
                        topics are covered elsewhere.
                                      
                      Accessing the CD-ROM and Floppy
                                      
   One concept for accessing devices under Linux that confuses new users
       is that things like CD-ROM discs and floppy diskettes are not
       automatically made available when inserted in the drive. Linux
   abstracts a device to be a file (although in this case a special type
     of file), and much like a word processor, you have to tell the
    system that you want to open a file or close a file. The command
             used to open a device (make it available) is
mount

    and the command to close (tell the system you are no longer using a
                                 device) is
umount

       . When you open a device under Linux, you make it part of the
                   directory tree and navigate it with the
cd

                                     ,
ls

                                    and
cp

    (copy) commands as you normally would. Red Hat Linux suggests making
       removable or temporary devices available under the directory
/mnt

                        . They create the directory
/mnt/floppy

                            by default, but not
/mnt/cdrom

    . So, the first time you want to access the CD-ROM, you will need to
                            create the directory
/mnt/cdrom

                                 by typing:
[root@bacchus]# mkdir /mnt/cdrom

     Once you have created the directory, you can access the CD-ROM by
                                  typing:
[root@bacchus]# mount -t iso9660 -r /dev/cdrom_device /mnt/cdrom

     The breakdown of the command line above is this: the "-t" switch
    tells the mount command that the next argument is a file system type;
     in this case "iso9660" is the format of most computer CD-ROM discs.
    The "-r" is a read-only flag, since the CD-ROM is read-only. The next
                                  argument,
/dev/cdrom_device

       , is the device file you wish to open. If you performed a CD-ROM
          install, you want to replace cdrom_device with the
                    designation of your CD-ROM, such as:
                                      
   Device File       CD-ROM Type
   hd[a,b,c,d]       IDE/ATAPI CD-ROMs
   scd[0,1,2,...]    SCSI CD-ROM drives
   sbpcd             Sound Blaster 2X-speed drives
   mcd or mcdx       Mitsumi 2X drives
                                      
    There are other drive types as well, but these are the most common.
                         Some literature refers to
/dev/cdrom

      which is a symbolic link. For example, if you have an IDE/ATAPI
      CD-ROM set as the master drive on the secondary interface, the
                                  command:
ln -sf /dev/hdc /dev/cdrom

                                      
    will create a symbolic link so that the CD-ROM drive can be referred
                                   to as
/dev/cdrom

                                 as well as
/dev/hdc

                                     .
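   The behavior of ln -sf can be tried without touching /dev (which
   requires root). In this sketch an ordinary file stands in for the
   device node:

```shell
# Demonstrate `ln -sf` in a scratch directory. "hdc" here is a plain
# file standing in for the real device node /dev/hdc.
tmp=$(mktemp -d)
touch "$tmp/hdc"
ln -sf "$tmp/hdc" "$tmp/cdrom"   # -s: symbolic link, -f: replace any old one
target=$(readlink "$tmp/cdrom")  # shows what the link points at
echo "$target"
```

   Afterward, opening "$tmp/cdrom" reaches the same file as "$tmp/hdc",
   which is exactly how /dev/cdrom comes to stand for /dev/hdc.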
                                      
               Floppy drives are accessed in a similar manner:
mount -t msdos /dev/fd0 /mnt/floppy

     will make a floppy formatted under DOS in drive "a" available under
     the directory /mnt/floppy. If you want to access the floppy diskette
              in the b drive, substitute /dev/fd1 for /dev/fd0.
                                      
       When you are finished with a device such as a CD-ROM or floppy
    diskette, it is extremely important that you "close" the file before
    removing it from the system. This needs to be done for a variety of
    reasons, but if you don't, and remove it anyway, you can make the
    system unstable, and floppies may even get erased. To release a
                   device from the file system, type:
umount /dev/fd0 (to un-mount a floppy)
umount /dev/cdrom (to un-mount a cdrom drive)

    For more information on either of these commands, please see the man
                          pages (e.g., by entering
man mount

                                     ).
                                      
                            Shutting Down Linux
                                      
   It is extremely important that the power is not simply shut off while
    Linux is running. You can damage or even make the system un-bootable
   by doing so. The proper way to shutdown Linux is to log in as root and
                                   type:
[root@bacchus]# shutdown -h now

                                      
    which will cause Linux to write out any files it still has in memory
      and close down active programs cleanly. When you get the message
The system
has halted

   , it is safe to turn the power off. If you want to reboot the computer
                    without shutting off the power, use:
[root@bacchus]# shutdown -r now

                                      
       which performs all the necessary shutdown work but directs the
                         computer to restart instead.
                                      
     _________________________________________________________________
                                      
                       Copyright © 1997, Henry Pierce
            Published in Issue 18 of the Linux Gazette, May 1997
                                      
     _________________________________________________________________
                                      
                                      
     _________________________________________________________________
                                      
           "Linux Gazette...making Linux just a little more fun!"
                                      
     _________________________________________________________________
                                      
       SQL Server and Linux: No Ancient Heavenly Connections, But...
                                      
                      By Brian Jepson, bjepson@ids.net
                                      
     _________________________________________________________________
                                      
        Prologue: Composite Conversations with Fictional Detractors
     _________________________________________________________________
                                      
    Rain fell on the concrete sidewalk, bringing out that indescribable
   smell of the city. Mr Fiction and I were enjoying the weather, sitting
   at a table under the newly installed awning just outside of the AS220
    cafe. We should have been inside, perhaps building more Linux boxen
    for the AS220 computer lab, or maybe writing the two-way replication
    script between our in-house Linux server and the machine that hosts
      our web pages (http://www.ids.net/~as220). No, instead, we were
   breathing in the Providence air, enjoying the smell and feeling of the
       city before it got too hot, too muggy, before we got too lazy.
                                      
    Mr. Fiction isn't completely convinced about Linux; perhaps he never
   will be. Nevertheless, he dutifully helps me when I'm trying to bring
   up Linux on an old Compaq 386 with the weirdest memory chips, or when
    we need to build the kernel yet again, because I've decided that I'm
      ready to trust ext2fs file system compression or some such whim.
                                      
   This time, Mr. Fiction was baiting me. "Alright, Brian. How can Linux
   help me here? I've got a client who is using SQL Server on Windows NT
   for her company-wide databases. She'd really like to publish this data
     on her Intranet using HTML and CGI. While she's really happy with
     Microsoft for a database server platform, she's not convinced that
    it's good as a web server. We're looking into Unix-based solutions,
   and we really need a platform that allows us to write CGI script that
      can connect to the database server. But since Linux doesn't have
                            connectivity to..."
                                      
   That's when I had to stop him; Linux can connect to Sybase SQL Server.
    What's more, it can also connect to Microsoft SQL Server. Some time
   ago, Sybase released an a.out version of their Client-Library (CT-Lib)
    for Linux. Greg Thain (thain@ntdev1.sunquest.com) has converted the
     libraries to ELF. As a result, anyone using an ELF-based Linux later
   than 2.0 should be able to link applications against these libraries.
    There's a nice section on this issue that's available in the Sybase
    FAQ, at http://reality.sgi.com/pablo/Sybase_FAQ/Q9.17.html, and the
                libraries themselves can be downloaded from:

   ftp://mudshark.sunquest.com/pub/ctlib-linux-elf/ctlib-linux-elf.tgz.

    If you are using an a.out system, you can take your chances with the
     libraries that Sybase originally released. These are available at:

   ftp://ftp.sybase.com/pub/linux/sybase.tgz

     _________________________________________________________________
                                      
                         A Neon Joyride with CT-Lib
     _________________________________________________________________
                                      
    If you've read this far, I'm going to assume that you have access to
   an SQL Server. I've used these libraries with the Sybase System 11 we
     have running at work on a Solaris 2.4 system, and the examples for
   this article were developed using Microsoft SQL Server 6.0 running on
      Windows NT 4.0. If you don't have SQL Server, but would like to
      experiment, you can download an evaluation version of SQL Server
                      Professional for Windows NT at:

   http://www.sybase.com/products/system11/workplace/ntpromofrm.html

      If you do this, it goes without saying that you'll need another
    computer (running Windows NT) that's connected to your Linux box via
    TCP/IP. Sadly, there is no version of Sybase or Microsoft SQL Server
    that runs on Linux. However, if you have access to a machine that is
         running SQL Server, then you will likely find this article
                                interesting.
                                      
   In order to make use of these examples, you need to have been assigned
    a user id and password on the SQL Server to which you will connect.
   You should also know the hostname of the server, and most importantly,
     the port on which the server listens. If you installed the server
    yourself, you will know all of this. Otherwise, you will need to get
                this information from your sysadmin or dba.
                                      
   The first thing to tackle is the installation and configuration of the
   Client-Library distribution. The ctlib-linux-elf.tar.gz file includes
      a top-level sybase directory. Before you extract it, you should
      probably pick a permanent home for it; common places are /opt or
   /usr/local. When you extract it, you should be sure that you are root,
     and make sure your working directory is the directory that you've
            chosen. The process might look something like this:

   bash-2.00$ su
   Password:
   bash-2.00# cd /usr/local
   bash-2.00# tar xvfz ctlib-linux-elf.tar.gz

        While you will be statically linking these libraries in with
   application programs, any program that uses the Sybase libraries will
   need to find the directory. There are two ways to deal with this, and
    I usually do both. The first is to create a user named sybase. This
     user's home directory should be the Client-Library directory into
   which you extracted ctlib-linux-elf.tar.gz. The user won't need to log
   in, and I'm not aware of any programs that need to su to that user id.
    I believe the user needs to be there so that ~sybase can be resolved
   to the directory you chose. Here's the relevant line from /etc/passwd
                            for the sybase user:

   sybase:*:510:100:SYBASE:/usr/local/sybase:/bin/true

   Of course, your UID and GID may differ, and you can certainly use the
      adduser utility to add the sybase user. The critical thing is to
            ensure that you've set the home directory correctly.
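   Tilde expansion of ~sybase works because the shell looks up the home
   directory, the sixth colon-separated field, of the account's
   /etc/passwd entry. A quick sketch of pulling that field out, using the
   example line above (your UID/GID will differ):

```shell
# Extract the home-directory field (field 6) from a passwd-format line.
line='sybase:*:510:100:SYBASE:/usr/local/sybase:/bin/true'
home=$(printf '%s\n' "$line" | cut -d: -f6)
echo "$home"                     # prints: /usr/local/sybase
```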
                                      
      The second thing you can do to help applications find the Sybase
    directory is to create an environment variable called $SYBASE. This
    should simply include the name of the Client-Library home directory:

   bash-2.00$ export SYBASE=/usr/local/sybase
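   An exported variable lasts only for the current shell session, so it
   is worth adding the assignment to a login startup file such as
   ~/.bash_profile (the directory below is this article's example
   location; adjust it to wherever you extracted the libraries):

```shell
# Line for ~/.bash_profile (or /etc/profile): make $SYBASE permanent.
# /usr/local/sybase is the install location used in this article.
export SYBASE=/usr/local/sybase
```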

     The interfaces file included in the top of the Client-Library home
    directory (/usr/local/sybase/interfaces in this example) must be set
   up correctly in order for anything to work. The interfaces file allows
   your clients to associate a symbolic name with a given server. So, any
   server you wish to query must be configured in the interfaces file. If
    you've already got an interfaces file in non-TLI format, you should
    be able to use it or adapt it (TLI is the network API used by Sybase
    on Solaris, and the Solaris interfaces file format differs as well).
    Even
    if you don't, you can write your own entries. Here's a sample entry
        (that's a tab on the second line, and it is very important):

   ARTEMIS
           query tcp ether artemis 1433

         The parts of this entry that you are concerned about are:
                                      
    ARTEMIS  This is the name by which client programs will refer to the
             server. It doesn't have to be the same as the host name.
    artemis  This is the host name of the server.
    1433     This is the TCP/IP port on which the server listens.
                                      
   Here's an interfaces file that includes entries for both a Sybase SQL
     Server (running on Solaris) and a Microsoft SQL Server, running on
   Windows NT (comments begin with #). Note that the entries ARTEMIS and
                      NTSRV refer to the same server:
                                      
## DEV_SRVR on Sol2-5 (192.168.254.24)
##       Services:
##              query   tcp     (5000)

DEV_SRVR
        query tcp ether Sol2-5 5000

## NTSRV on artemis (192.168.254.26)
##       Services:
##              query   tcp     (1433)

NTSRV
        query tcp ether artemis 1433

## ARTEMIS on artemis (192.168.254.26)
##       Services:
##              query   tcp     (1433)

ARTEMIS
        query tcp ether artemis 1433

     _________________________________________________________________
                                      
                  SQSH - an Excellent Alternative to isql
                  (or is isql a poor alternative to SQSH?)
     _________________________________________________________________
                                      
   SQSH is a freely redistributable alternative to the isql program that
   is supplied with Sybase SQL Server. It's basically a shell that makes
    it easy to send SQL statements to the server. It's written by Scott
   Gray (gray@voicenet.com), a member of the Sybase FAQ Hall of Fame. The
    SQSH home page is at http://www.voicenet.com/~gray/ and includes the
     latest release of SQSH as well as the SQSH FAQ and a lot of other
                                information.
                                      
   SQSH can be compiled on Linux; this should be simple for anyone who is
   familiar with compiling C programs, such as the Linux kernel, Perl, or
   other tools you may have installed from source. The first thing to do
       is to extract the SQSH archive, preferably somewhere like
    /usr/src. I usually do installations as root; some people wait until
   just before the 'make install' portion to become root. You can extract
                the distribution with the following command:

   bash-2.00# tar xvfz sqsh-1.5.2.tar.gz

             And then you can enter the source directory with:
   bash-2.00# cd sqsh-1.5.2

   (of course, if you are building a newer version, you will need to use
                    a different file name and directory)
                                      
   There are two files in the source directory that you must read; README
     and INSTALL. If you'd like to compile SQSH with bash-style command
     history editing, you'll need to get your hands on the GNU Readline
   library, unless it's already installed on your system. I believe that
   it's no longer packaged as a separate library, and is now part of the
                      bash distribution, available at:

   ftp://prep.ai.mit.edu/pub/gnu/

    Before you do anything, you'll need to make sure you set the $SYBASE
   environment variable, which I discussed earlier in this article. Then,
     you should run the configure script. This process might look like:

   bash-2.00# export SYBASE=/usr/local/sybase/
   bash-2.00# ./configure
   creating cache ./config.cache
   checking for gcc... gcc
   [etc., etc.]

    If you've installed the GNU Readline library, and you want to use it
      with SQSH (who wouldn't?) you should add the following option to
                                ./configure:

   bash-2.00# ./configure --with-readline

      After you've run configure, you should examine the Makefile, and
      follow the instructions at the top. Generally, ./configure does
     everything right, but you should double-check. If everything looks
                            okay, you can type:

   bash-2.00# make

   And sit back and wait. If everything went fine, you should have a new
                 sqsh executable that you can install with:

   bash-2.00# make install

   In order to run it, you must supply a server name (-S), username (-U),
    and password (-P). The server name corresponds to the name that was
     set up in your interfaces file. Once you've started sqsh, you can
    issue SQL commands. To send whatever you've typed to the server, you
    can type go by itself on a line. To clear the current query, you can
   type reset. If you'd like to edit the current query, you can type vi.
        Among many other features, sqsh features the ability to use
      shell-style redirection after the 'go' keyword. Here's a sample
                                  session:
                                      
bash-2.00# sqsh -Ubjepson -Psecretpassword -SARTEMIS
sqsh-1.5.2 Copyright (C) 1995, 1996 Scott C. Gray
This is free software with ABSOLUTELY NO WARRANTY
For more information type '\warranty'
1> use pubs  /* the pubs sample database */
2> go
1> SELECT au_lname, city
2> FROM authors
3> go | grep -i oakland
 Green                                    Oakland
 Straight                                 Oakland
 Stringer                                 Oakland
 MacFeather                               Oakland
 Karsen                                   Oakland
1> sp_who
2> go
 spid   status     loginame     hostname   blk   dbname     cmd
 ------ ---------- ------------ ---------- ----- ---------- ----------------
      1 sleeping   sa                      0     master     MIRROR HANDLER
      2 sleeping   sa                      0     master     LAZY WRITER
      3 sleeping   sa                      0     master     RA MANAGER
      9 sleeping   sa                      0     master     CHECKPOINT SLEEP
     10 runnable   bjepson                 0     pubs       SELECT
     11 sleeping   bjepson                 0     pubs       AWAITING COMMAND

(6 rows affected, return status = 0)
1>

     _________________________________________________________________
                                      
           CGI, Sybperl and Linux: All the Colours in my Paintbox
     _________________________________________________________________
                                      
       Getting back to Mr. Fiction's problem, we need to answer a big
    question: how can we connect a Linux web server to Sybase? If you've
    done a lot of CGI programming, you've probably used at least a little
        bit of Perl. Perl is an excellent tool for CGI
      development; its modular design makes it easy to extend. In the
       examples which follow, we'll see how to use the CGI module in
     conjunction with Sybperl. Combining these tools, we'll be able to
      easily build CGI applications that can connect to an SQL Server
                                 database.
                                      
   It's probably best to use a Perl that has been installed from source.
    In the past, I have had trouble with binary distributions, and so, I
    always install the Perl source code and build it myself. You should
   obtain and extract the following modules from CPAN (Comprehensive Perl
                             Archive Network):

   CGI.pm:  http://www.perl.com/CPAN/modules/by-module/CGI/CGI.pm-2.36.tar.gz
   Sybperl: http://www.perl.com/CPAN/modules/by-module/Sybase/sybperl-2.07.tar.
gz

     Installing the CGI module is quite simple. You need to extract it,
   enter the directory that's created, and follow the instructions in the
       README file. For most Perl modules, this will follow the form:

   bash-2.00# tar xvfz MODULE_NAME.tar.gz
   bash-2.00# cd MODULE_NAME
   bash-2.00# less README
   [ ... you read the file ...]
   bash-2.00# perl Makefile.PL
   [ ... some stuff happens here...]
   bash-2.00# make
   [ ... lots of stuff happens here...]
   bash-2.00# make test
   [ ... lots of stuff happens here...]
   bash-2.00# make install

      You should double check to make sure that CGI.pm is not already
   installed; if you install it, you should do it as root, since it needs
     to install the module into your site-specific module directories.
   Here are the commands I typed to make this happen for the CGI
   extension (note that there are no tests defined for CGI.pm, so I
                      didn't need to do 'make test'):

    bash-2.00# tar xvfz CGI.pm-2.36.tar.gz
    bash-2.00# cd CGI.pm-2.36
    bash-2.00# perl Makefile.PL
    bash-2.00# make
    bash-2.00# make install

    Once you've installed it, you can use it in your Perl programs; do a
                  'perldoc CGI' for complete instructions.
                                      
     Installing Sybperl is a little more involved. If you don't want to
      build Sybperl yourself, you can download a binary version from:

   ftp://mudshark.sunquest.com/pub/ctlib-linux-elf/sybperl.tar.gz

   If you do want to go ahead and build it yourself, first extract it and
                        enter the source directory:

   bash-2.00# tar xvfz sybperl-2.07.tar.gz
   bash-2.00# cd sybperl-2.07/

   Again, it's really important that you read the README file. Before you
        run 'perl Makefile.PL,' you will need to set up a couple of
    configuration files. The first is CONFIG. This file lets you set the
                           following parameters:
                                      
   DBLIBVS     The version of DBlib that you have installed. Under Linux,
               only CTlib is available, so this should be set to 0.
   CTLIBVS     This should be set to 100, as indicated in the file.
   SYBASE      The directory where you installed the Client-Library
               distribution. It should be the same as $SYBASE or ~sybase.
   EXTRA_LIBS  The names of additional libraries that you need to link
               against. The Sybase Client-Library distribution typically
               includes a library called libtcl.a, but this conflicts
               with the Tcl library installed under many versions of
               Linux, so it has been renamed libsybtcl.a in the Linux
               version of CTlib. This option should also include
               libinsck.a. The value for this configuration option
               should be set to '-lsybtcl -linsck'.
   EXTRA_DEFS  It does not appear that this needs to be changed, unless
               you are using Perl 5.001m, in which case you need to add
               -DUNDEF_BUG.
   LINKTYPE    Under Linux, I am not aware of anyone who has managed to
               get a dynamically loadable version of Sybperl to build. I
               have not been able to get it to compile as a dynamic
               module, so I always set this to 'static', which results
               in a new perl executable being built.
                                      
                           Here's my CONFIG file:
                                      
#
# Configuration file for Sybperl
#
# DBlibrary version. Set to 1000 (or higher) if you have System 10
# Set to 0 if you do not want to build DBlib or if DBlib is not available
# (Linux, for example)
DBLIBVS=0


# CTlib version. Set to 0 if Client Library is not available on your
# system, or if you don't want to build the CTlib module. The Client
# Library started shipping with System 10.
# Note that the CTlib module is still under construction, though the
# core API should be stable now.
# Set to 100 if you have System 10.
CTLIBVS=100

# Where is the Sybase directory on your system (include files &
# libraries are expected to be found at SYBASE/include & SYBASE/lib
SYBASE=/usr/local/sybase

# Additional libraries.
# Some systems require -lnsl or -lBSD.
# Solaris 2.x needs -ltli
# DEC OSF/1 needs -ldnet_stub
# See the Sybase OpenClient Supplement for your OS/Hardware
# combination.
EXTRA_LIBS=-lsybtcl -linsck

# Additional #defines.
# With Perl 5.001m, you will need -DUNDEF_BUG.
# With Perl 5.002, none are normally needed, but you may wish to
# use -DDO_TIE to get the benefit of stricter checking on the
# Sybase::DBlib and Sybase::CTlib attributes.
#EXTRA_DEFS=-DUNDEF_BUG
EXTRA_DEFS=-DDO_TIE


# LINKTYPE
# If you wish to link Sybase::DBlib and/or Sybase::CTlib statically
# into perl uncomment the line below and run the make normally. Then,
# when you run 'make test' a new perl binary will be built.
LINKTYPE=static

    The next file that you need to edit is the PWD file. This contains
    three configuration options: UID (user id), PWD (password), and SRV
    (server name). It is used to run the tests after the new perl binary
                       is built. Here's my PWD file:
                                      
# This file contains optional login id, passwd and server info for the test
# programs:
# You probably don't want to have it lying around after you've made
# sure that everything works OK.

UID=sa
PWD=secretpassword
SRV=ARTEMIS

   Now that you've set up the configuration files, you should type 'perl
   Makefile.PL' followed by 'make'. Disregard any warning about -ltcl not
    being found. After this is done, you should type 'make test', which
    will build the new Perl binary and test it. All of the tests may not
    succeed, especially if you are testing against Microsoft SQL Server
                        (the cursor test will fail).
                                      
    When you are ready to install the Sybperl libraries, you can type
    'make install'. Be aware that the new binary is statically linked to
    the Client-Library, and will be slightly bigger. If this offends you,
    you can rename the new perl to something like sybperl and install it
    in the location of your choice. The new perl binary is not installed
    by default, so you can install it wherever you want. You will not be
    able to use the Sybperl libraries from your other version of Perl;
    you will have to use the new binary you created.
                                      
    For simplicity's sake, let's assume that you are going to rename the
    new binary to sybperl and move it to /usr/local/bin/sybperl. The
    README file includes alternate instructions for installing the new
    binary.
   The manual is included in the pod/ directory under the Sybperl source
     code. You can also read the documentation with 'perldoc Sybperl'.
                                      
    Here's a sample Perl program that uses CGI and Sybase::CTlib to give
    users the ability to interactively query the authors table that is
    included with the pubs sample database:
                                      

#!/usr/local/bin/sybperl

use CGI;
use Sybase::CTlib;

# This is a CGI script, and it will not have the $SYBASE
# environment variable, so let's help it out...
#
$ENV{SYBASE} = '/usr/local/sybase';

# Get a "database handle", which is a connection to the
# database server.
#
my $dbh = new Sybase::CTlib('bjepson', 'secretpassword', 'ARTEMIS');

# Instantiate a new CGI object.
#
my $query = new CGI;

# Print the header and start the html.
#
print $query->header;
print $query->start_html(-title   => "Sybperl Example",
                         -bgcolor => '#FFFFFF');

# Put up a form, a prompt, an input field, and a submit button.
#
print qq[<h1>Sybperl Example</h1><hr>];
print $query->startform;

print qq[Enter part of an author's name: ];
print $query->textfield( -name => 'query_name' );
print $query->submit;

# End the form.
#
print $query->endform;

# If the user entered an author name, find all authors
# whose first and/or last names match the value.
#
if ($query->param('query_name')) {

    # Use the pubs database.
    #
    $dbh->ct_sql("use pubs");

    # Get the value the user typed. Escape single quotes so
    # that a name like O'Brien doesn't break the SQL string.
    #
    my $query_name = $query->param('query_name');
    $query_name =~ s/'/''/g;

    # Find all of the matching authors. This search
    # is case-sensitive.
    #
    my $sql = qq[SELECT au_fname, au_lname ] .
              qq[FROM authors ] .
              qq[WHERE au_fname LIKE '%$query_name%' ] .
              qq[OR    au_lname LIKE '%$query_name%' ] .
              qq[ORDER BY au_lname, au_fname];
    my @rows = $dbh->ct_sql($sql);

    # Iterate over each row and display the first
    # and last name in separate table cells.
    #
    print qq[<table border>];
    print qq[<th>First Name</th><th>Last Name</th>];
    my $thisrow;
    foreach $thisrow (@rows) {

        # Each row is a reference to an array, which
        # in this case contains two elements: the
        # values of the first and last names.
        #
        my $au_fname = ${$thisrow}[0];
        my $au_lname = ${$thisrow}[1];
        print qq[<tr><td>$au_fname</td><td>$au_lname</td></tr>];
    }
    print qq[</table>];

}

# End the html.
#
print $query->end_html;

               And here's an example of the program's output:
                                      
                                  [INLINE]
                                      
     _________________________________________________________________
                                      
             Everything Has Got to be Just Like You Want it To
     (or, things are more like they are now than they ever were before)
     _________________________________________________________________
                                      
    I've found the Sybase libraries for Linux to be quite useful. I find
    myself in a lot of places where either Sybase or Microsoft SQL Server
    sees heavy use. It's nice to be able to connect, especially when
    dialing in over a modem. I've found that sqsh performs much better
    over a dialup connection than isql running on a remote machine, even
    when I'm connected with rlogin.
                                      
      I hope these ramblings have been enjoyable for you; I think Mr.
   Fiction's head is spinning, but it's all for the best. We've had some
     of the best doctors in the world look at it, and while no one can
    agree on exactly when it will stop spinning, they all agree that it
                        looks much better that way.
                                      
                       Brian Jepson, bjepson@ids.net
                                      
     _________________________________________________________________
                                      
                      Copyright (c) 1997, Brian Jepson
           Published in Issue 18 of the Linux Gazette, June 1997
                                      
     _________________________________________________________________
                                      
                                      
     _________________________________________________________________
                                      
           "Linux Gazette...making Linux just a little more fun!"
                                      
     _________________________________________________________________
                                      
                                  [INLINE]
                                      
                   Welcome to The Linux Weekend Mechanic!
                                      
          Published in the June 1997 Edition of the Linux Gazette
                                      
       Copyright (c) 1997 John M. Fisk <fiskjm@ctrvax.vanderbilt.edu>
      The Linux Gazette is Copyright (c) 1997 Specialized Systems
                           Consultants, Inc.
                                      
     _________________________________________________________________
                                      
               Time To Become... The Linux Weekend Mechanic!
                                      
   [INLINE] You've made it to the weekend and things have finally slowed
       down. You crawl outa bed, bag the shave 'n shower 'cause it's
       Saturday, grab that much needed cup of caffeine (your favorite
   alkaloid), and shuffle down the hall to the den. It's time to fire up
   the Linux box, break out the trusty 'ol Snap-On's, pop the hood, jack
                    'er up, and do a bit of overhauling!
                                      
     _________________________________________________________________
                                      
                             Table of Contents
                                      
     * Welcome to the June 1997 Weekend Mechanic!
     * Wallpapering with XV: A Followup
     * VIM Programming Perks
     * Closing Up The Shop
       
     _________________________________________________________________
                                      
             [LINK] Welcome to the June 1997 Weekend Mechanic!
                                      
                               Hey, c'mon in!
                                      
               Thanks for dropping by! How y'all been doing?
                                      
    So... everyone survive the semester?! I just finished taking my last
     final a day or so ago AND managed to find work (with the folks in
    Biomedical Informatics at Vanderbilt Univ. Medical Center :-) within
             24 hours of finishing up. PHEW!! Nice to be done.
                                      
     Anyway, I'm going to apologize for the potentially "dry" topics in
       this month's WM. I've not been doing much besides programming,
    cramming, and making occasional trips to the 'fridge, restroom, and
   bedroom (pretty much in that order...). I ended up doing a fair amount
     of programming for a couple classes and got VERY acquainted with a
    number of the programming tools available under Linux -- VIM, ctags,
   xxgdb, ddd, and so forth. Since this is what I've been doing of late,
   I thought that this might be an appropriate topic. The proviso is
   that you take this strictly as a novice's introduction to a couple
   of these tools.
                                      
                   How's that for being wishy-washy... :-)
                                      
      Anyway, I've found a few useful things along the way and thought
                    someone might enjoy my sharing them.
                                      
   Also, I want to continue to thank all of you who've taken the time to
    write and offer comments and suggestions. Believe me, I don't claim
    extensive knowledge or expertise in most of the things I write about
    -- these are mostly discoveries and ideas that I've hit upon and am
         sharing in the hopes that they might be helpful. I welcome
   corrections, clarifications, suggestions, and enhancements! Several
   of you wrote in with regard to wallpapering using XV, which I'll be
   sharing below.
                                      
           Well, thanks again for stopping by! Hope you enjoy :-)
                                      
                                John M. Fisk
                               Nashville, TN
                            Thursday, 8 May 1997
                                      
     _________________________________________________________________
                                      
                   [LINK] Wallpapering with XV: A Followup
                                      
     My sincerest thanks to Brent Olson, Peter Haas, and Bill Lash for
       taking the time to write and offer these suggestions. I tried
   tinkering around with a few of these suggestions and they work like a
                           champ! Here they are:
                                      
       ______________________________________________________________
                                      
     Date: Wed, 02 Apr 1997 09:24:59 -0800
     From: Brent Olson <brent@primus.com>
     To: fiskjm@ctrvax.Vanderbilt.Edu
     Subject: re: reducing the colors in a background
     
     You've probably already been told this, but in the LG article
     relating to reducing the number of colours used in the background,
     there is no need to convert the picture first. It can be done on
     the fly:

xv -root -quit -maxpect -ncols 16 filename.gif

     Works great on my lousy 8-bit NCD display at work.
     
     Brent Olson
     mailto: brent@primus.com
     
       ______________________________________________________________
                                      
     Date: Tue, 08 Apr 1997 08:42:01 +0200 (MET DST)
     From: hap@adv.magwien.gv.at
     To: fiskjm@ctrvax.Vanderbilt.Edu
     Subject: xv - interesting options
     
     There are another two interesting options of xv:
-random filepattern   selects a random picture matching the given filepattern
-ncols #colors        to limit number of used colors

     An example out of my .fvwm2rc95:
xv -quit -root -ncols 16 -random /var/X11R6/lib/xinit/pics/*.gif

     Regards, Peter
--
 (~._.~)    From the keyboard of Peter Haas (hap@adv.magwien.gv.at)
 _( Y )_    Located at MA14-ADV, Rathausstr.1, A-1082 Wien, Austria
()_~*~_()   Phone +43-1-4000/91126   FAX +43-1-4000/7141
 (_)-(_)    "Big iron" division

       ______________________________________________________________
                                      
     From lash@tellabs.com Thu Apr 24 21:20:39 1997
     Date: Thu, 24 Apr 1997 17:52:27 -0500
     From: Bill Lash <lash@tellabs.com>
     To: fiskjm@ctrvax.Vanderbilt.Edu
     Subject: Limiting colors with XV
     
     John,
     
     I read your article on wallpapering with XV. You suggest choosing
     images with a limited number of colors. You go on to suggest
     several options, but you missed a simple solution. You can tell XV
     how many colors to use in displaying the picture using the -ncols
     option.
     
     At work, I usually run with a background of 100 colors on an 8-bit
     pseudocolor display with the following command line:

xv -root -quit -max -rmode 5 -ncols 100 image.gif

     Bill Lash
     lash@tellabs.com
     
     _________________________________________________________________
                                      
            Again, guys, thanks for writing. Happy wallpapering!
                                      
                                    John
                                      
     _________________________________________________________________
                                      
                        [LINK] VIM Programming Perks
                                      
    Well, as I mentioned above, I ended up spending a good deal of time
   programming this semester. Our software engineering team designed and
       coded a simple FORTRAN 77 spell checker in C++. Thing was, the
   analysis and design phase consumed 11 of the 14 weeks of the semester
   AND it was done using Structured Analysis. Problem was, we had decided
        to code this thing in C++ and so ended up almost completely
     redesigning it using OO Analysis and Design during the last couple
      weeks (when we were supposed to be doing nothing but coding :-).
                                      
   Anyway, this meant a LOT of late nights -- integrating everyone's code
      got a bit hairy, since none of us had much experience with team
   coding. I was mighty thankful for the development tools under Linux. I
     spent the better part of 13 hours one Saturday debugging our first
   effort at integrating the code -- chasing down Segmentation Faults and
                             infinite loops :-)
                                      
                  Ahhh... the stuff of programming... :-)
                                      
    Along the way I learned a few interesting and nifty things about the
   VIM editor, which has been my 'ol workhorse editor for the past couple
    years now. I wanted to give this thing another plug as I think it's
       one of the best things since sliced bread. I'll admit that the
    emacsen, including the venerable XEmacs, are a LOT more powerful and
       full featured. But, having developed the finger memory for the
     "one-key-vi-commands" I've found that I can get a lot of work done
    fast. I'd like to tip the hat at this point to Jesper Pedersen and
    Larry Ayers, both of whom have written very nice articles on emacs
    and XEmacs in past issues of the LG and the Linux Journal. I'd
    encourage anyone interested to have a look at those articles. I'll also
    be mentioning XEmacs below and give you a screen shot of the latest
                              19.15 iteration.
                                      
   Anyway, here's a few (hopefully) interesting notes and ideas for using
                              the VIM editor!
                                      
                          GVIM -- Going Graphical!
                                      
                  Yup, that's right! VIM has gone GUI :-)
                                      
     I recently downloaded and compiled the latest beta version of VIM
   which is version 5.0e. If you have the Motif development libraries you
   can compile VIM with a Motif interface -- gvim. This rascal is pretty
   good sized and not exactly fleet of foot. It's a bit slow getting out
      of the gate on startup and so it's probably prudent to heed the
    Makefile suggestions and compile separate versions of VIM both with
      and without X support. I tried starting versions of vim (at the
    console) compiled with and without X support and the extra X baggage
                       definitely slows things down.
                                      
    A bit later on in this article I've provided several screen dumps of
        gvim as well as a couple other editors and the xxgdb and ddd
    debuggers. If you're the impatient or curious type, please feel free
   to jump ahead and have a look. Also, I've included a couple links for
                          additional information.
                                      
     Actually, VIM has provided a GUI since around version 4.0. I've been
          using this for some time now and find that it adds several
                    enhancements over vim at the console:
     * it has a reasonably handsome and functional scrollbar
     * mouse support is automatic and allows VERY easy cursor movement,
       text highlighting, and cut-and-paste operations
     * it provides a customizable menubar
     * it intelligently understands the movement keys -- Home, End, Page
       Up, Page Down, arrow keys -- even in insert mode
     * depending on how you have your .Xmodmap set up, it will
       intelligently handle the Back Space and Delete keys AND you can
       delete backwards over multiple lines!
       
     This last point is wonderful. Anyone who's ever tried to backspace
    onto the end of a previous line and gotten that miserable BEEP! will
   appreciate this. What's particularly nice about the graphical version
      of vim is that it provides several basic features of a GUI style
          editor while retaining the speed and flexibility of vi.
                                      
                     The Big News: Syntax Highlighting!
                                      
        This is truly a godsend and was one of the features that was
       definitely on the 'ol wish list! VIM now provides color syntax
    (lexical) highlighting for a variety of languages including C, C++,
    HTML (which I'm using right now...), Java, Ada95, FORTRAN, Perl, and
                        TeX. But that's not all...!
                                      
   (...this is like the guy hawking the $2.99 Ginzu knives, "they slice,
    they dice, here... I can cut through a cinder block wall, this lamp
    post, a street sign, AND the hood of this guy's car and never lose an
           edge! But that's not all... if you act right now...")
                                      
                             You get the point.
                                      
   What I was going to say was that vim also provides syntax highlighting
     for shell scripts (VERY COOL!), makefiles, and the VIM help files
     (which you'll see here in just a bit). All in all, this is pretty
       nice. I've been tinkering around with this a bit and am really
     starting to like it. Be aware that the highlighting isn't quite as
   "intelligent" as with something like XEmacs -- it doesn't provide the
     same degree of sophistication. Still, it's very good and, being an
     order of magnitude smaller and a good deal more nimble, it's well
                               worth trying.
                                      
    VIM installed the support files for syntax highlighting (at least on
     my system) under /usr/local/share/vim/syntax. There are individual
       files for the various languages and file types as well as the
     syntax.vim file that does a lot of the basic coordination. You can
   tinker around with these to get the "look-n-feel" that you want. Keep
    in mind that to get automatic syntax highlighting you'll need to add
          something like this to your ~/.vimrc or ~/.gvimrc file:

" Enable automatic color syntax highlighting on startup
source /usr/local/share/vim/syntax/syntax.vim

     I have to admit that I wrestled with this for longer than I should
   have trying to figure out how this was done. Hopefully, this will save
                            you some trouble :-)
                                      
   Again, I've included screen dumps below so that you can see what this
    looks like. In addition, the VIM home page has a couple nice screen
   shots that you might want to have a look at. I should add that syntax
    highlighting is individually configurable for the console and the X
    version. Now, before you go dashing off and "rushing in where angels
     fear to tread..." you will probably want to have a look at the help
       files or documentation -- they give some basic guidelines for
                           customizing this stuff.
                                      
                          And speaking of which...
                                      
                            Help is on the way!
                                      
      One of the coolest and most useful things about VIM is the mind
    numbing amount of documentation that comes with it. There's a small
    library of support documents covering everything from a blow-by-blow
     description of each feature and command to information about showing
           thanks by providing help for needy children in Uganda.
                                      
   And what's more, all of this is provided on-line. In command mode you
                              simply type in:

:help

     and the window (under gvim) splits and loads up the top level help
                  file. This is your gateway to knowledge.
                                      
                         "...use the Source, Luke"
                                      
    The help system is set up in a hypertext fashion. If you've enabled
   automatic syntax highlighting then even the help system is colorized.
     To follow a link you can either hit the letter "g" and then single
    click with the mouse on a topic, or you can move the cursor to that
      topic and hit "Ctrl-]" (hold down the control key and hit the right
     square bracket key -- "]"). To get back up to where you started, hit
                                 "Ctrl-t".
                                      
                            It's that simple :-)
                                      
         IMHO, this is one of the most laudable features of VIM. The
   documentation is generally well written and reasonably understandable.
   It is VERY thorough and, since it's available from within the editor,
   provides a high level of usability. It also provides a "Tips" section
     as well as numerous "How Do I...?" sections. It's Must Reading...
                                      
                               Ask "The Man!"
                                      
      Another really useful thing to try is accessing manual pages from
    within vim. Say you're writing a shell script and need to quickly
    look up something in the bash manual page, or you're setting up a
    test condition and can't remember the syntax for the "greater than"
    test. All you have to do is:

:!man test

    and presto!, information. It's instant gratification at its best...
                                    :-)
                                      
    To be honest, I've found that this works a LOT better at the console
    than under gvim, although the exact reason eludes me. Under gvim, I
                          get the following error:

WARNING! Terminal is not fully functional

                           got me on this one...
                                      
    My suspicion is that it has to do with the termcap stuff built into
     the editor. Forward movement down the manual page (hitting the space
      bar) is reasonably smooth, but backward movement is very jerky and
   screen redraws are incomplete. Still, if you can live with that you'll
                         find this VERY convenient.
                                      
                              TAG, You're It!
                                      
   This is another one of those things that makes life SO much easier. If
       you've not used tags before then brother, it's time to start!
                                      
      Basically what tags allow you to do is find the point at which a
    function or variable is declared. For example, suppose you ran across
                       the following snippet of code:

    HashTable HTbl;
    HTbl.Load("hash.dat");
    found = HTbl.Lookup(buf);
    .
    .
    .

         and were interested in finding out how the Load method was
    implemented. To jump to the point in the file where this is defined
             simply move the cursor so that it sits on "Load":

    HTbl.Load("hash.dat");
         ^

     and hit "Ctrl-]" (hold down the control key and hit the right square
    bracket key -- "]"). The beauty of this is that even if the definition
     is not in the file you're currently working on, vim will load up the
    needed file and position the cursor at the first line of the function
                                 definition.
                                      
                          This is seriously cool!
                                      
   When you're ready to move back to your original location, hit "Ctrl-t"
     (which moves you back up the tag stack). I've been using Exuberant
   Ctags, version 1.5, by Darren Hiebert for the past little bit now and
   really like this a lot. As the name implies, it does a pretty thorough
     job of scouring your source files for all sorts of useful stuff --
   function declarations, typedefs, enum's, variable declarations, macro
   definitions, enum/struct/union tags, external function prototypes, and
    so forth. It continues on in the time honored tradition of providing a
    bazillion options, but not to fear: its default behavior is sane and
                    savvy and provides a very nice OOBE*.
                                      
                          *(Out Of Box Experience)
                                      
   You should be able to find Darren's Exuberant Ctags (ctags-1.5.tar.gz
   was the most recent version on sunsite and its mirrors at the time of
    writing) at any sunsite mirror. I happened across it in the Incoming
        directory. You'll probably find it somewhere under the /devel
   subdirectory now. If you get stuck and really can't find it, drop me a
   note and I'll see what I can do to help. This one is definitely worth
                                  having.
                                      
   Oh, BTW, using ctags is child's play: simply give it the list of files
    that you want it to search through and it'll create a "tags" file in
          your current directory. Usually, this is something like:

ctags *.cc *.h

                if you happen to be doing C++ programming, or:

ctags *.c *.h

    if you're programming in C. That's all there is to it! Keep in mind
   that you can use tags without having to position the cursor on top of
   some function or variable. If you'd defined a macro isAlpha and wanted
              to jump to your definition, then simply type in:

:ta isAlpha

   and vim will take you to that point. How 'bout that for easy? There's
    a good deal more info on using tags in the VIM online documentation.
                             Browse and enjoy!
                                      
                         Using the Real Windows...
                                      
      Another very handy item that gvim (and vim) provides is multiple
    windows. This makes cutting and pasting from one file to another (or
    from one section of a file to another) quite easy. It's also nice if
    you're reading one file and editing another (for example, reading an
            INSTALL file while making changes to the Makefile).
                                      
     To pop up a second (or third, or fourth...) window with a specific
                      file, simply use something like:

:split ctags.README

    This would create a second window and load up the ctags.README file.
     If you want a second window with the current file displayed there,
                              then simply use:

:split

    and a second window will be created and the current file loaded into
     that window. Under gvim, moving the cursor from one window to the
    other is as simple as mouse clicking in the desired window. You can
                          also use the keystrokes

Ctrl-w j (hold down control and hit "w", then hit j)
Ctrl-w k (hold down control and hit "w", then hit k)

     to move to the window below or the window above the current window
         respectively. But, use the mouse... it's a lot easier :-)
                                      
    Resizing the windows is nicely handled using the mouse: simply click
   anywhere on the dividing bar between the two windows and drag the bar
    to whatever size you want. This is really handy if you are using one
   file as an occasional reference but want to edit in a full window. You
     can resize the reference file down to a single line when it's not
                                  needed.
                                      
    Again, there's a lot more information in the online help about using
                             multiple windows.
                                      
                 SHHHHHhhhh.....! Let Me Tell You A Secret!
                                      
    Here's a little something that ought to be part of one of those blood
     oath, "cross-my-heart-and-hope-to-die", secret society initiations
    into the "Secret Lodge of Some Large North American Mammal Society"
                                      
       Ready...? (look furtively around with squinty, shifty gaze...)
                                      
      (... the clock ticks loudly in the other room, somewhere in the
       distance a dog barks, the room falls into a stifling hush...)
                                      
       He clears his throat loudly and in a harsh whisper exclaims...
                                      
          "The "%" sign expands to the current buffer's filename!"
                                      
                       Phew! Glad that's over... :-)
                                      
    Yup, with this little tidbit you can do all sorts of cool and groovy
   things. Like what you ask...? (you knew this was coming, didn't you...
                                    :-)
                                      
   RCS checkin and checkout
          
          
           I won't go into using RCS for version control except to say
           that doing checkins and checkouts from within VIM is VERY
           easily accomplished by doing something like:
          

    :w!
    :!ci -l %
    :e %


           So what's going on...? Well, the first line writes the current
           buffer to file; the really good stuff happens on the second
           line, which uses the RCS ci command to check in and lock the
           current file. Finally, since the checkin process may have
           altered the file if you've included "Header", "Id", "Log",
           etc. keywords, the third line reloads the file with the new
           RCS information (if any).
          
          Now, for all you VIM jockeys out there, the handy thing to do
          is use "map" to bind this sequence to a single keystroke. I've
          bound this to Alt-r and it makes the whole operation smooth and
          painless.
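
           As a sketch, a mapping along these lines in your ~/.vimrc or
           ~/.gvimrc does the trick. The Alt-r binding and the
           <M-r>/<CR> key notation here are assumptions -- check
           ":help map" for the escaping rules your version of vim
           expects:

" Hypothetical binding: write the buffer, check it in with RCS,
" then reload it to pick up any expanded keywords.
map <M-r> :w!<CR>:!ci -l %<CR>:e %<CR>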
          
   Printing that 'ol file
          
          
          This is another favorite trick. To print the current file from
          within vim simply:
          

    :w!
    :!lpr %


   what could be easier? :-)
          
          Seriously, this is a very convenient means of getting a hard
          copy of your current file. The one caveat to remember is that
          you'll probably want to commit the contents of your current
          editing buffer to file before you try to print it.
          
           I've been using the apsfilter program for the last year or so
           and
          absolutely love it. It is a series of shell scripts that
          automate the process of printing. Basically, it uses the file
          command to determine the type of file to print and then invokes
          lpr with the appropriate print filter. As a backend, it uses
          the a2ps program to format ASCII into Postscript and then uses
          Ghostscript to do the actual printing. Now, using something
          like:
          

    lpr [filename]


   transparently formats the file to Postscript and sends it to the
          printer. I've been quite pleased with this. You should be able
          to find this and similar programs at any of the sunsite FTP
          sites under the /pub/Linux/system/print (printer?, printing?)
          subdirectory (sorry, I'm not connected to the 'net at the
          moment and can't recall the exact subdirectory name off the top
          of my head :-).
          
          Also, I've played with the a2ps program itself and found all
          sorts of cool and nifty options -- single page/double page
          printing, headers, boundary boxes, setting font sizes, and so
          forth. I particularly like being able to set the font size and
          header information. And, as always, IHABO*.
          
          *(It Has A Bazillion Options)
          
   Word Counts...
          
          
          If you hit the Ctrl-g key combo, VIM prints the filename,
           number of lines, and the current position in the file on the
          bottom status line. However, if you want a word or byte count,
          simply invoke the wc program on the file:
          

    :w!
    :!wc %


   which will print out the file's line, word, and byte count.
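
    As an aside -- and if my memory serves -- you can also pipe the
    buffer through a command without committing it to file first, by
    putting a space between the "w" and the "!":

```vim
" Count the buffer as-is, no write to disk needed
" (note the space between 'w' and '!'):
:w !wc
```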
          
      You get the picture. Basically, any command that takes the form

     command [-options] filename

     can be used from within VIM doing something like:

     :! command [-options] filename

     Note that there are a couple other handy little items you might be
    interested in. If you want to include the contents of a file in the
   current buffer, OR if you want to capture the output of a command into
      the current buffer (for example, a directory listing), then use:

:r a2ps.README
:r! ls /usr/local/lib/sound/*.au

   The first command would insert the contents of the a2ps.README file in
   the current buffer wherever the cursor was located; the second command
   would insert the output of an ls listing for the /usr/local/lib/sound/
   directory. That is, you can use this second form for any command that
                     prints its output to standard out.
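
    A couple more quick examples in the same vein (the particular
    commands are just my picks -- anything that writes to standard out
    will do):

```vim
" Insert the current date and time at the cursor:
:r !date
" Insert a long listing of /usr/src:
:r !ls -l /usr/src
```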
                                      
   This discussion leads directly into the question of spell checking the
    current buffer. And the answer that I've got is that I haven't found
      an easy or convenient way to do this. I ran across a key mapping
    definition a while ago that basically copied the current file to the
     /tmp directory, ran ispell on this file, and then copied this file
   back over the original. It worked, but it was clunky. I've also tried,
              with some modest success, to do something like:

:w!
:! ispell %
:e %

   which basically commits the current buffer to file, starts a shell and
     runs ispell on the file, and then reloads that file once the spell
      checking is done. Thing is, this works at least reasonably well
   running vim in text mode; under gvim, ispell gives an error message to
                                the effect:

Screen too small:  need at least 10 lines
Can't deal with non-interactive use yet.

1 returned

                               Ideas anyone?
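
    One half-baked idea to throw out (a sketch only -- I've not beaten
    on it much): since the trouble seems to be that gvim doesn't give
    ispell a real terminal, you could hand it one by way of xterm:

```vim
" Possible gvim workaround: run ispell in its own xterm.
" Assumes xterm is on your PATH; adjust to taste.
map <F7> :w!<CR>:!xterm -e ispell %<CR>:e %<CR>
```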
                                      
                               Running Make!
                                      
     The specifics of setting up a makefile are, in academic parlance,
   "beyond the scope of this article...". (You can, however, find a good
          deal of information about makefiles using info; or, more
    appropriately, O'Reilly & Assoc. put out a very nice little book on
       managing projects using make -- hit up your friendly neighborhood
          librarian or find it at your favorite Linux bookstore!)
                                      
    I've found that gvim, in particular, provides excellent support for
    make. Basically, once you have a working makefile, simply invoke it
                                   using:

:make

   As the build process proceeds, you'll see all sorts of nifty messages
   go whizzing by. If make terminates on an error, gvim will very kindly
    load up the errant file and position the cursor at the line that was
   implicated as being the culprit. This is VERY handy. Also, if multiple
   errors are encountered, you can move from one error to the next using:

:cn

   which advances to the next error. For some reason, the console version
        of vim hasn't worked quite as well as gvim. It doesn't always
    automatically go to the first error encountered, although using the
                     ":cn" command seems to work fine.
                                      
                              And So Forth...
                                      
   Phew! How's everyone doing...? Still hanging in there? I'm almost done
                          here so stay tuned. :-)
                                      
   There are LOTS of other rather nifty features that vim/gvim provides.
    The adventurous will find all sorts of goodies to experiment with in
     the online documentation. Let me call your attention to just a few
     more and we'll wrap this up and have a look at some screen shots!
                                      
                               Unlimited Undo
                                      
      The way vim is (generally) configured, it keeps track of ALL the
    editing changes you've made to a file. So, after an hour's worth of
     editing, should you decide that War And Peace really didn't need
        another chapter, then you can back out of all your changes by
   repeatedly hitting the "u" key. This reverses the changes you've made
    to the file in a sequential fashion. Now for a major back out, you'd
      have done well to check the original file in under RCS and then
   retrieve this version if you decide not to keep your current changes.
   Still, you can back all the way out if you don't mind hitting "u" for
                               a while... :-)
                                      
                          Compressed File Support
                                      
    One of the other nice things introduced into vim around version 4.0
      was support for editing compressed files. Essentially, what this
    involves is transparent uncompressing of the file upon the start of
   editing and recompressing the file when vim terminates. This is quite
     helpful as it allows you to save a LOT of space if you work with a
     large number of text files that can be compressed. You may also be
   aware of the fact that the pager "less" has this support built in and
                      so do most all of the "emacsen".
                                      
   The support for this is configured in using an entry in your ~/.vimrc
      or ~/.gvimrc. I use the stock vimrc example that comes with the
                               distribution:

" Enable editing of gzipped files
"    read: set binary mode before reading the file
"          uncompress text in buffer after reading
"   write: compress file after writing
"  append: uncompress file, append, compress file
autocmd BufReadPre,FileReadPre      *.gz set bin
autocmd BufReadPost,FileReadPost    *.gz '[,']!gunzip
autocmd BufReadPost,FileReadPost    *.gz set nobin

autocmd BufWritePost,FileWritePost  *.gz !mv <afile> <afile>:r
autocmd BufWritePost,FileWritePost  *.gz !gzip <afile>:r

autocmd FileAppendPre           *.gz !gunzip <afile>
autocmd FileAppendPre           *.gz !mv <afile>:r <afile>
autocmd FileAppendPost          *.gz !mv <afile> <afile>:r
autocmd FileAppendPost          *.gz !gzip <afile>:r

    I still haven't completely gotten the hang of the autocmd stuff -- I
   suspect that there's all sorts of wild and fun things that you can do
            with this. Ahhh... places to go and things to do...!
                                      
                  Auto-Fill and Auto-Comment Continuation
                                      
     Here's yet another nifty little feature that makes life fuller and
                               richer... :-)
                                      
   You can set a text width variable in your ~/.gvimrc file that will do
    auto-fill (or auto-wrapping) at that line length. Currently, I have
     this set to 78 so that whenever the line exceeds 78 characters the
   line is automagically continued on the next line. This is a Very Nice
    Thing when typing text, although it can be a bit of a nuisance (and
                     can be shut off) when programming.
                                      
                                 However...
                                      
    There's an additional benefit to using this auto-fill thingie... if
   you're inserting a comment in C, C++, a shell script, whatever..., all
   you have to do is start the first line with a comment character ("/*",
    "//", "#") and then start typing. If the comment extends to the text
     width column, it automatically continues this on the next line AND
                  adds the appropriate comment character!
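
    If you want to chase this down in your own setup, the behavior hangs
    on the 'textwidth' and 'formatoptions' settings -- the flag letters
    below are from my copy, so do check ":help formatoptions" for yours:

```vim
set tw=78               " auto-wrap lines at 78 columns
set formatoptions=tcrq  " t: auto-wrap text, c: auto-wrap comments,
                        " r: repeat the comment leader on a new line,
                        " q: let 'gq' reformat comments too
```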
                                      
                              Very Slick! :-)
                                      
                            Choices, Choices...!
                                      
     Well, the recurrent theme of the day is "choices!". VIM comes with
   more options than you can shake a stick at. I'd encourage you to have
   a look at the online docs for a description of these. Not all of them
    will be useful to you but there are a LOT of interesting things that
              you can configure. My own favorite ones include:

set ai                " turn auto indenting on
set bs=2              " allow backspacing over everything in insert mode
set noet              " don't expand tabs into spaces
set nowrap            " disable line wrapping
set ruler             " display row,col ruler
set showmatch         " show matching delimiter for parentheses, braces, etc
set ts=4              " set tab stop width to 4
set tw=78             " always limit the width of text to 78
set sw=4              " set the shift width to 4 spaces
set viminfo='20,\"50  " read/write a .viminfo file, don't store more
                      " than 50 lines of registers

    One thing to call your attention to: the shift width stuff is something
   that you might not have tried yet or come across. Suppose that you've
   coded some horrendous switch statement and then realize that you need
    to add a while loop before it. You code in the while loop stuff and
        then go back and arduously re-indent everything in between.
                                      
                        There's an easier way... :-)
                                      
    Simply highlight the lines that you want to indent, either indent in
          or indent back out, using the mouse or "v" (for keyboard
   highlighting) and then hit the ">" key to indent the lines in farther
   or the "<" key to indent back out. Now, the nice thing is that you can
    set the amount of indentation using the "sw" (shiftwidth) variable.
                                      
   Also, keep in mind that while you normally set options in the ~/.vimrc
     or ~/.gvimrc configuration files, there's nothing to prevent your
     changing these options on the fly, and in different parts of your
   file. It's pretty common to turn off autoindentation when you're doing
      cutting and pasting. To turn autoindenting off, simply type in:

:set noai

             and off it goes. To turn it back on use ":set ai".
                                      
      Two other options that I particularly like are the ruler and the
   showmatch options. The ruler option puts a row,column indicator in the
     status line at the bottom of the file. Although the documentation
     mentions that this can slow performance a bit, I've found that it
                 works with no noticeable delay whatsoever.
                                      
    The other option is showmatch, which highlights the matching brace,
   bracket, or parenthesis as you type. Be aware that it sounds a warning
      beep if you insert a right brace/bracket/parenthesis without its
    opening mate. This can be a little annoying, but the first time it
     saves you from a syntax error, you'll be glad for it. I did a little
    bit of LISP programming this Spring in our Theory of Programming
              Languages course and was mighty happy to use this!
                                      
                        Ahhh! Time For The Pictures!
                                      
      Congrats! If you've made it this far you might be interested in
   finally having a look at all the good stuff that I've been mentioning
                                   here.
                                      
                            Here's the skinny...
                                      
    What I did was create a number of screen dumps of gvim in action --
   editing a *.cc file (show off the syntax highlighting stuff...), using
       the online help system (also shows the multi-window look), and
   displaying a manual page from within gvim ("Look ma! No hands...!"). I
     used the venerable ImageMagick to make the thumbnail prints after
    using a combination of xv, xwpick, and xwd to make the actual screen
                          dumps and crop the pics.
                                      
     Also, for the comparison shoppers out there, I've included similar
     screen dumps of XEmacs, GNU Emacs, NEdit, and XCoral -- other very
   nice and feature-rich editors that some of you will be familiar with.
       All of these provide syntax-highlighting and a set of extended
                                 features.
                                      
       Finally, I've included a couple shots of the xxgdb and the DDD
   debuggers. I've been using both quite a bit lately and found that they
   are absolutely indispensable for tracking down mischievous bugs. I've
   included a couple URL's below as well, but let's start with the Family
                                Photo Album:
                                      
                             gvim Screen Shots
                                      
                    All of these are approximately 20k.
                                      
                            [LINK] [LINK] [LINK]
                                      
                            The "Other Guys..."
                                      
                   All of these are approximately 20-25k
                                      
                        [LINK] [LINK] [LINK] [LINK]
                                      
                             The xxgdb Debugger
                                      
                                   [LINK]
                                      
                              The DDD Debugger
                                      
                   All of these are approximately 20-25k
                                      
                            [LINK] [LINK] [LINK]
                                      
          Let me make just a couple comments about the debuggers.
                                      
   First, I've found both of these to be very usable and helpful in terms
    of making debugging easier. They are both front ends to the GNU GDB
     debugger (and DDD can be used with a variety of other debuggers as
   well). The xxgdb debugger is the simpler of the two and probably is a
      good place to start learning and tinkering if you've not used a
                         graphical debugger before.
                                      
   I ended up having to do a bit of tinkering with the resource settings
     for xxgdb. I'm currently using Fvwm95 with a screen resolution of
      1024x768 and 8-bit color. To get all the windows comfortably in
    1024x768 I messed around with the geometry resources. Also, the file
   selection box was completely whacked out -- I did a bit of adjustment
     to this to provide for a more sane display. If you're interested,
             here's the XDbx resource file I'm currently using:
                                      
                            Xxgdb resource file
                                      
   Also, the DDD debugger shown above is the most current public release
   -- version 2.1 which just recently showed up in the Incoming directory
    at sunsite. I don't know if it'll still be there, but you can have a
    try. If you don't find it there, try the /pub/Linux/devel/debuggers
            subdirectory and see if it hasn't been moved there.
                                      
                      Sunsite Linux Incoming Directory
                                      
     Keep in mind that you probably should be using one of the sunsite
     mirrors. If there's one near you, then use it! :-) There should be
    dynamic and static binaries available as well as the source code. In
   addition, there's an absolutely HUGE postscript manual page with lots
     of nifty pictures included in the /doc subdirectory in the source
                                   file.
                                      
    I've not had a chance to use the new DDD debugger as much as xxgdb,
    but what I've tried I've been VERY impressed with. You'll see from the
     screen shots above that it has a much improved GUI as compared to
      xxgdb. Also, a number of new features have been added since the
     previous 1.4 release. One feature that I really like is setting a
    breakpoint, running the program, and then, by positioning the mouse
    pointer over a variable or data structure, getting a pop up balloon
               with the current value of that data structure.
                                      
                          This is huge. It rocks!
                                      
    I really don't have time to talk about this, so you'll have to do a
   bit of exploring on your own! Also, note that the folks working on DDD
   are encouraging the Motif-havenot's to either use the static binaries
   or give the LessTif libraries a try. Apparently, there have been some
   successes using this toolkit already. I'm sorry that I don't have the
     URL for LessTif, but a Yahoo, AltaVista, etc., search should turn up
                               what you need.
                                      
   And lastly (and this really is the last... :-), here's some URL's for
                         the editors listed above:
                                      
                               VIM Home Page
                                      
                              XEmacs Home Page
                                      
                        ftp.x.org FTP site (XCoral)
                                      
                      sunsite.unc.edu FTP site (NEdit)
                                      
    The first two links should put you at the VIM and XEmacs home pages
     which provide a wealth of helpful information about each of these
        excellent editors. The last two I apologetically provide as
    approximate FTP links. The first will drop you into ftp.x.org in its
    /contrib subdirectory. You should be able to find the latest version
   of XCoral there, probably under the /editors subdir. The version shown
   above is version 2.5; the latest version of xcoral is 3.0, which I've
   not had a chance to compile or tinker with. The last link will put you
   at sunsite in the /X11/xapps subdirectory. Have a look in the /editors
            subdir for the latest source or binaries for NEdit.
                                      
          Phew! That was a tour de force! Glad you hung in there!
                                      
    I'd be happy to try to field questions about this stuff or hear back
   from anyone with comments or suggestions about any of these excellent
                                 programs.
                                      
                             Hope you enjoyed!
                                      
                                    John
                                      
     _________________________________________________________________
                                      
                         [LINK]Closing Up The Shop
                                      
    Well, I apologize again for the brevity of this month's column. I'd
        hoped to do a bit more writing on a couple different things,
   particularly one of the topics that's near and dear to my heart: shell
     scripting. I'm absolutely convinced that learning even basic shell
    scripting will forever sour you to DOS and will make you think twice
   even about the Windows stuff. Shell programming opens up a tremendous
   world of possibilities and, probably most importantly, it puts you in
     control of your system. It lets you do all sorts of cool and groovy
    things that would be difficult or impossible under a DOS/Win system.
                                      
   As a quick example, I'd recently had an occasion in which I needed to
   format a stack of 30-40 floppies (I was planning to do an afio backup
    of the XEmacs distribution I'd spent several hours downloading) and
     decided to use superformat to do this. Now superformat is a great
   little program that has the typical bazillion options. Since I needed
      only a few of these options for my particular system, I whipped
   together a shell script to help automate this process. It's no marvel
                   of programming genius, but here it is:

#!/bin/sh
#
# fdformat.sh  formats 1.44 HD floppies in the fd0 drive
#
# Author:      John M. Fisk <ctrvax.vanderbilt.edu>
# Date:        6 May 1997

FORMAT_CMD="superformat -v 3 "
FLOPPY_DEV="/dev/fd0"

while : ; do
    echo -n "Format floppy [y,n]? "
    read yesno
    if [ "$yesno" = "y" -o "$yesno" = "Y" ]; then
        echo -n "Insert floppy and hit any key to continue..."
        read junk
        ${FORMAT_CMD} ${FLOPPY_DEV}
    else
        break
    fi
done

    Now, I'm sure that this could easily be improved upon, but the point
   was that it took me all of about 3 minutes to write this, it's easily
        maintained, and the logic is simple enough that it needs no
                               documentation.
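
    Just to illustrate one direction it could go (everything here -- the
    function name, the parameters, the dry-run trick -- is my own
    invention, not anything superformat requires): making the format
    command a parameter lets you test the loop with echo standing in
    when there's no floppy handy.

```shell
#!/bin/sh
# Sketch of a slightly sturdier take on the loop above.  The format
# command and device are parameters, so 'echo' can stand in for
# superformat when there's no floppy drive around to test with.
format_loop() {
    format_cmd="$1"
    floppy_dev="$2"
    count=0
    while : ; do
        printf "Format floppy [y,n]? "
        read yesno || break              # EOF ends the loop too
        case "$yesno" in
            [yY]) printf "Insert floppy and hit Enter to continue..."
                  read junk || break
                  $format_cmd "$floppy_dev"
                  count=$((count + 1)) ;;
            *)    break ;;
        esac
    done
    echo "Formatted $count floppies."
}

# Dry run: 'echo' stands in for 'superformat -v 3'.
printf 'y\n\ny\n\nn\n' | format_loop "echo superformat -v 3" /dev/fd0
```

    The dry run at the bottom prints the echoed command twice and then
    "Formatted 2 floppies." -- swap the echo out for the real
    superformat invocation when you mean business.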
                                      
                             Why bring this up?
                                      
    Well, I think this points to one of the larger issues with using and
    running Linux: the sense of control. Thing is, under a Linux system,
    you have an impressive arsenal of powerful and mature tools at your
    disposal that allow you to do things with your system. You can make it
                    do what you need and want it to do.
                                      
    Don't get me wrong, I enjoy many of the features of the OS/2, Win95,
    and MacOS OS's and I hope that the day will come when office suites
   and productivity tools of the highest caliber exist for Linux as they
     do under these other OS's. The thing that sets Linux apart is the
    freely available set of powerful tools that provide an unparalleled
              measure of freedom and control over your system.
                                      
                             Think about it...
                                      
     Shell scripting, Perl, Tcl/Tk, the entire range of GNU development
    tools and libraries, and a suite of immensely powerful utilities and
                                 programs.
                                      
                             That's impressive.
                                      
                 Anyway, I'm preaching to the choir... :-)
                                      
   Also, this is something of "old news", but I wanted to thank the folks
   at RedHat Software, Inc., the LUG at North Carolina State University,
    and the myriad participants in this year's Linux Expo '97. It was a
                                   blast!
                                      
   A bunch of us from MTSU headed East and managed to get to most of the
    two day affair. All in all, with the minor exception of some parking
     problems, the whole affair went smoothly and was VERY professionally
    done. The talks were delightful, the facilities very nice, and there
    were lots of great displays and vendor booths to visit and check out
    the latest Linux offerings. The book tent out front cleaned out more
   than one person's wallet, sending them home laden down with all sorts
                                of goodies.
                                      
                      All in all, it was a great trip.
                                      
   For anyone who went, I was, in fact, the annoying short nerdy looking
            fella in the front row with the camera. Sorry... :-)
                                      
    But, I just got the prints back and have sent a stack of them off to
    Michael K. Johnson at RedHat. Since I don't have a scanner or my own
   web site, I figured the Right Thing To Do would be to send the doubles
      to the guys at RedHat and let them put up anything they thought
     worthwhile. If you're interested in seeing who some of the various
   Linux folks are, drop Michael a note and I'm sure that he'll help out.
                                      
   Well, guess it's time to wrap this up. I had a great year this year at
    MTSU and am looking forward to finishing up school here one of these
   years :-). I'm also looking forward to having a summer of nothing more
    than Monday through Friday, 9:00 - 5:00. I don't know about you, but
    I've always got a long list of projects that I want to work on. I'm
   really looking forward to this. I've finally started learning emacs --
    actually, I've just gotten the most recent public release of XEmacs
    and have been having all sorts of fun trying to figure this one out.
   My wife and I will be leaving tomorrow for a couple weeks in Africa --
   actually, Zimbabwe and Zambia. Her parents are finishing up work there
     and will be returning this Fall. After a busy year for both of us,
     we're excited about a vacation and the chance to see them again. I
   should be back by the time this month's LG "hits the stands", although
   if you wrote during much of May, be aware that I'm definitely going to
                         have a mail black-out! :-)
                                      
      So... trust y'all are doing well. Congrats to all of this year's
                                   grads!
                                      
                Take care, Happy Linux'ing, and Best Wishes,
                                      
                                John M. Fisk
                               Nashville, TN
                             Friday, 9 May 1997
                                      
     _________________________________________________________________
                                      
                 [INLINE] If you'd like, drop me a note at:
                                      
                                      
    John M. Fisk <fiskjm@ctrvax.vanderbilt.edu>
    
     _________________________________________________________________
                                      
                       Copyright © 1997, John M. Fisk
           Published in Issue 18 of the Linux Gazette, June 1997
                                      
     _________________________________________________________________
                                      
                                      
                          Linux Gazette Back Page
                                      
           Copyright © 1997 Specialized Systems Consultants, Inc.
For information regarding copying and distribution of this material see the
                              Copying License.
                                      
     _________________________________________________________________
                                      
                                 Contents:
                                      
     * About This Month's Authors
     * Not Linux
       
     _________________________________________________________________
                                      
                         About This Month's Authors
                                      
     _________________________________________________________________
                                      
                                Larry Ayers
                                      
    Larry Ayers lives on a small farm in northern Missouri, where he is
   currently engaged in building a timber-frame house for his family. He
   operates a portable band-saw mill, does general woodworking, plays the
      fiddle and searches for rare prairie plants, as well as growing
    shiitake mushrooms. He is also struggling with configuring a Usenet
                       news server for his local ISP.
                                      
                                 Jim Dennis
                                      
     Jim Dennis is the proprietor of Starshine Technical Services. His
      professional experience includes work in the technical support,
      quality assurance, and information services (MIS) departments of
   software companies like Quarterdeck, Symantec/ Peter Norton Group, and
     McAfee Associates -- as well as positions (field service rep) with
    smaller VAR's. He's been using Linux since version 0.99p10 and is an
      active participant on an ever-changing list of mailing lists and
    newsgroups. He's just started collaborating on the 2nd Edition for a
    book on Unix systems administration. Jim is an avid science fiction
     fan -- and was married at the World Science Fiction Convention in
                                  Anaheim.
                                      
                                John M. Fisk
                                      
       John Fisk is most noteworthy as the former editor of the Linux
   Gazette. After three years as a General Surgery resident and Research
    Fellow at the Vanderbilt University Medical Center, John decided to
         "hang up the stethoscope", and pursue a career in Medical
     Information Management. He's currently a full time student at the
     Middle Tennessee State University and hopes to complete a graduate
      degree in Computer Science before entering a Medical Informatics
     Fellowship. In his dwindling free time he and his wife Faith enjoy
   hiking and camping in Tennessee's beautiful Great Smoky Mountains. He
        has been an avid Linux fan, since his first Slackware 2.0.0
                    installation a year and a half ago.
                                      
                                 Guy Geens
                                      
   One of Guy Geens's many interests is using Linux. One of his dreams is
   to be paid for being a Linux geek. Besides his normal work, he is the
       (rather inactive) maintainer of his research group's web pages
                     http://www.elis.rug.ac.be/~ggeens.
                                      
                                Ivan Griffin
                                      
   Ivan Griffin is a research postgraduate student in the ECE department
       at the University of Limerick, Ireland. His interests include
   C++/Java, WWW, ATM, the UL Computer Society (http://www.csn.ul.ie) and
        of course Linux (http://www.trc.ul.ie/~griffini/linux.html).
                                      
                             Michael J. Hammel
                                      
    Michael J. Hammel is a transient software engineer with a background
        in everything from data communications to GUI development to
   Interactive Cable systems--all based in Unix. His interests outside of
    computers include 5K/10K races, skiing, Thai food and gardening. He
    suggests if you have any serious interest in finding out more about
   him, you visit his home pages at http://www.csn.net/~mjhammel. You'll
            find out more there than you really wanted to know.
                                      
                                 Mike List
                                      
      Mike List is a father of four teenagers, musician, printer (not
      laserjet), and recently reformed technophobe, who has been into
             computers since April 1996, and Linux since July.
                                      
                               Dave Phillips
                                      
      Dave Phillips is a blues guitarist & singer, a computer musician
      working especially with Linux sound & MIDI applications, an avid
    t'ai-chi player, and a pretty decent amateur Latinist. He lives and
                        performs in Findlay OH USA.
                                      
                                Henry Pierce
                                      
   Henry graduated from St. Olaf College, MN where he first used BSD UNIX
     on a PDP-11 and VAX. He first started to use Linux in the Fall of
   1994. He has been working for InfoMagic since June of 1995 as the lead
          Linux technical person. He is now an avid Red Hat user.
                                      
                               Michael Stutz
                                      
   Michael lives the Linux life. After downloading and patching together
       his first system in '93, he fast became a Linux junkie. Long a
     proponent of the GNU philosophy (publishing books and music albums
     under the GPL), he sees in Linux a Vision, enough so that he spends
      his time developing a custom distribution (based on Debian) and
     related documentation for writers and other "creative" types and has
      formed a consulting firm based on GNU/Linux. His company, Design
    Science Labs, does Linux consulting for small-scale business and art
   ventures. He has written for Rolling Stone, 2600: The Hacker Quarterly
     and Alternative Press. He's a staff writer for US Rocker, where he
                    writes about underground rock bands.
                                      
                                 Josh Turiel
                                      
     Josh Turiel is the IS Manager of a small advertising agency south of
         Boston. He also runs the Greater Boston Network Users Group
     (http://www.bnug.org/). He writes and does consulting work as well.
       Since he has no life whatsoever as a result, his rare home time
         is spent sucking up to his wife and maintaining his cats.
                                      
     _________________________________________________________________
                                      
                                 Not Linux
                                      
     _________________________________________________________________
                                      
   Thanks to all our authors, not just the ones above, but also those who
    wrote giving us their tips and tricks and making suggestions. Thanks
                       also to our new mirror sites.
                                      
    My assistant, Amy Kukuk, did ALL the work this month other than this
   page. If this keeps up, I may have to make her the Editor. Thanks very
                      much for all the good work, Amy.
                                      
   These days my mind seems to be fully occupied with Linux Journal. As a
    result, I've been thinking I need a vacation. And, in fact, I do. I
   had been planning to take off a week in June to visit my grandchildren
    in San Diego, California, but just learned that their current school
   district is year round -- no summers off. Somehow this seems anti-kid,
    anti-freedom and just plain hideous. I remember the summers off from
    school as a time for having fun, being free of assignments and tests
   -- a time to sit in the top of a tree in our backyard reading fiction,
   while the tree gently swayed in the breeze (I was fairly high up). It
   was great. I wouldn't want to ever give up those summers of freedom. I
   wish I still had them. Ah well, no use pining for "the good ol' days".
   The grandkids will get some time off from school in August, and I will
               just have to put off the vacation until then.
                                      
                              Stop the Presses
                                      
   Be watching the Hot Linux News (link on The Front Page) on June 7 for
         an important announcement concerning the trademark issue.
                                      
                                 Have fun!
                                      
     _________________________________________________________________
                                      
                           Marjorie L. Richardson
                   Editor, Linux Gazette gazette@ssc.com
                                      
     _________________________________________________________________
                                      
                 [ TABLE OF CONTENTS ] [ FRONT PAGE ] Back 
                                      
     _________________________________________________________________
                                      
         Linux Gazette Issue 18, June 1997, http://www.ssc.com/lg/
      This page written and maintained by the Editor of Linux Gazette,
                              gazette@ssc.com