File: HOWTO.FAQ.DO-DONT


                AF's Backup HOWTO
                =================


Index
-----

1. How to optimize the performance to obtain a short backup time ?

2. How to start the backup on several hosts from a central machine ?

3. How to store the backup in a filesystem instead of a tape ?

4. How to use several streamer devices on one machine ?

5. How to recover from a server crash during backup ?

6. How to port to other operating systems ?

7. How to provide recovery from hard crashes (disk crash, ...) ?

8. How to make differential backups ?

9. How to use several servers for one client ?

10. How can I automatically make copies of the written tapes after a backup ?

11: How to redirect network backups through a secure ssh connection ?

12: What's the appropriate way to eject the cartridge after backup ?

13: How to encrypt the stored files and not only compress them ?

14: How to use the multi-stream server ? Anything special there ?

15: How many clients can connect the multi-stream server ?

16: How to get out of the trouble, when the migration script fails ?

17: How to use built-in compression ?

18: How to save database contents ?

19: How to use the ftape driver ?

20: How to move a cartridge to another set due to its usage count ?

21: How to make backups to different cartridge sets by type or by date ?

22: How to achieve independence from the machine names ?

23: How to restrict the access to cartridges for certain clients ?

24: How to recover from disaster (everything is lost) ?

25: How to label a tape, while the server is waiting for a tape ?

26: How to use a media changer ?

27: How to build Debian packages ?

28: How to let users restore on a host, they may not login to ?

29: How to backup through a firewall ?

30: How to configure xinetd for afbackup ?

31: How to redirect access, when a client contacts the wrong server ?

32: How to perform troubleshooting when encountering problems ?

33: How to use an IDE tape drive with Linux the best way ?

34: How to make afbackup reuse/recycle tapes automatically ?

35: How to make the server speak one other of the supported languages ?


D. Do-s and Dont-s

F. The FAQ


--------------------------------------------------------------------------

1. How to optimize the performance to obtain a short backup time ?

Since version 2.7 the client side basically tries to adapt automatically
to the maximum currently achievable throughput, so the administrator
doesn't have to do much here.
The crucial point is the location of the bottleneck for the throughput
of the backup data stream. This can be one of:

- The streamer device
- The network connection between backup client and server
- The CPU on the backup client (in case of compression selected)

What usually is not a problem:

- The CPU load of the server

The main influence the administrator has on backup performance
is the compression setting on the client side. In most cases the bottleneck
for the data stream will be the network. If it is based on standard
ethernet, the maximum throughput without any other network load will be
around 1 MB/sec. With 100 MBit ethernet or a similar technology about
10 MB/sec might be achieved, so the streamer device is probably the
slowest part (with maybe 5 MB/sec for an Exabyte tape). To use this
capacity it is not a good idea to load the client side CPU with heavy
data compression. This can be inefficient and thus lead to
poor backup performance. The influence of the compression rate on the
backup performance is illustrated by the following table. The
times in seconds have been measured with the (unrepresentative)
configuration given below the table. The raw backup duration gives the
pure data transmission time without tape reeling or cartridge loading
or unloading.

 compression program   |  raw backup duration
-----------------------+----------------------
  gzip -1              |    293 seconds         |
  gzip -5              |    334 seconds         |
  compress             |    440 seconds         | increasing duration
  <no compression>     |    560 seconds         |
  gzip -9              |    790 seconds         V


Configuration:
Server/Client machine:
  586, 133/120MHz (server/client), 32/16 MB (server/client)
Network:
  Standard ethernet (10 MBit, 10BASE2 (BNC/Coax), no further load)
Streamer:
  HP-<something>, 190 kByte/sec

Obviously the bottleneck in this configuration is the streamer.
Nevertheless it shows the big advantage compression can have on the
overall performance. The best performance is achieved here with
the lowest compression rate and thus the fastest compression
program execution. I would expect the performance optimum to
shift towards somewhat stronger compression with a faster client
CPU (e.g. the latest Alpha-rocket).

So to find an individual performance optimum I suggest running some
backups with a typical directory containing files and subdirectories
of various sizes. Run these backups manually on the client-side machine
with different compression settings using the "afclient" command as
follows:

/the/path/to/bin/afclient -cvnR -h your_backuphost -z "gzip -1" gunzip \
                             /your/example/directory

Replace "gzip -1" and "gunzip" appropriately for the several runs.


--------------------------------------------------------------------------

2. How to start the backup on several hosts from a central machine ?

The remote startup utility serves this purpose. To get this working,
a part of the serverside installation must be made on the client side,
i.e. on the host where the backup is to be started from a remote site.
Choose the appropriate option when running the Install
script.

To start a backup on another machine, use the -X option of the
client-program. A typical invocation is

/the/path/to/client/bin/afclient -h <hostname> -X incr_backup

This starts an incremental backup on the supplied host. Only programs
on the remote host that reside in the directory configured
as Program-Directory in the configuration file of the serverside
installation part of the remote host (default: $BASEDIR/server/rexec)
can be started, no others. The entries may be symlinks, but
they must have the same filename as the programs they point to.

The machine where this command is started may be any machine in
the network that has the client side of the backup system installed.
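
A minimal sketch of a central driver script for several clients, assuming
the client programs live under /usr/backup/client/bin and using made-up
hostnames (adjust both to your site):

 #!/bin/sh
 # Start an incremental backup on several clients, one after the other,
 # via the secure remote startup facility (-X).
 CLIENTBIN=/usr/backup/client/bin
 for host in fileserver1 fileserver2 mailhost
 do
     echo "Starting incremental backup on $host ..."
     $CLIENTBIN/afclient -h "$host" -X incr_backup || \
         echo "WARNING: remote start on $host failed" >&2
 done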


--------------------------------------------------------------------------

3. How to store the backup in a filesystem instead of a tape ?

There are several ways to accomplish that. Two options are
explained here. I personally prefer option 2, but they are
basically equivalent.

* Option 1 (using symbolic links)

Assuming the directory where you'd like to store the backup is
/var/backup/server/vol.X, with X being the number of the pseudo-
cartridge, change to the directory /var/backup/server and create
a symbolic link and a directory like this:

 ln -s vol.1 vol ; mkdir vol.1

Then create the file `data.0' and a symlink `data' to it with

 touch vol/data.0
 ln -s data.0 vol/data

The directories and symlinks /var/backup/server/vol* must be owned by,
or at least be writable for, the user under whose ID the backup server
is running. The same applies to the directory /var/backup/server.
If this user is not root, issue an appropriate chown command, e.g.:

 chown backup /var/backup/server /var/backup/server/vol*

At least two pseudo-cartridges should be used. This is achieved by
limiting the number of bytes to be stored on each of them. So now
edit your serverside configuration file and make e.g. the following
entries (assuming /usr/backup/server/bin is the directory, where the
programs of the server side reside):

Backup-Device:          /var/backup/server/vol/data
Tape-Blocksize:         1024
Cartridge-Handler:      1
Number Of Cartridges:	1000
Max Bytes Per File:     10485760
Max Bytes Per Tape:     104857600
Cart-Insert-Gracetime:  0
SetFile-Command:        /bin/rm -f %d;touch %d.%m; ln -s %d.%m %d; exit 0
SkipFiles-Command:      /usr/backup/server/bin/__inc_link -s %d %n
Set-Cart-Command:       /bin/rm -f /var/backup/server/vol; mkdir -p /var/backup/server/vol.%n ; ln -s vol.%n /var/backup/server/vol ; touch %d.0 ; /bin/rm -f %d ; ln -s data.0 %d;exit 0
Change-Cart-Command:    exit 0
Erase-Tape-Command:     /bin/rm -f %d.[0-9]* %d ; touch %d.0 ; ln -s %d.0 %d ; exit 0

If the directory /var/backup/server/vol/data is on removable media,
you can supply the number of media you would like to use and an
eject command as follows:

Number Of Cartridges:   10
# or whatever

Change-Cart-Command:    your_eject_command

If a suitable eject-command does not exist, try to write one yourself.
See below for hints.

Furthermore you can add the appropriate umount command before the eject-
command like this:

Change-Cart-Command:    umount /var/backup/server/vol/data; your_eject_command

To get this working the backup serverside must run as root. Install the
backup stuff supplying the root user when prompted for the backup user.
Or edit /etc/inetd.conf and replace backup (or whatever user you configured)
in the 5th column with root, sending a kill -HUP to the inetd afterwards.
You must mount the media manually after inserting it into
the drive. Afterwards run the command /path/to/server/bin/cartready to
indicate that the drive is ready to proceed. This is the same procedure
as with a tape drive.

Each medium you will use must be prepared by creating the file "data.0"
and setting the symbolic link "data" pointing to data.0 as described above.
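
The preparation steps can be put into a small helper script, e.g. like the
following sketch (the base directory and the user "backup" are assumptions
taken from the example above -- adapt them to your installation):

 #!/bin/sh
 # Prepare pseudo-cartridge number $1 under /var/backup/server:
 # create the vol.N directory, the data.0 file and the two symlinks,
 # then hand everything over to the backup user.
 N=${1:?usage: prepare_cart <number>}
 BASE=/var/backup/server
 mkdir -p $BASE/vol.$N
 rm -f $BASE/vol
 ln -s vol.$N $BASE/vol
 touch $BASE/vol/data.0
 rm -f $BASE/vol/data
 ln -s data.0 $BASE/vol/data
 chown backup $BASE $BASE/vol.$N $BASE/vol.$N/data.0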


* Option 2 (supply a directory name as device)

Like with option 1 several pseudo-cartridges should be used, at
least two. Like above create a directory to contain the backup data
and a symlink, then chown them to the backup user:

 mkdir -p /var/backup/server/vol.1
 ln -s vol.1 /var/backup/server/vol
 chown backup /var/backup/server/vol*

Using one of the serverside configuration programs or editing the
configuration file, supply a directory name as the backup device.
The directory must be writable for the user, under whose ID the
server process is started (whatever you configured during
installation, see /etc/inetd.conf). The backup system then writes
files with automatically generated names into this directory.
The rest of the configuration should e. g. be set as follows:

Backup-Device:          /var/backup/server/vol
Tape-Blocksize:         1024
Cartridge-Handler:      1
Number Of Cartridges:   100
Max Bytes Per File:     10485760
Max Bytes Per Tape:     104857600
Cart-Insert-Gracetime:  0
SetFile-Command:        exit 0
SkipFiles-Command:      exit 0
Set-Cart-Command:       /bin/rm -f %d ; mkdir -p %d.%n ; ln -s %d.%n %d ;  exit 0
Change-Cart-Command:    exit 0
Erase-Tape-Command:     /bin/rm -f %d/* ; exit 0

A SetFile-Command is mandatory, so this exit 0 is a dummy.
For the further options (using mount or eject commands) refer
to the explanations under * Option 1.


(
   How to write an eject command for my removable media device ?

If the information in the man-pages is not sufficient or you don't
know, where to search, try the following:
Do a grep ignoring case for the words "eject", "offline" and
"unload" over all system header-files like this:

egrep -i '(eject|offl|unload)' /usr/include/sys/*.h

On Linux also try /usr/include/linux/*.h and /usr/include/asm/*.h.
You should find macros defined in headers whose names give hints to
the kinds of devices they relate to. Look into the header to see whether
the macros can be used with the ioctl system call. The comments
should give the details. Then you can eject the media with a
code fragment like the following:

#include <stdio.h>
#include <fcntl.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <your_device_related_header>   /* defines YOUR_EJECT_MACRO */

int main(void)
{
  int   res, fd;
  char  *devicefile = "/dev/whatever";

  /* the device must be opened to apply the ioctl to it */
  fd = open(devicefile, O_RDONLY);

  if(fd < 0){
    perror(devicefile);
    return 1;
  }

  /* this is the actual eject / unload request */
  res = ioctl(fd, YOUR_EJECT_MACRO);

  if(res < 0){
    perror("ioctl");
    close(fd);
    return 1;
  }

  close(fd);
  return 0;
}

You might want to extend the utility obtainable via ftp from:
ftp://ftp.zn.ruhr-uni-bochum.de/pub/Linux/eject.c and related
files. Please send me any success news. Thanks !


--------------------------------------------------------------------------

4. How to use several streamer devices on one machine ?

Run an installation of the server side for each streamer device,
install everything into a separate directory and give a different
port number to each installed server. This can be done by giving each
server its own service name. For the default installation, the
service is named "afbackup" and has port number 2988. Thus, entries
are provided in files in /etc:

/etc/services:
afbackup  2988/tcp

/etc/inetd.conf:
afbackup stream tcp nowait ...

For a second server, you may add appropriate lines, e.g.:

/etc/services:
afbackup2 2989/tcp

/etc/inetd.conf:
afbackup2 stream tcp nowait ...

Note that the paths to the configuration files later in the inetd.conf
lines must be adapted to each installation, respectively. To activate the
services, send a hangup signal to the inetd
(ps ..., kill -HUP <PID>).
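
On many systems this boils down to something like the following sketch
(command details vary; on Linux "killall -HUP inetd" usually works, too):

 # make inetd re-read /etc/inetd.conf and pick up the new service
 kill -HUP `ps -e | awk '$NF == "inetd" { print $1 }'`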

It is important that each of several servers running on the same
host has its own lock file. So configure, for example, lockfiles that
are located in each server's var directory. If they all share
one lockfile, several servers cannot run at the same time, which
is usually not what you want.

The relations between backup clients and streamer devices on the
server must be unique. Thus the /etc/services on the clients must
contain the appropriate port number for the backup entry, e.g.:

afbackup  2990/tcp

Note that on the clients the service name must always be "afbackup"
and not "afbackup2" or whatever.

As an alternative, you can supply the individual port number in
the clientside configuration. If you do so, no changes need to be
made in any clientside system file, here /etc/services.

Do not use NIS (YP) for maintaining the afbackup-services-entry, i.e.
do not add the entry with "afbackup" above to your NIS-master-services-file.
It is anyway better not to use the files /etc/passwd ... as sources
for your NIS-master-server, but to use a copy of them in a separate
directory (as usually configured on Solaris and other Unixes).


--------------------------------------------------------------------------

5. How to recover from a server crash during backup ?

With some devices there is the problem that the end-of-tape mark
is not written if the power goes down while writing to the tape. Even worse,
when power is up again, the information about the current head position
is corrupt, even if no write access was in progress at power-down.
Some streamers are furthermore unable to start writing at a tape
position that is still followed by records, e.g. if there are 5 files on
tape, it is impossible to go to file 2 and start writing there. An
I/O error will be reported.

The only way to solve this is to tell the backup system to start to
write at the beginning of the next cartridge. If the next cartridge
has e.g. the label-number 5, log on to the backup server, become root
and type:

  /your/path/to/server/bin/cartis -i 5 1


--------------------------------------------------------------------------

6. How to port to other operating systems ?


* Unix-like systems *

This is not that difficult. The GNU-make is mandatory, but this is
usually no problem. A good way to start is to grep for AIX or sun
over all .c- and .h-files, edit them as needed and run the make.
You might want to run the prosname.sh to find out a specifier for
your operating system. This specifier will be defined as a macro
during compilation (more exactly: preprocessing).

An important point is the x_types.h-file. Here the types should be
adapted as described in the comments in this file, lines 28-43.
Insert ifdef-s as needed like for the OSF 1 operating system on alpha
(macros __osf__ and __alpha). Note, that depending on the macro
USE_DEFINE_FOR_X_TYPES the types will be #define-d instead of
typedef-d. This gives you more flexibility if one of those
possibilities causes problems.

The next point is the behaviour of the C library concerning the
errno variable in case the tape comes to its physical end. In most
cases errno is set to ENOSPC, but not always (e.g. AIX is special).
This can be adapted modifying the definition of the macro
END_OF_TAPE (in budefs.h). This macro is only used in if-s as shown:
  if(END_OF_TAPE) ...
Consult your man-pages for the behaviour of the system calls on
your machine. It might be found under rmt, write or ioctl.

The next is the default name of the tape device. Define the macro
DEFAULT_TAPE_DEVICE (in budefs.h) appropriately for your OS.

The statfs(2) system call is a little pathological. It has a different
number of arguments depending on the system. Consult your man-pages
for how it should be used. statfs is only used in write.c.

There may be further patches to be done, but if your system is close
to POSIX this should be easy. The output of the compiler and/or the
linker should give the necessary hints.

Please report porting successes to af@muc.de. Thanks.

Good luck !



* Win-whatever *

This is my point of view:

Porting to Microsoft's Features-and-bugs-accumulations is systematically
made complicated by the Gates-Mafia. They spend a lot of time on taking
care, that it is as difficult as possible to port to/from Win-whatever.
This is one of their monopolization strategies. Developers starting to
write programs shall have to make the basic decision: "Am i gonna hack
for Micro$oft's "operating systems", or for the others ?" Watching the
so-called market this decision is quite easy: Of course they will program
for the "market leader". And as few as possible of what they produce
should be usable on other ("dated") platforms. Companies like Cygnus
are providing cool tools (e. g. a port of the GNU-compiler) to make
things easier but due to the fact, that M$ are not providing so many
internals to the public, in my opinion porting is nonetheless an
annoying job. Thank Bill Gates for his genius strategies.

In short, at the moment I'm not gonna provide information on how to port
to Micro$oft platforms. If somebody does a port, I won't hinder him
but will not provide any support for it. As this software (like the most
on Unix) heavily relies on POSIX-conformance and Mafia$oft has announced,
that the "POSIX-subsystem for NT" will not be shipped anymore in the near
future (BTW they discourage to use it at all "cause of security problems"
(Bullshit) - see the Microsoft web pages), the porting job will either
substitute all POSIX-calls by Win32-stuff (super-heavy efforts), or bring
only temporary fun (see above).


--------------------------------------------------------------------------

7. How to provide recovery from hard crashes (disk crash, ...) ?

A key to this is the clientside StartupInfoProgram parameter. This
command should read the standard input and write it to some place
outside of the local machine - to be more precise - not to a disk
undergoing backups or containing the clientside backup log files.
The information written to the standard input of this program is
the minimum information required to restore everything after a
complete loss of the saved filesystems and the client side of the
backup system. Recovery can be achieved using the restore-utility
with the -e flag (See: PROGRAMS) and supplying the minimum recovery
information to the standard input of restore. Several options exist:

- Write this information to a mail program (assuming the mail folders
  are outside of the filesystems undergoing backup), sending the
  information to a backup user. Later the mailfile can be piped into
  the restore utility (mail-related protocol lines and other unneeded
  stuff will be ignored). For each machine that is a backup client,
  an individual mail user should be configured, because the minimum
  restore information does NOT contain the hostname (so that it is
  possible to restore to a different machine, which might make perfect
  sense in some situations).

- Write the information into a file (of course: always append)
  that resides on an NFS-mounted filesystem, possibly for security
  reasons exported to this machine only. To be even more
  secure, the exported directory might be owned by a non-root user,
  who is the only one who may write to this directory. This way
  exporting a directory with root access can be avoided. Then the
  StartupInfoProgram should be something like:
   su myuser -c "touch /path/to/mininfo; cat >> /path/to/mininfo"
  The mininfo file should have a name that allows deducing the
  name of the backup client that wrote it, e.g. simply use the
  hostname as the filename.

- Write the information to a file on floppy disk. Then the floppy
  disk must always be in the drive, whenever a backup runs. The
  floppy could be mounted using the amd automounter as explained in
  ftp://ftp.zn.ruhr-uni-bochum.de/pub/linux/README.amd.floppy.cdrom
  or using the mtools usually installed for convenience. In the
  former case the command should contain a final sync. In the
  latter case the file must be first copied from floppy, then
  appended the information, finally copied back to floppy e.g. like
  this:
   mcopy -n a:mininfo /tmp/mininfo.$$; touch /tmp/mininfo.$$; \
       cat >> /tmp/mininfo.$$; mcopy -o /tmp/mininfo.$$ a:mininfo; \
       /bin/rm -f /tmp/mininfo.$$; exit 0
  Note that the whole command must be entered in one line when using
  the (x)afclientconfig command. In the configuration file line-end
  escaping is allowed, but it is not recognized by (x)afclientconfig. An
  alternative is to put everything into one script that is started
  as the StartupInfoProgram (don't forget to provide a good exit code
  on successful completion).

My personal favourite is the second option, but individual preferences
or requirements might lead to different solutions. There are more
options here. If someone thinks I have forgotten an important one,
feel free to email me about it.
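
As an illustration of the second option above, the StartupInfoProgram
could point to a small wrapper script like the following sketch (the
mount point /backupinfo and the user "myuser" are assumptions -- adapt
them to your environment):

 #!/bin/sh
 # StartupInfoProgram wrapper: append the minimum restore information
 # read from standard input to a per-host file on an NFS-mounted
 # directory owned by an unprivileged user.
 INFOFILE=/backupinfo/`hostname`.mininfo
 su myuser -c "touch $INFOFILE; cat >> $INFOFILE"
 exit $?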

It might be a good idea to compile afbackup linked statically with
all required libraries (building afbackup e.g. using the command
make EXTRA_LD_FLAGS=-static when using gcc), install it, run the
configuration program(s), if not yet done, tar everything and
put it to a floppy disk (if enough space is available).

To recover from a heavy system crash perform the following steps:
- Replace bad disk(s) as required
- Boot from floppy or cdrom (the booted kernel must be network-able)
- Add the backup server to /etc/hosts and the following line to
  /etc/services: afbackup 2988/tcp
- Mount your new disk filesystem(s) e.g. in /tmp/a, in a way that
  this directory reflects your original directory hierarchy below
  / (as most system setup tools usually do)
- Untar your packed and statically linked afbackup distribution, but
  NOT to the place where it originally lived (e.g. /tmp/a/usr/backup),
  because it will be overwritten if you also saved the clientside
  afbackup installation, which I strongly recommend.
- Run the restore-command with -e providing the minimum restore
  information saved outside of the machine to stdin:
  /path/to/staticlinked/afrestore -C /tmp/a -e < /path/to/mininfo-file

Bootsector stuff is NOT restored in this procedure. For Linux
you will have to reinstall lilo, but this is usually no problem.


--------------------------------------------------------------------------

8. How to make differential backups ?

A differential backup means for me: Save all filesystem entries modified
since the previous full backup, not only those modified since the last
incremental backup.

This task can be accomplished using the -a option of the incr_backup
command. It tells incr_backup to keep the timestamp. If -a is omitted
once, another differential backup is no longer possible, since the
timestamp is then modified. So if differential backups are required,
you have to do without (plain) incremental backups.
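
As a sketch, a weekly schedule might look like the following crontab
fragment (the installation path is an example; adjust times and paths):

 # full backup on Sunday night, differential backups (-a keeps the
 # timestamp of the preceding full backup) on the other nights
 0 1 * * 0    /usr/backup/client/bin/full_backup
 0 1 * * 1-6  /usr/backup/client/bin/incr_backup -a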


--------------------------------------------------------------------------

9. How to use several servers for one client ?

Several storage units can be configured for one client. A storage unit
is a combination of a hostname, a port number and a cartridge set number.
Several servers can be configured on one machine, each operating its own
streamer device or directory for storing the data.

The storage units are configured by the first three parameters of the
client side. These are hostnames, port numbers and cartridge set numbers,
respectively. Several entries can be made for each of these parameters.
The port numbers and/or cartridge set numbers can be omitted, or fewer
of them than hostnames can be supplied; the defaults then apply. If more
port or cartridge set numbers than hostnames are given, the superfluous
ones are ignored. The lists of hostnames and numbers can be separated
by whitespace and/or commas.

When a full or incremental backup starts on a client, it tests the
servers one after the other to see whether they are ready to service it.
If none is ready, it waits for a minute and tries again.

With each stored filesystem entry, not only the cartridge number and
file number on tape are stored, but also the name of the host the
entry is stored to and the appropriate port number. Thus
entries can be restored without the user or administrator having to
know where they are now. This all happens transparently and
without additional configuration effort. For older backups, the first
entry of each list (hostname and port) is used. Therefore, in case of
an upgrade, the first entries MUST be those that applied for this
before the upgrade.

If there are several clients, the same order of server entries should
not be configured for all of them. This would probably cause most of
the backups to go to the first server, while the other(s) are not
used at all. The entries should be made in a way that achieves a good
balance of the storage load. Other considerations are:

- Can the backup be made to a server in the same subnet as the
  client ?
- Has this software been upgraded ? Then the first entry should be
  the same server as configured before (see above)
- The data volume on the clients to be saved (should be balanced)
- The tape capacity of the servers
- other considerations ...


--------------------------------------------------------------------------

10. How can i automatically make copies of the written tapes after a backup ?

For this purpose a script has been added to the distribution. Its name
is autocptapes and it can be found in the /path/to/client/bin directory.
autocptapes reads the statistics output and copies all tapes
from the first accessed tape through the last one to the given destination.
Copying begins at the first written tapefile, so the whole tape
contents are not copied again every time.

The script has the following usage:

autocptapes [ -h <targetserver> ] [ -p <targetport> ] \
                   [ -k <targetkeyfile> ] [ -o cartnumoffset ]

targetserver    must be the name of the server, where to copy the tapes to.
                (default, if not set: the source server)
targetport      must be the appropriate target server port (default, if not
                set: the source port)
targetkeyfile   the file containing the key to authenticate to the target
                server (default: the same file as for the source server)
cartnumoffset   the offset to be added to the source cartridges' numbers
                to get the target cartridge numbers (may be negative,
                default: 0). This is useful if e.g. copies of tapes 1-5
                shall go to tapes 6-10, then simply an offset of 5 would
                be supplied.

The script can be added to the client side configuration parameter
ExitProgram, so that it reads the report file containing the backup
statistics. This may e.g. look as follows:

ExitProgram:		/path/to/client/bin/autocptapes -o 5 < %r

Note that this is a normal shell-interpreted line and %r can be used
in several commands separated by semicolons, && or || ...

WARNING: If several servers are configured for the client, this
automatic copying is strongly discouraged, because cartridge numbers
on one server do not necessarily have anything to do with those on
another server. It should be carefully figured out how a mapping of
source and target servers and cartridge numbers could be achieved.
This is a subject for future implementations.


--------------------------------------------------------------------------

11: How to redirect network backups through a secure ssh connection ?

ssh must be up and working on client(s) and server(s). On the
server, an sshd must be running. Then port forwarding can be
used. As afbackup does not use a privileged port, the forwarding
ssh need not run as root. Any user is OK. To enable afbackup
to use a secure ssh connection, no action is necessary on the
server. On the client, the following steps must be taken:

- In the clientside configuration file, configure localhost as the
  server, i.e. the client itself (the ssh forwarder seems to
  accept connections only from the loopback interface). No
  afbackup server process should be running on this client. If
  an afbackup server is running, a different port than the default
  2988 must be configured. This different port number should then be
  passed to the ssh forwarder when it is started.

- Start the ssh forwarder. The following command should do the job:

   ssh -f -L 2988:afbserver:2988 afbserver sleep 100000000

Explanations: -f makes ssh run in the background, so & is not
 necessary. -L tells ssh to listen locally on port 2988.
 This (first) port number must be replaced if a different port
 must be used due to an afbackup server running locally or other
 considerations. afbserver must be replaced with the name of the
 real afbackup server. The second port number 2988 is the one
 where the afbackup server really expects connections and that
 was configured on the client before trying to redirect over ssh.
 The sleep 100000000 is an arbitrary command that simply does not
 terminate for a sufficiently long time.

Now the afbackup client connects to the locally running ssh, which
in turn connects to the remote sshd, which connects to the afbackup
server awaiting connections on the remote host. So all network traffic
passes between the ssh and the sshd and is thus encrypted.
A simple test can be run on the client (portnum only needs to be
supplied if it is != 2988):

 /path/to/client/bin/client -h localhost -q [ -p portnum ]

If that works, any afbackup stuff should.

If it is not acceptable that the ssh connection is initiated from
the client side, the other direction can be set up using the -R
option of ssh. Instead of the second step in the explanations above,
perform:

- On the server start the command:

   ssh -f -R 2988:afbserver:2988 afbclient sleep 100000000


--------------------------------------------------------------------------

12: What's the appropriate way to eject the cartridge after backup ?

In my opinion it is best to exploit the secure remote start option
of afbackup. Programs present in the directory configured as the
Program-Directory on the server side can be started from a client
using the -X option of afclient. Either write a small script that
does the job and put it into the configured directory (created if
not already present). Don't forget the execute permission. Or
simply create a symbolic link to mt in that directory (e.g. type
ln -s `which mt` /path/to/server/rexec/mt). Then you can eject the
cartridge from any client by running

/.../client/bin/afclient -h backupserver -X "mt -f /dev/whatever rewoffl"


--------------------------------------------------------------------------

13: How to encrypt the stored files and not only compress them ?

A program that performs the encryption is necessary; let's simply call
it des, which serves as an example for what we want to achieve here. The
basic problem must be mentioned here: To supply the key it is necessary
either to type in the key twice or to supply it on the command line using
the option -k. Typing the key in is useless in an automated environment.
Supplying the key in an option makes it visible in the process list, which
any user can display using the ps command or (on Linux) by reading the
pseudo-file cmdline present in each process's /proc/<pid>/ directory.
The des program tries to hide the key by overwriting the 8 significant bytes
of the argument, but this does not always work. Anyway, the des program
shall serve as the example here. Note that the des program will usually
return an exit status unequal to 0 (?!?), so the message "minor errors
occurred during backup" has no special meaning in this case.

Another encryption program comes with the afbackup distribution and is
built if the libdes is available and des-encrypted authentication is
switched on. The program is called __descrpt. See the file PROGRAMS
for details on this program. The advantage of this program is that
no key has to be supplied on the command line, visible in the process
list. The disadvantage is that the program must not be executable by
intruders, because they would be able to simply start it and decrypt.
To mitigate this to a certain degree, a filename can be supplied to
this program, from which the key will be read. In this case this key
file must be access-restricted instead of the program itself.

If only built-in compression is to be used, everything is quite simple.
The BuiltinCompressLevel configuration parameter must be set > 0 and the
en- and decrypt programs be specified as CompressCmd and UncompressCmd.
If an external program should be used for compress and uncompress, it
is a little more difficult:

Because the client side configuration parameter CompressCommand is NOT
interpreted in a shell-like manner, no pipes are possible here. E.g. it
is impossible to supply something like  gzip -9 | des -e -k lkwjer80723k
there.

To fill this gap the helper program __piper is added to the distribution.
This program gets a series of commands as arguments. The pipe symbol |
may appear several times in the argument list, indicating the end of a
command and the beginning of the next one. Standard output of one command
and standard input of the following command are connected as in a usual
shell pipeline. No other special character is interpreted except the
double quotes, which can delimit arguments consisting of several words
separated by whitespace. The backslash serves as escape character for
double quotes or the pipe symbol. The startup of a pipe created by the
__piper program is expected to be much faster compared to a command like
sh -c "gzip | des -e ...", where a shell with all its initializations is used.

Example for the use of __piper in the client side configuration file:

CompressCommand:  /path/to/client/bin/__piper gzip -1 | des -e -k 87dsfd

UncompressCommand: /path/to/client/bin/__piper des -d -k 87dsfd | gunzip


--------------------------------------------------------------------------

14: How to use the multi-stream server ? Anything special there ?

The multi-stream server should be installed properly as described in the
file INSTALL or using the script Install. It is strongly recommended to
configure a separate service (i.e. TCP port) for the multi-stream server.
Thus backups can go either to the single-stream server or to the multi-
stream server. The index mechanism of the client side handles this
transparently. The information about where the data has been saved does
not have to be supplied for a restore.

The single stream server might be used for full backups, because it is
generally expected to perform better and provide higher throughput. The
multi-stream server has advantages with incremental backups, because
several clients can be started in parallel to scan through their disk
directories for things that have changed, which may take a long time.
If there are several file servers with a lot of data, it might be desirable
to start the incremental backups at the same time, because otherwise they
would take too much time. With the single stream server configured as the
default in the client side configuration, the incr_backup program can be
made to connect to the multi-stream server using the option -P with the
appropriate port number of the multi-stream server.
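
For example (the installation path and the port number 2989 are only
examples for such a setup):

 # send this incremental backup to the multi-stream server on port 2989
 /usr/backup/client/bin/incr_backup -P 2989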

Just as it is not possible for several single stream servers to operate
on the streamer at the same time, it is not possible for a multi-stream
server and a single-stream server to do so in parallel. Serving several
streams in parallel is the multi-stream server's job alone.

The clients must be distinguishable for the multi-stream server. It puts
the data to tape in packets prefixed with a header containing the clients'
identifiers. When dispatching during a read it must know which client
is connected and what data it needs. The default identifier is the official
hostname of the client, or the string "<client-program>" if the program
"afclient" is used. It is not allowed that several clients with the same
identifier connect, because that would mix up their data during a read,
which is obviously not desirable. A client identifier can be configured
in the client side configuration file using the parameter ClientIdentifier
or using the option -W (who), which every client side program supports.
It might be necessary to do this, e.g. if a client's official hostname
changes. In that case the client would not receive any data anymore,
because the server would look on tape for data for the client with the
new name, which it will not find.

To make it easy to find out and store the clients' identifiers, they are
included in the statistics report, which can be used (e.g. sent to an
admin via e-mail) in the client side exit program.


--------------------------------------------------------------------------

15: How many clients can connect the multi-stream server ?

This depends on the maximum number of filedescriptors per process on the
server. On a normal Unix system this number is 256. The backup system
needs some file descriptors for logging, storing temporary files and so
on, so the maximum achievable number of clients is something around 240.
It is not recommended to really run that many clients at the same time;
this has NOT been tested.
Anyway, the number of file descriptors per process can be increased on
most systems, if 240 is not enough.


--------------------------------------------------------------------------

16: How to get out of the trouble, when the migration script fails ?

This depends on where the script fails. If it says:
"The file .../start_positions already exists."
there is no problem. You might have attempted migration before.
If this is true, just remove this file or rename it. If it does
not contain anything, it is useless anyway. When the script tells you
that some files in .../var of your client installation contain
different (inconsistent) numbers, then it gets harder.
Locate the last line starting with ~~Backup: in your old style
minimum restore info and take the number at the end of it.
The file `num' in your clientside var directory should contain
the same number. If it does not, check the current number of the
file index files, also in the clientside var directory. Their
name is determined by the configuration parameter IndexFilePart.
The file `num' should contain the highest number found in the
filenames. If not, edit the file num so it does. Nonetheless
this number must also match the one noted earlier. If it does
not, this is weird. If your minimum restore info contains only
significantly lower numbers, you have a real problem, because
then your minimum restore info is not up to date. In this case
migration makes no sense and you can skip the migration step,
starting anew with fingers crossed heavily.
If the file `num' in the var directory is missing, then you
must check your configuration. If you have never made a backup
before, then this file is indeed not there and migration does
not make too much sense.
If the full_backup program you supply is found not to be
executable, please double-check your configuration and make
sure that you are a user with sufficient permissions.


--------------------------------------------------------------------------

17: How to use built-in compression ?

The distribution must be built with the appropriate options selected
to link in the zlib functions. When using the Install script you are
asked for the required information. Otherwise see the file INSTALL
for details.

zlib version 1.0.2 or higher is required to build the package with
the built-in compression feature. If zlib is not available on your
system (on Linux it is usually installed by default), get it from
some ftp server and build it first before attempting to build
afbackup.

The client side configuration parameter BuiltinCompressLevel turns
on built-in compression. See FAQ Q27 for what to do when the
compression algorithm is to be changed.
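
For illustration, a line like the following in the client side
configuration would enable built-in compression (the level 6 is only
an assumed example value; check the client configuration manual page
for the allowed range):

 BuiltinCompressLevel:   6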


--------------------------------------------------------------------------

18: How to save database contents ?

There are several ways to save a database. Which one to choose
depends on the properties of the database software. The
simplest way is to

1.) Save the directory containing the database files

This assumes that the database stores its data in normal
files somewhere in the directory structure. Then these
files can be written to tape. But there is a problem here,
because the database software might make use of caching or
generally keep necessary information in memory as long as
some database process is running. Then just saving the
files and later restoring them will almost surely corrupt the
database structure and at least make some (probably long
running) checks necessary, if it does not render the data
unusable. Thus it is necessary to shut down the database
before saving the files. This is often unacceptable, because
users cannot use the database while it is not running. Consult
the documentation of your database to find out whether it can
be saved or dumped online, and read on.

2.) Save the raw device

This assumes that the database software stores the data
on some kind of raw device, maybe a disk partition, a solid
state disk or whatever. Then it can be saved by prefixing the
name with /../, with no space between the prefix and the raw
device name. Instead of /../ the option -r can be used in
the client side configuration file. By default the data is
not compressed, because one single wrong bit in the saved
data stream might make the whole rest of the data stream
unusable during uncompression. If compression is nonetheless
desired, the prefix //../ can be used, or the option -R .
Regarding online/offline issues the same applies here as if
the data were kept in normal files. A sample entry is
sketched after this list.

3.) Save the output of a dump command

If your database has a command to dump all its contents,
the output of this command can be saved directly to the
backup. In the best case this dump command and its
counterpart, which reads what the dump command has written
and thus restores the whole database or parts of it, are able
to do the job online without shutting down the database.
Such a pair of commands can be supplied in the client side
configuration file as follows: in double quotes, write a
triple bar ||| , followed by a space character and the dump
command. This may be a shell command, a command pipe
or sequence or whatever. Then another triple bar must be
written, followed by the counterpart of the dump command
(again any shell-style command is allowed). After all that,
an optional comment may follow, prefixed with a triple
sharp ###. Example:

 ||| pg_dumpall ||| psql db_tmpl ### Store Postgres DBs
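
As announced under 2.) above, entries for saving a raw device in the
client side configuration might look like this. This is only a sketch:
/dev/sdb1 is an assumed partition name, and the exact spelling of the
prefixed entry should be double-checked against the client
configuration manual page:

 /../dev/sdb1
 //../dev/sdb1

The first line saves the device uncompressed (the default for raw
devices), the second one with compression.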


--------------------------------------------------------------------------

19: How to use the ftape driver ?

There's nothing very special here. All mt commands in the
server side configuration must be replaced with appropriate
ftmt versions. The script __mt should be obsolete here, as
it only handles the special case when the count value is 0,
e.g. for skipping tape files with  mt fsf <count> . ftmt
should be able to handle count=0, so simply replace __mt
with ftmt in the default configuration. For the tape device,
supply /dev/nqftX with X being the appropriate serial number
assigned to the device by your OS (ls /dev/nqft* will list
all available devices, try ftmt ... to find out the correct
one).


--------------------------------------------------------------------------

20: How to move a cartridge to another set due to it's usage count ?

This can be done automatically by configuring an appropriate
program as Tape-Full-Command on the server side. An example
script has been provided and installed with the distribution.
It can be found as /path/to/server/bin/cartagehandler. As is,
it maintains 3 cartridge sets. If a tape has become full more
than 80 times and it is in set 1, it is moved to set 2. If
it has become full more than 90 times and it is in set 1 or 2,
it is moved to set 3. If the number of cycles exceeds 95, the
cartridge is removed from all sets.
To accomplish this task, the script gets 3 arguments:
the number of the cartridge currently getting full, the number
of its complete write cycles up to now and the full path to
the serverside configuration file, which is modified by the
script. If the Tape-Full-Command is configured like this:

 TapeFull-Command:  /path/to/server/bin/cartagehandler %c %n %C

then it will do the job as expected. Feel free to modify this
script to fit your needs. The comments inside should be helpful;
look for "User configured section" and the like in the comments.
This script is not overwritten when upgrading, i.e. installing
another version of afbackup. Please note that the configuration
file must be writable by the user under whose id the server
starts. The best way is to make the configuration file owned
by this user.
See also the documentation for the program __numset, it is very
helpful in this context.


--------------------------------------------------------------------------

21: How to make backups to different cartridge sets by type or by date ?

Sometimes people want to write the incremental backups to other sets
of cartridges than the full backups. Or they want to change the
cartridge set weekly. Here the normal cartridge set mechanisms can
be used (client side option -S). If the difference is the type
(full or incremental), the -S can be hardcoded into the crontab
entry. If the difference is the date, a simple little script can
help. If e.g. in even weeks the backup should go to set 1 and in
odd weeks to set 2, the following script prints the appropriate
set number when called:

#!/bin/sh

expr '(' `date +%W` % 2 ')' + 1

This script can be called within the crontab entry. Typical crontab
entries will thus look as follows, assuming the script is installed
as /path/to/oddevenweek:

# full backup starting Friday evening at 10 PM
0 22 * * 5  /path/to/client/bin/full_backup -d -S `/path/to/oddevenweek`
# incremental backup starting Monday - Thursday at 10 PM
0 22 * * 1-4 /path/to/client/bin/incr_backup -d -S `/path/to/oddevenweek`


--------------------------------------------------------------------------

22: How to achieve independence from the machine names ?

- Use a host alias for the backup server and use this name in the
  clients' configuration files. Thus, if the server changes, only
  the hostname alias must be changed to address the new server
  (see the example below)

- Configure a ServerIdentifier, e.g. reflecting the hostname alias
  on the server side

- Use the client identifiers in the clientside configuration files.
  Set them to strings that can easily be remembered
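
For illustration, such a host alias could be provided by a DNS CNAME
or by an /etc/hosts (or NIS hosts map) entry like this; the names and
the address are made up:

 10.1.2.3   realhost.mydomain.de   realhost   backupsrv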

Notes:

With the steps above performed, no hostname should appear in any index
file, minimum restore info or other varying status information
files any more.
If the server now changes, the server identifier must be set to
the value the previous server had, and the client will accept
it after contacting. To contact the correct server, the client
configurations would normally have to be changed to the new hostname.
This is where the hostname alias makes things easier: no client
configuration has to be touched, just the hostname alias is assigned
to a different real hostname in NIS or whatever nameservice is
used.
If a restore should go to a different client, the identifier of the
original client the files have been saved from must be supplied
to get the desired files back. Option -W will be used for this in
most cases.


--------------------------------------------------------------------------

23: How to restrict the access to cartridges for certain clients ?

Access can be restricted on a per-cartridge-set basis. For each
cartridge set a check can be configured that determines whether a
client has access to it or not. Refer to the afserver.conf manual
page under Cartridge-Sets for how to specify the desired restrictions.


--------------------------------------------------------------------------

24: How to recover from disaster (everything is lost) ?

There are several recovery scenarios. First for the client side:

* Only the data is lost, afbackup installation and indexes are still
  in place

Nothing special here. To avoid searching the index, the -a option of
afrestore is recommended. Alternatively, afrestore '*' can be used, but
this will search the index and might take longer.

* Data, afbackup installation and indexes are gone, minimum restore
  information is available

Install afbackup from whatever source. Then run afrestore -e. If you
haven't configured afbackup after installing, pass the client's unique
identifier to the program using the option -W. After pressing <Return>
to start the command, you are expected to enter the minimum restore
info. It must be typed in literally as written by the backup system;
the easiest way is to cut and paste. The line containing this
information need not be the first one entered, and there may be several
lines of the expected format, also from other clients (the client
identifier is part of the minimum restore info). The latest available
one from the input coming from the client with the given or configured
identifier will be picked and used. Thus the easiest way to use the
option -e is to read from a file containing the expected information.
If you have forgotten the identifier of the crashed client, look
through your minimum restore infos to find it.
To restore only the indexes, use option -f instead of -e.

* Data, afbackup installation and indexes are gone, minimum restore
  information is also lost

Find out which tape(s) were written to the last time the backup
succeeded for the crashed client. The mails sent by the ExitProgram
may hold more information about this. Install afbackup on the
client. Now run afrestore with option -E, passing it the client
identifier with option -W and one or more tape specifiers with the
hostname and port number (if it's not the default) of the server
the client did its backups to. Examples:

 afrestore -E -W teefix 3@backupserver%3002
 afrestore -E -W my-ID 4-6,9@buhost%backupsrv
 afrestore -E -W c3po.foodomain.org 3@buserv 2@buserv

The third example will scan tapes 3 and 2 on the server buserv using
the default TCP service to retrieve the minimum restore information.
The first will scan tape 3 on host backupserver, using port number
3002 (TCP). The second one will scan tapes 4 through 6 and 9 on the
server buhost, connecting to the TCP service backupsrv. This name must
be resolvable from /etc/services, NIS or similar, otherwise the
command will not work.
While scanning the tapes, all minimum restore infos found (for any
client) will be output, so one other than that with the latest
timestamp can be used later with option -e. If the tapes should only
be scanned for minimum restore infos without restoring everything
afterwards, option -l can be supplied. Operation will then terminate
after having scanned all the given tapes and printed all minimum
restore infos found.


For the server side:

The var-directory of the server is crucial for operation, so it is
strongly recommended to save it, too (see below under Do-s and Dont-s).
The afbackup system itself can be installed from the latest sources
after a crash.
To get the var-directory back, run afrestore -E or -e, depending on
the availability of the minimum restore information, as explained
above, and pass it a directory to relocate the recovered files to.
Then make sure that no afserver process is running anymore (kill them,
if they don't terminate voluntarily), and move all files from the
recovered and relocated var-directory to the one that is really
used by the server. If you are doing this as root, don't forget to
chown the files to the userid the afbackup server is started with.
If the server's var directory has been stored separately as explained
in the Do-s and Dont-s, the different client-ID must be supplied to
the afrestore command using the option -W, just like when the
full_backup was run, e.g.
 afrestore -E -W serv-var -V /tmp/foo -C /tmp/servvar 2@backuphost%backupport
The directory /tmp/foo must exist and can be removed afterwards.
See the man-pages of afrestore for details of the -E mode.


--------------------------------------------------------------------------

25: How to label a tape, while the server is waiting for a tape ?

Start the program label_tape with the desired options, additionally
supplying the option -F, but without option -f. Wait for the program
to ask you for confirmation. Do not confirm yet; first put the tape
you want to label into the drive. (The server does not perform any
tape operation while the label_tape program is running.) Now enter
yes to proceed. If the label is the one expected by the server and
the server is configured to probe the tape automatically, it will
use the tape immediately; otherwise eject the cartridge.


--------------------------------------------------------------------------

26: How to use a media changer ?

To use a media changer, a driver program must be available. On many
architectures mtx can be used. On Sun machines under Solaris-2 the
stctl package is very useful. On FreeBSD chio seems to be the preferred
tool. Another driver available for Linux is the sch driver coming
together with the mover command (see changer.conf.sch-mover for a
link). Check the documentation of the respective package for how to
use it. Changer configuration files for these four come with the
afbackup distribution (changer.conf.mtx, changer.conf.stctl,
changer.conf.chio and changer.conf.sch-mover); they should work
immediately with most changers. mtx and stctl can be obtained
from the place afbackup has been downloaded from.

Very short:
mtx uses generic SCSI devices (e.g. /dev/sg0 ... on Linux); stctl
ships a loadable kernel module that autodetects changer devices
and creates device files and symlinks /dev/rmt/stctl0 ... in the
default configuration. With stctl it is crucial to write enough
target entries to probe into the /kernel/drv/stctl.conf file.
Note that the attached mtx.c is a special implementation i was
never able to test myself. It is quite likely that it behaves
differently from the official mtx, so it will not work with the
attached changer.conf.mtx file. The mover command also comes with
a kernel driver called sch.

Once the driver command is installed and proven to work (play around
a little with it), the configuration file for it must be created.
It should reside in the same directory as the serverside config
file, but this is arbitrary. The path to the file must be given
as a parameter in the server configuration file, as in this example:

Changer-Configuration-File:     %C/changer.conf

%C will be replaced with the path to the confdir of the server side.
See the manual pages of the cart_ctl command for what this file
must contain.

Now the device entry in the server configuration must be extended.
The new format is:

<streamerdevice>[=<drive-count>]@<device>#<num-slots>[^<num-loadbays>]

Whitespace is allowed between the special characters for readability.
An example:

/dev/nst0 @ /dev/sg0 # 20

This means: streamer /dev/nst0 is attached to the media handler at
/dev/sg0, which has 20 slots. The part = <drive-count> is optional.
It must be set appropriately if the streamer is not in position 1 in
the changer. (Note that with cart_ctl every count starts with 1,
independent of the underlying driver command. This abstraction is
done in the configuration.) The part ^ <num-loadbays> is also optional
and must not be present if the changer does not have any loadbay.
A full example:

/dev/nst1 = 2 @ /dev/sg0 # 80 ^ 2

It is recommended to configure a lockfile (with full path) for the
changer, too. For example:

Changer-Lockfile:        /var/adm/backup/changer.lock

To check the configuration, the command cart_ctl should now be run,
simply with option -l. An empty list of cartridge locations should
be printed, i.e. just the header should appear. Now the backup system
should be told where the cartridges currently are. This is done
using the option -P of cart_ctl. To tell the system that tapes
10-12 are in slots 1-3 and tapes 2-4 in slots 4-6, enter:

cart_ctl -P -C 10-12,2-4 -S 1-6

Verify this with cart_ctl -l . To tell the system that tape 1 is in
drive 1, enter:

cart_ctl -P -C 1 -D 1

(The drive number 1 is optional, as this is the default.)
Optionally the system can store locations for all cartridges not
placed inside any changer. A free text line can be given with the
-P option, which might be useful, for example:

cart_ctl -C 5-9,13-20 -P 'Safe on 3rd floor'

To test the locations database, one might move some cartridges around,
e.g. cartridge 3 into the drive (assuming cartridge 3 is in some slot
and its location has been told to the system as explained above):

cart_ctl -m -C 3 -D

Load another cartridge into the drive; the one currently in the drive
will automatically be unloaded to a free slot, if the
List-free-slots-command in the configuration works properly.

Instead of telling the system which tapes are located in which slots,
one might run an inventory, which loads them all into the drive one
after the other and reads their labels. To do this, enter:

cart_ctl -i -S 1-6

For further information about the cart_ctl command, refer to the
manual pages.

To make the server also use the cart_ctl command for loading tapes,
the SetCartridgeCommand in the server configuration must be set as
follows:

Setcart-Command:  %B/cart_ctl -F -m -C %n -D

The parameter Cartridge-Handler must be set to 1 .

Now the whole thing can be tested by making the server load a tape
via a client command:

/path/to/client -h serverhost [ -p serverport ] -C 4

Cartridge 4 should now be loaded into the drive. Try another
cartridge. If this works, the afbackup server is properly
configured to use the changer device. Have fun.


--------------------------------------------------------------------------

27: How to build Debian packages ?

Run the debuild command in the debian subdirectory of the distribution.
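
For example (a sketch; the path is illustrative and debuild from the
devscripts package must be installed):

 cd /path/to/afbackup-3.3.6/debian
 debuild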


--------------------------------------------------------------------------

28: How to let users restore on a host, they may not login to ?

Here's one suggestion for how to do that. It uses inetd and the
tcp wrapper tcpd on the NFS-server side, where login is not permitted,
and the identd on the client, where the user sits. It starts the X11
frontend of afrestore, setting the display to the user's host:0.
Furthermore the ssu program (silent su, only for use by the
superuser, not writing to syslog) is required. Its source can be
obtained from the same download location where afbackup had been
found. It is part of the albiutils package.
Perform the following steps:

* Add to /etc/services:

remote-afbackup		789/tcp
(or another unused service number < 1024)


* Add to or create the tcpd configuration file /etc/hosts.allow (or similar,
  man tcpd ...):

in.remote-afbackup : ALL : rfc931 : twist=/usr/sbin/in.remote-afbackup %u %h


* Add to /etc/inetd.conf and kill -HUP the inetd:

remote-afbackup   stream tcp  nowait  root  /usr/sbin/tcpd  in.remote-afbackup

(if the tcpd is not in /usr/sbin, adapt the path. If it's not
installed: Install it. It makes sense anyway)


* create a script /usr/sbin/in.remote-afbackup and chmod 755 :
#!/bin/sh
#
# $Id: HOWTO.FAQ.DO-DONT,v 1.2 2002/02/27 10:17:09 alb Exp alb $
#  
# shell script for starting the afbackup X-frontend remotely through
# inetd, to be called using the 'twist' command of the tcp wrapper.
# Note: on the client the identd must be running or another RFC931
# compliant service
#

if [ $# != 2 ] ; then
   echo Error, wrong number of arguments
   exit 0
fi

remuser="$1"
remhost="$2"

if [ "$remuser" = "" -o "$remhost" = "" ] ; then
   echo Error, required argument empty
   exit 0
fi

# check for correct user entry in NIS
ushell=`/usr/bin/ypmatch "$remuser" passwd 2>/dev/null | /usr/bin/awk -F: ' {print $7}'`
if [ _"$ushell" = _ -o _"$ushell" = "_/bin/false" ] ; then
   echo "You ($remuser) are not allowed to use this service"
   exit 0
fi

gr=`id "$remuser"| sed 's/^.*gid=[0-9]*(//g' | sed 's/).*$//g'`

# check, if group exists
ypmatch $gr group.byname >/dev/null 2>&1
if [ $? -ne 0 ] ; then
  echo "Error: group $gr does not exist. Please check"
  exit 0
fi

DISPLAY="$remhost":0
export DISPLAY

/path/to/ssu "$remuser":$gr -c /usr/local/afbackup/client/bin/xafrestore

####### end of script ######

* Edit the last line with ssu to reflect the full path to ssu, that you
  have built from the albiutils package.

Now a user can start the xafrestore remotely by simply:

telnet servername 789

(or whatever port has been chosen above).
For user-friendliness, this command can be put into a script
with an appropriate name.

Thanks to Dr. Stefan Scholl at Infineon Technologies for this
concept and part of the implementation.


--------------------------------------------------------------------------

29: How to backup through a firewall ?

Connections to port 2988 (or whatever port the service is assigned)
must be allowed in the direction towards the server (TCP is used for
all afbackup connections). If the multi stream service is to be used,
its port must also be open (default 2989, if not changed) in the
same direction.
If the remote start option is desired (afclient -h hostname -X ...),
connections to the target port 2988 (i.e. afbackup) of the client
named with option -h must be permitted from the host this command
is started on.
If the encryption key for the client-server authentication is kept
secret and protected with care on the involved computers, the server
port of afbackup is not exploitable. So it may be connectable by the
world without any security risk. The only undesirable thing that
might happen is a denial of service attack opening high numbers of
connections to that port. The inetd will probably limit the number
of server programs to be started simultaneously, but clients will
no longer be able to open connections to run their backup.
The connections permitted through the firewall should in any case be
restricted from and to the hosts participating in the backup service.
If initiating connections from outside of the firewall is unwanted,
an ssh tunnel can be started from the inside network to a machine
outside, which thus acts as a kind of proxy server. The outside backup
clients must be configured to connect to the proxy machine for backup,
where the TCP port is listening, i.e. the other end of the ssh
tunnel sees the light of the outside world. It should be quite clear
that ssh tunneling reduces throughput because of the additional
encryption/decryption effort. See the ssh documentation and HOWTO Q11
for more information.
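
A minimal sketch of such a tunnel, started on a machine inside the
firewall (the hostnames are made up; backupserver is the real backup
server inside, proxyhost the machine outside that the clients connect
to):

 ssh -R 2988:backupserver:2988 someuser@proxyhost

The outside clients would then be configured to use proxyhost as
their backup server. Note that the sshd on proxyhost must allow the
forwarded port to be reachable from other hosts (GatewayPorts).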


--------------------------------------------------------------------------

30: How to configure xinetd for afbackup ?

Here are the appropriate xinetd.conf entries. As long as convenient
configuration like with inetd is not included in afbackup, the
entries have to be made manually, followed by a kill -USR2 to the
xinetd.

For the single stream service:

service afbackup
{
        flags           = REUSE NAMEINARGS
        socket_type     = stream
        protocol        = tcp
        wait            = no
        user            = backup
        server          = /usr/local/afbackup/server/bin/afserver
        server_args     = /usr/local/afbackup/server/bin/afserver /usr/local/afbackup/server/lib/backup.conf
}

For the multi stream service:

service afmbackup
{
        flags           = REUSE NAMEINARGS
        socket_type     = stream
        protocol        = tcp
        wait            = yes
        user            = backup
        server          = /usr/local/afbackup/server/bin/afmserver
        server_args     = /usr/local/afbackup/server/bin/afmserver /usr/local/afbackup/server/lib/backup.conf
}

Replace the user value with the appropriate user permitted to operate
the device to be used (see: INSTALL).

--------------------------------------------------------------------------

31: How to redirect access, when a client contacts the wrong server ?

This situation might arise when localhost has been configured and a
restore should be done on a different client, but the same server.
Or it might happen that the backup service has moved, no host alias
has been used during backup, and the machine cannot be renamed.

Here the xinetd can help, because it is able to redirect ports to
different machines and/or ports. On the machine that does not have
the service, but is contacted by a client, put an entry like this
into the xinetd configuration file (normally /etc/xinetd.conf) and
(re)start xinetd (sending the typical kill -HUP):

service afbackup_redirect
{
        flags           = REUSE
        socket_type     = stream
        protocol        = tcp
        port            = 2988
        redirect        = backupserver 2988
        wait            = no
}

Replace backupserver with the real name of the backup server host.
If the multi stream service is to be used, add another entry:

service afmbackup_redirect
{
        flags           = REUSE
        socket_type     = stream
        protocol        = tcp
        port            = 2989
        redirect        = backupserver 2989
        wait            = no
}


--------------------------------------------------------------------------

32: How to perform troubleshooting when encountering problems ?

Here are some steps that will help narrow down the search and probably
even solve the problem:

Start on the client side:

If full_backup or incr_backup report cryptic error messages, probably
in the client side logfile (check this file out, maybe cleartext error
messages can be found there), try to run the low level afclient command
to query the server. Don't forget to supply the authentication key file,
if one is configured, with option -k, because afclient is a low level
program that can be run standalone and does NOT read the configuration
file. An afclient call to check basic functionality can be:

/path/to/afclient -qwv -h <servername> [ -p <service-or-port> ] \
                      [ -k /path/to/keyfile ]

After a short time (< 2 seconds) it should print out something like this:
Streamer state: READY+CHANGEABLE
Server-ID: Backup-Server_1
Actual tape access position
Cartridge: 8
File:      1
Number of cartridges: 1000
Actual cartridge set: 1

If afclient does not finish within half a minute or so and later prints
the error message 'Error: Cannot open communication socket', then there
is a problem on the server side or with the network communication. Try
to telnet to the port where the afbackup server (i.e. usually inetd)
is awaiting connections:

  telnet <servername> 2988

(or whatever your afbackup service port number is). You should see a
response like this:

Trying 10.142.133.254...
Connected to afbserver.mydomain.de.
Escape character is '^]'.
afbackup 3.3.4
 
AF's backup server ready.
h>|pρ(O

Press return until the afserver terminates the connection, or type
Ctrl-] and at the telnet> prompt enter quit to terminate telnet.

If you don't see a response as indicated above, but instead
'Connection refused', then the service is not properly configured on the
server host. Please check the /etc/inetd.conf or /etc/xinetd.conf file
for proper afbackup entries and make sure the service name is known
either in the local /etc/services file or from NIS or NIS+ or whatever
service is used. Send a kill -HUP <PID> with the PID of inetd or -USR2
with the PID of xinetd (if that one is used) to make the daemon reread
its configuration. If afterwards the connection is still not possible,
see the syslog of the server for error messages from the (x)inetd. They
will indicate what the real problem is. The syslog file is usually one
of the following files:
 /var/adm/messages
 /var/adm/SYSLOG
 /var/log/syslog
 /var/log/messages
 /var/adm/syslog/syslog.log

On AIX use the errpt command, e.g. with option -a to get recent syslog
output (see man-page).

If you don't get any connection response when starting the telnet
command, there is a network problem. If you can ping the remote machine,
but can't telnet to the afbackup port, try to connect to another port,
e.g. the real telnet port (without 3rd argument) or the daytime port
(type telnet <remotehost> 13). If they work, there is probably a
firewall between the afbackup client and the server that is blocking
connections to the afbackup port. Then check the firewall configuration
and permit the afbackup and afmbackup connections; if you want to use
remote start by afbackup means, permit them in both directions.

The error message 'An application seems to hold a lock ...' indicates
that there is already an afbackup program like full_backup or afverify
running on the same host. Use ps to find out what that process is. If
you need to know what this program is doing, see the client side log
for hints. If that doesn't give any clue, try to trace that program or
the subprocess afbackup, which is running in most cases when one of the
named programs is also running. To trace a program use:
 truss     on Solaris
 strace    on Linux, SunOS-4, Free-BSD
 par       on IRIX
 trace     on HP-UX

For AIX a system tracer has been announced. Until then only scripts can
be used, which in turn run trace -a -d -j <what-you-want-to-get>,
trcon, trcstop and trcrpt, but this must be done with real care, because
chances are high that the filesystem where the trace is written (normally
/tmp) will fill up. See the manpages of the named commands for
details.

Very useful is lsof, which helps to find out what the file descriptors
in system calls like read, write, close, select etc. refer to. Run
lsof either with no arguments and grep for something specific, or with
the arguments -p <PID>, with <PID> being e.g. the process id of afbackup
or afserver.

If there is something wrong on the server, e.g. the server starts up, but
immediately terminates with or without any message in the serverside log,
it might help to trace the (x)inetd using the flag -f with strace (or
truss or ...) and -p with the pid of the inetd. The -f flag makes the
trace follow subprocess forks and execs, so one can probably see why
the server terminates. If this does not help, one can try to catch the
server in a debugger after startup. This requires the server to be built
debuggable. The easiest way to achieve this: after building afbackup run

 make clean

in the distribution directory and then run

 make afserver DEBUG=-g [ OPTIMIZE=-DORIG_DEFAULTS ]

The ORIG_DEFAULTS stuff is needed if you built afbackup using the Install
script. Now do NOT run make install; instead, copy the files over to the
installation directory using cp, thus overwriting the files in there. If
you moved the original binaries out of the way, don't forget to chown
the copied files to the user configured in the /etc/(x)inetd.conf file,
otherwise they can't be executed by (x)inetd.
Then add the option -D to the afserver or afmserver configured in the
(x)inetd.conf file. The inetd.conf entry will then look e.g. like this:

afbackup stream tcp nowait backup /usr/local/afbackup/server/bin/afserver /usr/local/afbackup/server/bin/afserver -D /usr/local/afbackup/server/lib/backup.conf

or the xinetd.conf entry as follows:

service afbackup
{
        flags           = REUSE NAMEINARGS
        socket_type     = stream
        protocol        = tcp
        wait            = no
        user            = backup
        server          = /usr/local/afbackup/server/bin/afserver
        server_args     = /usr/local/afbackup/server/bin/afserver -D /usr/local/afbackup/server/lib/backup.conf
}

Send a kill -HUP <PID> to the PID of the inetd or -USR2 to xinetd.
Now, when any client connects to the server, the afserver or afmserver
process sits in an endless loop awaiting either the attach of a debugger
or the USR1 signal causing it to continue. Please note that during
a full_backup or incr_backup the server will probably be contacted not
only once, but several times. Furthermore the afmserver
starts the afserver in slave mode as a subprocess, passing it the -D
flag as well, so this process must also be kill -USR1'ed or caught in a
debugger. Attaching the debugger gdb works by passing the binary as
first argument and the process ID as second argument, e.g.:

 gdb /path/to/afserver 2837

Now you see lines similar to these:

0x80453440 in main () at server.c:3743
3743:     while(edebug);   /* For debugging the caught running daemon */
(gdb)

At the gdb prompt set the variable edebug to 0:
(gdb) set edebug=0

Enter n to step through the program, s to step into subroutines,
c to continue, break <functionname> to stop in certain functions,
finish to continue until return from the current subroutine, etc.
See the man-page of gdb or enter help for more details. With dbx and
graphical frontends it's quite similar. It is possible to first start the
debugger and then attach a process. Supply only the binary to the debugger
when starting, then e.g. with gdb enter  attach 2837  (if that's the PID).
This also works with xxgdb or ddd (a very fine program !).
The described calling structure and the possibly repeated server startups
can make the debugging a little complicated, but that's the price for a
system comprising several components running concurrently or being
somewhat independent of each other. But it makes development and testing
easier and less error prone.

Debugging the client side is not as complicated. Building the client side
debuggable works the same way as explained, except that the make step must
have afclient as target:

 make afclient DEBUG=-g [ OPTIMIZE=-DORIG_DEFAULTS ]

For the installation the same applies as above: do NOT run make install,
but copy the files to the installation directory using cp.


--------------------------------------------------------------------------

33: How to use an IDE tape drive with Linux the best way ?

As the IDE tape driver on Linux seems to have problems working well,
the recommendation is to use the ide-scsi emulation driver. Here's how
Mr. Neil Darlow managed to get his HP Colorado drive to work properly:

The procedure, for my Debian Woody system with 2.4.16 kernel, was
as follows:

1) Disable IDE driver access to the Tape Drive in lilo.conf
   append="hdd=ide-scsi"

2) Ensure the ide-scsi module is modprobe'd at system startup by
   adding it to /etc/modules

3) Install the linux mt-st package for the SCSI ioctl-aware mt
   program

4) Modify the Tape Blocksize parameter in server/lib/backup.conf
   Tape Blocksize: 30720

After all this, you can access the Colorado as a SCSI Tape Drive
using /dev/nst0. Then full_backup and afverify -v work flawlessly.


--------------------------------------------------------------------------

34: How to make afbackup reuse/recycle tapes automatically ?

There are two parameters in the client side configuration that affect
reusing tapes. One of them is NumIndexesToStore. A new index file is
started with each full backup. For all existing indexes, the backup data
listed inside them is protected from being overwritten on the server.
This is achieved by telling the server that all tapes the data has
been written to are write protected. The parameter NumIndexesToStore
tells the client side how many indexes are kept in addition to the
current one, which is needed in any case. Additional, i.e. older, index
files are removed and the related tapes freed. A common pitfall is that
the number configured here is one too high: if the number is e.g. 3, the
current index file plus 3 older indexes are kept, not 3 in total. Note
furthermore that afbackup only removes an older index when the next
full backup has succeeded.

With the other parameter, DaysToStoreIndexes, the number of days can be
configured that index file contents may become old. A new index file
is still created on every full backup. That means an index file may
contain references to tapes and data that are in fact older than
configured by this parameter. Nonetheless the index file is kept, in
order to be able to completely restore a state of the given age, which
also requires older data. E.g.: to restore a state that is 20 days old,
the previous full backup, which may be e.g. 25 days old, is also needed,
together with data from the following incremental, level-X or
differential backups.
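
For illustration, the two parameters might be set like this in the
client side configuration (the values are only examples; check the
client configuration manual page for the exact spelling and meaning):

 NumIndexesToStore:    3
 DaysToStoreIndexes:   60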

The server side also keeps track of which tapes are needed by which
client. When a client tells the server a new current list of tapes that
are to be write-protected, the server overwrites the previously stored
list for that client. The lists are lines in the file
.../var/precious_tapes. It may happen that a client is no longer in
backup, but was before. Then the associated tapes must be freed manually
on the server(s), either by removing the appropriate line in the
precious_tapes file (not while a server is running !) or by issuing a
server message using a command like this:
 /path/to/afclient -h <server> [ -p <service> ] [ -k /path/to/keyfile ] \
                      -M "DeleteClient:  <client-identifier>"
The setting for the <client-identifier> can be taken from the outdated
client's configuration file (default: the official hostname) or from
the precious_tapes file on the server: it is the first column. Using
the command makes sure the file remains in a consistent state, as the
server locks the files in the var-directory during modification.

When a server refuses to overwrite tapes, but there is no obvious reason
for this behaviour, the precious_tapes file on the server should be
checked as mentioned above, and furthermore the readonly_tapes file.
Perhaps tapes have been set to read-only mode some time ago, but one
no longer remembers when or why. Note that afbackup never sets tapes to
read-only by itself; this can only be done manually.


--------------------------------------------------------------------------

35: How to make the server speak one other of the supported languages ?

If your system's gettext uses the settings made by the setlocale
function or supports one of the functions setenv or putenv, then
the option -L of af(m)server can be used to set a locale on the
command line in the /etc/(x)inetd.conf file. GNU gettext in most
cases is not built to use setlocale due to compatibility problems.
Fortunately the glibc supports both setenv and putenv, so the
option is usually available. If supplying the command line option
does not work, environment variables can be used:

The environment variable LANG must be set to the desired language in
the server's environment. To achieve that, the command from the
inetd.conf file can be put into a script where the LANG environment
variable is set beforehand, e.g.:

#!/bin/sh
#
# this is a script e.g.
#    /usr/local/afbackup/server/bin/afserverwrapper
#
LANG=it
export LANG

exec /usr/backup/server/bin/afserver /usr/backup/server/bin/afserver /usr/backup/server/lib/backup.conf

# end of script


Do the same for afmserver. Then replace the command in
inetd.conf with

/usr/local/afbackup/server/bin/afserverwrapper afserverwrapper

When using the xinetd, environment settings can be made by adding
a line to the appropriate section in the configuration file, e.g.:

   env  =   LANG=de

so a complete xinetd entry for afserver would be:

service afbackup
{
        flags           = REUSE NAMEINARGS
        env             = LANG=de
        socket_type     = stream
        protocol        = tcp
        wait            = no
        user            = backup
        server          = /usr/local/afbackup/server/bin/afserver
        server_args     = /usr/local/afbackup/server/bin/afserver /usr/local/afbackup/server/lib/backup.conf
}

If the multi-stream server is configured to run permanently, the
LANG setting can simply be done in the start script, as in the
script above.


--------------------------------------------------------------------------


D.        Do-s and Dont-s

ALWAYS
------

        - configure a Startup-Info-Command on the client side
          to save the crucial information for an emergency
          restore after a hard crash that loses the filename
          logfiles, i.e. the filelists, and probably more ...
	- use more than 1 tape (per cartridge set), i recommend
	  at least 3
        - also save the server side's .../var directory after the
          regular clients' backup, because it contains crucial files
          for recovery, i.e. configure the server as a client, too.
          If not only the .../var directory of the server is saved,
          but also other data residing on the server, the .../var
          directory should be saved separately after any client's
          backup. Use the full_backup command with a different
          client ID supplied with option -W and a different var
          directory given with option -V, and supply the path to
          the real server's var directory as argument. Example:
          /path/to/full_backup -W serv-var -V /tmp/foo .../server/var
          The directory /tmp/foo must already exist. In fact any
          directory can be used as argument for -V, it may even be
          removed afterwards. This saving of the server's var
          directory makes it possible to restore it in case of
          disaster, i.e. a complete server crash. Remember that the
          files in this var directory are crucial for the restore of
          any client, especially the file cartridge_order. If this
          saving is done, you are able to do disaster recovery even
          without any minimum restore information. See HOWTO Q24 for
          what to do exactly in case of disaster.
        - run a verify after the first backup with afbackup to
          make sure that everything is working correctly
        - write the numbers of the cartridges onto their cases
          or use adhesive labels
	- use a host alias name for the backup server, so it can
	  easily be moved to another machine
        - be happy and forgive me the bugs
	- try to keep things simple. Making things complicated
	  unnecessarily will strike back mercilessly sooner or
	  later, according to Murphy sooner, if not ASAP ... or
	  even AYAP


        ... to be continued


NEVER
-----

        - kill the following client-side afbackup-related programs
          with signal -9 (== -KILL): full_backup, incr_backup,
          update_indexes. It may take a while until the program has
          cleaned up, but it does clean up and it will terminate.
          If this takes more than 10 minutes, then you might
          THINK about kill -9. If you can see a process afclient or
          afbackup that is a subprocess of one of the named ones,
          you may kill that one brutally. This is quite safe, as it
          doesn't maintain persistent data. But first watch the
          processor time consumed by the processes ! kill -9 on the
          server is safe for all programs, but the client side is a
          different story ! Be patient !
        - use cartridges with a total capacity of less than two times
          the size required for one full backup including subsequent
          incremental backups. Otherwise you will receive mails from
          the backup system complaining that no more space is
          available on the configured tapes and requesting to mark
          tapes for reuse (i.e. overwriting) or to increase the
          number of available tapes.
        - change the process and/or unprocess program, if not
          enough hidden files with the same names as the indexes
          (with a leading dot) have been created by afbackup-3.2.7
          or higher. These files contain the command to unprocess
          the related index. If such a file is missing, the program
          in the configuration file is tried, which might fail if it
          has been modified.
          See also FAQ Q27 for more information.

        ... to be continued


--------------------------------------------------------------------------


F.              AF's Backup FAQ
                ===============

Index
-----

Q1: How do I tell how much space is left on the tape?

Q2: Why is the mtime used for deciding what files to save during incremental
    backup and not the ctime or both ?

Q3: Do my current configurations get overwritten during an upgrade ?

Q4: Why should I and how do I use sets of cartridges ?

Q5: How many cartridges should I use ?

Q6: I have a robot with n cartridges. Can i use more than n tapes ?

Q7: Can ordinary users restore their own files and directories ?

Q8: Why does afbackup not have a GUI ?

Q9: What does the warning mean: "Filelist without user-ID information ..." ?

Q10: The whole backup system hangs in the middle of a backup, what's up ?

Q11: Tape reels back and forth, mail sent "tape not ready ...", what's up ?

Q12: The server seems to have a wrong tape file count, what's wrong ?

Q13: When using crond, the client seems not to start correctly ... ?

Q14: What does AF mean ?

Q15: Though client backup works, remote start does not. Why ?

Q16: My server does not work, tape operates, but nothing seems to be written ?

Q17: I have an ADIC 1200G autoloading DAT and no docs. Can i use it with HPUX ?

Q18: What is a storage unit and how and why should i use it ?

Q19: Why should i limit the number of bytes per tape ?

Q20: What are backup levels and why should i use them ?

Q21: What do all the files in the var-directories mean ?

Q22: Help ! My (multi stream server's) client receives no data, why ?

Q23: My DLT writes terribly slowly and changes cartridge too early, why ?

Q24: When should i use the multi stream server and when not ?

Q25: Why is my 2 GB capacity DAT tape full having written about 1.3 GB ?

Q26: Tape handling seems not to work at all, what's wrong ?

Q27: How can i change the compression configuration ?

Q28: Why does my Linux kernel Oops during afbackup operation ?

Q29: Why does afbackup not use tar as packing format ?

Q30: How to recover directly from tape without client/server ?

Q31: Why are files truncated to multiples of 8192 during restore ?

Q32: What is the difference between total and real compression factor ?

Q33: How does afbackup compare to amanda ?

Q34: How to contribute to I18N/L10N ?

Q35: Why does I18N not work in my environment ?

Q36: Is there a mailing list or a home page for afbackup ?

Q37: I have trouble using the multi stream server. What can i do ?

Q38: On AIX i get the warning: decimal constant is so large ... what's that ?

Q39: What about security ? How does this authentication stuff work ?

Q40: Why does remote start of backups not work, while local start does ?

Q41: What is the architecture of afbackup ?

Q42: Why are new files with an old timestamp not saved during incr_backup ?

Q43: What do the fields in the minimum restore info mean ?



Questions and Answers
=====================

Q1: How do I tell how much space is left on the tape?

A1: This is hard to tell due to the difficulty of determining exactly how
    many bytes can be written to a certain physical tape. Since version
    3.1 the server counts the bytes written to each tape. The sums are
    written into the file /path/to/server/var/bytes_on_tape , one entry
    per line in the format cartridge-number colon number-of-bytes-on-tape.
    If a tape is full and the next one is automatically inserted, the
    number tells you how many bytes the server was able to write to the
    tape, but here the streamer device might have applied compression, so
    the real number of bytes on tape may be smaller. If clientside
    compression is turned on, it is quite unlikely that the streamer was
    able to pack the data even further, so in this case the number logged
    to the file should be close to reality.
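     A sample entry for cartridge 8 might look like this (the byte
     count is made up):
        8:1398276096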
     One method to find out the tape capacity is as follows:
    - take a gzip-ped file (with -9), that is larger than 1 MB compressed
    - put an empty and unused tape into the streamer device, all data on
      it will be lost during this test
    - in a csh run the command:
        repeat 100000 cat filename | dd of=/dev/st0 obs=1048576
      (replace filename and /dev/st0 appropriately with the name of the
       gzip-ped file and the real streamer device)
    - the command will write the tape until end of media is reached and
        will output something like:
           4036+1 records in
           4036+1 records out
        This tells you, that 4036 * 1048576 i.e. 4036 MB were successfully
        written to tape.
    It must be kept in mind that this test does not consider the space
    between tape files. Each time a new tape file starts, a file gap is
    written to the tape, which wastes about 2 MB of tape capacity each
    time. Refer to the documentation of your streamer device.


Q2: Why is the mtime used for deciding what files to save during incremental
    backup and not the ctime or both ?

A2: First: The ctime changes any time a chmod, a chown or another operation
      modifying the inode is performed. A change like this is not worth
      selecting the file for backup, because the file itself did not change.
      BTW the ctime can be evaluated in addition to the mtime by setting the
      client side parameter UseCTime . But then the access time (atime) is
      not restored to the previous value after backup.
    Second: After backing up a file, afbackup restores the atime, because
      i consider the atime quite valuable information. Restoring the atime
      changes the ctime, there is no way around this. If the ctime were
      evaluated for choosing the files for incremental backup, a file stored
      once would be saved again at all following backups, because at every
      backup the ctime changes. Incremental backup would be senseless,
      because all files would be saved every time a backup runs.


Q3: Do my current configurations get overwritten during an upgrade ?

A3: No. Nothing gets overwritten or lost. Newly introduced parameters have
    the old non-configurable behaviour as default. The defaults are applied,
    if the appropriate parameters are not given explicitly in the
    configuration files.


Q4: Why should I and how do I use sets of cartridges ?

A4: The question why is not that easy to answer. Maybe you have groups
      of hosts you would like to save to separate cartridges, maybe
      you would like to write the full backups to other cartridges than
      the incremental backups. Maybe you have the requirement that you
      want to use an unlimited number of cartridges for the full backups
      and reuse the ones for the incremental backups each time another
      full backup has finished. Or you might want to restrict the access
      to sets of cartridges to certain machines; then you can configure
      access lists. Maybe you have more exotic requirements ...
    The answer to the how is easy: set the serverside parameter
      Cartridge-Sets. The specifiers for the sets must be separated by
      whitespace and each may consist of digits, commas and dashes, e.g.
      3,6-9 . A set may be a single cartridge, but i do not recommend
      this, because writing to the beginning of that cartridge destroys
      the rest stored on it, making these data inaccessible. The last
      number is usually the number of cartridges you have, but not
      necessarily; cartridges at the upper end of the numbering might be
      omitted. If the last number is not equal to the number of
      cartridges, this number is NOT automatically added. As many
      specifiers as are given with this parameter, that many cartridge
      sets you have. The default, if this parameter is not present, is
      one cartridge set containing all available cartridges. Enter man
      afserver.conf for details on how to configure client access
      restrictions for each set individually.
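
    For illustration, a serverside configuration entry defining three
    cartridge sets might look like this (the numbers are only an example;
    check man afserver.conf for the exact syntax, e.g. for access lists):

      Cartridge-Sets:   1-10  11-15  16,18-20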


Q5: How many cartridges should I use ?

A5: The cartridges should have enough capacity for at least two times
    a full backup including subsequent incremental backups. Otherwise
    files could get lost due to an unsuccessful backup overwriting
    previously stored data.


Q6: I have a robot with n cartridges. Can i use more than n tapes ?

A6: This question is obsolete as of afbackup version 3.3. This version
    maintains a cartridge database and allows configuring commands
    for media changers. Cartridge numbers and slot numbers need not
    have anything to do with each other. See HOWTO Q26.

    This is the obsolete text:
    Yes, you can use any number of tapes, if your robot is in
    sequential mode. Simply fake a higher number to the backup system
    in the serverside configuration file. The only point is that you
    have to change the cartridges in the robot manually in time. If
    you have e.g. a robot with 10 cartridges and would like to use 20,
    then you have to watch when it is time to insert other cartridges
    into the appropriate positions. E.g. when cartridge number 8 is in
    the drive, take out cartridges 1-7 and insert number 10-16 into
    the appropriate slots. Later, when they are in use, you can replace
    8-10 by 17-19 and so on.
     When you want to do a restore, the restore-program tells you
    where it wants to read from, like this:

    Going to restore from cartridge X, file Y ...

    Manually insert the right cartridges into the slots the robot will
    access next time. The system will automatically recognize by the
    label on the tape that it has found the right cartridge. A
    warning is written to the serverside log telling that another
    cartridge was found than expected, but this is just a warning and
    we know how this happened ...

    The patterns %b, %c, %m and %n might be helpful in the server's
    Change-Cart-Command. They are replaced as follows:

     %c   The number of the cartridge currently in the drive
     %b   The number of the cartridge currently in the drive minus 1
     %n   The number of the cartridge expected to be put into the
          drive after ejecting. The cartridge handler must be in
          sequential mode. If no cartridge handler is present, %n
          will not be replaced.
     %m   like %n, but 1 is subtracted

    So e.g. if you have groups of 10 cartridges each to be put into
    the cartridge handler and want to be informed each time the 10th,
    20th, ... cartridge is ejected, so that you can change the
    cartridges, you can write a small script as a wrapper for the mt
    command. Let's call this script eject_check_10; it takes the device,
    the current cartridge number, the expected next cartridge number
    and a user to be E-mailed as arguments. Configure this command
    as the Change-cart-command like this:
     /your/path/to/eject_check_10 %d %c %n Backupmaster
    The script itself might look like this:

#!/bin/sh
#
# Usage: eject_check_10 <device> <current-cartno> <next-cartno> <mailaddr>
#

DEVICE="$1"
CURRENTCART="$2"
NEXTCART="$3"
MAILADDR="$4"

# marker file the person in charge must remove to confirm the change
#
TMPFILE=/tmp/cart_group_markerfile.$$
/bin/rm -f $TMPFILE

# cartridges 1-10 form group 0, 11-20 form group 1 and so on
#
CURRENTGROUP=`expr '(' $CURRENTCART - 1 ')' / 10`
NEXTGROUP=`expr '(' $NEXTCART - 1 ')' / 10`

if [ $CURRENTGROUP -ne $NEXTGROUP ] ; then

  # crossing a group boundary: ask the person in charge by mail to
  # change the cartridges and block until the marker file is removed
  #
  touch $TMPFILE

  mail "$MAILADDR" << END_OF_MAIL
    Hello,

    Please insert new cartridges into your cartridge handler.
    The current cartridge is $CURRENTCART and the expected next
    one is $NEXTCART. Remove the file $TMPFILE, when done.

    Regards, your automatic backup service

END_OF_MAIL

  while [ -f $TMPFILE ] ; do
    sleep 5
  done

else

  # simply perform the eject
  #
  exec mt -f $DEVICE rewoffl

fi

exit 0

# End of eject_check_10


Q7: Can ordinary users restore their own files and directories ?

A7: Yes, they can, but this feature must be enabled. The restore
    utility must be installed executable for all users and setuid
    root. Also some more files must be readable. All this can be
    achieved by entering the following as the administrator root:

    rm -f $BASEDIR/client/bin/afrestore
    cp $BASEDIR/client/bin/full_backup $BASEDIR/client/bin/afrestore
    chmod 4755 $BASEDIR/client/bin/afrestore
    chmod 755 $BASEDIR/client/lib $BASEDIR/client/bin
    chmod 755 $BASEDIR/client/bin/__packpats
    chmod 644 $BASEDIR/client/lib/aftcllib.tcl

    Also the configuration file (wherever it resides) must be
    readable for the user who wants to restore stuff.
    Thus ordinary users can run this program. Built-in safety checks
    ensure, that they can only restore or list their own files and
    directories. Changing the restore directory using the option -C
    allows them only to restore to directories owned by themselves.
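
    For illustration, a session of an ordinary user might then look
    like the following sketch (the file name and the restore directory
    are made up for this example, see afrestore -h for the options
    actually available):

     # restore one of the user's own files into a directory owned
     # by that user
     mkdir $HOME/restored
     $BASEDIR/client/bin/afrestore -C $HOME/restored $HOME/mywork/notes.txt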


Q8: Why does afbackup not have a GUI ?

A8: My ideal of a backup system is, that i do not have
    to care about it at all, once it is installed and configured
    properly. It should do its job in the background, only notifying
    me, if something goes wrong. Thus i would not want any icons,
    clocks or meters popping up on my workspace, cluttering it with
    unnecessary and unimportant stuff. The installation procedure
    is simple enough and would not get better with a graphical
    frontend. My opinion.
     BTW there is a GUI frontend for the restore utility. Don't
    use it, it's terrible.


Q9: What does the warning mean: "Filelist without user-ID information ..." ?

A9: You are running restore with the list-option as non-root and
    the filelists are in the pre-version-2.9 format. Thus they do
    not contain the user-IDs of the owners of the files. The program
    does not know, whether it is permissible to show you the names
    of the files. For security reasons it is hardcoded not to show
    them. With the new format containing the user-IDs, you will see
    the matching names of the files owned by you.


Q10: The whole backup system hangs in the middle of a backup, what's up ?

A10: This phenomenon has been reported only on Linux, Kernel Version
     2.0.30 and seems to be the result of a bug in this kernel. I
     never experienced this problem on my 2.0.29 Kernel or on other
     platforms.
      Rumors told me, that the 2.0.30 kicks out about every 10000th
     to 20000th Process, i.e. the process is started, appears in the
     process list, but does not do anything and never terminates.
     Thus parent processes wait forever, when this happens. Afbackup
     compresses each saved file separately, i.e. it starts the
     configured compression program for each file. When the problem
     described above arises, this compression program hangs and
     with it the whole chain up to the server process, which waits
     for requests until eternity.

     Solutions: (i'm aware, these are no real solutions)
      - switch off compression for the saved files or
      - change your Linux Kernel


Q11: Tape reels back and forth, mail sent "tape not ready ...", what's up ?

A11: The current state of investigations is, that this is probably
     a problem of a dirty read-write head. This may sound weird,
     but i'll try to explain.
      I experienced this problem without any warning. One day when
     starting a new backup i watched the tape reeling back and forth,
     later sending an email to the person in charge telling, that
     the device is not ready for use, and requesting to correct this
     problem. Compiling everything with debugging turned on i caught
     the server process during the initialization phase and found
     the Set-File-Command (mt -f ... fsf ...) failing. Then i found
     out, that there were fewer files on tape than the backup system
     expected (1 too few) and thus the mt failed. I had no idea, how
     this could happen. I corrected everything manually by decrementing
     the writing position on tape. The next backup, that i started,
     worked fine and another one immediately following, too. A verify
     also succeeded. So i took out the tape and decided to ignore the
     fault for the moment. Before i ran the next backup, i started a
     verify to see, what had changed in the meantime, but now
     again: tape reels back and forth endlessly. Looking onto the
     tape manually using mt and dd once again: too few files on tape.
     Seemed like some file (not the last one on tape !) was lost.
     Strange. The only thing i could imagine causing all the trouble
     was an error of the tape drive, e.g. a dirty read-write-head.
     So i put in the next tape and started all over at the beginning
     of the new tape. Everything worked perfectly from now on. The
     phenomenon has been reported to me on Linux with DAT-streamers
     from HP. This could mean a correlation and/or a problem of a
     Linux driver, but the reported number of 2 is in my opinion too
     small for a conclusion like this. Furthermore i guess, the com-
     bination Linux + HP-DAT-drive is very common, so the probability,
     that problems might arise in such an environment is quite high
     simply due to the number of installations of this kind.
     Admittedly i had been too lazy for a notable while to use any
     cleaning cartridge, so i guess this had been the problem.
     A similar phenomenon has been reported to me on Solaris on a
     Sun with a 'Sun' DLT drive (AFAIK the drives labeled as Sun
     products are often Quantum), but there the cause was a damaged
     read/write head.

     Solution (to get out of the temporary inconvenience):
      - Use a new cartridge and tell the backup system to use it with
        the serverside command
                    /path/to/cartis -i <nexttape> 1
        where <nexttape> is the number of the next cartridge

     Conclusion:
      - Feel urged to use cleaning cartridges regularly

     Reportedly another problem may be heat. When the device and/or
     the media temperature is too high, it seems the streamer can't
     read/write the tape correctly anymore. In the reported case
     cooling down all the stuff recovered proper operation, but i
     wouldn't expect this generally.
     In any case: if the tape temperature gets too high, the magnetic
     patterns on the media might weaken irreversibly and thus data
     can't be read anymore. Avoid letting your cartridges become
     too hot !!!

     See also Q25


Q12: The server seems to have a wrong tape file count, what's wrong ?

A12: Probably you experienced the following: The last filename in
     the filename log is preceded by a different pair of cartridge
     number / tape file number than the pair named in the report
     email, written to the tape-position file on the server or
     queried with the client-program option -Q.
      This is perfectly possible. The last saved file can make the
     tape file exceed the configured maximum length. Then one or
     more further tape files are opened appropriately.


Q13: When using crond, the client seems not to start correctly ... ?

A13: You probably get the message "Connection to client lost ..."
     in the clientside logfile. This is a weird problem i only
     experienced on IRIX. The program gets a SIGPIPE and i have no
     clue, why. You might start full_backup or incr_backup with the
     option -d, which causes the program to detach from the terminal
     and to wait for 30 seconds before continuing. Maybe this solves
     your problem.
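
     A sketch of a crontab entry using this option (the time and the
     installation path are just assumptions for this example):

      # start the nightly incremental backup at 02:00, detached from
      # the terminal as described above
      0 2 * * *  /usr/local/afbackup/client/bin/incr_backup -d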


Q14: What does AF mean ?

A14: Another F......


Q15: Though client backup works, remote start does not. Why ?

A15: The problem is in most cases, that during the remote start
     the configured (un)compression programs, usually gzip and the
     corresponding gunzip, are not found in the search path. Because
     the remotely started backup is a child of the inetd, it
     of course inherits the inetd's command search path. If this does
     not contain the path to gzip, the start fails.


Q16: My server does not work, tape operates, but nothing seems to be written ?

A16: There seems to be a problem on some platforms. Try to start the
     server with the -b option: Edit /etc/inetd.conf and add -b before
     the last argument of afserver in the line starting with afbackup.
     Then send a hangup signal to the inetd (ps ... |grep inetd -> PID,
     kill -HUP <PID>). Then try again. If it works, be happy, but be
     aware, that the performance is reduced in this mode. This problem
     is being worked on.
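
     The modified entry might then look like the following sketch (the
     service name, the user and the paths are only taken from examples
     elsewhere in this document and will probably differ on your
     installation):

      afbackup stream tcp nowait backup /usr/local/afbackup/server/bin/afserver afserver -b /usr/local/afbackup/server/lib/backup.conf

     and the hangup signal can be sent e.g. like this:

      kill -HUP `ps -ef | grep inetd | grep -v grep | awk '{print $2}'`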


Q17: I have a ADIC 1200G autoloading DAT and no docs. Can I use it with HPUX ?

A17: Thanks to Gian-Piero Puccioni (gip@ino.it) you can. You will find
     the mtx.c program he wrote helpful. Check this file to see how to
     build and use his mtx command. It enables you to load/unload/handle
     certain cartridges.


Q18: What is a storage unit and how and why should i use it ?

A18: See in the HOWTO, Q9


Q19: Why should i limit the number of bytes per tape ?

A19: This is particularly useful, if you first write the backup into a
     filesystem and then copy that `disk cartridge' to a real tape with
     the copy_tape command or the autocptapes script. In this case the
     space used in the filesystem must be limited to the capacity of the
     appropriate tape, otherwise loss of data may occur as the data is
     copied 1:1 to tape and auto-continuation to the next tape does not
     make sense. Also see FAQ Q1 for how to determine the capacity of
     a tape.


Q20: What are backup levels and why should i use them ?

A20: Backup levels allow backups, that store more than an incremental
     backup, but less than a full backup. How is this achieved ?
     More than one timestamp is used, each associated with a certain
     "level". A good way to explain levels is to walk through a
     certain scenario. A level is first of all just a number. When an
     incremental backup is started with a given level (option -Q),
     all files will be saved, that have not been saved since the most
     recent backup with the same or a higher associated level. The
     highest level is owned by the full backup. A "picture" for
     clarification:

      T full backup
      |
      |
      | - - - - - - - - - - -T level 3
      | - - - - -T level 2   | - - - - -T level 2 - T level 2
      |          |           |          |           |
      |          |           |          |           |
     -+----------+-----------+----------+-----------+---------> time
      T1         T2          T3         T4          T5

     At the date T2 anything not saved since T1 is saved. This is
     basically an incremental backup relative to the full backup.
     At date T3 again anything not saved since T1 is saved, because
     the associated backup level is higher. At date T4 anything
     not saved since T3 is saved, because now the level is
     lower than at T3. At T5 everything not saved since T4 is
     saved again. If only one backup level is used, this has the
     same effect as simple incremental backups. Note, that not
     every level must really be used, the numbers are only compared
     with each other to decide, which timestamp will apply.
     With afbackup the incremental backup without a certain level
     has the implicit level 0, the full backup has level MAXINT
     (the value of this macro depends on the machine, where it has
     been compiled, on most Unix machines MAXINT has a value of
     2147483647). With option -Q any value in between can be used.
     The timestamps are stored in the file
     /path/to/client/var/level_timestamps and can be read as clear
     text (just in case you used so many levels, that you got
     confused and don't know anymore, which levels you used and
     which are still unused ... ;-) )
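
     A possible weekly schedule exploiting levels might look like the
     following crontab sketch (the times, the paths and the chosen
     level are just assumptions for this example):

      # Sunday: full backup (implicit level MAXINT)
      0 1 * * 0  /usr/local/afbackup/client/bin/full_backup
      # Wednesday: level 3 backup, saves everything changed since the
      # last full backup
      0 1 * * 3  /usr/local/afbackup/client/bin/incr_backup -Q 3
      # other nights: plain incremental backup (implicit level 0),
      # saves everything changed since the most recent backup of any
      # level
      0 1 * * 1,2,4,5,6  /usr/local/afbackup/client/bin/incr_backup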


Q21: What do all the files in the var-directories mean ?

A21: Serverside:

     status        This file is updated, whenever a notable server
                   status change occurs. The file is always removed
                   and created again as status changes occur often
                   and they are not worth keeping. This file only
                   serves the purpose of giving information about
                   what is currently going on. While reading or
                   writing, the current throughput is reported here
                   about every 5 seconds. Logging of errors or
                   warnings goes to the configured logfile.

     pref_client   This file is maintained to prevent colliding
                   client accesses. The clients should always have
                   a chance to get the server again, when querying
                   several times within a certain interval. The
                   previously served client and a timestamp are
                   saved here to grant this client preferred service
                   within a certain interval. Actually, since version
                   3.3.5 this file has been obsolete

     bytes_on_tape The persistent counters of the server side. A
                   maximum number of bytes per tape can be configured
                   and the server must remember, how much it has
                   written to each of the tapes. It makes no sense to
                   count them all each time a cartridge is loaded.
                   The format of each line is (the backslash indicates
                   a continuation line and is no syntax element; a
                   small sketch for reading this file can be found at
                   the end of this answer):
                    <cartridge-number>: <number-of-bytes-on-tape> \
                            <number-of-files-on-tape> <tape-full-flag> \
                            <last-writing-timestamp>

     tapepos       The name of this file can be configured in the
                   serverside configuration file, but i think, noone
                   will ever change it. This file contains entries,
                   that specify tape positions in different contexts.
                   Lines starting with a number followed by a colon
                   specify the writing position for the cartridge set
                   specified by the leading number. Lines starting
                   with a device name field indicate, what tape in
                   which position is currently in that drive. Each
                   pair of numbers specifying a position consists of
                   a cartridge number and a file number.

     precious_tapes  This file contains a line for each client, listing
                     which cartridges the client needs for restoring
                    everything it saved and it wants access to. All
                   cartridges listed here are considered read-only, if
                   they have no more space on tape to write to. If they
                   have free space, new data is appended at the end of
                   the last file on tape during write

     readonly_tapes  This file contains lists of cartridge numbers,
                     that should not be written to anymore. This file
                    can be edited or modified sending an appropriate
                   server message (See: afclient, option -M). The
                   format of this file is simply numbers, ranges or
                   comma-separated numbers of cartridges. A range can
                   be given as [<start-number>]-[<end-number>], e.g.
                   2-4, -2 or 8-. In the last example the number of
                   cartridges configured in the server configuration
                   file will apply for the end of the list.

     cartridge_order  The server must remember, which tape follows which
                      other one, because their order no longer follows
                     the number of the cartridge and the server no
                    longer starts writing the first one after the last
                   one is full. Tapes can be set read-only or marked
                   crucial for restoring some client. So it may occur,
                   that the server must skip one or more tapes to find
                   a writable one. Also in full append mode it might
                    happen, that it is not the first file on tape, that
                   follows the last one on a full tape. In this file
                   the order is saved, what file on which tape must be
                   read, when a certain tape is exhausted. Behind the
                   number of the cartridge in the first column and the
                   arrow characters -> the following numbers name the
                   tape and file to be read next. This file should be
                   saved to some other location, because it is crucial
                   for restore.

     tape_uses     This file contains a list of cartridge numbers in
                   the first column, followed by a colon : . The second
                   column contains a number indicating, how often this
                   tape has become full up to now. This number is supplied
                   to the configured Tape-Full-Command , whenever a tape
                   becomes full.

     cartridge_locations  This file contains the database, where the
                          cartridges currently can be found. The first
                         column is the cartridge number, followed by a
                        colon. A space follows and the rest of the
                       line either contains three fields: the device
                      name of the media changer, a word to specify the
                     location class (drive, slot or loadport), and a
                    number counting instances of location classes, e.g.
                     /dev/rmt/stctl0 slot 6
                   If the rest of the line is not of this form, it is
                   considered to be a freetext description.

     ever_used_blocksizes  This file contains a list of all the tape
                           blocksizes, that have ever been used on
                         the server. The list is used to quickly find
                        the correct blocksize for reading, when the
                       tape cannot be read with the configured one. If
                      tapes are used, that come from another server and
                     have a tape blocksize, that this server has never
                    seen, the unknown blocksize should be added to this
                   file manually, one per line.


     Clientside:

     num           Here the current total number of backups is stored.
                   The total number of backups is incremented each time
                   a full backup finishes successfully, unless the append
                   mode (option -a) is selected or files and directories
                   are explicitly supplied as arguments. That case is
                   considered an exceptional storing of files, that should
                   not affect counters or timestamps

     part          If present, it contains the number of the backup part
                   that has recently started. Full backups can be split
                   in pieces if a complete run would take too much time.
                   This can be configured with the parameters
                   NumBackupParts, DirsToBackup1, ...

     oldmark       The modification time of this empty file serves as
                   a memory for the timestamp of when the previous full
                   or incremental backup was started. This should be
                   handled in the file explained next, but due to
                   backward compatibility issues i will not change this
                   (a historical error coming from the earlier used
                   backup scripts and the use of the find-command with
                   option -newer)

     newmark       During backup a file holding the timestamp of the
                   backup starting time. The reason, why this timestamp
                   is kept in the filesystem is safety against program
                   crashes

     level_timestamps  This file contains the timestamps for the backup
                       levels. Each line has the following format:
                    <backup-level>: <incr-backup-starting-time>
                   For each used backup level and the full backup a line
                   will be maintained in this file

     save_entries  This file holds the patterns of all configuration
                   entries in DirsToBackup, DirsToBackup1, ...
                   for use in subsequent backups. If new entries are
                   configured, this file allows the client to switch
                   automatically from incremental to full backup, when
                   a new entry in the configuration file is found

     needed_tapes  This file contains a list of tapes needed for full
                   restore of all files listed in existing filename list
                   files (i.e. index). The number of these files depends
                   on the clientside parameter NumIndexesToStore. After
                   each backup (full or incremental or level-N) a line
                   is added to this file or an existing one is extended
                   to contain the current backup counter and a list of
                   backup levels, each associated with the cartridge
                   numbers used during write to the server with the
                   named ID. The format is:
                    <backup-counter>: <backup-level>><tape-list>@<serverid> \
                           [ <backup-level>><tape-list>@<serverid> ... ]
                   When running an incremental or differential backup
                   supplying the option -H, entries with a level lower
                   than the current one (or in differential mode equal
                   to the previous) are removed from this list. Thus the
                   tapes from these entries are permitted to be written
                   again (often called "recycled"). After each update of
                   this file, the list of all required tapes residing at
                   the current server is sent to this server and there
                   stored in the file precious_tapes (see above). When
                   tapes are removed from the file precious_tapes on the
                   server, the client updates his needed_tapes file and
                   the index contents accordingly.

     start_positions  Here for each full or incremental backup within the
                      range required by the parameter NumIndexesToStore
                    the information to retrieve all the data is stored.
                   Each line has the format
                    <backup-counter>: <backup-server> <backup-service> \
                               <cartridge-number> <file-number>
                   Having this information everything can be restored in
                   case all other data is lost

     server_ids    The information, which server network address has which
                   server-ID associated. The first two columns contain the
                   hostname and port number, the third the server-ID

     index_ages    For each existing index file, this file contains a
                   line with the index number in the beginning, followed
                   by a colon and the timestamp of the last modification
                   of that index in seconds since epoch (1.1.1970 0:00).
                   This file is evaluated, if the client side parameter
                   DaysToStoreIndexes is set.

     tmp_rm_on_backup  contains a list of files, that are temporary, but
                      should be kept for whatever reason until the next
                      successful backup and can then be removed.

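     As an illustration of one of these formats, the following sketch
     prints the counters the server keeps in bytes_on_tape in a
     readable form (the installation path is just an assumption):

      VARDIR=/usr/local/afbackup/server/var
      awk '{ sub(/:$/, "", $1)
             printf "cartridge %s: %s bytes in %s tape file(s), full flag %s\n", \
                    $1, $2, $3, $4 }' $VARDIR/bytes_on_tape
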

Q22: Help ! My (multi stream server's) client receives no data, why ?

A22: Most likely the client's official hostname has changed. The server
     no longer recognizes, which data on tape should be dispatched
     to this client. Use option -W to supply the client's old official
     hostname or configure that name using the configuration parameter
     ClientIdentifier in the client side configuration file.
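
     A minimal sketch of the latter, to be put into the clientside
     configuration file (the old name is made up here, check the
     syntax against the existing entries in that file):

      ClientIdentifier: old-name.our.domain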


Q23: My DLT writes terribly slowly and changes cartridge too early, why ?

A23: The reasons for the too early rewind are admittedly unknown.
     It has been reported, that EIO is returned during a write without
     any obvious reason. It seems, that this can be avoided and a much
     better throughput achieved by configuring a relatively large tape
     blocksize. For a DLT 32768 seems to be a good value.


Q24: When should i use the multi stream server and when not ?

A24: Basically for restore or verify you don't have to choose. The same
     port (which finally means: the same server) as during backup is
     set automatically. You do not have to care about that. For backups
     i'd suggest the following:

     Use multi stream server

      * For incremental backup of several machines in parallel. In this
        situation the multi stream feature can be a real time saver

      * For full and/or incremental backup of several machines connected
        to the backup server over slow links, where the machines must
        have separate lines each. The following scheme shows a configu-
        ration, where exploiting the multi-stream feature makes sense
        also for full backups:

                   --------
                  | server |
                   --------
                       |
                       | (fast link(s))
                       |
               --------------------------
              | switch/bridge/hub/router |
               --------------------------
                 /         |          \
                /          |           \     (slow links)
            --------    --------    --------
           | client |  | client |  | client |
            --------    --------    --------

     Use single stream server

      * For full backups over fast lines, where the streamer device is
        the bottleneck. Here the additional overhead of the multiplexing
        server might become the bottleneck on slower machines

      * For messages to the server (option -M of the afclient program)
        (mandatory !)

      * For trivial operations in combination with the afclient program
        (e.g. options -q, -Q, -w)

      * For copying tapes (copy_tape)

      * For emergency recovery with option -E

     Summarizing, i'd suggest configuring the single stream server as
     the default and overriding it with the appropriate options, when
     desired. The option for the afclient program is -p, for the others
     (full_backup, incr_backup, restore, verify, copy_tape) it is -P .
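
     A small sketch of such an override (the service name afmbackup and
     the path are only taken from examples elsewhere in this document):

      # run a full backup against the multi stream server instead of
      # the configured single stream default
      /usr/local/afbackup/client/bin/full_backup -P afmbackup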


Q25: Why is my 2 GB capacity DAT tape full having written about 1.5 GB ?

A25: The following statements i collected as experience from different
     users. I pass them on here without any comment.

     Thanks to Mr. Andreas Wehler at CAD/CAM Straessle GmbH in Düsseldorf/
     Germany the following statements have been collected from HP and
     others:

      - With non-compressible data DAT tapes have a real capacity of
        between 75 and 84 % of the capacity specified on the cover

      - The capacity decreases over the lifetime because of increasing
        defect density as a result of wear

      - No user will get notified of the current media capacity status

     DAT specs say:

        60m      DDS,    1.3GB uncompressed
        90m      DDS,    2.0GB uncompressed
       120m      DDS-2,  4.0GB uncompressed

     Capacities achieved with new tapes in reality:

      Experience of User A:

        60m:  1.1GB
        90m:  1.6GB
       200m:  3.3GB

      Experience of User B:

        60m:  1.1GB
        90m:  1.5GB
       200m:  3.3GB

     Technical aspects:

      HP writes data in 22 frames of 128 KB each to a 90 m tape,
      what should make a capacity of 2.8 GB

      Tapes are not written completely, trailers remain free for
      possible later error correction

      The specified capacity is a theoretical value for advertising.
      They assume raw/unformatted writing to tape and do not take
      the normal format overhead into account. It is known, that the
      named values can never be reached. The discrepancy of 25 % to
      the specifications is relatively high, but "tape experts" are
      considering this to be normal.

      When an uncorrectable write error occurs the complete frame
      is invalidated and rewritten to the next piece of tape able to
      hold it. Thus the usable capacity decreases continuously and
      according to HP officials this is a normal side effect of the
      DAT technology.

      The device can evaluate certain hints pointing to dirty read/
      write-heads. Then a message can be transmitted to the device
      driver and this way up to some user, who should then insert a
      cleaning tape. But when the device detects a dirty head and
      transmits the notification, it is usually already much too
      late. Read and write errors might have produced unusable
      data on tape or lead to wrong tape file mark counting as stated
      in FAQ Q11.

      I'd like to summarize this under the normal bullshitting, that
      is established today in the computer business (and others).
      Special thanks to Micro$oft, whose one and only incredible
      great feat is IMHO to have driven the users' pain threshold
      to heights never reached before. Does anyone believe a single
      word from them any more ?

     Other sources say about DDS2-4:

      DDS2 conformant drives (and higher) must be able to perform
      hardware compression. Having written an already compressed
      file to a DDS4 tape the mt tool of the dds2tar package
      reports, that indeed 20 GB data have been put on the tape.
      So here (at least with new tapes, i (af) guess), the specs
      are fulfilled.


Q26: Tape handling seems not to work at all, what's wrong ?

A26: Nothing seems to work, you get error messages you don't
     understand, and in the serverside log there are messages like:

     Tue May 25 15:46:55 1999, Error: Input/output error, only -1 bytes read, trying to continue.
     Tue May 25 16:47:31 1999, Warning: Expected cartridge 3 in drive, but have 2.
     Tue May 25 16:58:31 1999, Error: Input/output error, only -1 bytes read, trying to continue.
     Tue May 25 17:20:12 1999, Internal error: Device should be open, fd = -10.
     Tue May 25 17:21:24 1999, Error: Input/output error, only -1 bytes read, trying to continue.

     This probably means, that the program configured for setting a
     tape file (SetFile-Command:) does not work. Either you have
     supplied something syntactically incorrect, or you are using
     RedHat Linux-5.2 . The mt command of this distribution and
     version is broken. Solution: Update to a newer version of mt,
     0.5b reportedly works.


Q27: How can i change the compression configuration ?

A27: Basically the compression level can be changed at any time,
     but changing the algorithm with afbackup version 3.2.6 or
     older is a different story.

     The only problem here is, that the filename logfiles (in other
     words: the index files) are compressed and changing the uncompress
     algorithm makes them unreadable. With afbackup 3.2.7 or higher
     for each index file the appropriate unprocess command is saved
     into a file with the same name as the index file, except that
     it has a leading dot (thus hidden). A problem arises with indexes
     without a related hidden file. The solution is to uncompress
     them with the old algorithm into files, that do not have the
     trailing .z . The existing .z files must be removed or moved out
     of the way. When running the next backup the current file will
     automatically be compressed. Of course the uncompressed files
     can then be compressed into new .z files with the new compression
     algorithm. In this case the files without the trailing .z must
     be removed.
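
     A small sketch of this conversion, assuming the old algorithm was
     plain gzip and that the index files live in the client's var
     directory (both may differ in your installation):

      cd /path/to/client/var
      for f in backup_log.*.z ; do
        gunzip -c "$f" > `basename "$f" .z` && /bin/rm -f "$f"
      done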

     When using built-in compression, there is a little problem here.
     A program is needed, that performs the same algorithm as the
     built-in compression. Such a program comes with the distribution
     and is installed as helper program __z into the client side
     .../client/bin directory. The synopsis of this program is:

      __z [ -{123456789|d} ]

     __z [ -123456789 ]  compresses standard input to standard out
                         using the given compression level

     __z -d              uncompresses standard in to standard out

     Having configured built-in compression AND a compress and
     uncompress command, a pipe must be used to get the desired
     result. Keep in mind, that during compression first the command
     processes the data and then the built-in compression (or the __z
     program) is applied. To uncompress the index files e.g. the
     following command is necessary:

      /path/to/client/bin/__z -d < backup_log.135.z | \
             /path/to/client/bin/__descrpt -d > backup_log.135

     It is a good idea to check the contents of the uncompressed
     file before removing the compressed version.

     For the files saved in backups a change of the compression
     algorithm is irrelevant, because the name of the program to
     perform the appropriate uncompression (or built-in uncompress)
     is written with the file into the backup.


Q28: Why does my Linux kernel Oops during afbackup operation ?

A28: Reportedly on some machines/OS versions the scientific
     functions in the trivial (not DES) authentication code are
     causing the problems. Thus, when compiled with DES encryption
     enabled, the problems are gone. The libm should not be the
     problem, it operates at process/application level. A better
     candidate is kernel math emulation.

     Solutions: * Recompile the kernel with math emulation disabled.
                  This should be possible with all non-stone-age-
                  processors (Intel chips >= 486, any PPC, MIPS >=
                  R3000, any sparc sun4, Motorola >= 68030 ...)
                * Get the current libdes and link it in on all
                  servers and clients. This also enhances security


Q29: Why does afbackup not use tar as packing format ?

A29: tar is a format, that i don't have control of, and that lacks
     several features, that i and other users need to have. Examples:

      - per-file compression
      - arbitrary per-file preprocessing
      - file contents saving
      - saving ACLs
      - saving command output (for database support)

     I (too) often read: In emergency cases i want to restore with
     a common command like tar or cpio, because then afbackup won't
     help me / won't be available / no further reasons given. This is
     nonsense. In emergency cases afbackup is still available. The
     clientside program afclient can be used very similarly to tar.
     Thus when using the single stream server you can recover from tape
     without the afserver by trying something like this (replace <blocksize>
     with the configured blocksize after bs= and get the tape file number,
     where the desired files can be found, from the index file, it
     is prefixed with hostname%port!cartridgenumber.tapefilenumber):

      cd /where/to/restore
      mt -f /dev/nst0 fsf <tapefilenumber>
      sh -c 'while true ; do dd if=/dev/nst0 bs=<blocksize> ; done' | \
           /path/to/client/bin/afclient -xarvg -f-

     RTFM about afclient (e.g. /path/to/client/bin/afclient -h)
     and dd. Don't mistype if= as of= or for safety take away the
     write permission from the tape device or use the cartridge's
     hardware mechanism to prevent overwriting.
     When using the multi-stream server, the tape format must be
     multiplexed, so it will never be the raw packer's format.
     Then it won't help in any way, if it were tar or cpio or what-
     ever, you still need to go through the multi stream server to get
     back to the original format.


Q30: How to recover directly from tape without afclient/afserver ?

A30: See Q29.


Q31: Why do files get truncated to multiples of 8192 during restore ?

A31: This happens only on Linux with the zlib shipped with recent 
     (late 1999) distributions (Debian or RedHat reportedly) linked
     in. I was unable to reproduce the problem on my Linux boxes
     (SuSE 5.2 and 6.2) or on any other platform, where i always
     built the zlib myself (1.0.4, 1.1.2 or 1.1.3). I have the
     suspicion, that the shipped header zlib.h does not fit the
     data representation expected in calls to functions in the
     delivered libz.a or libz.so . Thus programs built with the
     right header and appropriate libz do work, but programs built
     with the wrong header linked to libz do not. Don't blame that
     on me, i have a debugging output here sent to me by a user,
     that proves, that libz does not behave as documented and
     expected.


Q32: What is the difference between total and real compression factor ?

A32: The total compression factor is the sum of the sizes of all
     saved files, divided by the total number of bytes actually stored
     as file contents, i.e. the sum of the sizes of the files stored
     uncompressed plus the number of bytes resulting from compressing
     the other files.
     The real compression factor only takes those files into account,
     that have been compressed, and not those left uncompressed. This
     factor is the sum of the original sizes of the compressed files,
     divided by the number of bytes resulting from compressing them.
     Both factors are equal, if compression is applied to all files,
     e.g. if the parameter DoNotCompress is not set or no files
     matching the patterns supplied there are saved.
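
     A small worked example (numbers invented for illustration): assume
     100 MB of files are saved, 40 MB of them are stored uncompressed
     and the remaining 60 MB compress down to 20 MB. Then the total
     compression factor is 100 / (40 + 20) = 1.67, while the real
     compression factor is 60 / 20 = 3.0 .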


Q33: How does afbackup compare to amanda ?

A33: Admittedly i don't know much about amanda. Here's what i extracted
     from an e-mail conversation with someone, who had to report a
     comparison between them (partially it's not very fair from both
     sides, but i think everyone can take some clues from it and be
     motivated to ask further questions). It starts with the issues
     from an amanda user's view (> prefixes my comments on the items):


DESCRIPTION                                                 Amanda  afbackup

Central scheduler which attempts to smooth the daily
backups depending on set constraints, can be interrogated.  YES     NO
> (afbackup does not implement any scheduler, backups can be
> started from a central place, afbackup does NOT force the
> types of a backup, e.g. make incremental backup, if there
> is not much space on tapes left)

Sends mail when a tape is missing or on error,
while in backup.                                            YES     YES

Pre-warns of a possible error condition (host not
responding, tape not present, disk full) before backup.     YES     PARTIALLY
> (afbackup implements a connection timeout for remote
> starts, an init-command for server startup and an
> init-media-command, that is called, whenever a media
> should be loaded, can be configured, that may test for
> problems in advance)

If no tape available, can dump to spool only.               YES     NO
> (No (disk) spool area is maintained. Backup media can
> be filesystems, thus also removable disks.)

Normally dumps in parallel to a spool disk, then to tape,
for efficiency.                                             YES     N/A
> (afbackup can dump in parallel to server, clientside
> protocol optimizer for efficiency, no spool area s.a.)

Supports autochanger in a simple way (can browse for a
tape, but will not remember the pack's content, this can
be a feature)                                               YES     YES
> (Don't know, what is meant here. Autochanger is supported
> in a simple way before 3.3, enhanced in 3.3, including a
> media database)

When using tar backups, indexes are generated which can be
used to get back the data.                                  YES     YES
> tar is not used (see below), indexes are maintained

An history of the backups is available, Amanda can decide
the restore sequence, e.g. if the last full dump is
not available, go back in history, using incremental
backups.                                                    YES     Y/N
> (A before-date can be supplied, but no automatic
> walk-back in history)

Backup format can be simple tar.                            YES     YES(discouraged!)
> I decided not to use the tar packing format as it lacks
> several features, that i consider absolutely necessary,
> most notably
> - per-file compression/preprocessing
> - command output packing
> - extended include/exclude

Amanda will interrogate the client and tell him to do a 0,
1 or other level backup, depending on spool size, backup
size, etc.                                                  YES     manual

Can print tape's content labels.                            YES     N/A
> The label of an afbackup tape does not contain tape
> contents. These are located in the index file(s). Those
> can be printed easily, also only a certain end user's
> files. This feature of amanda has, in my opinion, one of
> amanda's heaviest limitations (filesystem size
> <= tape capacity) as a consequence

Can print weekly tape content summary.                      YES     N/A

Can print graphical summary of backup time.                 YES     NO

Restorer through an intelligent command line.               YES     YES, also GUI

Backups can be stored and/or transmitted compressed.        YES     YES
> clientside compression is one of afbackup features.
> Thus transmitted data is already compressed.

Backups can be encrypted during transport or on disk.       NO(1)   BOTH
> ssh may be used to tunnel the connection, the contents
> of the stored files can be preprocessed in any arbitrary
> way, also encrypted

Can backup file system whose size is bigger than a tape.    NO(2)   YES
> Why not ?

Backups file system to tape if bigger than spool, or to
spool, or no backup.                                        TAPE(3) N/A
> No spool area is maintained. To achieve good performance,
> ring buffers are created on client and server, client-/
> server-protocol tries to optimize throughput.

Can append to tape.                                         NO(4)   YES
> Normal append has always been supported. As of version
> 3.2.6 full append mode is implemented, i.e. also, if
> an administrator has requested to write to another
> tape now, the current one will be appended to, if there
> is no space left on any available tape. Since 3.2.7
> there is also a variable append mode making the server
> append to any supplied tape having remaining space
> and not being in read-only state

Supports a tape verify option (just verifying the tape)     YES     NO
> Don't see the use of this.

Supports a data verify option (compare with fs).            NO(5)   YES
> (very pedantic)

Graphical, web or menu-based configuration.                 NO      GUI,CL
> CL means: command line program

Graphical, web, menu-based or command line restore.         CMD     GUI,CL

Can restore individual file automatically to most recent.   YES     YES

Can restore individual file to specified date.              ???     YES

Protects a client host from others reading its data.        NO      YES
> Client access can be restricted on cartridge set base

Supports disaster recovery.                                 NO      YES

mt and tar commands are easy to use to recover by hand,
with the printed weekly summary.                            YES     YES
> No weekly summary, minimum restore info posted to admin.
> Manual recover is possible, explained in FAQ

License.                                                    BSD     GPL

Can backup MS-WINDOWS data ?                                YES     via SMB-mount


Now for the items from an afbackup preferring user's view (> prefixing
the comments of an amanda user, >> prefixing my thoughts on the comment):

End User Restore                                            NO      YES
> Amanda doesn't support end user restore

Data safety (client/server authentication through a
challenge-response, secret key required, real client-
server system, only server can access tape devices)         NO      YES
> Amanda does NOT have it, which makes it a problem.
> There ARE extensions, for example for using Kerberos
> (export problems), or ssh (other class of problems).

Database backup support (by saving arbitrary dump
command output)                                             NO      YES
> No, Amanda requests it to be sent to a file, first.
>> So e.g. for an online database backup a huge
>> temporary disk space is required

Raw device contents backup                                  NO      YES

Using full tape capacity                                    NO      YES
> No, Amanda insists on changing tape everyday (which
> makes sense for tape's security reason, but doesn't make
> too much sense if you waste a lot of precious storage
> --- Amanda counter-balances this with its intelligent
> scheduling algorithm).

Multi-Stream (several clients backup to a server in
parallel) optional                                          YES?    YES
> Multiple clients can backup to the spool, and then to
> tape. There is no tape multiplexing or anything like
> this.

Several servers per client can be configured, selected
by availability and load, transparent during restore        NO      YES

Per file preprocessing (for safety, if the whole stream
is e.g. compressed and a single bit is wrong during restore
all the rest is lost)                                       NO      YES
> Amanda compresses the whole backup if requested.
>> (AF's comment: crazy in my opinion)

Secure remote start option (not requiring trusted
superuser remote access)                                    NO      YES
> Backups are always started centrally. You can decide at
> which time the *whole* thing starts.
>> (AF's comment: also possible with afbackup, in a
>>  secure fashion)

End user restore (already mentioned above) only of his
own files                                                   NO      YES, also GRAPHICAL

Server and client can easily change (e.g. move tape to
other machine or restore to different client)               Y/N     YES
> Amanda stores the indexes on the server, so the client
> can easily change. However, the server can only change
> provided you restore the indexes.

Duplicate tapes (make clones) (also automatically)          NO      YES
> Not supported (you can make copies, but they won't be
> considered as such).

Store in filesystems, maybe removable disks                 NO      YES
(may call it virtual cartridges)

Cartridges can be set to read-only mode                     ???     YES
> Probably no.

Maintain arbitrary cartridge sets (e.g. to switch daily,
weekly or for type or backup)                               YES     YES
> Yes. Amanda's scheduler is probably better than afbackup's.
>> (AF's comment: i didn't speak of the scheduler here,
>>  but of the option to combine tapes to sets with
>>  common properties, e.g. access restrictions)



1.2 Amanda issues

(1) Support for security is low (at this time mainly based on host name
    security, without encryption). Kerberos or ssh encryption are possible,
    but not easy to set up/well tested, and have some exportation or
    patent issues.

(2) Cannot backup a file system whose size is bigger than a tape, without
    splitting the fs with regexps.

(3) Backups bigger than the spool size are dumped to tape, which is slower
    and may cause tape trashing.

(4) Only if the tape is disabled, in that case the system dumps to spool,
    and then a flush can be done. But cannot really *append* to a tape.
    Authors say it's a feature: the tape is not used for more than one day,
    this guarantees medium integrity, and the scheduler makes this
    worthwhile.

(5) Verify option would have to be implemented.


1.2 afbackup issues

To be implemented in the next versions:

(1) Jukebox support (several tape devices sharing a set of tapes), coming
    not too soon, depends on the time and support i get for ongoing
    development by my employer and customers.


Not planned to be implemented:

- Maintaining a spool area on disk
- Distinguished scheduler for the backup system (crond is in place, so ...)


Q34: How to contribute to I18N/L10N ?

A34: Ask to get a pattern file for your language. It will be sent to you
     containing pairs of msgid and msgstr entries. For a first attempt
     the file afbackup.pot in the subdirectory ./po can be used, copied
     to X.po with X replaced as explained below. But then it might be,
     that someone else is already working on the translations for your
     language, so better ask first.
     You have to fill in the msgstr parts. If the msgstr part will be
     longer than one line, put an empty string behind msgstr and continue
     to write in the next lines. Example:

msgid "some long English stuff"
msgstr ""
"The multiline\n"
"equivalent in some\n"
"other language."

     There are already multiline sections in the msgid fields. Please try
     to keep the output clearly arranged.

     To test your translations, put your X.po file into the subdirectory
     ./po of the distribution. Change to it and type the following line
     (X replaced with your language setting of LANG):

      msgfmt -o X.mo X.po

     The X.mo file will be created.
     Now make a directory under the installation directory
     /.../common/share/locale (again X replaced):

      mkdir -p /.../common/share/locale/X/LC_MESSAGES

     now copy the X.mo file to that directory renaming it to afbackup.mo:

      cp X.mo /.../common/share/locale/X/LC_MESSAGES/afbackup.mo

     When you now set the environment variable LANG to the setting
     you use for other programs, afbackup should speak your language.
     Please send the X.po file with your add-ons to the author (please
     gzip -9 or bzip2 -9 before sending !!!)

     Thanks a lot !


Q35: Why does I18N not work in my environment ?

A35: A common problem is, that the programs are linked with a libintl.X,
     that does not understand the format of the .mo file. Either GNU
     msgfmt is used to create the .mo file and the vendor's lib is
     linked to your binary or the other way round. This may happen,
     though i tried to make autoconfig do its best to find out, which
     program and which function comes from which implementation. To use the vendor's
     /usr/bin/msgfmt and /lib/libintl.XY, you can change to the po
     directory and run  msgfmt -o XY.mo XY.po with XY replaced with
     your language abbreviation, then  make install  again.

     If you get a warning during build, that no msgfmt program could
     be found, either add the path to GNU msgfmt to your command path
     and build again, or if no msgfmt can be found, install GNU gettext
     and start over. If GNU msgfmt is available on another architecture,
     you can simply copy the *.gmo files into the po directory and build
     again without the  make distclean  before.

     If all this does not help, the problems are elsewhere. It has been
     experienced, that afbackup I18N does not work on Solaris-2.6 while
     it does on Solaris-2.5.1 and Solaris-2.7. Strange, isn't it ?
     Any help concerning these topics is appreciated.


Q36: Is there a mailing list or a home page for afbackup ?

A36: Yes. The Homepage is http://www.sourceforge.net/projects/afbackup

     The alias http://www.afbackup.org is redirected to this URL and
     might go out of service silently.

     If you want to be informed about important changes or bugfixes,
     monitor the desired releases on the afbackup homepage.


Q37: I have trouble using the multi stream server. What can i do ?

A37: Trouble with the multi stream server is suspected to be related to
     the inetd, especially when using xinetd. In these cases the afmserver
     can be started as a daemon not using (x)inetd. For this purpose there
     are the options -d and -p <port>. Please note, that this mode of running
     the afmserver requires a more tolerant and robust client behaviour
     first implemented in version 3.2.7. Older clients may have problems.

     The afmserver can e.g. be started at system boot time using the line
     below. As it should usually run under a different user ID than 0,
     which is root's, the command must be preceded by an su to this ID (see
     column 5 of the single stream server's entry in /etc/inetd.conf for
     the name of the user). Then the line might look something like this:

      su backup -c "/usr/local/afbackup/server/bin/afmserver -d -p afmbackup /usr/local/afbackup/server/lib/backup.conf"

     The program goes into the background, so no & is required. The
     daemon can be killed normally, when not needed any more.

     A typical init-script might look like this (modify the setting of
     BASEDIR appropriately and check, whether the configuration file
     really is $BASEDIR/lib/backup.conf; modify it, if not):

#!/bin/sh
#
# I *love* RCS
#
# $Source: /home/alb/afbackup/afbackup-3.3.6/RCS/HOWTO.FAQ.DO-DONT,v $
# $Id: HOWTO.FAQ.DO-DONT,v 1.2 2002/02/27 10:17:09 alb Exp alb $
#

BASEDIR=/usr/local/afbackup/server

CONFIGFILE=$BASEDIR/lib/backup.conf

#
# cheap trick, might fail, then set PS accordingly
#
PS="ps -uxaww"
$PS >/dev/null 2>&1
if [ $? -ne 0 ] ; then
  PS="ps -ef"
fi

case "$1" in
    start)
	NPROCS=`$PS|grep -v grep|grep /afmserver|grep -v init.d|wc -l`
	if [ $NPROCS -gt 0 ] ; then
		echo "An AF-Backup server seems to be already running."
		exit 0
	fi

	echo "Starting AF-Backup multi stream server."

        su backup -c "$BASEDIR/bin/afmserver -d -p afmbackup $CONFIGFILE"

	NPROCS=`$PS|grep -v grep|grep /afmserver|grep -v init.d|wc -l`
	if [ $NPROCS -lt 1 ] ; then
		echo "Could not start the AF-Backup server"
		exit 2
	fi

	;;

    stop)
	PID=`$PS|grep -v grep|grep /afmserver|grep -v init.d|awk '{print $2}'`

	if [ _"$PID" != _ ] ; then
		echo "Stopping AF-Backup multi stream server."
		kill $PID
	else
		echo "AF-Backup multi stream server not running."
	fi

	;;

    *)
	echo "Usage: $0 {start|stop}"

	exit 1

	;;
esac

exit 0

# End of rc script


Q38: On AIX i get the warning: decimal constant is so large ... what's that ?

A38: It has definitely been proven by writing, running, and tracing test
     programs, that this warning is bogus. The definition for MAXINT looks
     something like this (reduced to the beef):

      #define MAXINT (int)((unsigned)(1 << (sizeof(int) * 8 - 1)) - 1)

     The part 1 << (sizeof(int) * 8 - 1) evaluates to 2^31 or hex 0x80000000.
     If evaluated as two's complement (int type), it is -2^31, i.e. it is
     negative. To be positive it may not be considered two's complement,
     but unsigned. This is, what the warning says (i think). Anyway, when
     decrementing it by one, the result is hex 0x7fffffff, which is the
     correct value, whether considering 0x80000000 to be unsigned positive
     or two's complement. In the latter case some overflow bit will be set,
     but the result is the same (and correct).


Q39: What about security ? How does this authentication stuff work ?

A39: The server does not serve clients, that haven't authenticated. This is
     to prevent arbitrary people from connecting to the server port and
     operating the protocol, which would give them full access to all tape
     operations.

     Authentication is of the challenge-response type. That is, the server
     sends some (random) data (called 'the challenge') to the client and
     expects it to process the data in a proper way and to send the result
     (called 'the response') back to the server. If the client comes to the
     same result as the server, the client has thus proven, that it knows
     the authentication key, that is necessary to find the correct result.

     The algorithm to calculate the response from the challenge depends
     on whether DES encryption is configured. If DES is configured, the
     algorithm is 128 bit 3DES (effectively 120 bit). 128 bits of the
     key are used and both challenge and response consist of 16 bytes.
     If DES is not configured, a simple algorithm using only 32 bits is
     used. If at all possible, use the DES encryption.

     The key is generated from the entered key string or from the
     configured key file. Only the 6 least significant bits (0-5) of
     each character are used, to make sure that a key composed only of
     printable characters is fully significant. To obtain 128 bits it is
     thus necessary to enter 22 characters, which yields 132 bits.
     Additional characters are not used, i.e. they are ignored.
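
     Just to illustrate the principle (this is NOT the algorithm
     afbackup uses; md5sum and /dev/urandom serve only as stand-ins
     here and may not exist on every system), both sides compute the
     same function of the shared key and the challenge and compare the
     results:

      KEY='some shared secret'
      # server side: generate a random challenge, send it to the client
      CHALLENGE=`od -An -tx1 -N16 /dev/urandom | tr -d ' \n'`
      # both sides: derive the response from challenge plus key
      RESPONSE=`printf '%s%s' "$CHALLENGE" "$KEY" | md5sum | cut -d' ' -f1`
      # server side: grant access only, if the client's answer equals
      # $RESPONSE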

     With afbackup version 3.3.1 and higher, the client also requests
     the server to authenticate, by sending it a challenge and
     evaluating its response. This is to make sure that the client has
     connected to a real server that really knows the key. Otherwise
     the following scenario might happen: Some malicious guy wants to
     gain access to the tape data. Maybe he knows some computer that
     clients try to connect to, but where no afbackup service is
     running. Remember: the port number used by default is a
     non-privileged one. So he establishes a fake server on that port
     as a normal user, listening for clients to connect. Now he
     connects to a real afserver himself and receives the challenge
     bytes from that server. He sends these bytes to the client that
     has connected to him and receives the correct response, because
     this client is a proper one knowing the key. Instead of continuing
     to serve the client he uses the response from that client to
     successfully authenticate to the real afbackup server and to gain
     unauthorized access. This cannot be prevented by the mechanism
     that the client requests the server to authenticate, it is just
     made a little more difficult. The malicious guy can go ahead and
     forward the client's challenge to the connected server, receive
     its response and pass it back to the client. If he does not do so,
     the client will complain and point out the possible security
     problem. So does the server, whenever authentication fails.

     So this kind of 'man in the middle' attack is not made impossible,
     but it must be performed perfectly to remain undetected. To avoid
     such an attack, the maintainer might choose to use a privileged
     port (with a number < 1024) for afbackup. Then the intruder must
     already have root access to spoof the port. Another option is to
     prevent normal users from logging in to the backup server(s), or
     to monitor that the afbackup service is continuously available on
     the provided port(s). If it is not, some kind of alarm can be
     raised.


Q40: Why does remote start of backups not work, while local start does ?

A40: The most common problem is that, when starting locally, the command
     search path is different from the one used when programs are
     started remotely. Thus it may happen that configured commands
     cannot be found. The solution (anyway recommended for security
     reasons) is to configure the commands with their full directory
     path in the clientside configuration file, e.g. setting the
     IndexProcessCmd and its counterpart to /usr/local/bin/gzip and
     /usr/local/bin/gunzip . Commands started remotely are subprocesses
     of the inetd. The inetd usually has only /usr/bin and /usr/sbin in
     its path, sometimes also /bin and /sbin . It is not implemented in
     afbackup (and will not be) that the search path is transferred to
     the remote host to find the programs in additional directories.
     Configuring the full paths is the better way.
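
     To check whether a configured command would be found with such a
     restricted path, the inetd environment can be approximated like
     this (the PATH below is only an example of a typical inetd
     setting; a POSIX shell is assumed for command -v):

      env -i PATH=/usr/bin:/usr/sbin /bin/sh -c \
              'command -v gzip || echo gzip not found'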


Q41: What is the architecture of afbackup ?

A41: Without attempting to discuss what 'architecture' means, I hope
     the following explanations will give some clues:

The software architecture is about as follows:


            programs (afserver, afclient, full_backup, ...)  | use
----------------------------------------------------         V
                | libafbackup.a (special procedures |
libx_utils.a     --------------    used in several  |            ^
(general purpose library)      | afbackup programs) |            |afbackup
---------------------------------------------------------------------------
              | libintl.a (L10N), GNU regex, libz, libdes |      |3rd party
               -------------------------------------------       V
        libc, POSIX system interface and libpthread (afb.3.3.3)

Notes:
* GNU regex comes with afbackup, if not detected by autoconf
* libintl is included and compiled, if no usable system libintl is found
* the programs are in fact fewer binaries whose functionality depends
  on the name they are invoked by, i.e. argv[0]


The runtime architecture is about like that:

           client side            |              server side
                                  |
  xafrestore                      |
      |                           |
      |(invokes)                  |
      |                           |
      V   full/incr_backup/       |             
     afrestore/afverify/...       |
            | (invokes/uses)      |  (network communication)
            V                     |    /
        afclient------------>-----+------------>--afmserver
                       (requests) |     (or)|         | uses
                                  |         |         V
                                  |          -->--afserver
                                  |                |     |
                                  |     (operates) V     V (uses)
                                  |                |   mt,mtx,...
                                  |                |     |
                                  |                |     V(operates)
                                  |                 ---->|
                                  |                      V
                                  |               [storage device]

Notes:
* afclient on the client side is the workhorse program including the
  packer and the server communication.
* high level functionality including index maintenance and so on is
  implemented in full_backup etc. These are the programs mainly meant
  to be used
* afmserver is the multi stream server, in fact just a multiplexing
  frontend for the single stream afserver. Which one is used can be
  chosen by the target TCP port i.e. the service name
* Functionality to operate streamer devices or changers is not
  included in afbackup. System or third-party tools are used
* Generally afbackup duplicates as little already existing
  functionality as possible
* the runtime structure is divided into several programs and the
  build structure into programs and libraries to be able to modify
  and test certain functionality separately from the rest of the
  system. E.g. the packer functionality is completely in libx_utils
  and can be considered a subsystem of its own. In fact afclient can
  be used just like tar


Q42: Why are new files with an old timestamp not saved during incr_backup ?

A42: To recognize that a file is new would require comparing all
     entries of the filesystem against the index contents. With the
     current very compact structure of the index (simply a compressed
     file list with some additional information), such a comparison
     would take at least several seconds per entry, even longer if the
     backup volume contains a really large number of files. An
     incremental backup would then take hours instead of minutes, days
     instead of hours.

     For faster index lookups the index would either have to be kept
     in memory in a sorted fashion (normally not realistic even with
     current memory capabilities), or it would have to be implemented
     completely differently. Commercial products do this. Networker or
     Veritas Netbackup for example implement a kind of database
     containing entries for all saved instances of the filesystem
     entries. Implementing such a database is not very different from
     implementing another filesystem that contains additional
     attributes like backup time, physical position on tape, server
     identification and so on. Besides the fact that such an index may
     become really huge, especially if there are many symlinks,
     directories or tiny files in the saved original filesystem, it
     requires regular consistency checks like a filesystem. With
     Networker I experienced index checks taking more than 20 hours
     for about a terabyte of saved filesystem data. During this time
     no backup or easy restore is possible. If anything disturbs the
     check, it starts over from the beginning.

     Instead of implementing yet another filesystem, an existing one
     might be used. This is what Arcserve does. For each saved
     filesystem entry another one is created in a special directory
     that is maintained by the backup software. I don't know how the
     attributes named above (backup timestamp etc.) are encoded in
     that directory, but it is populated with numerous tiny files. So
     for each entry in the filesystem to back up, another one is
     needed in that directory. Such a directory has the side effect
     that permission checks can easily be offloaded to the system's
     filesystem implementation. If e.g. users should be able to
     see/restore only the files they had write access to, this can
     easily be tested by attempting the appropriate operation on this
     special directory. But implementing things this way makes
     incredibly inefficient use of the filesystems on disk. A huge
     number of entries is created, each containing only little data.
     Some filesystems apply a smaller fragment size here, but anyway,
     the basic structure of such implementations is in my opinion
     questionable.

     In any case the necessary implementation effort is huge. On the
     other hand, explaining to the users that new files with an old
     timestamp - typically from some unpacked tar or similar archive -
     only end up in the backup when explicitly touch(1)ed is a pretty
     tiny exercise. Furthermore it is often not necessary to have
     unpacked tar/cpio/... archives in the incremental backup. These
     data can usually be obtained again from wherever they came from.
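
     For example, to make a freshly unpacked archive show up in the
     next incremental backup despite its old timestamps, the files can
     simply be given a current timestamp (the directory path is of
     course just an example):

      find /data/unpacked-archive -exec touch {} \;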

     A filesystem-like index will not be implemented in afbackup
     any time soon.


Q43: What do the fields in the minimum restore info mean ?

A43: Here is a typical example:

     @@@===--->>> hydra orion 2989 6 303 /tmp/afbsp_6S6_3Of_mNAPV_UA01

     The first part makes the string recognizable within other data,
     e.g. mailbox contents. The next word `hydra' is the identifier of
     the client itself. It will be passed to the server to get the
     right data. The next word `orion' is the hostname of the server,
     the following number the port to contact at that server. The next
     number (6) is a cartridge number, followed by a tape file number
     (303). The last field is the name of a file that contains the
     positions of all pieces of backup necessary to restore all data
     since (the first part, if so configured, of) the last full backup.
     This file will be restored first, from the position indicated by
     the previous fields, before anything else is done. It had been
     written to backup as a temporary copy of the file start_positions
     in the client's var-directory (see FAQ Q21 for more details about
     this file). A temporary copy is saved to backup because during
     disaster recovery it would be disadvantageous if this file, which
     is recovered first, overwrote an existing one or became
     overwritten itself, so that the contents could not be checked
     later, if desired.
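
     A sketch of how such a line can be pulled apart in the shell
     (using the example line from above):

      LINE='@@@===--->>> hydra orion 2989 6 303 /tmp/afbsp_6S6_3Of_mNAPV_UA01'
      set -- $LINE
      # $1=marker  $2=client  $3=server host  $4=port
      # $5=cartridge number  $6=tape file number  $7=positions file
      echo "restore $7 from $3:$4, cartridge $5, tape file $6, client $2"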


Q44: What are those files like /tmp/afbsp_XXXXXXX ? Can I remove them ?

A44: They can be removed, but then a subsequent afverify will complain.
     See Q43 about the contents of such a file.
     A file like this is kept until the next backup to keep afverify
     quiet. Otherwise afverify would complain about that file when it
     is not there. But it no longer needs to be there: it is only
     important that it is in backup, and if the backup succeeds, it IS
     in backup. But because it is in backup, it will be verified during
     the next verify run, and when it is missing, afverify complains.
     This is not really necessary, but it would confuse people. So the
     file is kept until the next backup, so that afverify does not
     complain, and it is automatically removed during the next backup.
     If there are several files of this sort, a backup has probably
     failed at some point. This might indicate some kind of error,
     forced terminations by an administrator, or tests like debugging
     or some other uncontrolled termination. See also FAQ Q21 next to
     tmp_rm_on_backup .
     Basically: Yes, they can be safely removed without any risk.
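
     If they should go away immediately anyway (they will be cleaned
     up with the next backup in any case):

      rm -f /tmp/afbsp_*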