File: HOWTO


                AF's Backup HOWTO
                =================


Index
-----

1. How to optimize the performance to obtain a short backup time ?

2. How to start the backup on several hosts from a central machine ?

3. How to store the backup in a filesystem instead of a tape ?

4. How to use several streamer devices on one machine ?

5. How to recover from a server crash during backup ?

6. How to port to other operating systems ?

7. How to provide recovery from hard crashes (disk crash, ...) ?

8. How to make differential backups ?

9. How to use several servers for one client ?

10. How can I automatically make copies of the written tapes after a backup ?

11: How to redirect network backups through a secure ssh connection ?

12: What's the appropriate way to eject the cartridge after backup ?

13: How to encrypt the stored files and not only compress them ?

14: How to use the multi-stream server ? Anything special there ?

15: How many clients can connect the multi-stream server ?

16: How to get out of the trouble, when the migration script fails ?

17: How to use built-in compression ?

18: How to save database contents ?

19: How to use the ftape driver ?

20: How to move a cartridge to another set due to its usage count ?

21: How to make backups to different cartridge sets by type or by date ?

22: How to achieve independence from the machine names ?

23: How to restrict the access to cartridges for certain clients ?

24: How to recover from disaster (everything is lost) ?

25: How to label a tape, while the server is waiting for a tape ?

26: How to use a media changer ?

27: How to build Debian packages ?

28: How to let users restore on a host, they may not login to ?

29: How to backup through a firewall ?

30: How to configure xinetd for afbackup ?

31: How to redirect access, when a client contacts the wrong server ?

32: How to perform troubleshooting when encountering problems ?

33: How to use an IDE tape drive with Linux the best way ?

34: How to make afbackup reuse/recycle tapes automatically ?

35: How to make the server speak one other of the supported languages ?

36: How to build a Solaris package of the afbackup software ?


D. Do-s and Dont-s

F. The FAQ


--------------------------------------------------------------------------

1. How to optimize the performance to obtain a short backup time ?

Since version 2.7 the client side tries to adapt optimally to the
maximum currently achievable throughput, so the administrator
doesn't have to do much here.
The crucial point is the location of the bottleneck for the throughput
of the backup data stream. This can be one of:

- The streamer device
- The network connection between backup client and server
- The CPU on the backup client (in case of compression selected)

What usually is not a problem:

- The CPU load of the server

The main influence the administrator has on a good backup performance
is the compression rate on the client side. In most cases the bottleneck
for the data stream will be the network. If it is based on standard
ethernet, the maximum throughput without any other network load will be
around 1 MB/sec. With 100 MBit ethernet or a similar technology about
10 MB/sec might be achieved, so the streamer device is probably the
slowest part (with maybe 5 MB/sec for an Exabyte tape). To use this
capacity it is not wise to tie up the client-side CPU with a heavy
data compression load; the compression itself may then become the
bottleneck and lead to poor backup performance. The influence of the
compression rate on the
backup performance can be made clear with the following table. The
times in seconds have been measured with the (unrepresentative)
configuration given below the table. The raw backup duration gives the
pure data transmission time without tape reeling or cartridge loading
or unloading.

 compression program   |  raw backup duration
-----------------------+----------------------
  gzip -1              |    293 seconds         |
  gzip -5              |    334 seconds         |
  compress             |    440 seconds         | increasing duration
  <no compression>     |    560 seconds         |
  gzip -9              |    790 seconds         V


Configuration:
Server/Client machine:
  586, 133/120MHz (server/client), 32/16 MB (server/client)
Network:
  Standard ethernet (10 Mbit, 10BASE2 (BNC/Coax), no further load)
Streamer:
  HP-<something>, 190 kByte/sec

Obviously the bottleneck in this configuration is the streamer.
Anyway it shows the big advantage compression can have on the
overall performance. The best performance is here achieved with
the lowest compression rate and thus the fastest compression
program execution. I would expect that the performance optimum
shifts towards somewhat stronger compression with a faster client
CPU (e.g. the latest Alpha-rocket).

So to find an individual performance optimum I suggest running some
backups with a typical directory containing files and subdirectories
of various sizes. Run these backups manually on the client-side machine
with different compression ratios using the "client"-command as
follows:

/the/path/to/bin/afclient -cvnR -h your_backuphost -z "gzip -1" gunzip \
                             /your/example/directory

Replace "gzip -1" and "gunzip" appropriately for the several runs.
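These trial runs can be wrapped in a small shell loop so that all
compression levels are timed against the same directory. This is only a
sketch; the client path, hostname and sample directory below are the
same placeholders as above and must be adjusted:

```shell
#!/bin/sh
# Sketch: time the same trial backup with several gzip levels.
# CLIENT, BACKUPHOST and SAMPLEDIR are placeholders - adjust them.
CLIENT=${CLIENT:-/the/path/to/bin/afclient}
BACKUPHOST=${BACKUPHOST:-your_backuphost}
SAMPLEDIR=${SAMPLEDIR:-/your/example/directory}

for level in 1 5 9; do
    echo "=== gzip -$level ==="
    time "$CLIENT" -cvnR -h "$BACKUPHOST" -z "gzip -$level" gunzip \
        "$SAMPLEDIR" || echo "(run failed - check the CLIENT path)"
done
```

Pick the setting with the shortest total time; to try compress instead
of gzip, replace the compression and decompression programs accordingly.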


--------------------------------------------------------------------------

2. How to start the backup on several hosts from a central machine ?

The remote startup utility serves this purpose. To set this up,
part of the serverside installation must be performed on each
client on which the backup is to be started from a remote site.
Choose the appropriate option when running the Install-
script or follow the instructions in the INSTALL file.

To start a backup on another machine, use the -X option of the
client-program. A typical invocation is

/the/path/to/client/bin/afclient -h <hostname> -X incr_backup

This starts an incremental backup on the supplied host. Often
-k /path/to/cryptkey must be given as well, if on the remote
side an EncryptionKeyFile is configured, which is recommended. Only
programs on the remote host residing in the directory configured
as Program-Directory in the configuration file of the serverside
installation part of the remote host (default: $BASEDIR/server/rexec)
can be started, no others. The entries may be symlinks, but
they must have the same filename as the programs they point to.

The machine where this command is started may be any machine on
the network having the client side of the backup system installed.
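To start the backup on a whole list of hosts, the call can be put into a
simple loop. This is a sketch only: the afclient path and the host names
are placeholders, and -k /path/to/cryptkey should be added if the remote
sides have an EncryptionKeyFile configured:

```shell
#!/bin/sh
# Sketch: start an incremental backup on several hosts in sequence.
# CLIENT and HOSTS are placeholders - adjust them to your setup.
CLIENT=${CLIENT:-/the/path/to/client/bin/afclient}
HOSTS=${HOSTS:-host1 host2 host3}

fails=""
for h in $HOSTS; do
    "$CLIENT" -h "$h" -X incr_backup 2>/dev/null || fails="$fails $h"
done
[ -z "$fails" ] || echo "backup start failed on:$fails"
```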


--------------------------------------------------------------------------

3. How to store the backup in a filesystem instead of a tape ?

There are several ways to accomplish this. Two options are
explained here. I personally prefer option 2, but they are
basically equivalent.

* Option 1 (using symbolic links)

Assuming the directory where you'd like to store the backup is
/var/backup/server/vol.X with X being the number of the pseudo-
cartridge, change to the directory /var/backup/server and create
a symbolic link and a directory like this:

 ln -s vol.1 vol ; mkdir vol.1

Then create the file `data.0' and a symlink `data' to it with

 touch vol/data.0
 ln -s data.0 vol/data

The directories and symlinks /var/backup/server/vol* must be owned
or at least be writable by the user under whose ID the backup server
is running. The same applies to the directory /var/backup/server.
If this is not root, issue an appropriate chown command, e.g.:

 chown backup /var/backup/server /var/backup/server/vol*

At least two pseudo-cartridges should be used. This is achieved by
limiting the number of bytes to be stored on each of them. So now
edit your serverside configuration file and make e.g. the following
entries (assuming /usr/backup/server/bin is the directory, where the
programs of the server side reside):

Backup-Device:          /var/backup/server/vol/data
Tape-Blocksize:         1024
Cartridge-Handler:      1
Number Of Cartridges:	1000
Max Bytes Per File:     10485760
Max Bytes Per Tape:     104857600
Cart-Insert-Gracetime:  0
SetFile-Command:        /bin/rm -f %d;touch %d.%m; ln -s %d.%m %d; exit 0
SkipFiles-Command:      /usr/backup/server/bin/__inc_link -s %d %n
Set-Cart-Command:       /bin/rm -f /var/backup/server/vol; mkdir -p /var/backup/server/vol.%n ; ln -s vol.%n /var/backup/server/vol ; touch %d.0 ; /bin/rm -f %d ; ln -s data.0 %d;exit 0
Change-Cart-Command:    exit 0
Erase-Tape-Command:     /bin/rm -f %d.[0-9]* %d ; touch %d.0 ; ln -s %d.0 %d ; exit 0

If the directory /var/backup/server/vol/data is on removable media,
you can supply the number of media you would like to use and an
eject-command as follows:

Number Of Cartridges:   10
# or whatever

Change-Cart-Command:    your_eject_command

If a suitable eject-command does not exist, try to write one yourself.
See below for hints.

Furthermore you can add the appropriate umount command before the eject-
command like this:

Change-Cart-Command:    umount /var/backup/server/vol ; your_eject_command

For this to work, the backup serverside must run as root. Install the
backup software supplying the root user when prompted for the backup user,
or edit /etc/inetd.conf and replace backup (or whatever user you configured)
in the 5th column with root, sending a kill -1 to the inetd afterwards.
You must mount the media manually after having inserted it into
the drive. Afterwards run the command /path/to/server/bin/cartready to
indicate that the drive is ready to proceed. This is the same procedure
as with a tape drive.

Each medium you will use must be prepared by creating the file "data.0"
and setting the symbolic link "data" pointing to data.0 as described above.


* Option 2 (supply a directory name as device)

As with option 1, several pseudo-cartridges should be used, at
least two. As above, create a directory to contain the backup data
and a symlink, then chown them to the backup user:

 mkdir -p /var/backup/server/vol.1
 ln -s vol.1 /var/backup/server/vol
 chown backup /var/backup/server/vol*

Using one of the serverside configuration programs or editing the
configuration file, supply a directory name as the backup device.
The directory must be writable by the user under whose ID the
server process is started (whatever you configured during
installation, see /etc/inetd.conf). The backup system then writes
files with automatically generated names into this directory.
The rest of the configuration should, e.g., be set as follows:

Backup-Device:          /var/backup/server/vol
Tape-Blocksize:         1024
Cartridge-Handler:      1
Number Of Cartridges:   100
Max Bytes Per File:     10485760
Max Bytes Per Tape:     104857600
Cart-Insert-Gracetime:  0
SetFile-Command:        exit 0
SkipFiles-Command:      exit 0
Set-Cart-Command:       /bin/rm -f %d ; mkdir -p %d.%n ; ln -s %d.%n %d ;  exit 0
Change-Cart-Command:    exit 0
Erase-Tape-Command:     /bin/rm -f %d/* ; exit 0

A SetFile-Command is mandatory, so this exit 0 is a dummy.
For the further options (using mount or eject commands) refer
to the explanations under * Option 1.


* How to write an eject command for your removable media device ?

If the information in the man-pages is not sufficient or you don't
know where to search, try the following:
Do a grep ignoring case for the words "eject", "offline" and
"unload" over all system header-files like this:

egrep -i '(eject|offl|unload)' /usr/include/sys/*.h

On Linux also try /usr/include/linux/*.h and /usr/include/asm/*.h.
You should find macros defined in headers whose names give hints
to several kinds of devices. Look into the header to see whether the
macros can be used with the ioctl system call. The comments
should tell the details. Then you can eject the media with a
small program along the following lines:

#include <stdio.h>
#include <fcntl.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <your_device_related_header>  /* e.g. <sys/mtio.h> on Linux */

int main(void)
{
  int   res, fd;
  char  *devicefile = "/dev/whatever";

  fd = open(devicefile, O_RDONLY);

  if(fd < 0){
    perror(devicefile);   /* catch error */
    return 1;
  }

  /* YOUR_EJECT_MACRO is the eject/offline/unload macro
     you found in the system headers */
  res = ioctl(fd, YOUR_EJECT_MACRO);

  if(res < 0){
    perror("ioctl");      /* catch error */
    close(fd);
    return 1;
  }

  close(fd);
  return 0;
}

You might want to extend the utility obtainable via ftp from:
ftp://ftp.zn.ruhr-uni-bochum.de/pub/Linux/eject.c and related
files. Please send me any success news. Thanks !


--------------------------------------------------------------------------

4. How to use several streamer devices on one machine ?

Run an installation of the server side for each streamer device,
install everything into a separate directory and give a different
port number to each installed server. This can be done giving each
server an own service name. For the default installation, the
service is named "afbackup" and has port number 2988. Thus, entries
are provided in files in /etc:

/etc/services:
afbackup  2988/tcp

/etc/inetd.conf:
afbackup stream tcp nowait ...

For a second server, you may add appropriate lines, e.g.:

/etc/services:
afbackup2 2989/tcp

/etc/inetd.conf:
afbackup2 stream tcp nowait ...

Note that the paths to the configuration files later in the inetd.conf-
lines must be adapted to each installation. To activate the
services, send a hangup signal to the inetd.
(ps ..., kill -HUP <PID>)

It is important that each of several servers running on the same
host has its own lock file. So, e.g., configure lock files
located in each server's var-directory. If they all share
one lockfile, several servers cannot run at the same time, which
is usually not what you want.

The relations between backup clients and streamer devices on the
server must be unique. Thus the /etc/services on the clients must
contain the appropriate port number for the backup entry, e.g.:

afbackup  2990/tcp

Note that on the clients the service name must always be "afbackup"
and not "afbackup2" or whatever.

As an alternative, you can supply the individual port number in
the clientside configuration. If you do so, no changes must be
made in any clientside system file, here /etc/services.

Do not use NIS (YP) for maintaining the afbackup-services-entry, i.e.
do not add the entry with "afbackup" above to your NIS-master-services-file.
It is anyway better not to use the files /etc/passwd ... as sources
for your NIS-master-server, but to use a copy of them in a separate
directory (as usually configured on Solaris and other Unixes).


--------------------------------------------------------------------------

5. How to recover from a server crash during backup ?

Some devices have the problem that the end-of-tape mark is not
written if power goes down while writing to the tape. Even worse,
when power is up again, the information about the current head position
gets corrupted, even if no write access was in progress at power-down.
Some streamers are furthermore unable to start writing at a tape
position that is still followed by records: if there are e.g. 5 files on
tape, it is impossible to go to file 2 and start to write there. An
I/O-error will be reported.

The only way to solve this is to tell the backup system to start to
write at the beginning of the next cartridge. If the next cartridge
has e.g. the label-number 5, log on to the backup server, become root
and type:

  /your/path/to/server/bin/cartis -i 5 1


--------------------------------------------------------------------------

6. How to port to other operating systems ?


* Unix-like systems *

This is not that difficult. GNU make is mandatory, but this is
usually no problem. A good way to start is to grep for AIX or sun
over all .c- and .h-files, edit them as needed and run the make.
You might want to run prosname.sh to find out a specifier for
your operating system. This specifier will be defined as a macro
during compilation (exactly: preprocessing).

An important point is the x_types.h-file. Here the types should be
adapted as described in the comments in this file, lines 28-43.
Insert ifdef-s as needed, as was done for the OSF/1 operating system on
Alpha (macros __osf__ and __alpha). Note that depending on the macro
USE_DEFINE_FOR_X_TYPES the types will be #define-d instead of
typedef-d. This gives you more flexibility, if one of those
possibilities causes problems.

The next point is the behaviour of the C library concerning the
errno variable in case the tape comes to its physical end. In most
cases errno is set to ENOSPC, but not always (e.g. AIX is special).
This can be adapted modifying the definition of the macro
END_OF_TAPE (in budefs.h). This macro is only used in if-s as shown:
  if(END_OF_TAPE) ...
Consult your man-pages for the behaviour of the system calls on
your machine. It might be found under rmt, write or ioctl.

The next is the default name of the tape device. Define the macro
DEFAULT_TAPE_DEVICE (in budefs.h) appropriately for your OS.

The statfs(2) system call is a little pathological: it has a different
number of arguments depending on the system. Consult your man-pages
for how it should be used. statfs is only used in write.c

There may be further patches to be done, but if your system is close
to POSIX this should be easy. The output of the compiler and/or the
linker should give the necessary hints.

Please report porting successes to af@muc.de. Thanks.

Good luck !



* Win-whatever *

This is my point of view:

Porting to Microsoft's Features-and-bugs-accumulations is systematically
made complicated by the Gates-Mafia. They spend a lot of time on taking
care, that it is as difficult as possible to port to/from Win-whatever.
This is one of their monopolization strategies. Developers starting to
write programs shall have to make the basic decision: "Am I gonna hack
for Micro$oft's "operating systems", or for the others ?" Watching the
so-called market this decision is quite easy: Of course they will program
for the "market leader". And as few as possible of what they produce
should be usable on other ("dated") platforms. Companies like Cygnus
are providing cool tools (e. g. a port of the GNU-compiler) to make
things easier but due to the fact, that M$ are not providing so many
internals to the public, in my opinion porting is nonetheless an
annoying job. Thank Bill Gates for his genius strategies.

In short, at the moment I'm not gonna provide information on how to port
to Micro$oft platforms. If somebody does a port, I won't hinder them
but will not provide any support for it. As this software (like the most
on Unix) heavily relies on POSIX-conformance and Mafia$oft has announced,
that the "POSIX-subsystem for NT" will not be shipped anymore in the near
future (BTW they discourage to use it at all "cause of security problems"
(Bullshit) - see the Microsoft web pages), the porting job will either
substitute all POSIX-calls by Win32-stuff (super-heavy efforts), or bring
only temporary fun (see above).


--------------------------------------------------------------------------

7. How to provide recovery from hard crashes (disk crash, ...) ?

A key to this is the clientside StartupInfoProgram parameter. This
command should read the standard input and write it to some place
outside of the local machine - to be more precise - not to a disk
undergoing backups or containing the clientside backup log files.
The information written to the standard input of this program is
the minimum information required to restore everything after a
complete loss of the saved filesystems and the client side of the
backup system. Recovery can be achieved using the restore-utility
with the -e flag (See: PROGRAMS) and supplying the minimum recovery
information to the standard input of restore. Several options exist:

- Write this information to a mail program (assuming the mail folders
  are outside of the filesystems undergoing backup) and send this
  information to a backup user. Later the mailfile can be piped into
  the restore-utility (mail-related protocol lines and other unneeded
  stuff will be ignored). For each machine that is a backup client,
  an individual mail user should be configured, because the minimum
  restore information does NOT contain the hostname (to be able to
  restore to a different machine, which might make perfect sense in
  some situations)

- Write the information into a file (of course: always append),
  that resides on an NFS-mounted filesystem, possibly for security
  reasons exported especially to this machine only. To be even more
  secure, the exported directory might be owned by a non-root user,
  who is the only one allowed to write to this directory. This way
  exporting a directory with root access can be avoided. Then the
  StartupInfoProgram should be something like:
   su myuser -c "touch /path/to/mininfo; cat >> /path/to/mininfo"
  The mininfo-file should have a name that allows deducing the
  name of the backup client that wrote it. E.g. simply use the
  hostname for this file.

- Write the information to a file on floppy disk. Then the floppy
  disk must always be in the drive, whenever a backup runs. The
  floppy could be mounted using the amd automounter as explained in
  ftp://ftp.zn.ruhr-uni-bochum.de/pub/linux/README.amd.floppy.cdrom
  or using the mtools usually installed for convenience. In the
  former case the command should contain a final sync. In the
  latter case the file must be first copied from floppy, then
  appended the information, finally copied back to floppy e.g. like
  this:
   mcopy -n a:mininfo /tmp/mininfo.$$; touch /tmp/mininfo.$$; \
       cat >> /tmp/mininfo.$$; mcopy -o /tmp/mininfo.$$ a:mininfo; \
       /bin/rm -f /tmp/mininfo.$$; exit 0
  Note, that the whole command must be entered in one line using
  the (x)afclientconfig command. In the configuration file lineend
  escaping is allowed, but not recognized by (x)afclientconfig. An
  alternative is to put everything into one script, that is started
  as StartupInfoProgram (Don't forget to provide a good exit code
  on successful completion)
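As an illustration of the second option, the following shell function
appends its standard input to a file named after the client host. The
target directory is an assumption and must point to your (NFS-mounted)
directory; put the function body into a small script and configure that
script as the StartupInfoProgram:

```shell
# Sketch of a StartupInfoProgram for the NFS-file option.
# The default directory is a placeholder - replace it with your
# NFS-mounted target directory.
write_mininfo() {
    dir=${1:-/path/to/mininfo-dir}
    f="$dir/`hostname`"
    # always append, never truncate; exit code 0 signals success
    touch "$f" && cat >> "$f"
}
```

The function returns the exit status of cat, so a successful append
yields the exit code 0 that afbackup expects.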

My personal favourite is the second option, but individual preferences
or requirements might lead to different solutions. There are more
options here. If someone thinks I have forgotten an important one,
feel free to email me about it.

It might be a good idea to compile afbackup linked statically with
all required libraries (building afbackup e.g. using the command
make EXTRA_LD_FLAGS=-static when using gcc), install it, run the
configuration program(s), if not yet done, tar everything and
put it to a floppy disk (if enough space is available).

To recover from a heavy system crash perform the following steps:
- Replace bad disk(s) as required
- Boot from floppy or cdrom (the booted kernel must be network-able)
- Add the backup server to /etc/hosts and the following line to
  /etc/services: afbackup 2988/tcp
- Mount your new disk filesystem(s) e.g. in /tmp/a and in a way, that
  this directory reflects your original directory hierarchy below
  / (like usually most system setup tools do)
- Untar your packed and statically linked afbackup-distribution, but
  NOT to the place where it originally lived (e.g. /tmp/a/usr/backup),
  because it will be overwritten, if you also saved the clientside
  afbackup-installation, which I strongly recommend.
- Run the restore command with -e, providing on stdin the minimum
  restore information saved outside of the machine:
  /path/to/staticlinked/afrestore -C /tmp/a -e < /path/to/mininfo-file

Boot sector contents are NOT restored by this procedure. For Linux
you will have to reinstall lilo, but this is usually not a problem.


--------------------------------------------------------------------------

8. How to make differential backups ?

For the purposes of this document, a differential backup means: save
all filesystem entries modified since the previous full backup, not
only those modified since the last incremental backup.

This task can be accomplished using the -a option of the incr_backup
command, which tells incr_backup to keep the timestamp. If -a is
omitted even once, further differential backups are no longer possible,
because the timestamp is modified whenever -a is not given. So if
differential backups are required, you have to do without incremental
backups.
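Analogous to the crontab examples elsewhere in this document, a schedule
combining a weekly full backup with daily differential backups might
look like this (paths and times are examples); note the -a on every
incr_backup call:

```
# full backup starting Friday evening at 10 PM
0 22 * * 5   /path/to/client/bin/full_backup -d
# differential backup Monday - Thursday at 10 PM (-a keeps the timestamp)
0 22 * * 1-4 /path/to/client/bin/incr_backup -d -a
```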


--------------------------------------------------------------------------

9. How to use several servers for one client ?

Several storage units can be configured for one client. A storage unit
is a combination of a hostname, a port number and a cartridge set number.
Several servers can be configured on one machine, each operating its own
streamer device or directory for storing the data.

The storage units are configured by the first three parameters of the
client side configuration: hostnames, port numbers and cartridge set
numbers, respectively. Several entries can be made for each of these
parameters. The port numbers and/or cartridge set numbers can be
omitted, or fewer of them than hostnames can be supplied; the defaults
then apply. If more port or cartridge set numbers than hostnames are
given, the superfluous ones are ignored. The lists of hostnames and
numbers can be separated by whitespace and/or commas.
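Schematically, a configuration with three storage units could look like
the fragment below. The parameter names in angle brackets are
placeholders, not literal keywords; substitute the first three
parameters as they are actually named in your clientside configuration
file:

```
<hostnames parameter>:       srv1, srv2, srv3
<port numbers parameter>:    2988, 2988, 3988
<cartridge sets parameter>:  1, 1, 2
```

Here srv3 would be contacted on port 3988 and written to cartridge
set 2; shortening or omitting the number lists makes the defaults
apply.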

When a full or incremental backup starts on a client, it tests the
servers one after the other to see whether they are ready to service
it. If none is ready, it waits for a minute and tries again.

With each stored filesystem entry, not only the cartridge number and
file number on tape are stored, but now also the name of the host the
entry is stored on and the appropriate port number. Thus entries can
be restored without the user or administrator having to know where
they now are. This all happens transparently and without additional
configuration effort. For older backups, the first entry of each list
(hostname and port) is used. Therefore, in case of an upgrade, the
first entries MUST be those that applied before the upgrade.

If there are several clients, the same order of server entries should
not be configured for all of them; this would probably cause most of
the backups to go to the first server, while the other(s) remain
unused. The entries should be made in a way that achieves a good
balancing of the storage load. Other considerations are:

- Can the backup be made to a server in the same subnet as the
  client?
- Has this software been upgraded? Then the first entry should be
  the same server as configured before (see above)
- The data volume to be saved on the clients (should be balanced)
- The tape capacity of the servers
- Other considerations ...


--------------------------------------------------------------------------

10. How can i automatically make copies of the written tapes after a backup ?

For this purpose a script has been added to the distribution. Its name
is autocptapes and it can be found in the /path/to/client/bin directory.
autocptapes reads the statistics output and copies all tapes from the
first accessed tape through the last one to the given destination.
Copying begins at the first written tape file, so the whole tape
contents are not copied over and over again.

The script has the following usage:

autocptapes [ -h <targetserver> ] [ -p <targetport> ] \
                   [ -k <targetkeyfile> ] [ -o cartnumoffset ]

targetserver    must be the name of the server the tapes shall be copied
                to (default, if not set: the source server)
targetport      must be the appropriate target server port (default, if
                not set: the source port)
targetkeyfile   the file containing the key to authenticate to the target
                server (default: the same file as for the source server)
cartnumoffset   the offset to be added to the source cartridges' numbers
                to get the target cartridge numbers (may be negative,
                default: 0). This is useful if e.g. copies of tapes 1-5
                shall be on tapes 6-10; then simply an offset of 5 would
                be supplied.

The script can be added to the client side configuration parameter
ExitProgram, so that it reads the report file containing the backup
statistics. This may look e.g. as follows:

ExitProgram:		/path/to/client/bin/autocptapes -o 5 < %r

Note that this is a normal shell-interpreted line and %r can be used
in several commands separated by semicolon, && or || ...
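For example, the report could be both mailed to an administrator and
fed to autocptapes in one ExitProgram line (the mail command and the
offset are examples):

```
ExitProgram:	mail -s "Backup report" root < %r; /path/to/client/bin/autocptapes -o 5 < %r
```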

WARNING: If several servers are configured for the client, this
automatic copying is strongly discouraged, because cartridge numbers
on one server do not necessarily have anything to do with those on
another server. It should be carefully figured out how a mapping of
source and target servers and cartridge numbers could be achieved.
This is a subject of future implementations.


--------------------------------------------------------------------------

11: How to redirect network backups through a secure ssh connection ?

ssh must be up and working on the client(s) and server(s). On the
server, an sshd must be running. Then port forwarding can be used.
As afbackup does not use a privileged port, the forwarding ssh need
not run as root; any user is OK. To enable afbackup to use a secure
ssh connection, no action is necessary on the server. On the client,
the following steps must be taken:

- In the clientside configuration file, configure localhost as the
  server (the ssh forwarder seems to accept connections only from
  the loopback interface). No afbackup server process should be
  running on this client. If an afbackup server is running, a
  different port than the default 2988 must be configured. This
  different port number must then be passed to the ssh forwarder
  when it is started.

- Start the ssh forwarder. The following command should do the job:

   ssh -f -L 2988:afbserver:2988 afbserver sleep 100000000

Explanations: -f makes ssh run in the background, so & is not
 necessary. -L tells ssh to listen locally on port 2988. This
 (first) port number must be replaced if a different port must be
 used due to an afbackup server running locally or other
 considerations. afbserver must be replaced with the name of the
 real afbackup server. The second port number 2988 is the one
 where the afbackup server really expects connections and that
 was configured on the client before trying to redirect over ssh.
 The sleep 100000000 is an arbitrary command that does not
 terminate within a sufficient time interval.

Now the afbackup client connects to the locally running ssh, which
in turn connects to the remote sshd, which connects to the afbackup
server awaiting connections on the remote host. So all network
traffic runs between ssh and sshd and is thus encrypted.
A simple test can be run on the client (portnum must only be
supplied if != 2988):

 /path/to/client/bin/client -h localhost -q [ -p portnum ]

If that works, any afbackup operation should, too.

If it is not acceptable that the ssh connection is initiated from
the client side, the other direction can be set up using the -R
option of ssh. Instead of the second step in the explanations above
perform:

- On the server start the command:

   ssh -f -R 2988:afbserver:2988 afbclient sleep 100000000


--------------------------------------------------------------------------

12: What's the appropriate way to eject the cartridge after backup ?

In my opinion it is best to exploit the secure remote start option
of afbackup. Programs present in the directory configured as the
Program-Directory on the server side can be started from a client
using the -X option of afclient. Either write a small script that
does the job and put it into the configured and created (if not
already present) directory (don't forget execute permission), or
simply create a symbolic link to mt in that directory (e.g. type
ln -s `which mt` /path/to/server/rexec/mt). Then you can eject the
cartridge from any client by running

/.../client/bin/afclient -h backupserver -X "mt -f /dev/whatever rewoffl"


--------------------------------------------------------------------------

13: How to encrypt the stored files and not only compress them ?

A program that performs the encryption is necessary; let's simply call
it des, which serves here as an example program for what we want to
achieve. The basic problem must be mentioned first: to supply the key
it is necessary either to type the key in twice or to supply it on the
command line using the option -k. Typing the key in is useless in an
automated environment. Supplying the key in an option makes it visible
in the process list, which any user can display using the ps command
or (on Linux) by reading the pseudo-file cmdline present in each
process's /proc/<pid>/ directory. The des program tries to hide the
key by overwriting the 8 significant bytes of the argument, but this
does not always work. Anyway, the des program shall serve as the
example here. Note that the des program will usually return a nonzero
exit status (?!?), so the message "minor errors occurred during
backup" has no special meaning in that case.

Another encryption program comes with the afbackup distribution and is
built if the libdes is available and des-encrypted authentication is
switched on. The program is called __descrpt. See the file PROGRAMS
for details on this program. The advantage of this program is that
no key has to be supplied on the command line visible in the process
list. The disadvantage is that the program must not be executable by
intruders, because they would be able to simply start it and decrypt.
To circumvent this to a certain degree, a filename can be supplied to
this program, from which the key will be read. In this case this key
file must be access restricted instead of the program itself.
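As a sketch of preparing such an access-restricted key file (the key
material is a dummy; mktemp stands in for a fixed path of your choice,
e.g. below the afbackup configuration directory):

```shell
# Sketch: create an access-restricted key file for __descrpt to read.
# In real use choose a fixed path under a root-owned directory; mktemp
# is used here only to keep the example self-contained.
keyfile=$(mktemp)
umask 077                                   # no group/other permissions
printf '%s\n' 'example-key-material' > "$keyfile"
chmod 400 "$keyfile"                        # owner read-only
ls -l "$keyfile"
```

The key file created this way can then be named to __descrpt as
described above, instead of passing the key on the command line.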

If only built-in compression is to be used, everything is quite simple.
The BuiltinCompressLevel configuration parameter must be set > 0 and
the en- and decryption programs specified as CompressCommand and
UncompressCommand. If an external program should be used for
compression and uncompression, it is a little more difficult:

Because the client side configuration parameter CompressCommand is NOT
interpreted in a shell-like manner, no pipes are possible here. E.g. it
is impossible to supply something like  gzip -9 | des -e -k lkwjer80723k
there.

To fill this gap the helper program __piper is added to the distribution.
This program gets a series of commands as arguments. The pipe symbol |
may appear several times in the argument list, indicating the end of
one command and the beginning of the next. Standard output and standard
input of consecutive commands are connected as in a shell pipeline.
No other special character is interpreted except the double quote,
which can delimit arguments consisting of several words separated by
whitespace. The backslash serves as escape character for double quotes
or the pipe symbol. The startup of a pipe created by the __piper
program is expected to be much faster compared to a command like
sh -c "gzip | des -e ...",  where a shell with all its initializations
is used.

Example for the use of __piper in the client side configuration file:

CompressCommand:  /path/to/client/bin/__piper gzip -1 | des -e -k 87dsfd

UncompressCommand: /path/to/client/bin/__piper des -d -k 87dsfd | gunzip


--------------------------------------------------------------------------

14: How to use the multi-stream server ? Anything special there ?

The multi-stream server should be installed properly as described in the
file INSTALL or using the script Install. It is strongly recommended to
configure a separate service (i.e. TCP port) for the multi-stream
server. Thus backups can go either to the single-stream server or to
the multi-stream server. The index mechanism of the client side handles
this transparently; the information where the data has been saved does
not have to be supplied for restore.

The single-stream server might be used for full backups, because it is
generally expected to perform better and provide higher throughput. The
multi-stream server has advantages with incremental backups, because
several clients can be started in parallel to scan through their disk
directories for things that have changed, which may take a long time.
If there are several file servers with a lot of data, it might be
desirable to start the incremental backups at the same time, because
otherwise they would take too much time. With the single-stream server
configured as the default in the client side configuration, the
incr_backup program can connect to the multi-stream server using the
option -P with the appropriate port number of the multi-stream server.
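Assuming, for example, that the multi-stream server has been set up on
port 2989 (this port number is an assumption; use whatever service was
actually configured), the incremental backups could be directed to it
like this:

```
# incremental backups Monday - Thursday at 10 PM via the multi-stream server
0 22 * * 1-4 /path/to/client/bin/incr_backup -d -P 2989
```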

As it is not possible for several single-stream servers to operate on
the streamer at the same time, it is also not possible for a
multi-stream server and a single-stream server to do so in parallel.
Serving several streams at once is the multi-stream server's job alone.

The clients must be distinguishable for the multi-stream server. It puts
the data on tape in packets prefixed with a header containing the
clients' identifiers. When dispatching during read, it must know which
client is connected and what data it needs. The default identifier is
the official hostname of the client, or the string "<client-program>"
if the program afclient is used. Several clients with the same
identifier must not connect, because that would mix up their data
during read, which is obviously not desirable. A client identifier can
be configured in the client side configuration file using the parameter
ClientIdentifier or using the option -W (who), which every client side
program supports. It might be necessary to do this e.g. if a client's
official hostname changes. In this case the client would not receive
any data anymore, because the server then looks for data on tape for
the client with the new name, which it won't find.

To make it easy to find out and store the clients' identifiers, the
identifier is included in the statistics report, which can be used
(e.g. sent to an admin via E-mail) in the client side exit program.


--------------------------------------------------------------------------

15: How many clients can connect to the multi-stream server ?

This depends on the maximum number of file descriptors per process on
the server. On a normal Unix system this number is 256. The backup
system needs some file descriptors for logging, storing temporary files
and so on, so the maximum achievable number of clients is somewhere
around 240. It is not recommended to really run that many clients at
the same time; this has NOT been tested.
Anyway, the number of file descriptors per process can be increased on
most systems, if 240 is not enough.
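How the limit is inspected and raised is OS-specific; in a POSIX shell
the soft limit can at least be displayed like this:

```shell
# Print the current per-process file descriptor soft limit.
# Raising it (ulimit -n <n>, or the system-wide limits configuration)
# requires sufficient privileges and is system-dependent.
ulimit -n
```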


--------------------------------------------------------------------------

16: How to get out of trouble when the migration script fails ?

This depends on where the script fails. If it says:
"The file .../start_positions already exists."
there is no problem. You might have attempted migration before.
If this is true, just remove this file or rename it. If it does
not contain anything, it is useless anyway.

When the script tells you that some files in .../var of your
client installation contain different (inconsistent) numbers,
then it is getting harder. Locate the last line starting with
~~Backup: in your old style minimum restore info and take the
number at the end of it. The file `num' in your clientside var
directory should contain the same number. If it does not, check
the current numbers of the file index files, also in the
clientside var directory. Their name is determined by the
configuration parameter IndexFilePart. The file `num' should
contain the highest number found in the filenames. If not, edit
the file num so that it does. Nonetheless this number must also
match the one noted earlier. If it does not, this is weird. If
your minimum restore info contains only significantly lower
numbers, you have a real problem, because then your minimum
restore info is not up to date. In this case migration makes no
sense and you can skip the migration step, starting anew with
fingers crossed heavily.

If the file `num' in the var directory is missing, then you
must check your configuration. If you have never made a backup
before, then this file is indeed not there and migration makes
not too much sense.

If the full_backup program you supply is found not to be
executable, please double-check your configuration and make
sure that you are a user with sufficient power.
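The consistency check between the `num' file and the index file names
can be sketched as a small shell function. The index file prefix
"findex" is an assumption for illustration; your IndexFilePart
parameter determines the real name:

```shell
# Sketch: the `num' file should contain the highest number appearing
# in the index file names in the clientside var directory.
check_num_consistency() {
    var=$1
    highest=$(ls "$var" 2>/dev/null | grep '^findex' |
        sed 's/^findex//' | sort -n | tail -1)
    num=$(cat "$var/num" 2>/dev/null)
    if [ "$num" = "$highest" ]; then
        echo "consistent: num=$num"
    else
        echo "MISMATCH: num=$num, highest index=$highest"
    fi
}
```

Called as  check_num_consistency /path/to/client/var  it prints one
line either way; on a mismatch, adjust the num file as described above.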


--------------------------------------------------------------------------

17: How to use built-in compression ?

The distribution must be built selecting the appropriate
options to link the zlib functions in. When using the Install
script you are asked for the required information. Otherwise
see the file INSTALL for details.

zlib version 1.0.2 or higher is required to build the package
with the built-in compression feature. If zlib is not available
on your system (on Linux it is usually installed by default),
get it from some ftp server and build it first before attempting
to build afbackup.

The clientside configuration parameter BuiltinCompressLevel
turns on built-in compression. See FAQ Q27 for what to do when
the compression algorithm is to be changed.
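A clientside configuration line like the following switches it on (the
level 6 is just an example; any value > 0 enables the feature, higher
values compress better at the cost of more CPU time):

```
BuiltinCompressLevel:	6
```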


--------------------------------------------------------------------------

18: How to save database contents ?

There are several ways to save a database. Which one to choose
depends on the properties of the database software. The
simplest way is to

1.) Save the directory containing the database files

This assumes that the database stores the data in normal
files somewhere in the directory structure. Then these
files can be written to tape. But there is a problem here,
because the database software might make use of caching or
generally keep necessary information in memory as long as
some database process is running. Then just saving the
files and later restoring them will quite surely corrupt the
database structure and at least make some (probably long
running) checks necessary, if not render the data unusable.
Thus it is necessary to shut down the database before
saving the files. This is often unacceptable, because users
cannot use the database while it is not running. Consult
the documentation of your database to see whether it can be
saved or dumped online, and read on.

2.) Save the raw device

This assumes that the database software stores the data
on some kind of raw device, maybe a disk partition, a solid
state disk or whatever. Then it can be saved by prefixing the
name with /../, with no space between the prefix and the raw
device name. Instead of /../ the option -r can be used in
the client side configuration file. By default the data is
not compressed, because one single wrong bit in the saved
data stream might make the whole rest of the data stream
unusable during uncompression. If compression is nonetheless
desired, the prefix //../ can be used, or the option -R.
For online/offline issues the same applies here as if the
data were kept in normal files.
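For example, backup entries for raw partitions might look like this
(the device names are examples); the first would be saved uncompressed,
the second with compression:

```
/../dev/sdb1
//../dev/sdb2
```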

3.) Save the output of a dump command

If your database has a command to dump all its contents,
it can be used to directly save the output of this command
to the backup. In the best case, this dump command and its
counterpart, which reads what the dump command has written
and thus restores the whole database or parts of it, are
able to do the job online without shutting down the database.
Such a pair of commands can be supplied in the client side
configuration file as follows: in double quotes, write a
triple bar ||| , followed by a space character and the dump
command. This may be a shell command, maybe a command pipe
or sequence or whatever. Then another triple bar must be
written, followed by the counterpart of the dump command
(again, any shell-style command is allowed). After all that,
an optional comment may follow, prefixed with a triple
sharp ###. Example:

 ||| pg_dumpall ||| psql db_tmpl ### Store Postgres DBs


--------------------------------------------------------------------------

19: How to use the ftape driver ?

There is nothing very special here. All mt commands in the
server side configuration must be replaced with appropriate
ftmt versions. The script __mt should be obsolete here, as
it only handles the special case when the count value is 0,
e.g. for skipping tape files with  mt fsf <count> . ftmt
should be able to handle count=0, so simply replace __mt
with ftmt in the default configuration. For the tape device,
supply /dev/nqftX with X being the appropriate serial number
assigned to the device by your OS (ls /dev/nqft* will list
all available devices; try ftmt ... to find out the correct
one).


--------------------------------------------------------------------------

20: How to move a cartridge to another set due to its usage count ?

This can be done automatically by configuring an appropriate
program as Tape-Full-Command on the server side. An example
script has been provided and installed with the distribution.
It can be found as /path/to/server/bin/cartagehandler. As is,
it maintains 3 cartridge sets. If a tape has become full more
than 80 times and it is in set 1, it is moved to set 2. If
it became full more than 90 times and it is in set 1 or 2,
it is moved to set 3. If the number of cycles exceeds 95, the
cartridge is removed from all sets.
To accomplish this task, the script gets 3 arguments:
the number of the cartridge currently getting full, the number
of its complete write cycles up to now, and the full path to
the serverside configuration file, which is modified by the
script. If the Tape-Full-Command is configured like this:

 TapeFull-Command:  /path/to/server/bin/cartagehandler %c %n %C

then it will do the job as expected. Feel free to modify this
script to fit your needs. The comments inside should be helpful;
look for "User configured section" and the like in the comments.
This script is not overwritten when upgrading, i.e. when
installing another version of afbackup. Please note that the
configuration file must be writable for the user under whose id
the server starts. The best way is to make the configuration
file owned by this user.
See also the documentation for the program __numset; it is very
helpful in this context.


--------------------------------------------------------------------------

21: How to make backups to different cartridge sets by type or by date ?

Sometimes people want to make the incremental backups to other sets
of cartridges than the full backups. Or they want to change the
cartridge set weekly. Here the normal cartridge set mechanisms can
be used (client side option -S). If the difference is the type
(full or incremental), the -S can be hardcoded into the crontab
entry. If the difference is the date, a simple little script can
help. If e.g. in even weeks the backup should go to set 1 and in
odd weeks to set 2, the following script prints the appropriate
set number when called:

#!/bin/sh

expr '(' `date +%W` % 2 ')' + 1

This script can be called within the crontab entry. Typical crontab
entries will thus look as follows, assuming the script is installed
as /path/to/oddevenweek:

# full backup starting Friday evening at 10 PM
0 22 * * 5  /path/to/client/bin/full_backup -d -S `/path/to/oddevenweek`
# incremental backup starting Monday - Thursday at 10 PM
0 22 * * 1-4 /path/to/client/bin/incr_backup -d -S `/path/to/oddevenweek`


--------------------------------------------------------------------------

22: How to achieve independence from the machine names ?

- Use a host alias for the backup server and use this name in the
  clients' configuration files. Thus, if the server changes, only
  the hostname alias must be changed to address the new server

- Configure a ServerIdentifier, e.g. reflecting the hostname alias
  on the server side

- Use the client identifiers in the clientside configuration files.
  Set them to strings that can easily be remembered
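A sketch of the relevant configuration lines (the identifier strings
and the comment syntax are illustrative):

```
# clientside configuration file:
ClientIdentifier:	fileserver-main

# serverside configuration file:
ServerIdentifier:	backup-alias
```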

Notes:

After performing the steps above, no hostname should appear in any
index file, minimum restore info or other varying status information
files any more.
If the server now changes, the server identifier must be set to
the value the other server had before, and the client will accept
it after contacting it. To contact the correct server, the client
configurations would have to be changed to the new hostname; here
the hostname alias serves to make things easier. No client
configuration must be touched, just the hostname alias assigned
to a different real hostname in NIS or whatever nameservice is
used.
If restore should go to a different client, the identifier of the
original client the files have been saved from must be supplied
to get the desired files back. Option -W will be used in most cases.


--------------------------------------------------------------------------

23: How to restrict the access to cartridges for certain clients ?

Access can be restricted on a per-cartridge-set basis. For each
cartridge set a check can be configured to decide whether a client
has access to it or not. Refer to the afserver.conf manual page
under Cartridge-Sets for how to specify the desired restrictions.


--------------------------------------------------------------------------

24: How to recover from disaster (everything is lost) ?

There are several stages of recovery. First, for the client side:

* Only the data is lost, afbackup installation and indexes are still
  in place

Nothing special here. To avoid searching the index, the -a option of
afrestore is recommended. Instead, afrestore '*' can be used, but
this will search the index and might take longer.

* Data, afbackup installation and indexes are gone, minimum restore
  information is available

Install afbackup from whatever source. Then run afrestore -e. If you
haven't configured afbackup after installing, pass the client's unique
identifier to the program using the option -W. After pressing <Return>
to start the command, you are expected to enter the minimum restore
info. It is necessary that it is typed in literally as written by
the backup system; the easiest way is to cut and paste. The line
containing this information need not be the first one entered, and
there may be several lines of the expected format, also from other
clients (the client identifier is part of the minimum restore info).
The latest available one from the input coming from the client with
the given or configured identifier will be picked and used. Thus
the easiest way to use the option -e is to read from a file containing
the expected information. If you have forgotten the identifier of the
crashed client, look through your minimum restore infos to find it.
To restore only the indexes, use option -f instead of -e.

* Data, afbackup installation and indexes are gone, minimum restore
  information is also lost

Find out which tape(s) have been written to the last time the backup
succeeded for the crashed client. Possibly see the mails sent by the
ExitProgram for more information about this. Install afbackup on the
client. Now run afrestore with option -E, pass it the client identifier
with option -W and one or more tape specifiers with the hostname and
port number (if it's not the default) of the server where the client
did its backups. Examples:

 afrestore -E -W teefix 3@backupserver%3002
 afrestore -E -W my-ID 4-6,9@buhost%backupsrv
 afrestore -E -W c3po.foodomain.org 3@buserv 2@buserv

The third example will scan tapes 3 and 2 on the server buserv using
the default TCP service to retrieve the minimum restore information.
The first will scan tape 3 on host backupserver, using port number
3002 (TCP). The second one will scan tapes 4 through 6 and 9 on the
server buhost, connecting to the TCP service backupsrv. This name
must be resolvable from /etc/services, NIS or similar, otherwise
this command will not work.
While scanning the tapes, all minimum restore informations found (for
any client) will be output, so a different one than that with the
latest timestamp can be used later with option -e. If the tapes
should only be scanned for minimum restore informations, without
restoring everything afterwards, option -l can be supplied. Operation
will then terminate after all the given tapes have been scanned and
all found minimum restore informations have been printed.


For the server side:

The var directory of the server is crucial for operation, so it is
strongly recommended to save it, too (see below under Do-s and Dont-s).
The afbackup system itself can be installed from the latest sources
after a crash.
To get the var directory back, run afrestore -E or -e, depending on
the availability of the minimum restore information, as explained
above, and pass it a directory to relocate the recovered files to.
Then make sure that no afserver process is running anymore (kill
them if they don't terminate voluntarily), and move all files from
the recovered and relocated var directory to the one that is really
used by the server. If you are doing this as root, don't forget to
chown the files to the userid the afbackup server is started with.
If the server's var directory has been stored separately as
explained in Do-Dont, the different client ID must be supplied to
the afrestore command using the option -W, like when the full_backup
was run, e.g.
 afrestore -E -W serv-var -V /tmp/foo -C /tmp/servvar 2@backuphost%backupport
The directory /tmp/foo must exist and can be removed afterwards.
See the man pages of afrestore for details of the -E mode.


--------------------------------------------------------------------------

25: How to label a tape, while the server is waiting for a tape ?

Start the program label_tape with the desired options, additionally
supplying the option -F, but without option -f. Wait for the program
to ask you for confirmation. Do not confirm yet; first put the tape
you want to label into the drive. (The server does not perform any
tape operation while the label_tape program is running.) Now enter
yes to proceed. If the label is the one expected by the server and
the server is configured to probe the tape automatically, it will
use the tape immediately; otherwise, eject the cartridge.


--------------------------------------------------------------------------

26: How to use a media changer ?

To use a media changer, a driver program must be available. On many
architectures mtx can be used. On Sun under Solaris-2 the stctl
package is very useful. On FreeBSD chio seems to be the preferred
tool. Another driver available for Linux is the sch driver coming
together with the mover command (see changer.conf.sch-mover for a
link). Check the documentation of each package for how to use it.
Changer configuration files for these four come with the afbackup
distribution (changer.conf.mtx, changer.conf.stctl,
changer.conf.chio and changer.conf.sch-mover); they should work
immediately with most changers. mtx and stctl can be obtained
from the place afbackup has been downloaded from.

Very short:
mtx uses generic SCSI devices (e.g. /dev/sg0 ... on Linux); stctl
ships a loadable kernel module that autodetects changer devices
and creates device files and symlinks /dev/rmt/stctl0 ... in the
default configuration. With stctl it is crucial to write enough
target entries to probe into the /kernel/drv/stctl.conf file.
Note that the attached mtx.c is a special implementation I was
never able to test myself. It is quite likely that it behaves
differently than the official mtx, so it will not work with the
attached changer.conf.mtx file. The mover command also comes with
a kernel driver called sch.

Once the driver command is installed and proven to work (play
around a little with it), the configuration file for it must be
created. It should reside in the same directory as the server-side
config file, but this is arbitrary. The path to the file must be
given in the server configuration file as a parameter, as in this
example:

Changer-Configuration-File:     %C/changer.conf

%C will be replaced with the path to the confdir of the server side.
See the manual pages of the cart_ctl command about what this file
must contain.

Now the device entry in the server configuration must be extended.
The new format is:

<streamerdevice>[=<drive-count>]@<device>#<num-slots>[^<num-loadbays>]

Whitespace is allowed between the special characters for readability.
An example:

/dev/nst0 @ /dev/sg0 # 20

This means: streamer /dev/nst0 is attached to the media handler at
/dev/sg0, which has 20 slots. The part = <drive-count> is optional.
It must be set appropriately if the streamer is not in position 1
in the changer. (Note that with cart_ctl every count starts at 1,
independent of the underlying driver command. This abstraction is
done in the configuration.) The part ^ <num-loadbays> is also
optional and must be omitted if the changer does not have a
loadbay. A full example:

/dev/nst1 = 2 @ /dev/sg0 # 80 ^ 2

It is recommended to also configure a lockfile for the changer,
with full path. For example:

Changer-Lockfile:        /var/adm/backup/changer.lock

To check the configuration, the command cart_ctl should now be run,
simply with option -l. An empty list of cartridge locations should
be printed; just the header should appear. Now the backup system
must be told where the cartridges currently are. This is done using
the option -P of cart_ctl. To tell the system that tapes 10-12 are
in slots 1-3 and tapes 2-4 in slots 4-6, enter:

cart_ctl -P -C 10-12,2-4 -S 1-6

Verify this with cart_ctl -l. To tell the system that tape 1 is in
drive 1, enter:

cart_ctl -P -C 1 -D 1

(The drive number 1 is optional, as this is the default.)
Optionally the system can store locations for all cartridges not
placed inside any changer. A free text line can be given with the
-P option, which might be useful, for example:

cart_ctl -C 5-9,13-20 -P 'Safe on 3rd floor'

To test the locations database, one might move some cartridges
around, e.g. cartridge 3 into the drive (assuming the cartridge is
in some slot and its location has been told to the system as
explained above):

cart_ctl -m -C 3 -D

Then load another cartridge into the drive; the first one will be
automatically unloaded to a free slot, if the
List-free-slots-command in the configuration works properly.

Instead of telling the system which tapes are located in the slots,
one can run an inventory, which loads them all into the drive one
after another and reads their labels. To do this, enter:

cart_ctl -i -S 1-6

For further information about the cart_ctl command, refer to the
manual pages.

To make the server also use the cart_ctl command for loading tapes,
the SetCartridgeCommand in the server configuration must be set as
follows:

Setcart-Command:  %B/cart_ctl -F -m -C %n -D

The parameter Cartridge-Handler must be set to 1.

Now the whole setup can be tested by making the server load a tape
via a client command:

/path/to/client -h serverhost [ -p serverport ] -C 4

Cartridge 4 should now be loaded into the drive. Try with another
cartridge. If this works, the afbackup server is properly
configured to use the changer device. Have fun.


--------------------------------------------------------------------------

27: How to build Debian packages ?

Run the debuild command in the debian subdirectory of the
distribution.


--------------------------------------------------------------------------

28: How to let users restore on a host, they may not login to ?

Here is one suggestion for how to do that. It uses inetd and the
tcp wrapper tcpd on the NFS server side, where login is not
permitted, and the identd on the client, where the user sits. It
starts the X11 frontend of afrestore, setting the display to the
user's host:0. Furthermore the ssu program (silent su, only for use
by the superuser, not writing to syslog) is required. Its source
can be obtained from the same download location where afbackup has
been found; it is part of the albiutils package.

Perform the following steps:

* Add to /etc/services:

remote-afbackup		789/tcp
(or another unused service number < 1024)


* Add to or create the tcpd configuration file /etc/hosts.allow (or similar,
  man tcpd ...):

in.remote-afbackup : ALL : rfc931 : twist=/usr/sbin/in.remote-afbackup %u %h


* Add to /etc/inetd.conf and kill -HUP the inetd:

remote-afbackup   stream tcp  nowait  root  /usr/sbin/tcpd  in.remote-afbackup

(if the tcpd is not in /usr/sbin, adapt the path. If it's not
installed: Install it. It makes sense anyway)


* create a script /usr/sbin/in.remote-afbackup and chmod 755 :
#!/bin/sh
#
# $Id: HOWTO,v 1.1 2004/07/08 20:34:48 alb Exp alb $
#  
# shell script for starting the afbackup X-frontend remotely through
# inetd, to be called using the 'twist' command of the tcp wrapper.
# Note: on the client the identd must be running or another RFC931
# compliant service
#

if [ $# != 2 ] ; then
   echo Error, wrong number of arguments
   exit 0
fi

remuser="$1"
remhost="$2"

if [ "$remuser" = "" -o "$remhost" = "" ] ; then
   echo Error, required argument empty
   exit 0
fi

# check for correct user entry in NIS
ushell=`/usr/bin/ypmatch "$remuser" passwd 2>/dev/null | /usr/bin/awk -F: ' {print $7}'`
if [ _"$ushell" = _ -o _"$ushell" = "_/bin/false" ] ; then
   echo "You ($remuser) are not allowed to use this service"
   exit 0
fi

gr=`id "$remuser"| sed 's/^.*gid=[0-9]*(//g' | sed 's/).*$//g'`

# check, if group exists
ypmatch $gr group.byname >/dev/null 2>&1
if [ $? -ne 0 ] ; then
  echo "Error: group $gr does not exist. Please check"
  exit 0
fi

DISPLAY="$remhost":0
export DISPLAY

/path/to/ssu "$remuser":$gr -c /usr/local/afbackup/client/bin/xafrestore

####### end of script ######

* Edit the last line with ssu to reflect the full path to ssu, that you
  have built from the albiutils package.

Now a user can start xafrestore remotely simply by entering:

telnet servername 789

(or whatever port has been chosen above).
For user-friendliness, this command can be put into a script
with an appropriate name.
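
Such a wrapper script might look like this (host name and port are
the site-specific values chosen above):

```shell
#!/bin/sh
# start the remote xafrestore frontend on the backup host;
# adapt host name and port to your installation
exec telnet servername 789
```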

Thanks to Dr. Stefan Scholl at Infineon Technologies for this
concept and part of the implementation.


--------------------------------------------------------------------------

29: How to backup through a firewall ?

Connections to port 2988 (or whatever port the service is assigned)
must be allowed in the direction towards the server (TCP is used
for all afbackup connections). If the multi stream service is to be
used, its port (default 2989, if not changed) must also be open in
the same direction.
If the remote start option is desired (afclient -h hostname -X ...),
connections to the target port 2988 (i.e. afbackup) of the client
named with option -h must be permitted from the host this command
is started on.
If the encryption key for the client-server authentication is kept
secret and protected with care on the involved computers, the
server port of afbackup is not exploitable, so it may be reachable
from the world without any security risk. The only undesirable
thing that might happen is a denial of service attack opening high
numbers of connections to that port. The inetd will probably limit
the number of server programs started simultaneously, but clients
will then no longer be able to open connections to run their
backups.
The connections permitted through the firewall should in any case
be restricted to the hosts participating in the backup service.
If initiating connections from outside of the firewall is unwanted,
an ssh tunnel can be started from the inside network to a machine
outside, which thus acts as a kind of proxy server. The outside
backup clients must be configured to connect to the proxy machine
for backup, i.e. the listening TCP port at the other end of the ssh
tunnel is what the outside world sees. It should be quite clear
that ssh tunneling reduces throughput because of the additional
encryption/decryption effort. See the ssh documentation and HOWTO
Q11 for more information.
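
Such a tunnel could be sketched as follows (host names are
placeholders; run this on a machine inside the firewall):

```shell
# forward port 2988 on the outside proxy back through the tunnel
# to the internal afbackup server
ssh -N -R 2988:backupserver:2988 user@proxyhost
# note: for outside clients to reach the forwarded port, sshd on
# proxyhost must permit this (GatewayPorts in sshd_config)
```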


--------------------------------------------------------------------------

30: How to configure xinetd for afbackup ?

Here are the appropriate xinetd.conf entries. As long as a
convenient way of configuration like the one for inetd is not
included in afbackup, the entries have to be made manually,
followed by a kill -USR2 to the xinetd.

For the single stream service:

service afbackup
{
        flags           = REUSE NAMEINARGS
        socket_type     = stream
        protocol        = tcp
        wait            = no
        user            = backup
        server          = /usr/local/afbackup/server/bin/afserver
        server_args     = /usr/local/afbackup/server/bin/afserver /usr/local/afbackup/server/lib/backup.conf
}

For the multi stream service:

service afmbackup
{
        flags           = REUSE NAMEINARGS
        socket_type     = stream
        protocol        = tcp
        wait            = yes
        user            = backup
        server          = /usr/local/afbackup/server/bin/afmserver
        server_args     = /usr/local/afbackup/server/bin/afmserver /usr/local/afbackup/server/lib/backup.conf
}

Replace the user value with the appropriate user permitted to operate
the device to be used (see: INSTALL).

--------------------------------------------------------------------------

31: How to redirect access, when a client contacts the wrong server ?

This situation might arise when localhost has been configured and
restore should be done on a different client, but the same server.
Or it might happen that the backup service has moved, no host alias
has been used during backup, and the machine cannot be renamed.

Here xinetd can help, because it is able to redirect ports to
different machines and/or ports. On the machine that does not have
the service, but is contacted by a client, put an entry like this
into the xinetd configuration file (normally /etc/xinetd.conf) and
(re)start xinetd (sending it a kill -USR2):

service afbackup_redirect
{
        flags           = REUSE
        socket_type     = stream
        protocol        = tcp
        port            = 2988
        redirect        = backupserver 2988
        wait            = no
}

Replace backupserver with the real name of the backup server host.
If the multi stream service is to be used, add another entry:

service afmbackup_redirect
{
        flags           = REUSE
        socket_type     = stream
        protocol        = tcp
        port            = 2989
        redirect        = backupserver 2989
        wait            = no
}


--------------------------------------------------------------------------

32: How to perform troubleshooting when encountering problems ?

Here are some steps that will help narrow the search and possibly
even solve the problem:

Check if the environment variable BACKUP_HOME is set. If yes, this
might lead to all kinds of problems, as afbackup evaluates this
setting and considers it the base directory of the afbackup
installation. Maybe the name of this variable should be changed in
afbackup ...

Start on the client side:

If full_backup or incr_backup report cryptic error messages,
probably in the client side logfile (check this file; maybe
cleartext error messages can be found there), try to run the
low-level afclient command querying the server. Don't forget to
supply the authentication key file, if one is configured, with
option -k, because afclient is a low-level program that can be run
standalone and does NOT read the configuration file. An afclient
call to check basic functionality can be:

/path/to/afclient -qwv -h <servername> [ -p <service-or-port> ] \
                      [ -k /path/to/keyfile ]

After a short time (< 2 seconds) it should print something like this:
Streamer state: READY+CHANGEABLE
Server-ID: Backup-Server_1
Actual tape access position
Cartridge: 8
File:      1
Number of cartridges: 1000
Actual cartridge set: 1

If afclient does not finish within half a minute or so and later
prints the error message 'Error: Cannot open communication socket',
then there is a problem on the server side or with the network
communication. Try to telnet to the port where the afbackup server
(i.e. usually inetd) is awaiting connections:

  telnet <servername> 2988

(or whatever your afbackup service portnumber is). You should see some
response like this:

Trying 10.142.133.254...
Connected to afbserver.mydomain.de.
Escape character is '^]'.
afbackup 3.3.4
 
AF's backup server ready.
h>|pρ(O

Type return until the afserver terminates the connection, or type
Ctrl-] and enter quit at the telnet prompt to terminate telnet.

If you don't see a response like the one indicated above, but
instead 'Connection refused', then the service is not properly
configured on the server host. Please check the /etc/inetd.conf or
/etc/xinetd.conf file for proper afbackup entries and make sure the
service name is known either in the local /etc/services file or
from NIS or NIS+ or whatever naming service is used. Send a kill
-HUP <PID> with the PID of inetd, or -USR2 with the PID of xinetd
(if that one is used), to make the daemon reread its configuration.
If afterwards the connection is still not possible, see the syslog
of the server for error messages from the (x)inetd. They will
indicate what the real problem is. The syslog file is usually one
of the following files:
 /var/adm/messages
 /var/adm/SYSLOG
 /var/log/syslog
 /var/log/messages
 /var/adm/syslog/syslog.log

On AIX use the errpt command, e.g. with option -a to get recent syslog
output (see man-page).

If you don't get any connection response when starting the telnet
command, there is a network problem. If you can ping the remote
machine, but can't telnet to the afbackup port, try to connect to
some other port, e.g. the real telnet port (without 3rd argument)
or the daytime port (type telnet <remotehost> 13). If they work,
there is probably a firewall between the afbackup client and the
server that is blocking connections to the afbackup port. Then
check the firewall configuration and permit the afbackup and
afmbackup connections, in both directions if you want to remote
start by afbackup means.

The error message 'An application seems to hold a lock ...'
indicates that there is already an afbackup program like
full_backup or afverify running on the same host. Use ps to find
out what that process is. If you need to know what this program is
doing, see the client side log for hints. If that doesn't give any
clue, try to trace that program or the subprocess afbackup, which
is running in most cases when one of the named programs is also
running. To trace a program use:
 truss     on Solaris
 strace    on Linux, SunOS-4, FreeBSD
 par       on IRIX
 trace     on HP-UX

For AIX a system tracer is announced. Until now only scripts can be
used, which in turn run trace -a -d -j <what-you-want-to-get>,
trcon, trcstop and trcrpt, but this must be done with real care,
because chances are high that the filesystem the trace is written
to (normally /tmp) will fill up. See the manpages of the named
commands for details.
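
On Linux, for example, an already running process can be traced
like this (the PID is a placeholder; -f follows forks, -o writes
the trace to a file):

```shell
strace -f -o /tmp/afbackup.trace -p 1234
```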

Very useful is lsof, which helps to find out what the file
descriptors in system calls like read, write, close, select etc.
refer to. Run lsof either with no arguments and grep for something
specific, or with the arguments -p <PID>, where <PID> is e.g. the
process ID of afbackup or afserver.

If there is something wrong on the server, e.g. the server starts
up but immediately terminates with or without any message in the
serverside log, it might help to trace the (x)inetd using the flag
-f with strace (or truss or ...) and -p with the PID of the inetd.
The -f flag makes the trace follow subprocess forks and execs, so
one can probably see why the server terminates. If this does not
help, one can try to catch the server in a debugger after startup.
This requires the server to be built debuggable. The easiest way to
achieve this: after building afbackup, run

 make clean

in the distribution directory and then run

 make afserver DEBUG=-g [ OPTIMIZE=-DORIG_DEFAULTS ]

The ORIG_DEFAULTS part is needed if you built afbackup using the
Install script. Now do NOT run make install, but copy the files
over to the installation directory using cp, thus overwriting the
files in there. If you moved the original binaries out of the way,
don't forget to chown the copied files to the user configured in
the /etc/(x)inetd.conf file. Otherwise they can't be executed by
(x)inetd.
Then add the option -D to the afserver or afmserver configured in
the (x)inetd.conf file. The inetd.conf entry will then look e.g.
like this:

afbackup stream tcp nowait backup /usr/local/afbackup/server/bin/afserver /usr/local/afbackup/server/bin/afserver -D /usr/local/afbackup/server/lib/backup.conf

or the xinetd.conf entry as follows:

service afbackup
{
        flags           = REUSE NAMEINARGS
        socket_type     = stream
        protocol        = tcp
        wait            = no
        user            = backup
        server          = /usr/local/afbackup/server/bin/afserver
        server_args     = /usr/local/afbackup/server/bin/afserver -D /usr/local/afbackup/server/lib/backup.conf
}

Send a kill -HUP <PID> to the PID of the inetd or -USR2 to xinetd.
Now, when any client connects to the server, the afserver or
afmserver process sits in an endless loop awaiting either the
attach of a debugger or the USR1 signal causing it to continue.
Please note that during a full_backup or incr_backup the server
will probably be contacted not only once, but several times.
Furthermore the afmserver starts the afserver in slave mode as a
subprocess, also passing it the -D flag, so this process must also
be kill -USR1'ed or caught in a debugger. Attaching the debugger
gdb works by passing the binary as first argument and the process
ID as second argument, e.g.:

 gdb /path/to/afserver 2837

Now you see lines similar to these:

0x80453440 in main () at server.c:3743
3743:     while(edebug);   /* For debugging the caught running daemon */
(gdb)

At the gdb prompt set the variable edebug to 0:
(gdb) set edebug=0

Enter n to step through the program, s to possibly step into
subroutines, c to continue, break <functionname> to stop in certain
functions, finish to continue until return from the current
subroutine, etc. See the man-page of gdb or enter help for more
details. With dbx and graphical frontends it's quite similar. It is
also possible to first start the debugger and then attach a
process. Supply only the binary to the debugger when starting, then
e.g. with gdb enter  attach 2837  (if that's the PID). This also
works with xxgdb or ddd (a very fine program !).
The described calling structure and the possibly repeated server
startups can make debugging a little complicated, but that's the
price for a system comprising several components running
concurrently or being somewhat independent from each other. On the
other hand it makes development and testing easier and less error
prone.

Debugging the client side is not as complicated. Building the
client side debuggable works the same way as explained above,
except that the make step must have afclient as target:

 make afclient DEBUG=-g [ OPTIMIZE=-DORIG_DEFAULTS ]

For the installation the same applies as above: do NOT run make
install, but copy the files to the installation directory using cp.


--------------------------------------------------------------------------

33: How to use an IDE tape drive with Linux the best way ?

As the IDE tape driver on Linux seems to have problems working
well, the recommendation is to use the ide-scsi emulation driver.
Here is how Mr. Neil Darlow managed to get his HP Colorado drive to
work properly:

The procedure, for my Debian Woody system with 2.4.16 kernel, was
as follows:

1) Disable IDE driver access to the Tape Drive in lilo.conf
   append="hdd=ide-scsi"

2) Ensure the ide-scsi module is modprobe'd at system startup by
   adding it to /etc/modules

3) Install the linux mt-st package for the SCSI ioctl-aware mt
   program

4) Modify the Tape Blocksize parameter in server/lib/backup.conf
   Tape Blocksize: 30720

After all this, you can access the Colorado as a SCSI Tape Drive
using /dev/nst0. Then full_backup and afverify -v work flawlessly.
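
To verify that the emulated SCSI tape drive is visible, the mt
program from the mt-st package can be asked for the drive status
(device name as above):

```shell
mt -f /dev/nst0 status
```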


--------------------------------------------------------------------------

34: How to make afbackup reuse/recycle tapes automatically ?

There are two parameters in the client side configuration that
affect reusing tapes. One of them is NumIndexesToStore. A new index
file is started with each full backup. For all existing indexes the
backup data listed inside them is protected from being overwritten
on the server. This is achieved by telling the server that all
tapes the data has been written to are write protected. The
parameter NumIndexesToStore tells the client side how many indexes
are kept in addition to the current one, which is needed in any
case. Older index files beyond that number are removed and the
related tapes freed. A common pitfall is that the number configured
here is one too high: if the number is e.g. 3, the current index
file plus 3 older indexes are kept, not 3 in total. Note
furthermore that afbackup only removes an older index when the next
full backup has succeeded.

With the other parameter, DaysToStoreIndexes, the number of days
can be configured that index file contents may reach. Still a new
index file is created on every full backup. That is, an index file
may contain references to tapes and data that are in fact older
than configured by this parameter. Nonetheless the index file is
kept, to be able to completely restore a state of the given age,
which also requires older data. E.g.: to restore a state that is 20
days old, the previous full backup, which may be e.g. 25 days old,
is also needed, together with the data from the following
incremental, level-X or differential backups.
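
As an illustration (the exact parameter spelling must be checked
against your client configuration file; the names below follow the
text above), keeping the current index plus two older ones and
limiting index contents to roughly a month:

```
NumIndexesToStore:      2
DaysToStoreIndexes:     31
```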

The server side also keeps track of which tapes are needed by which
client. When a client tells the server a new current list of tapes
to be write-protected, the server overwrites the previously stored
list for that client. The lists are lines in the file
.../var/precious_tapes. It may happen that a client is no longer in
backup, but was before. Then the associated tapes must be freed
manually on the server(s), either by removing the appropriate line
in the precious_tapes file (not while a server is running !) or by
issuing a server message using a command like this:
 /path/to/afclient -h <server> [ -p <service> ] [ -k /path/to/keyfile ] \
                      -M "DeleteClient:  <client-identifier>"
The setting for the <client-identifier> can be taken from the
outdated client's configuration file (default: the official
hostname) or from the precious_tapes file on the server: it is the
first column. Using the command makes sure the file remains in a
consistent state, as the server locks the files in the
var-directory during modification.

When a server refuses to overwrite tapes, but there is no obvious
reason for this behaviour, the precious_tapes file on the server
should be checked as mentioned above, and furthermore the
readonly_tapes file. Possibly tapes have been set to read-only mode
some time ago, but nobody remembers when or why. Note that afbackup
never sets tapes to read-only by itself. This can only be done
manually.


--------------------------------------------------------------------------

35: How to make the server speak another of the supported languages ?

If your system's gettext uses the settings made by the setlocale
function or supports one of the functions setenv or putenv, then
the option -L of af(m)server can be used to set a locale on the
command line in the /etc/(x)inetd.conf file. GNU gettext is in most
cases not built to use setlocale, due to compatibility problems.
Fortunately the glibc supports both setenv and putenv, so the
option is usually available. If supplying the commandline option
does not work, environment variables can be used:

The environment variable LANG must be set in the server's
environment. To achieve that, the command from the inetd.conf file
can be put into a script where the LANG environment variable is set
first, e.g.

#!/bin/sh
#
# this is a script e.g.
#    /usr/local/afbackup/server/bin/afserverwrapper
#
LANG=it
export LANG

exec /usr/local/afbackup/server/bin/afserver /usr/local/afbackup/server/bin/afserver /usr/local/afbackup/server/lib/backup.conf

# end of script


Do the same for afmserver. Then replace the command in
inetd.conf with

/usr/local/afbackup/server/bin/afserverwrapper afserverwrapper

When using the xinetd, environment settings can be made by adding
a line to the appropriate section in the configuration file, e.g.:

   env  =   LANG=de

so a complete xinetd entry for afserver would be:

service afbackup
{
        flags           = REUSE NAMEINARGS
        env             = LANG=de
        socket_type     = stream
        protocol        = tcp
        wait            = no
        user            = backup
        server          = /usr/local/afbackup/server/bin/afserver
        server_args     = /usr/local/afbackup/server/bin/afserver /usr/local/afbackup/server/lib/backup.conf
}

If the multi-stream server is configured to run permanently, the
LANG setting can simply be done in the start script, as in the
script above.


--------------------------------------------------------------------------

36: How to build a Solaris package of the afbackup software ?

NOTE: only /usr/local/afbackup is supported as base directory for
      afbackup with this procedure and the supplied packaging
      files; furthermore libz must be used, and libdes for
      encryption

* Run the Install script as usual (possibly several times if
  required), leave the target directories at the default values,
  and answer the question whether to install the software with 'no'

* Run the script ./build_local_inst
  (this creates a subdirectory root containing the install image)

* Run the command
   pkgmk -o -d . -b `pwd` -f afbackup.sun.map AFbackup

  This creates the Solaris package AFbackup in the current directory
  (specified by -d .) with the name AFbackup. If this name is to be
  changed, the file pkginfo must be modified as well.


--------------------------------------------------------------------------