Copyright (c) 2002-2008 by Heinz-Josef Claes (see README)
Published under the GNU General Public License v3 or any later version
A *much* more detailed description of storeBackup is available as a pdf
file at http://storebackup.org or via web browser.
If possible, please consult it instead of this readme.
CONTENTS
--------
- overview
- license
- important hint
- installation (for GNU/Linux, BSD and other Unixes)
- getting more information / needing help
- general functionality
- how does it work in general?
- renaming backups
- how does it work with 'lateLinks'?
- storing the backup via nfs
- mounting read only
- parameters
- configuration file
- command line parameters
- including / excluding files and directories
- strategies to delete old backups
- monitoring
- explanations to the statistical output
- limitations
- old stuff from README.1ST
OVERVIEW
--------
- Copies directory hierarchies recursively into another location, by
date (e.g. /home/ => /var/bkup/2002.12.13_04.27.56/). Permissions are
preserved, so users with access to the backup directory can recover
their files themselves.
- If you want to back up multiple independent directories, look at the options
includeDir / excludeDir or, better, followLinks in storeBackup.pl
- File comparisons are done with MD5 checksums, so no changes go unnoticed.
- Hard-links unchanged backed up files to old versions and identical
files within the backed up tree.
- Can hard-link between independent backup series (from different machines)
- Compresses large files (that don't match exclusion patterns).
- Manages backups and removes old ones.
LICENSE
-------
storeBackup is licensed under the GPL v3 or any later version
Copyright (C) Dr. Heinz-Josef Claes (2000-2009) <hjclaes@web.de>
and Nikolaus Rath <Nikolaus@rath.org> (2008)
(who made substantial contributions to version 2)
This program is free software: you can redistribute it and/or modify
it under the terms of the GNU General Public License as published by
the Free Software Foundation, either version 3 of the License, or
(at your option) any later version.
This program is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
GNU General Public License for more details.
You should have received a copy of the GNU General Public License
along with this program. If not, see <http://www.gnu.org/licenses/>.
IMPORTANT HINT
--------------
storeBackup is a tool for backing up a file system tree on GNU/Linux
or other Unixes to a separate directory. For reasons of security you
have to mount the directory that is going to be backed up as read
only. This makes it impossible for storeBackup to destroy the original
data. No such case is known to the author after intensive testing on
large file systems (about 3 million files) over a period of more than eight
years (without mounting read only). This is a safety precaution you should
use to protect yourself and your data, because this program is distributed
in the hope that it will be useful, but WITHOUT ANY WARRANTY; without
even the implied warranty of MERCHANTABILITY or FITNESS FOR A
PARTICULAR PURPOSE. See below how to mount read only.
INSTALLATION
------------
The tar file consists of two directories: 'bin' and 'lib' with some files.
Simply unpack the archive wherever you want to install storeBackup:
cd 'where ever you want'
tar jxf .../storeBackup.tar.bz2
and add the resulting 'bin' directory to your $PATH (or call the programs
with their full path).
If you are a Debian user, you can copy the shell script cron-storebackup
to /etc/cron.daily/storebackup. Refer to the file for further instructions.
In order for storeBackup to function, you need:
- /usr/bin/env
and in your $PATH:
- perl (with Berkeley DB, which is part of the common perl distribution)
It should run with perl5.6 or newer.
- md5sum
- bzip2
- cp
- mknod
- mount (for checking the file system type)
- lastly, any other compression program (e.g. gzip) you want to use
If you are using FreeBSD or other versions of Unix, you need the program md5sum
in your $PATH. If this program is not available on your system, I have appended
a tar file 'md5sum.tar'. Unpack the tar file and compile the program, for
example:
gcc -O2 -o md5sum md5.c md5sum.c
Then install md5sum into a directory that is in your $PATH variable.
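For example (the target directory is only an illustration; any directory in
your $PATH will do):
cp md5sum /usr/local/bin/
chmod 755 /usr/local/bin/md5sum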
If you have problems installing storeBackup on other Unix systems, don't
hesitate to contact me.
To my knowledge, storeBackup runs on GNU/Linux, FreeBSD, Solaris and AIX.
GETTING MORE INFORMATION / NEEDING HELP
----------------------------------------------
You can contact me via email (hjclaes@web.de).
There is also an EXAMPLES file in the distribution.
Debian users should look at
http://packages.debian.org/unstable/utils/storebackup.html
GENERAL FUNCTIONALITY
---------------------
storeBackup is a disk-to-disk backup tool for GNU/Linux. It should also run
on other Unix-like machines. You can directly browse through the
backed up files (locally, via NFS, SAMBA or whatever), which allows
users to restore files easily and quickly: they only have to copy (and
possibly uncompress) the file. There is also a tool for the administrator
to easily restore (sub) trees, and an option that allows single backups
of specific times to be deleted without affecting the other existing
backups.
This package consists of the following tools:
storeBackup.pl - performs the backups
storeBackupUpdateBackup.pl
- if you chose the option 'lateLinks' in
storeBackup, you have to set them late
with this program (see below)
storeBackupRecover.pl - recovers files or (sub) trees from the backups
(especially for spanning multiple users)
storeBackupVersion.pl - analyze the versions of backed up files
storeBackupls.pl - lists backed up directories (versions) with
information (week day, age of backup)
storeBackupConvertBackup.pl - convert (very) old backups to new format
(see file _ATTENTION_)
storeBackupDel.pl - deletes old backups (same algorithms as in
storeBackup.pl).
storeBackup_du.pl - evaluates the disk usage in one or more
backup directories
storeBackupMount.pl - pings the server, mounts file system(s),
calls storeBackup, unmounts file system(s).
It writes a log file and has detailed
error handling
For your convenience I have added the following scripts:
llt - show atime, ctime and mtime; llt -h gives a
usage info
multitail.pl - more robust than `tail -f` for n files
Use the man command to get a detailed description of the parameters, e.g.:
man storeBackup.pl
HOW DOES IT WORK IN GENERAL?
----------------------------
storeBackup makes a backup of one directory to another; it does not care
where this location is (same disk, a different disk, or via NFS over the
network). You should use another disk, or better another computer, to
store the backup. The target directory must be on a Unix virtual file
system which supports hard links; backing up to a SAMBA share is not
possible. Naturally, you can also mount the source directory via NFS
and back up into a local file system. In this case, it's good to have a
fast network. Each backup appears below the target directory as a
directory named date_time (yyyy.mm.dd_hh.mm.ss) which storeBackup creates.
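A minimal invocation might look like this (the paths and the series name are
only examples; -s, -b and -S are described in the options section below):
storeBackup.pl -s /home -b /backup -S mySeries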
There are several optimizations that have been implemented to reduce
disk usage:
- The files to be backed up will be compressed (default bzip2) as
discrete files in the backup. Files with definable suffixes (like
.gz, which is part of the default value) will not be compressed. It is
also possible to configure storeBackup so that it does not compress
anything.
- If a file with the same contents exists in the previous backup, the
new backup will only be a hard link to the other one. (This
mechanism depends on the contents, not on a file name or path!) If you
rename a file or directory or move sub trees around, it will not cost
you additional space in the backup.
- You can also check older backups than the last one for files with
the same contents. But this is normally not worth the effort. You
can also check backups from *other* machines for files with the same
contents, which can be very efficient.
- If files with the same contents exist in the currently performed backup,
they will be hard linked to each other (and naturally to the older ones
in the existing backups).
As a result, only changes resulting in different file contents will be
stored (compressed) and will require disk space. Normally, the
required disk space is less than the required space of the
original. But this depends on the number of backups and changes.
There are several optimizations to improve performance. The first
backup is *very* much slower than the following ones, because all the data
has to be compressed and/or copied. StoreBackup has the ability to take
advantage of multiprocessor machines.
StoreBackup creates special files in the root of the backup called
.md5CheckSums.info and .md5CheckSums or .md5CheckSums.bz2
(default). Do not delete these files! They contain all the
information about the original files. You can use this information to
write your own tools to restore or to analyze the
backups.
When started, storeBackup will read .md5CheckSums and create its own
databases (dbm files) in $TMPDIR or --tmpdir (default is /tmp). If you
back up a large number of files, the required space can be several
dozens of megabytes. If you do not have enough memory to cache the dbm
file, I recommend using a separate hard disk (if available) for better
performance.
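If /tmp is too small or too slow, you can redirect the temporary files,
e.g. with the -T / --tmpdir option (the path is only an example):
storeBackup.pl -s /home -b /backup -T /scratch/storeBackup-tmp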
RENAMING BACKUPS
----------------
storeBackup recognizes its backups by the format of their names:
YYYY.MM.DD_HH.MM.SS
If you change this name, it will not be recognized as a backup and
therefore not be deleted. So this is an appropriate trick to 'archive'
your backups forever.
If you want to do so, change the name to
YYYY.MM.DD_HH.MM.SS-<something>
It is very important to use this schema, because when checking for
late linked backups (see lateLinks), storeBackup searches recursively
beginning from 'topLevel' for backups. It will ignore directories with
the format described above. If you change the name e.g. simply to 'xmas',
then storeBackup (and other programs) will recursively search in
'xmas' for backups, which simply makes no sense and can take some time.
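For example, to archive a backup forever (the date is only an example):
mv 2008.04.30_14.03.05 2008.04.30_14.03.05-xmas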
HOW DOES IT WORK WITH 'LATELINKS'?
----------------------------------
If you save a backup to a remote server via NFS or another network file
system, then usually most of the time is spent setting hard
links, since the NFS file system sends an individual request for each hard
link over the network.
You can speed up things a lot (more than a factor of 10), if you use the
option lateLinks (possibly in combination with lateCompress). If you use
lateLinks to back up from one local disk to another local disk, you can
reduce your backup window time by about 50%. You can also speed up the
backup window for your very first backup using lateLinks in combination
with lateCompress because compression can be done asynchronously.
With lateLinks, no hard links are set; only the file
.storeBackupLinks/linkFile.bz2 is written in the backup. This file
contains, among other things, the information about which hard links still
have to be set. (If you saved with lateCompress, it also contains the
information about which files still have to be compressed.)
The directory .storeBackupLinks also contains files called linkTo and
linkFrom<number>. The contents of these files tell 'to' which other backup
links are still missing (linkTo) and 'from' which other backup links are
still missing (linkFrom).
After backing up (several times) with lateLinks you have to call
storeBackupUpdateBackup.pl later (normally on the backup server) to set
all the links, compressions and permission settings. After calling
storeBackupUpdateBackup.pl your backup is in the same state as if you
never used option lateLinks.
Naturally, using lateLinks has some impacts you should consider:
A backup made with option lateLinks is effectively an incremental
backup! If you delete one of the backups that the unperformed
hard links point to, you will definitely lose data. The cleaning
functionality (deletion of old backups) takes this into consideration.
So: never ever delete a backup unless you are sure that all backups have
finished hard linking! You can simply check this with
storeBackupUpdateBackup.pl within a few seconds (if nothing has to be
done).
If you accidentally deleted the wrong backup or you killed a backup at
a very inappropriate time, storeBackupUpdateBackup.pl will tell you what's
wrong and help you to repair it (if possible).
Generally, using lateLinks is not as robust as not using it. Therefore
you should call storeBackupUpdateBackup.pl regularly (e.g. with cron at
night) and check if everything is fine (grep ERROR <logfile>).
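A sketch of such a nightly job on the backup server; the '-b' option for the
top level backup directory and the log file path are assumptions, check
`man storeBackupUpdateBackup.pl` for the real option names:
# set pending hard links and compressions, then look for problems
storeBackupUpdateBackup.pl -b /backup > /var/log/storeBackupUpdate.log 2>&1
grep ERROR /var/log/storeBackupUpdate.log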
The reason why using lateLinks is not as robust as the normal usage of
storeBackup.pl is as follows:
- without lateLinks, storeBackup.pl directly tries to link to the same
file in an older backup. If this fails, it will copy / compress the file
instead (and link to that one in the future).
- with lateLinks, storeBackup does not care if there is a link. (If the
backup you refer to was also made with lateLinks and not yet hard linked,
there simply is nothing.) So if a file is missing in that backup (for
whatever reason), it is also lost in the new backup.
All this will not be a problem if you run storeBackupUpdateBackup.pl
e.g. every night to get full backups.
STORING THE BACKUP VIA NFS
--------------------------
Let's assume that the server where you want to write your backup via
NFS is called 'nfsserver' and the path to the backup is /storeBackup.
You can then use the following entry in /etc/exports on 'nfsserver'
(example for GNU/Linux, may differ on other Unix-like operating systems):
/storeBackup 192.168.1.0/24(async,rw,no_root_squash)
You probably have to adapt the IP address and the netmask to your needs.
It is important to use 'no_root_squash', so the client root user has root
permissions on the mounted file system. Use 'async' to get a much better
write performance (see `man mount` for further explanations).
In /etc/fstab on the nfs client (where you run storeBackup) you should
configure a line like
nfsserver:/storeBackup /backup nfs user,exec,async,noatime 1 1
This will mount the file system /storeBackup of 'nfsserver' to /backup on
your client.
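With this fstab entry in place, the backup target can then be mounted
manually with:
mount /backup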
MOUNTING READ ONLY USING NFS
-----------------------------
If you want to mount read only, follow the instructions of the tools
you use. If you want to mount a tree of your local file system read only
for storeBackup, you can use NFS. Make sure you do not generate an
infinite loop :-)
It's a good idea to set the option noatime and to use NFSv3. An example
for an entry in /etc/fstab of your storeBackup server is:
pussycat:/vol1 /backup nfs nfsvers=3,ro,noauto,noatime,async 0 0
This will mount the directory /vol1 from the server 'pussycat' to the
local directory /backup. The local machine will only have read access
to /vol1 which it has to backup.
On pussycat, you need to have a line like
/vol1 192.168.3.1/255.255.255.255(async,ro,no_root_squash)
in /etc/exports. This means that /vol1 can only be mounted by the machine
192.168.3.1 (the other server which runs storeBackup), and this storeBackup
server can only mount it read only. It is important to set no_root_squash,
because this allows the backup server to have (read only) root permissions.
To restore the data, you can allow a (read only) mount the other way round.
MOUNTING READ ONLY USING ROFS
-----------------------------
If your system has FUSE support, then there is an easier way to mount
parts of your file system read only that does not require an NFS
server. For this, simply get rofs from
http://mattwork.potsdam.edu/projects/wiki/index.php/Rofs (there is
also a Debian package available) and then call
# rofs /home /backup-source/home
to mount the /home directory read only under /backup-source/home.
PARAMETERS OF STOREBACKUP
-------------------------
NAME
storeBackup.pl - fancy compressing managing checksumming hard-linking cp -ua
DESCRIPTION
This program copies trees to another location. Every file copied is
potentially compressed (see --exceptSuffix). The backups after the first
one compare the files via an md5 checksum with the last stored
version. If they are equal, only a hard link to it is made. storeBackup
will also check mtime, ctime and size to recognize identical files in
older backups very fast. It can also back up big image files fast and
efficiently on a per-block basis (data deduplication).
You can overwrite options in the configuration file on the command line.
SYNOPSIS
$prog --help
or
$prog -g configFile
or
$prog [-f configFile] [-s sourceDir]
[-b backupDirectory] [-S series] [--print]
[-T tmpdir] [-L lockFile] [--unlockBeforeDel]
[--exceptDirs dir1,dir2,dir3] [--contExceptDirsErr]
[--includeDirs dir1,dir2,dir3]
[--exceptRule rule] [--includeRule rule]
[--exceptTypes types] [--cpIsGnu] [--linkSymlinks]
[--precommand job] [--postcommand job]
[--followLinks depth]
[--ignorePerms] [--lateLinks [--lateCompress]]
[--checkBlocksSuffix suffix] [--checkBlocksMinSize size]
[--checkBlocksBS]
[--checkBlocksRule0 rule [--checkBlocksBS0 size]
[--checkBlocksCompr0] [--checkBlocksRead0 filter]
[--checkBlocksParallel0]]
[--checkBlocksRule1 rule [--checkBlocksBS1 size]
[--checkBlocksCompr1] [--checkBlocksRead1 filter]
[--checkBlocksParallel1]]
[--checkBlocksRule2 rule [--checkBlocksBS2 size]
[--checkBlocksCompr2] [--checkBlocksRead2 filter]
[--checkBlocksParallel2]]
[--checkBlocksRule3 rule [--checkBlocksBS3 size]
[--checkBlocksCompr3] [--checkBlocksRead3 filter]
[--checkBlocksParallel3]]
[--checkBlocksRule4 rule [--checkBlocksBS4 size]
[--checkBlocksCompr4] [--checkBlocksRead4 filter]
[--checkBlocksParallel4]]
[--checkDevices0 list [--checkDevicesDir0]
[--checkDevicesBS0] [--checkDevicesCompr0]
[--checkDevicesParallel0]]
[--checkDevices1 list [--checkDevicesDir1]
[--checkDevicesBS1] [--checkDevicesCompr1]
[--checkDevicesParallel1]]
[--checkDevices2 list [--checkDevicesDir2]
[--checkDevicesBS2] [--checkDevicesCompr2]
[--checkDevicesParallel2]]
[--checkDevices3 list [--checkDevicesDir3]
[--checkDevicesBS3] [--checkDevicesCompr3]
[--checkDevicesParallel3]]
[--checkDevices4 list [--checkDevicesDir4]
[--checkDevicesBS4] [--checkDevicesCompr4]
[--checkDevicesParallel4]]
[--saveRAM] [-c compress] [-u uncompress] [-p postfix]
[--noCompress number] [--queueCompress number]
[--noCopy number] [--queueCopy number] [--copyBWLimit kbps]
[--withUserGroupStat] [--userGroupStatFile filename]
[--exceptSuffix suffixes] [--addExceptSuffix suffixes]
[--minCompressSize size] [--comprRule]
[--doNotCompressMD5File] [--chmodMD5File] [-v]
[-d level][--progressReport number] [--printDepth]
[--ignoreReadError] [--ignoreTime time]
[--linkToRecent name]
[--doNotDelete] [--deleteNotFinishedDirs]
[--resetAtime] [--keepAll timePeriod] [--keepWeekday entry]
[[--keepFirstOfYear] [--keepLastOfYear]
[--keepFirstOfMonth] [--keepLastOfMonth]
[--firstDayOfWeek day] [--keepFirstOfWeek]
[--keepLastOfWeek] [--keepDuplicate] [--keepMinNumber]
[--keepMaxNumber]
| [--keepRelative] ]
[-l logFile
[--plusLogStdout] [--suppressTime] [-m maxFilelen]
[[-n noOfOldFiles] | [--saveLogs]]
[--compressWith compressprog]]
[--logInBackupDir [--compressLogInBackupDir]
[--logInBackupDirFileName logFile]]
[otherBackupSeries ...]
OPTIONS
--help
show this help
--generate, -g
generate a template of the configuration file
--print
print configuration read from configuration file
or command line and stop
--file, -f
configuration file (instead of or additionally to options
on command line)
--sourceDir, -s
source directory (must exist)
--backupDir, -b
top level directory of all backups (must exist)
--series, -S
series directory, default is 'default'
relative path from backupDir
--tmpdir, -T
directory for temporary files, default is </tmp>
--lockFile, -L
lock file; if it exists, a new instance will finish if an old
one is already running, default is $lockFile
--unlockBeforeDel
remove the lock file before deleting old backups
default is to delete the lock file after removing old
backups
--exceptDirs, -e
directories to except from backing up (relative path),
wildcards are possible and should be quoted to avoid
replacements by the shell
use this parameter multiple times for multiple
directories
--contExceptDirsErr
continue if one or more of the exceptional directories
do not exist (default is to stop processing)
--includeDirs, -i
directories to include in the backup (relative path),
wildcards are possible and have to be quoted
use this parameter multiple times for multiple directories
--exceptRule
Files to exclude from backing up.
see README: 'including / excluding files and directories'
--includeRule
Files to include in the backup - like exceptRule
see README: 'including / excluding files and directories'
--writeExcludeLog
write a file name .storeBackup.notSaved.bz2 with the names
of all skipped files
--exceptTypes
do not save the specified type of files, allowed: Sbcfpl
S - file is a socket
b - file is a block special file
c - file is a character special file
f - file is a plain file
p - file is a named pipe
l - file is a symbolic link
Sbc can only be saved when using option [cpIsGnu]
--cpIsGnu
Activate this option if your systems cp is a full-featured
GNU version. In this case you will be able to also backup
several special file types like sockets.
--linkSymlinks
hard link identical symlinks
--precommand
exec job before starting the backup, checks lockFile (-L)
before starting (e.g. can be used for rsync)
stops execution if job returns exit status != 0
This parameter is parsed like a line in the configuration
file and normally has to be quoted.
--postcommand
exec job after finishing the backup, but before erasing of
old backups reports if job returns exit status != 0
This parameter is parsed like a line in the configuration
file and normally has to be quoted.
--followLinks
follow symbolic links like directories up to depth
default = 0 -> do not follow links
--ignorePerms
If this option is chosen, files will not necessarily have
the same permissions and owner as the originals. This
speeds up backups on network drives a lot. Recovery with
storeBackupRecover.pl will restore them correctly.
--lateLinks
do *not* write hard links to existing files in the backup
during the backup
you have to call the program storeBackupUpdateBackup.pl
later on your server if you set this flag to 'yes'
--lateCompress
only in combination with --lateLinks
compression from files >= minCompressSize will be done
later, the file is (temporarily) copied into the backup
--checkBlocksSuffix
Files with suffix for which storeBackup will make an md5
check on blocks of that file. Executed after
--checkBlocksRule(n)
This option can be repeated multiple times
--checkBlocksMinSize
Only check files specified in --checkBlocksSuffix if their
file size is at least this value, default is 100M
--checkBlocksBS
Block size for files specified with --checkBlocksSuffix
Default is $checkBlocksBSdefault (1 megabyte)
--checkBlocksCompr
if set, the blocks generated due to checkBlocksSuffix
are compressed
--checkBlocksRule0
Files for which storeBackup will make an md5 check
depending on blocks of that file.
--checkBlocksBS0
Block size for option checkBlocksRule
Default is $checkBlocksBSdefault (1 megabyte)
--checkBlocksCompr0
if set, the blocks generated due to this rule are
compressed
--checkBlocksRead0
Filter for reading the file to treat as a blocked file
eg. 'gzip -d' if the file is compressed. Default is no
read filter.
This parameter is parsed like the line in the
configuration file and normally has to be quoted,
eg. 'gzip -9'
--checkBlocksParallel0
Read files specified here in parallel to "normal" ones.
This only makes sense if they are on a different disk.
Default value is 'no'
--checkBlocksRule1
--checkBlocksBS1
--checkBlocksCompr1
--checkBlocksRead1
--checkBlocksParallel1
--checkBlocksRule2
--checkBlocksBS2
--checkBlocksCompr2
--checkBlocksRead2
--checkBlocksParallel2
--checkBlocksRule3
--checkBlocksBS3
--checkBlocksCompr3
--checkBlocksRead3
--checkBlocksParallel3
--checkBlocksRule4
--checkBlocksBS4
--checkBlocksCompr4
--checkBlocksRead4
--checkBlocksParallel4
--checkDevices0
List of devices for md5 check depending on blocks of these
devices
--checkDevicesDir0
Directory where to store the backup of the device
--checkDevicesBS0
Block size of option checkDevices0,
default is 1M (1 megabyte)
--checkDevicesCompr0
Compress blocks resulting from option checkDevices0
--checkDevicesParallel0
Read devices specified in parallel to the rest of the
backup. This only makes sense if they are on a different
disk. Default value is 'no'
--checkDevices1
--checkDevicesDir1
--checkDevicesBS1
--checkDevicesCompr1
--checkDevicesParallel1
--checkDevices2
--checkDevicesDir2
--checkDevicesBS2
--checkDevicesCompr2
--checkDevicesParallel2
--checkDevices3
--checkDevicesDir3
--checkDevicesBS3
--checkDevicesCompr3
--checkDevicesParallel3
--checkDevices4
--checkDevicesDir4
--checkDevicesBS4
--checkDevicesCompr4
--checkDevicesParallel4
--saveRAM
write temporary dbm files in --tmpdir
use this if you do not have enough RAM
--compress, -c
compress command (with options), default is <bzip2>
This parameter is parsed like the line in the
configuration file and normally has to be quoted,
eg. 'gzip -9'
--uncompress, -u
uncompress command (with options), default is <bzip2 -d>
This parameter is parsed like the line in the
configuration file and normally has to be quoted, eg.
'gzip -d'
--postfix, -p
postfix to add after compression, default is <.bz2>
--noCompress
maximal number of parallel compress operations,
default = chosen automatically
--queueCompress
length of queue to store files before compression,
default = 1000
--noCopy
maximal number of parallel copy operations,
default = 1
--queueCopy
length of queue to store files before copying,
default = 1000
--copyBWLimit
maximum bandwidth, KBytes per second per copying process
storeBackup.pl uses rsync for this option
default = 0 -> no limit, use cp for copying
--withUserGroupStat
write statistics about used space in log file
--userGroupStatFile
write statistics about used space to the named file,
which will be overwritten each time
--exceptSuffix
do not compress files with the following
suffix (uppercase included):
('\.zip', '\.bz2', '\.gz', '\.tgz', '\.jpg', '\.gif',
'\.tiff', '\.tif', '\.mpeg', '\.mpg', '\.mp3', '\.ogg',
'\.gpg', '\.png')
This option can be repeated multiple times
If you do not want any compression, set this option
to '.*'
--addExceptSuffix
like --exceptSuffix, but do not replace defaults, add
--minCompressSize
Files smaller than this size will never be compressed
but copied
--comprRule
alternative to --exceptSuffix and minCompressSize:
definition of a rule which files will be compressed
--doNotCompressMD5File
do not compress .md5CheckSumFile
--chmodMD5File
permissions of .md5CheckSumFile and corresponding
.storeBackupLinks directory, default is 0600
--verbose, -v
verbose messages
--debug, -d
generate debug messages, levels are 0 (none, default),
1 (some), 2 (many) messages, especially in
--exceptRule and --includeRule
--resetAtime
reset access time in the source directory - but this will
change ctime (time of last modification of file status
information)
--doNotDelete
check only, do not delete any backup
--deleteNotFinishedDirs
delete old backups which where not finished
this will not happen if doNotDelete is set
--keepAll
keep backups which are not older than the specified amount
of time. This is like a default value for all days in
--keepWeekday. Begins deleting at the end of the script
the time range has to be specified in format 'dhms', e.g.
10d4h means 10 days and 4 hours
default = 20d
--keepWeekday
keep backups for the specified days for the specified
amount of time. Overwrites the default values chosen in
--keepAll. 'Mon,Wed:40d5m Sat:60d10m' means:
keep backups from Mon and Wed 40 days + 5 mins
keep backups from Sat 60 days + 10 mins
keep backups from the rest of the days as specified in
--keepAll (default $keepAll)
if you also use the 'archive flag' it means to not
delete the affected directories via --keepMaxNumber:
a10d4h means 10 days and 4 hours and 'archive flag'
e.g. 'Mon,Wed:a40d5m Sat:60d10m' means:
keep backups from Mon and Wed 40days + 5mins + 'archive'
keep backups from Sat 60days + 10mins
keep backups from the rest of the days like specified in
--keepAll (default 30d)
--keepFirstOfYear
do not delete the first backup of a year
format is timePeriod with possible 'archive flag'
--keepLastOfYear
do not delete the last backup of a year
format is timePeriod with possible 'archive flag'
--keepFirstOfMonth
do not delete the first backup of a month
format is timePeriod with possible 'archive flag'
--keepLastOfMonth
do not delete the last backup of a month
format is timePeriod with possible 'archive flag'
--firstDayOfWeek
default: 'Sun'. This value is used for calculating
--keepFirstOfWeek and --keepLastOfWeek
--keepFirstOfWeek
do not delete the first backup of a week
format is timePeriod with possible 'archive flag'
--keepLastOfWeek
do not delete the last backup of a week
format is timePeriod with possible 'archive flag'
--keepDuplicate
keep multiple backups of one day up to timePeriod
format is timePeriod, 'archive flag' is not possible
default = 7d
--keepMinNumber
Keep that minimum of backups. Multiple backups of one
day are counted as one backup. Default is 10.
--keepMaxNumber
Try to keep only that maximum of backups. If you have more
backups, the following sequence of deleting will happen:
- delete all duplicates of a day, beginning with the oldest
ones, except the last of every day
- if this is not enough, delete the rest of the backups
beginning with the oldest, but *never* a backup with
the 'archive flag' or the last backup
--keepRelative, -R
Alternative deletion scheme. If you use this option, all
other keep options are ignored. Preserves backups depending
on their *relative* age. Example:
-R '1d 7d 61d 92d'
will (try to) ensure that there is always
- one backup between 1 day and 7 days old
- one backup between 7 days and ~2 months old
- one backup between ~2 months and ~3 months old
If there is no backup for a specified timespan
(e.g. because the last backup was done more than 2 weeks
ago) the next older backup will be used for this timespan.
--progressReport, -P
print progress report after each 'number' files
--printDepth, -D
print depth of actual read directory during backup
--ignoreReadError
ignore read errors in source directory; not readable
directories do not cause storeBackup.pl to stop processing
--ignoreTime
ignore specified time when comparing files; possible
values are: 'ctime', 'mtime' or 'none', default is 'none'
Setting this parameter only makes sense in mixed
environments, when one time has stochastic values.
--linkToRecent
after a successful backup, set a symbolic link to
that backup and delete existing older links with the
same name
--logFile, -l
log file (default is STDOUT)
--plusLogStdout
if you specify a log file with --logFile you can
additionally print the output to STDOUT with this flag
--suppressTime
suppress output of time in logfile
--maxFilelen, -m
maximal length of log file, default = 1e6
--noOfOldFiles, -n
number of old log files, default = 5
--saveLogs
save log files with date and time instead of deleting the
old ones (with [--noOfOldFiles])
--compressWith
compress saved log files (e.g. with 'gzip -9')
default is 'bzip2'
This parameter is parsed like a line in the configuration
file and normally has to be quoted.
--logInBackupDir
write log file (also) in the backup directory
Be aware that this log does not contain all error
messages of the one specified with --logFile!
--compressLogInBackupDir
compress the log file in the backup directory
--logInBackupDirFileName
filename to use for writing the above log file,
default is .storeBackup.log
otherBackupSeries
List of other backup series to consider for
hard linking. Relative path from backupDir!
Format (examples):
backupSeries/2002.08.29_08.25.28 -> consider this backup
or
0:backupSeries ->last (youngest) in <backupDir>/backupSeries
1:backupSeries ->one before last in <backupDir>/backupSeries
n:backupSeries ->
n'th before last in <backupDir>/backupSeries
3-5:backupSeries ->
3rd, 4th and 5th in <backupDir>/backupSeries
all:backupSeries -> all in <backupDir>/backupSeries
default is to link to the last backup in every series
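For illustration, a command line which additionally links the new backup
against the latest backups of two other (purely hypothetical) series could
look like this:
storeBackup.pl -s /home -b /backup -S laptop 0:desktop 0:server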
COPYRIGHT
Copyright (c) 2000,2004,2008-2009 by Heinz-Josef Claes (see README).
Published under the GNU General Public License v3 or any later version.
INCLUDING / EXCLUDING FILES AND DIRECTORIES
-------------------------------------------
storeBackup has five parameters (besides --sourceDir) to control
which files will go into the backup.
With followLinks you can control which (sub)directories will be saved
via symbolic links. See the EXAMPLES file for some explanations.
The other four parameters are _all_ examined _if_set_:
A file which is
not in 'exceptDirs' and
in 'includeDirs' and
does not match 'exceptRule' (with full relative path) and
matches 'includeRule' (with full relative path)
will be saved! In all cases you have to define _relative_ paths from your
sourceDir! If you additionally use 'followLinks' (see below), the
specified symbolic links are interpreted as directories.
----- exceptDirs, includeDirs -----
The parameters of exceptDirs and includeDirs are a list of
directories. You can use shell type wildcards (like home/*/.mozilla)
which will be expanded via a sub shell from perl. If the result of your
wildcard is very long, you might run into a limitation. If you have
many thousands of include directories, the performance of storeBackup
will decrease. (This will not happen with excludeDirs.) In such a case
you should think about using includePattern.
----- followLinks -----
If you want to back up multiple directories located somewhere in your
file system, you can create a directory (e.g. /backup) and make symbolic
links to all the directories you want to save:
ls -l /backup
total 0
lrwxrwxrwx 1 root root 1 Jun 4 19:23 backup -> .
lrwxrwxrwx 1 root root 13 Jun 4 19:23 disk2_Noten -> /disk2/Noten/
lrwxrwxrwx 1 root root 14 Jun 4 19:23 disk2_bilder -> /disk2/bilder/
lrwxrwxrwx 1 root root 12 Jun 4 19:23 disk2_home -> /disk2/home/
lrwxrwxrwx 1 root root 11 Jun 4 19:23 disk2_svn -> /disk2/svn/
lrwxrwxrwx 1 root root 18 Jun 4 19:23 disk2_svn-backup -> /disk2/svn-backup/
lrwxrwxrwx 1 root root 4 Jun 4 19:23 etc -> /etc
lrwxrwxrwx 1 root root 17 Jun 4 19:23 opt_storeBackup -> /opt/storeBackup/
lrwxrwxrwx 1 root root 6 Jun 4 19:23 root -> /root/
lrwxrwxrwx 1 root root 16 Jun 4 19:23 var_spool_mail -> /var/spool/mail/
lrwxrwxrwx 1 root root 8 Jun 4 19:23 var_www -> /var/www
Now set followLinks to 1 and configure sourceDir to /backup. In the
backup, you will now see disk2_home where all the stuff from
/disk2/home is stored.
If you add / delete symbolic links in /backup, you automatically add /
remove those directories to / from the backup.
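A minimal sketch of such a setup (directory names are taken from the listing
above; the configuration keys correspond to the option names described in
this README):
mkdir /backup
ln -s /disk2/home /backup/disk2_home
ln -s /etc /backup/etc
# then, in the storeBackup configuration file:
# sourceDir = /backup
# followLinks = 1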
----- exceptRule, includeRule, searchRule -----
This part of the description shows how to use rules in storeBackup. If you are
not familiar with pattern matching and perl, you should change the
examples only very carefully and a little at a time; otherwise you can
easily run into error messages you will not understand.
First, all the examples are explained for being written in a configuration
file. Mostly I will use the key word from storeBackup.pl (exceptRule), but
the rules are identical to the ones you can use for includeRule and
searchRule. Later, we will see how to use rules on the command line.
All the values we are talking about now are the ones from the files backed
up at the point in time when the backup was performed, _not_ from the
files in the backup!
In general, rules are a piece of perl with some specialities. We start
with some easy and typical examples:
EXAMPLE 1:
exceptRule = '$size > 1610612736'
(Take care of the quotes. Generate a configuration file with storeBackup.pl
or storeBackupSearch.pl and read the comments at the beginning about how quoting
and environment variables are interpreted.)
This rule will match for all files with more than 1.5GB (1.5 * 1024^3) bytes.
$size represents the size of each individual file. In this example, all
files bigger than 1.5GB will not be saved. This is not very easy to read,
and you can write instead:
exceptRule = '$size > &::SIZE("1.5G")'
(Take care of all quotes.) This will have the same effect as the rule before.
&::SIZE is a function which calculates the real value from a string like "1.5G".
You can use identifiers from 'k' to 'P' with the following meaning:
"1k" -> 1 kilobyte = 1024 Byte
"1M" -> 1 Megabyte = 1024^2 Byte
"1G" -> 1 Gigabyte = 1024^3 Byte
"1T" -> 1 Terabyte = 1024^4 Byte
"1P" -> 1 Petabyte = 1024^5 Byte
E.g. &::SIZE("0.4T") is valid, while &::SIZE("1G1M") is not.
EXAMPLE 2:
exceptRule = '$file =~ m#\.bak$#'
(Take care of the quotes.) This rule will match for all files ending
with '.bak' which means they will not be saved. $file represents the
individual file name _with_relative_path_. If you do not understand the
strange thing to the right of '$file', it's called pattern matching or a
regular expression. See `man perlretut` (perl regular expressions tutorial)
for a detailed explanation. But you should be able to adapt this to simple
needs:
exceptRule = '$file =~ m#\.bak$#' or '$file =~ m#\.mpg$#'
(Take care of the quote and all blanks.) This rule will match and
therefore not save files ending with '.bak' or '.mpg'.
exceptRule = '$file =~ m#\.bak$#' or '$file =~ m#\.mpg$#'
or '$file =~ m#\.avi$#'
It should not be a surprise, that you will not backup files ending with
'.bak', '.mpg' or '.avi'.
Now we want to create a rule which will prevent the backup of all
files which end with '.bak', '.mpg' or '.avi' and also all files
bigger than 1.5 gigabytes:
exceptRule = '$file =~ m#\.bak$#' or '$file =~ m#\.mpg$#'
or '$file =~ m#\.avi$#' or '$size > &::SIZE("1.5G")'
If you set 'debug = 2', you can see if and how the rule matches for
individual files. If you set 'debug = 1', you can see if the rule matches
for each file. With 'debug = 0' (default), you will not get a message.
---
You can use the following 'preset variables':
$file -> file name with relative path from original 'sourceDir'
$size -> size of the file in bytes
$mode -> mode of the file (integer, use 0... to compare with an octal
value, e.g. '$mode == 0644')
$ctime -> time of last change of the file status information, in seconds
since epoch (Jan 1 1970), see below
$mtime -> modify time in seconds since epoch, see below
$uid -> user id (string if defined in operating systems),
eg. '$uid eq "bob"'
$uidn -> user id (numerical value), e.g. '$uidn == 1001'
$gid -> group id (string if defined in operating system), see $uid
$gidn -> group id (numerical value), see $uidn
$type -> type of the file, can be one of 'Sbcfpl', see option
exceptTypes in storeBackup
If you use ctime or mtime, it's not pure fun to calculate the number of
seconds since epoch every time. For this reason, storeBackup supports
a special function &::DATE to make your life cosy ;-) :
EXAMPLE 3:
searchRule = '$mtime > &::DATE("14d")' and '$mtime < &::DATE("3d12h")'
With this search rule (in storeBackupSearch.pl) you will find all files
which are younger than exactly 14 days and older than 3 days and 12 hours.
The syntax understood by &::DATE is:
a) "d" -> day
"h" -> hour
"m" -> minute
"s" -> second
So "3d2h50s" means 3 days, 2 hours and 50 seconds.
With this format, you specify 'now minus that period'.
b) "YYYY.MM.DD" (year.month.day)
"YYYY.MM.DD_hh.mm.ss" (same format as backup dirs)
"2008.04.30" specifies April 30 2008, 0:00,
"2008.04.30_14.03.05" specifies April 30 2008, at 2 o'clock, 3 minutes
and 5 seconds in the afternoon.
With this format, you specify a point in time.
You already saw some possibilities to group the checking of the 'variables':
'and' and 'or'. You can use:
and, or, not, (, )
Everything is like in perl. (To be honest, it is evaluated by the perl
interpreter.) But you should surround each of these with one (or more)
blanks (white spaces) if you want 'debug = 2' to work correctly.
EXAMPLE 4:
searchRule = ( '$mtime > &::DATE("14d")' and '$mtime < &::DATE("3d12h")' )
and not '$file =~ m#\.bak$#'
Finds all files younger than 14 days and older than 3 days, 12 hours, but
only if they do not end with '.bak'.
See how 'and', 'not', '(' and ')' have at least one white space
surrounding them.
--
using rules on the command line
Let's take a look at:
exceptRule = '$size > &::SIZE("1.5G")'
If we try to use the command line like this:
--exceptRule '$size > &::SIZE("1.5G")' ### WRONG ###
we will get some nasty error messages because the shell strips the
single quotes and storeBackup tries to interpret the result the same
way as in the configuration file (see description in each configuration
file at the top). Here, storeBackup will complain about not knowing the
environment variable '$size'. (The $-sign is not masked any more because
the shell removed the single quote.) So we have to mask the $-sign.
We also have to mask the double quotes, because normally, storeBackup
will interpret them as grouping quotes and will not pass them on directly
to perl. The right way to specify this option is:
--exceptRule '\$size > &::SIZE(\"1.5G\")' ### CORRECT ###
We have to write example 4 in the following way:
--searchRule '( \$mtime > &::DATE(\"14d\") and \$mtime < &::DATE(\"3d12h\") ) and not \$file =~ m#\.bak\$#'
In case of problems, you should read the perl error message which shows
what perl really gets. Besides this, option --print will show each parameter
after being parsed by the shell and storeBackup. You can also use --print in
combination with configuration files.
STRATEGIES TO DELETE OLD BACKUPS
--------------------------------
storeBackup gives you a lot of possibilities to delete or not delete
your old backups. If you have a backup which should never be deleted,
the simplest way to achieve this is to rename it, e.g.:
$ mv 2003.07.28_06.12.41 2003.07.28_06.12.41-archive
This is possible because storeBackup and storeBackupDel only delete
directories which match exactly the pattern YYYY.MM.DD_hh.mm.ss .
The simplest way to delete a specific backup directory is to use `rm -rf`.
If you want to delete backups which are too old depending on rules,
there are several options you can choose. You can specify the time to
keep old backups on the basis of weekdays (with a default value for
all weekdays in --keepAll which can be overwritten with
--keepWeekday). You can also specify to keep them with
--keepFirstOfYear, --keepLastOfYear, --keepFirstOfMonth and
--keepLastOfMonth. or with --keepFirstOfWeek and
--keepLastOfWeek where you can define the first weekday of your
definition of a week. In all of these cases, you have to specify a
time period. How to specify a time period is described in the
parameters section of this file.
Now imagine you are making your backups on an irregular basis, perhaps
from a laptop to a server or you make your backups when you think you
have finished an important step of your work. In such cases, it is
useful to say "only keep the last backup of a day in a long time
range" (with --keepDuplicate). If you were on holiday for a month
and have set --keepAll to '30d' (30 days), then you probably do not
want storeBackup to delete all of your old backups when you start
it for the first time when you're back. You can avoid this with the
parameter --keepMinNumber. On the other hand, if you have limited
space on your backup disk, you want to limit the total number of
backups; for this, you can use --keepMaxNumber.
With --keepDuplicate you specify a time period in which storeBackup
keeps duplicate backups of a day. After this time period only the last
backup of a day will survive.
With --keepMinNumber you specify the minimal number of backups
storeBackup (or storeBackupDel) will *not* delete. The logic is as
follows:
- Do not delete backups specified with --keepAll ... --keepLastOfWeek and
--keepDuplicate.
- If this is not enough, do not delete other ones beginning with the newest
backups. Duplicates of a day are not affected by this parameter.
With --keepMaxNumber you specify the maximal number of
backups. StoreBackup will then delete the oldest backups if
necessary. To prevent special backups from deletion, you can specify
an "archive flag" with --keepAll ... --keepLastOfWeek. Backups
matching an archive flag will never be deleted by --keepMaxNumber. In
this way it is possible that more backups will remain than specified
with this parameter, but the archive flag is useful to prevent special
backups like "last backup of a month" or "last backup of a week" to be
deleted.
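A hypothetical configuration file sketch combining some of these options
(all values are only examples, not recommendations):
keepAll = 30d
keepWeekday = Sat:90d
keepLastOfMonth = a365d
keepMinNumber = 15
keepMaxNumber = 60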
using --keepRelative as a deletion Strategy:
-----------------------------------------------
This option activates an alternative backup deletion scheme that
allows you to specify the relative age of the backups you would like
to have rather than the period over which a backup should be kept.
Imagine that you always want to have the following backups available:
- 1 backup from yesterday
- 1 backup from last week
- 1 backup from last month
- 1 backup from 3 months ago
Note that this is most likely *not* what you really want to have,
because it simply means that you have to do daily backups and have to
keep every backup for exactly 3 months. Otherwise you wouldn't always
have a backup that is of *exactly* the requested age.
What you really want to have is therefore probably something like
this:
- 1 backup of age 1 hour to 24 hours / 1 day
- 1 backup of age 1 day to 7 days
- 1 backup of age 14 days to 31 days
- 1 backup of age 80 days to 100 days
This is now a very common backup strategy, but you would have
difficulty achieving this with the usual keepFirstOf* options,
especially if you don't do backups with perfect regularity. However,
you can implement it very easily using keepRelative. All you need to
write is:
keepRelative = 1h 1d 7d 14d 31d 80d 100d
i.e. you list all the intervals for which you want to have backups.
storeBackup will delete backups in such a way that you come as close
as possible (if you don't do backups often enough, there is of course
nothing that storeBackup can do) to your requested backup scheme.
Note that this may mean that storeBackup keeps more backups than you
think it has to, i.e. it may keep two backups in the same period. In
this case storeBackup "looks into the future" and determines that both
backups will *later* be necessary in order to have a backup for all
periods. This is also the reason why in the above example you have
somehow implicitly specified the period 7 days to 14 days, although
you didn't really want to have a backup in this period - in order to
have backups in the next period (14 days to 31 days) you always need
to have a backup in the period 7 days to 14 days as well. Therefore
the syntax doesn't allow you to exclude some periods.
Finally you should be aware that storeBackup shifts all the intervals
if it cannot find a recent enough backup: if your first interval is
from 10 days to 20 days, but your most recent backup is actually 25
days old, all subsequent periods will be extended by 5 days. This
ensures that if you haven't made any backups over a large period, this
period is not taken into account for your backup scheme. To give an
example why this is useful: if you wanted to have backups 1, 3, 7 and
10 days old and then went on vacation for 14 days, it is pretty
unlikely that you want all your backups deleted when you come back,
hence storeBackup ignores these 14 days and keeps the backups
appropriately longer.
MONITORING
----------
If you want to monitor storeBackup, simply grep for '^ERROR' and
possibly '^WARNING' in the log file.
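For example, from a cron job or monitoring script (the log file path is only
an example):
grep -E '^(ERROR|WARNING)' /var/log/storeBackup.log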
EXPLANATIONS ABOUT THE STATISTICAL OUTPUT
--------------------------------------
After creating a new backup and possibly deleting old ones, storeBackup
will write some statistical output:
directories
Number of directories storeBackup found in the data source and
created in the backup
files
Number of files (more exactly number of links) storeBackup found
in the data source. This includes all types of files
storeBackup is able (or configured) to process.
symbolic links
Number of symbolic links storeBackup found in the data source
named pipes
Number of named pipes storeBackup found in the data source
new internal linked files
Number of files with the same contents storeBackup found
in the actual backup (not in an old backup) (this is checked
first)
old linked files
Number of files which exist in the previous backup with the
same name, same size, same ctime and same mtime
unchanged files
Number of files with the same contents StoreBackup found
in the old backup(s)
copied files
Files with a new contents, copied to the backup directory
compressed files
Files with a new contents, compressed into the backup directory
excluded files because pattern
Files excluded because of option 'exceptPattern'
included files because pattern
Files included because of option 'includePattern'
max size of copy queue
Maximum size of copy queue during the backup
max size of compress queue
Maximum size of compress queue during the backup
calculated md5 sums
Number of files for which an md5 sum was calculated.
forks total
Total number of forks (number of forks md5 + forks compress
+ forks copy + forks named pipes)
forks md5
Number of forks for program md5sum.
forks copy
Number of forks for program cp
forks <compress>
Number of forks for program <compress>
sum of source
Size in bytes of all files in the source directory
sum of target all
Size in bytes of all files in the target directory
sum of target new
Size in bytes of new copied or compressed files in the target
directory
sum of md5ed files
Size in bytes of all files for which an md5 sum was processed
sum internal linked (copy)
Size in bytes of all files which were internal linked (see:
new internal linked files). These files were linked to files which
were copied into the backup.
sum internal linked (compr)
Size in bytes of all files which were internal linked (see:
new internal linked files). These files were linked to files which
were stored compressed into the backup.
sum old linked (copy)
Size in bytes of all files which were linked to older backups (see:
old linked files). These files were linked to files which were copied
into the backup.
sum old linked (compr)
Size in bytes of all files which were linked to older backups (see:
old linked files). These files were linked to files which were
stored compressed into the backup.
sum unchanged (copy)
Size in bytes of all files which existed with the same name, ctime
and mtime in the previous backup. These files were linked to files
which were copied into the old backup.
sum unchanged (compr)
Size in bytes of all files which existed with the same name, ctime
and mtime in the previous backup. These files were linked to files
which were stored compressed into the old backup.
sum new (copy)
Size in bytes of all files which were copied into the backup
sum new (compr)
Size in bytes of all files which were stored compressed into the backup
sum new (compr), orig size
Size in bytes in the source directory of the above files
sum new / orig
Percentage of new files in the backup to their original size in
the source directory
size of md5CheckSum file
Size of the file <backupDir>/.md5CheckSums[.bz2]
size of temporary db files
Size of the db files generated during the backup in tmpdir
deleted old backups
Number of old backups which were deleted.
deleted directories
Number of directories deleted in old backups.
deleted files
Number of files truly deleted in old backups (last link removed)
(only) removed links
Number of links removed in old backups (files not deleted)
freed space in old directories
Freed space in old directories, does not include meta information.
add. used space in files
Additionally used space for this backup: difference between new
allocated space and freed space in old backups.
backup duration
Backup duration: time for precommand, backup, postcommand and
deletion of old backups.
over all files/sec (real time)
number of files divided by real time
over all files/sec (CPU time)
number of files divided by (user and system time)
CPU usage
average cpu time for the time period of "backup duration"
PROGRESS -> 10000 files processed (6.1G, 349M)
storeBackup has read 10000 files totalling 6.1GB; 349MB have been
copied or compressed so far
LIMITATIONS
-----------
- storeBackup can back up normal files, directories, symbolic links and
named pipes. You can back up other file types only with option cpIsGnu.
- The permissions in the backup tree(s) are equal to the permissions
in the original directory. Under special, rare conditions it is
possible that a user cannot read one or more of his/her own files
in the backup. With the restore tool - storeBackupRecover.pl -
everything is restored with the original permissions.
- storeBackup uses hard links to save disk space. GNU/Linux with ext2
file system supports up to 32000, reiserfs up to 64535 hard links. If
storeBackup needs more hard links, it will write a warning and store
a new (compressed) copy of the file. If you use ext2 for the backup,
you have to reserve enough (static) inodes! (You will need one inode
for each different file in the backup, *not* for every single hard link.)
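For example, an ext2 backup file system could be created with an explicitly
large number of inodes (device name and inode count are purely illustrative):
mke2fs -N 20000000 /dev/sdb1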
OLD STUFF FROM README.1ST
-------------------------
To correct a simple but nasty bug which exists in versions 1.15 and
1.15.1, you have to do the following:
The file backupdir/<date_time>/.md5CheckSums.info is read by
storeBackupRecover to get information about the uncompression
program. Since version 1.15 this information is wrong if you use a
configuration file:
uncompress=bzip2
it must be:
uncompress=bzip2 -d
You can change this with an editor or use the script correct.sh:
cd backupdir # (now you see the date_time directories)
<path>/correct.sh # start the script which appends the ' -d'
--
correct.sh has the following contents:
#! /bin/sh
perl -p -e 's/^\s*uncompress=bzip2\s*$/uncompress=bzip2 -d\n/' -i */.md5CheckSums.info