File: comment_1_d89ffbebe59d05fb2c47d957792cf97e._comment

[[!comment format=mdwn
 username="joey"
 subject="""comment 1"""
 date="2015-07-06T16:09:22Z"
 content="""
If the remote that git-annex is moving files to uses rsync as its transport
(i.e., it is a regular git remote or an rsync special remote), then
the receiving rsync process verifies a checksum of the file it's receiving.
So, if the sender sends incorrect data, or calculates the wrong checksum
for the right data, the rsync transfer will fail, and git-annex won't delete
the local copy of the file. I guess that if the sending rsync reads bad data
from disk, these checksums won't prevent the bad data from being sent to the
remote, though. (Some other special remotes don't have a transfer checksum
at all.)
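
To illustrate the difference between a transfer-time checksum and the
end-to-end check that fsck gives you, here is a minimal Haskell sketch
(not git-annex's actual code; the function name, the expected-hash
argument, and the cryptonite dependency are assumptions for
illustration). It recomputes a file's SHA-256 from whatever is on disk
and compares it to the checksum recorded for its key, which is the kind
of check that catches bad data the sending rsync has already read:

```haskell
import           Crypto.Hash          (Digest, SHA256, hashlazy)
import qualified Data.ByteString.Lazy as BL

-- Recompute the file's SHA-256 from the bytes actually on disk and
-- compare it to the checksum expected for its key. A transfer-time
-- rsync checksum only covers what the sender read; this check covers
-- what is really stored locally.
verifyLocalCopy :: FilePath -> String -> IO Bool
verifyLocalCopy path expectedSha256 = do
    bytes <- BL.readFile path
    let digest = hashlazy bytes :: Digest SHA256
    return (show digest == expectedSha256)
```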

If I had hardware this broken, I'd be glad that `git annex fsck` detected
it; I would move the storage to good hardware and re-run fsck there to
learn the actual state of my repository. Re-running checksums or using
multiple checksum types to work around badly broken hardware seems like a
losing idea to me.
"""]]