Package: rustc / 1.85.0+dfsg3-1

vendor/gitoxide-backport-fix-for-CVE-2025-31130.patch
From: Emily <hello@emily.moe>
Date: Tue, 1 Apr 2025 21:55:16 +0100
Subject: gitoxide: backport fix for CVE-2025-31130
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 8bit

fix feat!: detect SHA‐1 collision attacks

Fix [GHSA-2frx-2596-x5r6].

[GHSA-2frx-2596-x5r6]: https://github.com/GitoxideLabs/gitoxide/security/advisories/GHSA-2frx-2596-x5r6

This uses the `sha1-checked` crate from the RustCrypto project. It’s
a pure Rust implementation, with no SIMD or assembly code.

The hashing implementation moves to `gix-hash`, as it no longer
depends on any feature configuration. I wasn’t sure of the ideal
crate to put this in, but after checking reverse dependencies on
crates.io, it seems like there’s essentially no user of `gix-hash`
that wouldn’t be pulling in a hashing implementation anyway, so I
think this is a fine and logical place for it to be.

A fallible API seems better than killing the process as Git does,
since we’re in a library context and it would be bad if you could
perform denial‐of‐service attacks on a server by sending it hash
collisions. (Although there are probably cheaper ways to mount a
denial‐of‐service attack.)

The new API also returns an `ObjectId` rather than `[u8; 20]`; the
vast majority of `Hasher::digest()` users immediately convert the
result to `ObjectId`, so this will help eliminate a lot of cruft
across the tree. `ObjectId` also has nicer `Debug` and `Display`
instances than `[u8; 20]`, and should theoretically make supporting
the hash function transition easier, although I suspect further API
changes will be required for that anyway. I wasn’t sure whether
this would be a good change, as not every digest identifies an
entry in the Git object database, but even many of the existing
uses for non‐object digests across the tree used the `ObjectId`
API anyway. Perhaps it would be best to have a separate non‐alias
`Digest` type that `ObjectId` wraps, but this seems like the pragmatic
choice for now that sticks with current practice.

The old API remains in this commit, as well as a temporary
non‐fallible but `ObjectId`‐returning `Hasher::finalize()`,
pending the migration of all in‐tree callers.
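
To illustrate the intended call pattern, here is a minimal sketch
(assuming a crate that depends on the patched `gix-hash`; the payload
bytes and messages are made up for the example):

    fn main() {
        use gix_hash::{hasher, Kind};

        // Feed arbitrary bytes into the collision-detecting hasher.
        let mut h = hasher(Kind::Sha1);
        h.update(b"example object bytes");

        match h.try_finalize() {
            Ok(id) => println!("object id: {id}"),
            Err(hasher::Error::CollisionAttack { digest }) => {
                // A library caller can reject the input rather than
                // aborting the whole process, as Git would.
                eprintln!("rejecting {digest}: SHA-1 collision attack");
            }
        }
    }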

I named the module `gix_hash::hasher` since `gix_hash::hash` seemed
like it would be confusing. This does mean that there is a function
and module with the same name, which is permitted but perhaps a
little strange.

Everything is re‐exported directly other than
`gix_features::hash::Write`, which moves along with the I/O
convenience functions into a new public submodule and becomes
`gix_hash::hasher::io::Write`, as that seems like a clearer name
to me, being akin to the `gix_hash::hasher` function but as an
`std::io::Write` wrapper.
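
As a hedged sketch of the relocated wrapper (the in-memory sink and
payload are made up; existing users of `gix_features::hash::Write`
keep working through the re-export):

    fn main() {
        use std::io::Write as _;

        use gix_hash::{hasher, Kind};

        // Hash everything that passes through the writer on its way
        // into an in-memory sink.
        let mut out = hasher::io::Write::new(Vec::<u8>::new(), Kind::Sha1);
        out.write_all(b"example payload")
            .expect("writing to a Vec cannot fail");

        // `finalize()` is the temporary non-fallible variant mentioned
        // above; `try_finalize()` reports detected collision attacks.
        let id = out.hash.finalize();
        println!("hashed {} bytes into {id}", out.inner.len());
    }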

Raw hashing is somewhere around 0.25× to 0.65× the speed of the
previous implementation, depending on the feature configuration
and whether the CPU supports hardware‐accelerated hashing. (The
more portable assembly in `sha1-asm` that doesn’t require the SHA
instruction set doesn’t seem to speed things up that much; in fact,
`sha1_smol` somehow regularly beats the assembly code used by `sha1`
on my i9‐9880H MacBook Pro! Presumably this is why that path was
removed in newer versions of the `sha1` crate.)

Performance on an end‐to‐end `gix no-repo pack verify` benchmark
using pack files from the Linux kernel Git server measures around
0.41× to 0.44× compared to the base commit on an M2 Max and a
Ryzen 7 5800X, both of which have hardware instructions for SHA‐1
acceleration that the previous implementation uses but this one does
not. On the i9‐9880H, it’s around 0.58× to 0.60× the speed;
the slowdown is reduced by the older hardware’s lack of SHA‐1
instructions.

The `sha1collisiondetection` crate from the Sequoia PGP project,
based on a modified C2Rust translation of the library used by Git,
was also considered; although its raw hashing performance seems
to measure around 1.12–1.15× the speed of `sha1-checked` on
x86, it’s indistinguishable from noise on the end‐to‐end
benchmark, and on an M2 Max `sha1-checked` is consistently
around 1.03× the speed of `sha1collisiondetection` on that
benchmark. The `sha1collisiondetection` crate has also had a
soundness issue in the past due to the automatic C translation,
whereas `sha1-checked` has only one trivial `unsafe` block. On the
other hand, `sha1collisiondetection` is used by both Sequoia itself
and the `gitoid` crate, whereas rPGP is the only major user of
`sha1-checked`. I don’t think there’s a clear winner here.

The performance regression is very unfortunate, but the [SHAttered]
attack demonstrated a collision back in 2017, and the 2020 [SHA‐1 is
a Shambles] attack demonstrated a practical chosen‐prefix collision
that broke the use of SHA‐1 in OpenPGP, costing $75k to perform,
with an estimate of $45k to replicate at the time of publication and
$11k for a classical collision.

[SHAttered]: https://shattered.io/
[SHA‐1 is a Shambles]: https://sha-mbles.github.io/

Given the increase in GPU performance and production since then,
that puts the Git object format squarely at risk. Git mitigated this
attack in 2017; the algorithm is fairly general and detects all the
existing public collisions. My understanding is that an entirely new
cryptanalytic approach would be required to develop a collision attack
for SHA‐1 that would not be detected with very high probability.

I believe that the speed penalty could be mitigated, although not
fully eliminated, by implementing a version of the hardened SHA‐1
function that makes use of SIMD. For instance, the assembly code used
by `openssl speed sha1` on my i9‐9880H measures around 830 MiB/s,
compared to the winning 580 MiB/s of `sha1_smol`; adding collision
detection support to that would surely incur a performance penalty,
but it is likely that it could be much more competitive with
the performance before this commit than the 310 MiB/s I get with
`sha1-checked`. I haven’t been able to find any existing work on
this; it seems that more or less everyone just uses the original
C library that Git does, presumably because nothing except Git and
OpenPGP is still relying on SHA‐1 anyway…

The performance will never compete with the >2 GiB/s that can
be achieved with the x86 SHA instruction set extension, as the
`SHA1RNDS4` instruction sadly runs four rounds at a time while the
collision detection algorithm requires checks after every round,
but I believe SIMD would still offer a significant improvement,
and the AArch64 extension seems like it may be more flexible.

I know that these days the Git codebase has an additional faster
unsafe API without these checks that it tries to carefully use only
for operations that do not depend on hashing results for correctness
or safety. I personally believe that’s not a terribly good idea,
as it seems easy to misuse in a case where correctness actually does
matter, but maybe that’s just my Rust safety bias talking. I think
it would be better to focus on improving the performance of the safer
algorithm, as I think that many of the operations where the performance
penalty is the most painful are dealing with untrusted input anyway.

The `Hasher` struct gets a lot bigger; I don’t know if this is
an issue or not, but if it is, it could potentially be boxed.

Closes: #585

Backported from upstream; paths and context adapted and non-applicable parts
removed.
Signed-off-by: Fabian Grünbichler <git@fabian.gruenbichler.email>
---
 vendor/gix-features-0.39.1/Cargo.toml    |  18 +--
 vendor/gix-features-0.39.1/src/hash.rs   | 183 +------------------------------
 vendor/gix-hash-0.15.1/Cargo.toml        |   8 ++
 vendor/gix-hash-0.15.1/src/hasher/io.rs  | 138 +++++++++++++++++++++++
 vendor/gix-hash-0.15.1/src/hasher/mod.rs |  90 +++++++++++++++
 vendor/gix-hash-0.15.1/src/lib.rs        |   5 +
 6 files changed, 250 insertions(+), 192 deletions(-)
 create mode 100644 vendor/gix-hash-0.15.1/src/hasher/io.rs
 create mode 100644 vendor/gix-hash-0.15.1/src/hasher/mod.rs

diff --git a/vendor/gix-features-0.39.1/Cargo.toml b/vendor/gix-features-0.39.1/Cargo.toml
index 88ae0b8..1f7b2fb 100644
--- a/vendor/gix-features-0.39.1/Cargo.toml
+++ b/vendor/gix-features-0.39.1/Cargo.toml
@@ -91,14 +91,6 @@ default-features = false
 version = "29.0.0"
 optional = true
 
-[dependencies.sha1]
-version = "0.10.0"
-optional = true
-
-[dependencies.sha1_smol]
-version = "1.0.0"
-optional = true
-
 [dependencies.thiserror]
 version = "2.0.0"
 optional = true
@@ -115,7 +107,7 @@ default-features = false
 cache-efficiency-debug = []
 crc32 = ["dep:crc32fast"]
 default = []
-fast-sha1 = ["dep:sha1"]
+fast-sha1 = []
 fs-read-dir = ["dep:gix-utils"]
 fs-walkdir-parallel = [
     "dep:jwalk",
@@ -131,9 +123,10 @@ progress = ["prodash"]
 progress-unit-bytes = [
     "dep:bytesize",
     "prodash?/unit-bytes",
+    "gix-hash/progress-unit-bytes",
 ]
 progress-unit-human-numbers = ["prodash?/unit-human"]
-rustsha1 = ["dep:sha1_smol"]
+rustsha1 = []
 tracing = ["gix-trace/tracing"]
 tracing-detail = ["gix-trace/tracing-detail"]
 walkdir = [
@@ -162,11 +155,6 @@ zlib-stock = [
     "flate2?/zlib",
 ]
 
-[target.'cfg(all(any(target_arch = "aarch64", target_arch = "x86", target_arch = "x86_64"), not(target_os = "windows")))'.dependencies.sha1]
-version = "0.10.0"
-features = ["asm"]
-optional = true
-
 [target."cfg(unix)".dependencies.libc]
 version = "0.2.119"
 
diff --git a/vendor/gix-features-0.39.1/src/hash.rs b/vendor/gix-features-0.39.1/src/hash.rs
index eebc40f..719c579 100644
--- a/vendor/gix-features-0.39.1/src/hash.rs
+++ b/vendor/gix-features-0.39.1/src/hash.rs
@@ -1,54 +1,12 @@
 //! Hash functions and hash utilities
-//!
-//! With the `fast-sha1` feature, the `Sha1` hash type will use a more elaborate implementation utilizing hardware support
-//! in case it is available. Otherwise the `rustsha1` feature should be set. `fast-sha1` will take precedence.
-//! Otherwise, a minimal yet performant implementation is used instead for a decent trade-off between compile times and run-time performance.
-#[cfg(all(feature = "rustsha1", not(feature = "fast-sha1")))]
-mod _impl {
-    use super::Digest;
-
-    /// A implementation of the Sha1 hash, which can be used once.
-    #[derive(Default, Clone)]
-    pub struct Sha1(sha1_smol::Sha1);
-
-    impl Sha1 {
-        /// Digest the given `bytes`.
-        pub fn update(&mut self, bytes: &[u8]) {
-            self.0.update(bytes);
-        }
-        /// Finalize the hash and produce a digest.
-        pub fn digest(self) -> Digest {
-            self.0.digest().bytes()
-        }
-    }
-}
-
-/// A hash-digest produced by a [`Hasher`] hash implementation.
-#[cfg(any(feature = "fast-sha1", feature = "rustsha1"))]
-pub type Digest = [u8; 20];
-
-#[cfg(feature = "fast-sha1")]
-mod _impl {
-    use sha1::Digest;
-
-    /// A implementation of the Sha1 hash, which can be used once.
-    #[derive(Default, Clone)]
-    pub struct Sha1(sha1::Sha1);
-
-    impl Sha1 {
-        /// Digest the given `bytes`.
-        pub fn update(&mut self, bytes: &[u8]) {
-            self.0.update(bytes);
-        }
-        /// Finalize the hash and produce a digest.
-        pub fn digest(self) -> super::Digest {
-            self.0.finalize().into()
-        }
-    }
-}
 
+// TODO: Remove this.
 #[cfg(any(feature = "rustsha1", feature = "fast-sha1"))]
-pub use _impl::Sha1 as Hasher;
+pub use gix_hash::hasher::{
+    hasher,
+    io::{bytes, bytes_of_file, bytes_with_hasher, Write},
+    Digest, Hasher,
+};
 
 /// Compute a CRC32 hash from the given `bytes`, returning the CRC32 hash.
 ///
@@ -71,132 +29,3 @@ pub fn crc32(bytes: &[u8]) -> u32 {
     h.update(bytes);
     h.finalize()
 }
-
-/// Produce a hasher suitable for the given kind of hash.
-#[cfg(any(feature = "rustsha1", feature = "fast-sha1"))]
-pub fn hasher(kind: gix_hash::Kind) -> Hasher {
-    match kind {
-        gix_hash::Kind::Sha1 => Hasher::default(),
-    }
-}
-
-/// Compute the hash of `kind` for the bytes in the file at `path`, hashing only the first `num_bytes_from_start`
-/// while initializing and calling `progress`.
-///
-/// `num_bytes_from_start` is useful to avoid reading trailing hashes, which are never part of the hash itself,
-/// denoting the amount of bytes to hash starting from the beginning of the file.
-///
-/// # Note
-///
-/// * Only available with the `gix-object` feature enabled due to usage of the [`gix_hash::Kind`] enum and the
-///   [`gix_hash::ObjectId`] return value.
-/// * [Interrupts][crate::interrupt] are supported.
-#[cfg(all(feature = "progress", any(feature = "rustsha1", feature = "fast-sha1")))]
-pub fn bytes_of_file(
-    path: &std::path::Path,
-    num_bytes_from_start: u64,
-    kind: gix_hash::Kind,
-    progress: &mut dyn crate::progress::Progress,
-    should_interrupt: &std::sync::atomic::AtomicBool,
-) -> std::io::Result<gix_hash::ObjectId> {
-    bytes(
-        &mut std::fs::File::open(path)?,
-        num_bytes_from_start,
-        kind,
-        progress,
-        should_interrupt,
-    )
-}
-
-/// Similar to [`bytes_of_file`], but operates on a stream of bytes.
-#[cfg(all(feature = "progress", any(feature = "rustsha1", feature = "fast-sha1")))]
-pub fn bytes(
-    read: &mut dyn std::io::Read,
-    num_bytes_from_start: u64,
-    kind: gix_hash::Kind,
-    progress: &mut dyn crate::progress::Progress,
-    should_interrupt: &std::sync::atomic::AtomicBool,
-) -> std::io::Result<gix_hash::ObjectId> {
-    bytes_with_hasher(read, num_bytes_from_start, hasher(kind), progress, should_interrupt)
-}
-
-/// Similar to [`bytes()`], but takes a `hasher` instead of a hash kind.
-#[cfg(all(feature = "progress", any(feature = "rustsha1", feature = "fast-sha1")))]
-pub fn bytes_with_hasher(
-    read: &mut dyn std::io::Read,
-    num_bytes_from_start: u64,
-    mut hasher: Hasher,
-    progress: &mut dyn crate::progress::Progress,
-    should_interrupt: &std::sync::atomic::AtomicBool,
-) -> std::io::Result<gix_hash::ObjectId> {
-    let start = std::time::Instant::now();
-    // init progress before the possibility for failure, as convenience in case people want to recover
-    progress.init(
-        Some(num_bytes_from_start as prodash::progress::Step),
-        crate::progress::bytes(),
-    );
-
-    const BUF_SIZE: usize = u16::MAX as usize;
-    let mut buf = [0u8; BUF_SIZE];
-    let mut bytes_left = num_bytes_from_start;
-
-    while bytes_left > 0 {
-        let out = &mut buf[..BUF_SIZE.min(bytes_left as usize)];
-        read.read_exact(out)?;
-        bytes_left -= out.len() as u64;
-        progress.inc_by(out.len());
-        hasher.update(out);
-        if should_interrupt.load(std::sync::atomic::Ordering::SeqCst) {
-            return Err(std::io::Error::new(std::io::ErrorKind::Other, "Interrupted"));
-        }
-    }
-
-    let id = gix_hash::ObjectId::from(hasher.digest());
-    progress.show_throughput(start);
-    Ok(id)
-}
-
-#[cfg(any(feature = "rustsha1", feature = "fast-sha1"))]
-mod write {
-    use crate::hash::Hasher;
-
-    /// A utility to automatically generate a hash while writing into an inner writer.
-    pub struct Write<T> {
-        /// The hash implementation.
-        pub hash: Hasher,
-        /// The inner writer.
-        pub inner: T,
-    }
-
-    impl<T> std::io::Write for Write<T>
-    where
-        T: std::io::Write,
-    {
-        fn write(&mut self, buf: &[u8]) -> std::io::Result<usize> {
-            let written = self.inner.write(buf)?;
-            self.hash.update(&buf[..written]);
-            Ok(written)
-        }
-
-        fn flush(&mut self) -> std::io::Result<()> {
-            self.inner.flush()
-        }
-    }
-
-    impl<T> Write<T>
-    where
-        T: std::io::Write,
-    {
-        /// Create a new hash writer which hashes all bytes written to `inner` with a hash of `kind`.
-        pub fn new(inner: T, object_hash: gix_hash::Kind) -> Self {
-            match object_hash {
-                gix_hash::Kind::Sha1 => Write {
-                    inner,
-                    hash: Hasher::default(),
-                },
-            }
-        }
-    }
-}
-#[cfg(any(feature = "rustsha1", feature = "fast-sha1"))]
-pub use write::Write;
diff --git a/vendor/gix-hash-0.15.1/Cargo.toml b/vendor/gix-hash-0.15.1/Cargo.toml
index 6cb9fc8..a219070 100644
--- a/vendor/gix-hash-0.15.1/Cargo.toml
+++ b/vendor/gix-hash-0.15.1/Cargo.toml
@@ -46,18 +46,26 @@ optional = true
 [dependencies.faster-hex]
 version = "0.9.0"
 
+[dependencies.prodash]
+version = "29"
+
 [dependencies.serde]
 version = "1.0.114"
 features = ["derive"]
 optional = true
 default-features = false
 
+[dependencies.sha1-checked]
+version = "0.10.0"
+default-features = false
+
 [dependencies.thiserror]
 version = "2.0.0"
 
 [dev-dependencies]
 
 [features]
+progress-unit-bytes = ["prodash/unit-bytes"]
 serde = ["dep:serde"]
 
 [lints.clippy]
diff --git a/vendor/gix-hash-0.15.1/src/hasher/io.rs b/vendor/gix-hash-0.15.1/src/hasher/io.rs
new file mode 100644
index 0000000..ec582d9
--- /dev/null
+++ b/vendor/gix-hash-0.15.1/src/hasher/io.rs
@@ -0,0 +1,138 @@
+use crate::{hasher, Hasher};
+
+// Temporary, to avoid a circular dependency on `gix-features`.
+///
+mod gix_features {
+    ///
+    pub mod progress {
+        pub use prodash::{self, unit, Progress, Unit};
+
+        ///
+        #[cfg(feature = "progress-unit-bytes")]
+        pub fn bytes() -> Option<Unit> {
+            Some(unit::dynamic_and_mode(
+                unit::Bytes,
+                unit::display::Mode::with_throughput().and_percentage(),
+            ))
+        }
+
+        ///
+        #[cfg(not(feature = "progress-unit-bytes"))]
+        pub fn bytes() -> Option<Unit> {
+            Some(unit::label_and_mode(
+                "B",
+                unit::display::Mode::with_throughput().and_percentage(),
+            ))
+        }
+    }
+}
+
+/// Compute the hash of `kind` for the bytes in the file at `path`, hashing only the first `num_bytes_from_start`
+/// while initializing and calling `progress`.
+///
+/// `num_bytes_from_start` is useful to avoid reading trailing hashes, which are never part of the hash itself,
+/// denoting the amount of bytes to hash starting from the beginning of the file.
+///
+/// # Note
+///
+/// * Interrupts are supported.
+// TODO: Fix link to `gix_features::interrupt`.
+pub fn bytes_of_file(
+    path: &std::path::Path,
+    num_bytes_from_start: u64,
+    kind: crate::Kind,
+    progress: &mut dyn gix_features::progress::Progress,
+    should_interrupt: &std::sync::atomic::AtomicBool,
+) -> std::io::Result<crate::ObjectId> {
+    bytes(
+        &mut std::fs::File::open(path)?,
+        num_bytes_from_start,
+        kind,
+        progress,
+        should_interrupt,
+    )
+}
+
+/// Similar to [`bytes_of_file`], but operates on a stream of bytes.
+pub fn bytes(
+    read: &mut dyn std::io::Read,
+    num_bytes_from_start: u64,
+    kind: crate::Kind,
+    progress: &mut dyn gix_features::progress::Progress,
+    should_interrupt: &std::sync::atomic::AtomicBool,
+) -> std::io::Result<crate::ObjectId> {
+    bytes_with_hasher(read, num_bytes_from_start, hasher(kind), progress, should_interrupt)
+}
+
+/// Similar to [`bytes()`], but takes a `hasher` instead of a hash kind.
+pub fn bytes_with_hasher(
+    read: &mut dyn std::io::Read,
+    num_bytes_from_start: u64,
+    mut hasher: Hasher,
+    progress: &mut dyn gix_features::progress::Progress,
+    should_interrupt: &std::sync::atomic::AtomicBool,
+) -> std::io::Result<crate::ObjectId> {
+    let start = std::time::Instant::now();
+    // init progress before the possibility for failure, as convenience in case people want to recover
+    progress.init(
+        Some(num_bytes_from_start as gix_features::progress::prodash::progress::Step),
+        gix_features::progress::bytes(),
+    );
+
+    const BUF_SIZE: usize = u16::MAX as usize;
+    let mut buf = [0u8; BUF_SIZE];
+    let mut bytes_left = num_bytes_from_start;
+
+    while bytes_left > 0 {
+        let out = &mut buf[..BUF_SIZE.min(bytes_left as usize)];
+        read.read_exact(out)?;
+        bytes_left -= out.len() as u64;
+        progress.inc_by(out.len());
+        hasher.update(out);
+        if should_interrupt.load(std::sync::atomic::Ordering::SeqCst) {
+            return Err(std::io::Error::new(std::io::ErrorKind::Other, "Interrupted"));
+        }
+    }
+
+    let id = crate::ObjectId::from(hasher.digest());
+    progress.show_throughput(start);
+    Ok(id)
+}
+
+/// A utility to automatically generate a hash while writing into an inner writer.
+pub struct Write<T> {
+    /// The hash implementation.
+    pub hash: Hasher,
+    /// The inner writer.
+    pub inner: T,
+}
+
+impl<T> std::io::Write for Write<T>
+where
+    T: std::io::Write,
+{
+    fn write(&mut self, buf: &[u8]) -> std::io::Result<usize> {
+        let written = self.inner.write(buf)?;
+        self.hash.update(&buf[..written]);
+        Ok(written)
+    }
+
+    fn flush(&mut self) -> std::io::Result<()> {
+        self.inner.flush()
+    }
+}
+
+impl<T> Write<T>
+where
+    T: std::io::Write,
+{
+    /// Create a new hash writer which hashes all bytes written to `inner` with a hash of `kind`.
+    pub fn new(inner: T, object_hash: crate::Kind) -> Self {
+        match object_hash {
+            crate::Kind::Sha1 => Write {
+                inner,
+                hash: Hasher::default(),
+            },
+        }
+    }
+}
diff --git a/vendor/gix-hash-0.15.1/src/hasher/mod.rs b/vendor/gix-hash-0.15.1/src/hasher/mod.rs
new file mode 100644
index 0000000..db4350b
--- /dev/null
+++ b/vendor/gix-hash-0.15.1/src/hasher/mod.rs
@@ -0,0 +1,90 @@
+use sha1_checked::CollisionResult;
+
+/// A hash-digest produced by a [`Hasher`] hash implementation.
+pub type Digest = [u8; 20];
+
+/// The error returned by [`Hasher::try_finalize()`].
+#[derive(Debug, thiserror::Error)]
+#[allow(missing_docs)]
+pub enum Error {
+    #[error("Detected SHA-1 collision attack with digest {digest}")]
+    CollisionAttack { digest: crate::ObjectId },
+}
+
+/// A implementation of the Sha1 hash, which can be used once.
+///
+/// We use [`sha1_checked`] to implement the same collision detection
+/// algorithm as Git.
+#[derive(Clone)]
+pub struct Hasher(sha1_checked::Sha1);
+
+impl Default for Hasher {
+    #[inline]
+    fn default() -> Self {
+        // This matches the configuration used by Git, which only uses
+        // the collision detection to bail out, rather than computing
+        // alternate “safe hashes” for inputs where a collision attack
+        // was detected.
+        Self(sha1_checked::Builder::default().safe_hash(false).build())
+    }
+}
+
+impl Hasher {
+    /// Digest the given `bytes`.
+    pub fn update(&mut self, bytes: &[u8]) {
+        use sha1_checked::Digest;
+        self.0.update(bytes);
+    }
+
+    /// Finalize the hash and produce an object ID.
+    ///
+    /// Returns [`Error`] if a collision attack is detected.
+    #[inline]
+    pub fn try_finalize(self) -> Result<crate::ObjectId, Error> {
+        match self.0.try_finalize() {
+            CollisionResult::Ok(digest) => Ok(crate::ObjectId::Sha1(digest.into())),
+            CollisionResult::Mitigated(_) => {
+                // SAFETY: `CollisionResult::Mitigated` is only
+                // returned when `safe_hash()` is on. `Hasher`’s field
+                // is private, and we only construct it in the
+                // `Default` instance, which turns `safe_hash()` off.
+                //
+                // As of Rust 1.84.1, the compiler can’t figure out
+                // this function cannot panic without this.
+                #[allow(unsafe_code)]
+                unsafe {
+                    std::hint::unreachable_unchecked()
+                }
+            }
+            CollisionResult::Collision(digest) => Err(Error::CollisionAttack {
+                digest: crate::ObjectId::Sha1(digest.into()),
+            }),
+        }
+    }
+
+    /// Finalize the hash and produce an object ID.
+    #[inline]
+    pub fn finalize(self) -> crate::ObjectId {
+        self.try_finalize().expect("Detected SHA-1 collision attack")
+    }
+
+    /// Finalize the hash and produce a digest.
+    #[inline]
+    pub fn digest(self) -> Digest {
+        self.finalize()
+            .as_slice()
+            .try_into()
+            .expect("SHA-1 object ID to be 20 bytes long")
+    }
+}
+
+/// Produce a hasher suitable for the given kind of hash.
+#[inline]
+pub fn hasher(kind: crate::Kind) -> Hasher {
+    match kind {
+        crate::Kind::Sha1 => Hasher::default(),
+    }
+}
+
+/// Hashing utilities for I/O operations.
+pub mod io;
diff --git a/vendor/gix-hash-0.15.1/src/lib.rs b/vendor/gix-hash-0.15.1/src/lib.rs
index 20cb9c4..924681d 100644
--- a/vendor/gix-hash-0.15.1/src/lib.rs
+++ b/vendor/gix-hash-0.15.1/src/lib.rs
@@ -13,6 +13,11 @@
 mod borrowed;
 pub use borrowed::{oid, Error};
 
+/// Hash functions and hash utilities
+pub mod hasher;
+pub use hasher::io::{bytes, bytes_of_file, bytes_with_hasher};
+pub use hasher::{hasher, Hasher};
+
 mod object_id;
 pub use object_id::{decode, ObjectId};