llama.cpp (8064+dfsg-1) unstable; urgency=medium

  * New upstream version 8064+dfsg
  * autopkgtest: CUDA no longer supported on ppc64el

 -- Christian Kastner <ckk@debian.org>  Sun, 15 Feb 2026 22:11:57 +0100

llama.cpp (7965+dfsg-1) unstable; urgency=medium

  * New upstream version 7965+dfsg
    - Added examples: llama-debug
    - Removed examples: llama-logits
    - Removed tools (extra): llama-run
  * Drop patch included upstream
  * Refresh patches
  * Bump libggml-dev dependency to 0.9.6
  * Replace libcurl4-openssl-dev with libssl-dev
  * d/copyright: Drop references to no-longer-embedded libs
  * Install additional man pages
  * Refresh copyrights

 -- Christian Kastner <ckk@debian.org>  Sun, 08 Feb 2026 10:18:39 +0100

llama.cpp (7593+dfsg-3) unstable; urgency=medium

  * autopkgtest: Fix detection of skipped tests
  * Also make llama-bench(1) reproducible (Closes: #1113813)
  * autopkgtest: Temporarily disable models >4GB.
    Not all workers support these models, and the test runner needs to be
    adapted to discover this at runtime.
  * autopkgtest: Export JSON test results as artifacts

 -- Christian Kastner <ckk@debian.org>  Fri, 16 Jan 2026 18:30:50 +0100

llama.cpp (7593+dfsg-2) unstable; urgency=medium

  * Re-add accidentally dropped dependency on libggml-dev
  * Designate bin:llama.cpp as Multi-Arch: foreign
  * Bump Standards-Version to 4.7.3
    - Drop Priority: optional, it's the dpkg default since trixie
  * Update patch metadata
  * Enable Salsa CI
  * Tweak lintian overrides
  * autopkgtest: Also run upstream's basic (non-LLM) tests

 -- Christian Kastner <ckk@debian.org>  Sun, 11 Jan 2026 16:26:14 +0100

llama.cpp (7593+dfsg-1) unstable; urgency=medium

  * New upstream version 7593+dfsg
  * Refresh patches
  * Switch build to the now-public libggml
  * Also install libmtmd to private directory
  * Install newly appeared tools
  * Drop obsolete lintian overrides for ggml
  * Refresh copyright

 -- Christian Kastner <ckk@debian.org>  Sun, 04 Jan 2026 13:37:14 +0100

llama.cpp (6641+dfsg-3) unstable; urgency=medium

  * autopkgtest: Add Classes to enable filtering by custom tools

 -- Christian Kastner <ckk@debian.org>  Sun, 28 Dec 2025 16:05:04 +0100

llama.cpp (6641+dfsg-2) unstable; urgency=medium

  [ Chris Lamb ]
  * Add reproducible-builds.patch (Closes: #1113813)

  [ Christian Kastner ]
  * autopkgtest: Explicitly limit architectures for GPU tests
  * autopkgtest: Use Q4_K_M quants, as per mbaudier's suggestion
  * Replace libggml0-backend-cpu dependencies with libggml0
  * autopkgtest: backend-vulkan-nvidia is also skip-not-installable

 -- Christian Kastner <ckk@debian.org>  Tue, 23 Dec 2025 21:24:32 +0100

llama.cpp (6641+dfsg-1) unstable; urgency=medium

  [ Christian Kastner ]
  * New upstream version 6641+dfsg
    - Refresh patches
  * Depend on versioned ggml.
    Upstream is experimenting with semantic versioning of ggml, so we can
    switch our dependencies to that format.
    However, because this is still experimental, we continue to depend on a
    very specific version. This will be relaxed as soon as we have symbols
    tracking.
  * autopkgtests: Support blank lines/comments in d/t/supported-models.*
  * autopkgtests: Improve upon supported test models list
  * Update installed examples
    - Added: llama-diffusion-cli, llama-logits
    - Dropped: llama-gritlm
  * d/clean: Also remove generated completions/

  [ Kentaro Hayashi ]
  * Use d/watch 5.

 -- Christian Kastner <ckk@debian.org>  Sun, 05 Oct 2025 22:09:48 +0200

llama.cpp (5882+dfsg-4) unstable; urgency=medium

  * Add autopkgtests.
    - The tests expect a /models directory in the testbed
    - This includes a test helper for running tests on AMD and NVIDIA
      GPUs, if the system has one available.

 -- Christian Kastner <ckk@debian.org>  Sun, 14 Sep 2025 23:13:11 +0200

llama.cpp (5882+dfsg-3) unstable; urgency=medium

  * Upload to unstable
  * Package description fixes

 -- Christian Kastner <ckk@debian.org>  Wed, 27 Aug 2025 07:01:15 +0200

llama.cpp (5882+dfsg-3~exp3) experimental; urgency=medium

  [ Christian Kastner ]
  * Switch over to SOVERsioned, dynamic-backend-loading ggml
  * libllama0: Drop spurious python3 dependency

  [ Mathieu Baudier ]
  * llama.cpp-tools: Introduce bash completion

 -- Christian Kastner <ckk@debian.org>  Thu, 07 Aug 2025 12:43:22 +0200

llama.cpp (5882+dfsg-3~exp2) experimental; urgency=medium

  * Correct the Section field of a few packages

 -- Christian Kastner <ckk@debian.org>  Mon, 14 Jul 2025 18:54:14 +0200

llama.cpp (5882+dfsg-3~exp1) experimental; urgency=medium

  * Split llama.cpp package into subpackages
  * Build new package python3-gguf
  * d/rules: Pass in LLAMA_BUILD_{NUMBER,COMMIT}
  * Add gguf-py-depends-on-the-requests-library.patch
  * Add Add-soversion-to-libraries.patch
  * Rename private directories llama.cpp -> llama
  * Improve llama-server theme handling
  * Generate manpages using help2man

 -- Christian Kastner <ckk@debian.org>  Mon, 14 Jul 2025 17:17:43 +0200

llama.cpp (5882+dfsg-2) unstable; urgency=medium

  * Build-Depend on the exact version of ggml,
    for the same reason the binaries depend on the exact version: it avoids
    FTBFS due to frequent API/ABI breakages.

 -- Christian Kastner <ckk@debian.org>  Sun, 13 Jul 2025 11:16:13 +0200

llama.cpp (5882+dfsg-1) unstable; urgency=medium

  * New upstream version 5882+dfsg
  * Rebase patches
  * Fix broken path to llama-server theme
  * Bump ggml dependency
  * d/gbp.conf: Convert to DEP-14 layout
  * d/gbp.conf: Enforce non-numbered patches
  * Update d/copyright

 -- Christian Kastner <ckk@debian.org>  Sat, 12 Jul 2025 17:31:41 +0200

llama.cpp (5760+dfsg-4) unstable; urgency=medium

  * Fix installability yet again (ggml version still mis-specified)
    (Closes: #1108925)

 -- Christian Kastner <ckk@debian.org>  Tue, 08 Jul 2025 08:44:50 +0200

llama.cpp (5760+dfsg-3) unstable; urgency=medium

  * Fix installability (ggml version was mis-specified)
  * Improve lintian overrides

 -- Christian Kastner <ckk@debian.org>  Mon, 07 Jul 2025 18:27:22 +0200

llama.cpp (5760+dfsg-2) unstable; urgency=medium

  * Hard-code (relaxed) ggml dependency.
    We can't deduce the supported ggml version; the maintainers must
    explicitly specify it. In doing so, ignore the Debian revision number.

 -- Christian Kastner <ckk@debian.org>  Fri, 27 Jun 2025 22:13:39 +0200

llama.cpp (5760+dfsg-1) unstable; urgency=medium

  * New upstream version 5760+dfsg (Closes: #1108368)
    - Includes a fix for CVE-2025-52566
  * Refactor/add missing copyrights for vendored code
  * Refresh patches

 -- Christian Kastner <ckk@debian.org>  Fri, 27 Jun 2025 07:55:00 +0200

llama.cpp (5713+dfsg-1) unstable; urgency=medium

  * New upstream release (Closes: #1108113)
    - Includes a fix for CVE-2025-49847
  * Refresh patches
  * Update d/copyright
  * Document ggml/llama.cpp/whisper.cpp update procedure
  * Install the new mtmd headers

 -- Christian Kastner <ckk@debian.org>  Fri, 20 Jun 2025 21:00:33 +0200

llama.cpp (5318+dfsg-2) unstable; urgency=medium

  [ Mathieu Baudier ]
  * Install public headers and build configurations to private directories
  * Fix private directories for pkg-config

  [ Christian Kastner ]
  * Depend on exact build-time ggml version.
    Upstream ships llama.cpp with a specific version of ggml. We have no
    guarantee that any earlier or later version will work; in fact, it's
    common for newer versions to break something.
    So going forward, we ship llama.cpp and ggml in tandem, with ggml being
    updated first, and llama.cpp depending on the exact version used at
    build time.
  * Install all free server themes
  * Enable changing server theme using update-alternatives
  * Simplify server frontend patches
  * Begin shipping the tests

 -- Christian Kastner <ckk@debian.org>  Thu, 19 Jun 2025 23:17:31 +0200

llama.cpp (5318+dfsg-1) unstable; urgency=medium

  * Upload to unstable.
  * New upstream version 5318+dfsg
    - Refresh patches
  * Update d/copyright

 -- Christian Kastner <ckk@debian.org>  Fri, 09 May 2025 09:54:32 +0200

llama.cpp (5151+dfsg-1~exp3) experimental; urgency=medium

  * Initial release (Closes: #1063673)

 -- Christian Kastner <ckk@debian.org>  Sat, 19 Apr 2025 21:59:05 +0200