Source: xnnpack
Section: math
Homepage: https://github.com/google/XNNPACK
Priority: optional
Standards-Version: 4.5.0
Vcs-Git: https://salsa.debian.org/deeplearning-team/xnnpack.git
Vcs-Browser: https://salsa.debian.org/deeplearning-team/xnnpack
Maintainer: Debian Deep Learning Team <debian-ai@lists.debian.org>
Uploaders: Mo Zhou <lumin@debian.org>
Rules-Requires-Root: no
Build-Depends: cmake,
 debhelper-compat (= 13),
 googletest,
# for libclog.a and clog.h
 libcpuinfo-dev (>= 0.0~git20200422.a1e0b95-2~),
 libfp16-dev,
 libfxdiv-dev,
 libpsimd-dev,
 libpthreadpool-dev,
 ninja-build

Package: libxnnpack-dev
Architecture: any
Depends: libxnnpack0 (= ${binary:Version}), ${misc:Depends}
Description: High-efficiency floating-point neural network inference operators (dev)
 XNNPACK is a highly optimized library of floating-point neural network
 inference operators for ARM, WebAssembly, and x86 platforms. XNNPACK is not
 intended for direct use by deep learning practitioners and researchers;
 instead it provides low-level performance primitives for accelerating
 high-level machine learning frameworks, such as TensorFlow Lite,
 TensorFlow.js, PyTorch, and MediaPipe.
 .
 This package contains the development files.

Package: libxnnpack0
Architecture: any
Depends: ${misc:Depends}, ${shlibs:Depends}
Description: High-efficiency floating-point neural network inference operators (libs)
 XNNPACK is a highly optimized library of floating-point neural network
 inference operators for ARM, WebAssembly, and x86 platforms. XNNPACK is not
 intended for direct use by deep learning practitioners and researchers;
 instead it provides low-level performance primitives for accelerating
 high-level machine learning frameworks, such as TensorFlow Lite,
 TensorFlow.js, PyTorch, and MediaPipe.
 .
 This package contains the shared library.