File: control

Source: armnn
Section: devel
Priority: optional
Maintainer: Francis Murtagh <francis.murtagh@arm.com>
Uploaders: Wookey <wookey@debian.org>
Build-Depends: libboost-filesystem-dev (>= 1.64), libboost-test-dev (>= 1.64),
  libboost-system-dev (>= 1.64), libboost-log-dev (>= 1.64),
  libboost-program-options-dev (>= 1.64), cmake, debhelper-compat (= 12),
  valgrind, libflatbuffers-dev, libarm-compute-dev [arm64 armhf]
Standards-Version: 4.5.0
Vcs-Git: https://salsa.debian.org/deeplearning-team/armnn.git
Vcs-Browser: https://salsa.debian.org/deeplearning-team/armnn

Package: libarmnn19
Architecture: any
Multi-Arch: same
Depends: ${shlibs:Depends}, ${misc:Depends}
Description: inference engine for CPUs, GPUs and NPUs (shared library)
 Arm NN is a set of tools that enables machine learning workloads on
 any hardware. It provides a bridge between existing neural network
 frameworks and whatever hardware is available and supported. On Arm
 architectures (arm64 and armhf) it utilises the Arm Compute Library
 to target Cortex-A CPUs, Mali GPUs and Ethos NPUs as efficiently as
 possible. On other architectures/hardware it falls back to unoptimised
 functions.
 .
 This release supports Caffe, TensorFlow, TensorFlow Lite, and ONNX.
 Arm NN takes networks from these frameworks, translates them
 to the internal Arm NN format and then through the Arm Compute Library,
 deploys them efficiently on Cortex-A CPUs, and, if present, Mali GPUs.
 .
 This is the shared library package.

Package: libarmnn-dev
Architecture: any
Multi-Arch: same
Depends: libarmnn19 (= ${binary:Version}), ${shlibs:Depends}, ${misc:Depends}
Description: inference engine for CPUs, GPUs and NPUs (development files)
 Arm NN is a set of tools that enables machine learning workloads on
 any hardware. It provides a bridge between existing neural network
 frameworks and whatever hardware is available and supported. On Arm
 architectures (arm64 and armhf) it utilises the Arm Compute Library
 to target Cortex-A CPUs, Mali GPUs and Ethos NPUs as efficiently as
 possible. On other architectures/hardware it falls back to unoptimised
 functions.
 .
 This release supports Caffe, TensorFlow, TensorFlow Lite, and ONNX.
 Arm NN takes networks from these frameworks, translates them
 to the internal Arm NN format and then through the Arm Compute Library,
 deploys them efficiently on Cortex-A CPUs, and, if present, Mali GPUs.
 .
 This is the development package containing header files.