File: debian/control (source package armnn 20.08-9, Debian bullseye)
Source: armnn
Section: devel
Priority: optional
Maintainer: Francis Murtagh <francis.murtagh@arm.com>
Uploaders: Wookey <wookey@debian.org>
Build-Depends: libboost-test-dev (>= 1.64),
  libboost-system-dev (>= 1.64), libboost-filesystem-dev (>= 1.64),
  libboost-log-dev (>= 1.64), libboost-program-options-dev (>= 1.64),
  cmake, debhelper-compat (= 12), valgrind, libflatbuffers-dev,
  libarm-compute-dev [arm64 armhf],
  swig (>= 4.0.1-5), dh-python, python3-all, python3-setuptools,
  python3-dev, python3-numpy, xxd, flatbuffers-compiler, chrpath
Standards-Version: 4.5.0
Vcs-Git: https://salsa.debian.org/deeplearning-team/armnn.git
Vcs-Browser: https://salsa.debian.org/deeplearning-team/armnn

Package: libarmnn22
Architecture: amd64 arm64 armhf i386 mipsel mips64el ppc64el
Multi-Arch: same
Depends: ${shlibs:Depends}, ${misc:Depends}
Suggests: libarmnntfliteparser22 (= ${binary:Version}),
          python3-pyarmnn (= ${binary:Version})
Description: Arm NN is an inference engine for CPUs, GPUs and NPUs
 Arm NN is a set of tools that enables machine learning workloads on
 any hardware. It provides a bridge between existing neural network
 frameworks and whatever hardware is available and supported. On arm
 architectures (arm64 and armhf) it utilizes the Arm Compute Library
 to target Cortex-A CPUs, Mali GPUs and Ethos NPUs as efficiently as
 possible. On other architectures/hardware it falls back to unoptimised
 functions.
 .
 This release supports Caffe, TensorFlow, TensorFlow Lite, and ONNX.
 Arm NN takes networks from these frameworks, translates them
 to the internal Arm NN format and then through the Arm Compute Library,
 deploys them efficiently on Cortex-A CPUs, and, if present, Mali GPUs.
 .
 This is the shared library package.

Package: libarmnn-dev
Architecture: amd64 arm64 armhf i386 mipsel mips64el ppc64el
Multi-Arch: same
Depends: libarmnn22 (= ${binary:Version}), ${shlibs:Depends}, ${misc:Depends}
Description: Arm NN is an inference engine for CPUs, GPUs and NPUs
 Arm NN is a set of tools that enables machine learning workloads on
 any hardware. It provides a bridge between existing neural network
 frameworks and whatever hardware is available and supported. On arm
 architectures (arm64 and armhf) it utilizes the Arm Compute Library
 to target Cortex-A CPUs, Mali GPUs and Ethos NPUs as efficiently as
 possible. On other architectures/hardware it falls back to unoptimised
 functions.
 .
 This release supports Caffe, TensorFlow, TensorFlow Lite, and ONNX.
 Arm NN takes networks from these frameworks, translates them
 to the internal Arm NN format and then through the Arm Compute Library,
 deploys them efficiently on Cortex-A CPUs, and, if present, Mali GPUs.
 .
 This is the development package containing header files.

Package: libarmnntfliteparser22
Architecture: amd64 arm64 armhf i386 mipsel mips64el ppc64el
Multi-Arch: same
Depends: libarmnn22 (= ${binary:Version}), ${shlibs:Depends}, ${misc:Depends}
Description: Arm NN is an inference engine for CPUs, GPUs and NPUs
 Arm NN is a set of tools that enables machine learning workloads on
 any hardware. It provides a bridge between existing neural network
 frameworks and whatever hardware is available and supported. On arm
 architectures (arm64 and armhf) it utilizes the Arm Compute Library
 to target Cortex-A CPUs, Mali GPUs and Ethos NPUs as efficiently as
 possible. On other architectures/hardware it falls back to unoptimised
 functions.
 .
 This release supports Caffe, TensorFlow, TensorFlow Lite, and ONNX.
 Arm NN takes networks from these frameworks, translates them
 to the internal Arm NN format and then through the Arm Compute Library,
 deploys them efficiently on Cortex-A CPUs, and, if present, Mali GPUs.
 .
 This is the TensorFlow Lite parser shared library package.

Package: libarmnntfliteparser-dev
Architecture: amd64 arm64 armhf i386 mipsel mips64el ppc64el
Multi-Arch: same
Depends: libarmnn-dev (= ${binary:Version}),
         libarmnntfliteparser22 (= ${binary:Version}),
         ${shlibs:Depends},
         ${misc:Depends}
Description: Arm NN is an inference engine for CPUs, GPUs and NPUs
 Arm NN is a set of tools that enables machine learning workloads on
 any hardware. It provides a bridge between existing neural network
 frameworks and whatever hardware is available and supported. On arm
 architectures (arm64 and armhf) it utilizes the Arm Compute Library
 to target Cortex-A CPUs, Mali GPUs and Ethos NPUs as efficiently as
 possible. On other architectures/hardware it falls back to unoptimised
 functions.
 .
 This release supports Caffe, TensorFlow, TensorFlow Lite, and ONNX.
 Arm NN takes networks from these frameworks, translates them
 to the internal Arm NN format and then through the Arm Compute Library,
 deploys them efficiently on Cortex-A CPUs, and, if present, Mali GPUs.
 .
 This is the TensorFlow Lite parser development package, containing
 header files.

Package: python3-pyarmnn
Architecture: amd64 arm64 armhf i386 mipsel mips64el ppc64el
Depends: libarmnn22 (= ${binary:Version}),
         libarmnntfliteparser22 (= ${binary:Version}),
         ${shlibs:Depends},
         ${misc:Depends},
         ${python3:Depends}
Description: PyArmNN is a Python extension for the Arm NN SDK
 PyArmNN provides an interface similar to the Arm NN C++ API.
 .
 PyArmNN is built around the public headers from the armnn/include
 folder of Arm NN. PyArmNN does not implement any computation kernels
 itself; all operations are delegated to the Arm NN library.

Package: libarmnn-cpuacc-backend22
Architecture: armhf arm64
Multi-Arch: same
Depends: libarmnn22 (= ${binary:Version}), ${shlibs:Depends}, ${misc:Depends}
Description: Arm NN is an inference engine for CPUs, GPUs and NPUs
 Arm NN is a set of tools that enables machine learning workloads on
 any hardware. It provides a bridge between existing neural network
 frameworks and whatever hardware is available and supported. On arm
 architectures (arm64 and armhf) it utilizes the Arm Compute Library
 to target Cortex-A CPUs, Mali GPUs and Ethos NPUs as efficiently as
 possible. On other architectures/hardware it falls back to unoptimised
 functions.
 .
 This release supports Caffe, TensorFlow, TensorFlow Lite, and ONNX.
 Arm NN takes networks from these frameworks, translates them
 to the internal Arm NN format and then through the Arm Compute Library,
 deploys them efficiently on Cortex-A CPUs, and, if present, Mali GPUs.
 .
 This is the dynamically loadable Neon backend package.

Package: libarmnnaclcommon22
Architecture: armhf arm64
Multi-Arch: same
Depends: libarmnn22 (= ${binary:Version}), ${shlibs:Depends}, ${misc:Depends}
Description: Arm NN is an inference engine for CPUs, GPUs and NPUs
 Arm NN is a set of tools that enables machine learning workloads on
 any hardware. It provides a bridge between existing neural network
 frameworks and whatever hardware is available and supported. On arm
 architectures (arm64 and armhf) it utilizes the Arm Compute Library
 to target Cortex-A CPUs, Mali GPUs and Ethos NPUs as efficiently as
 possible. On other architectures/hardware it falls back to unoptimised
 functions.
 .
 This release supports Caffe, TensorFlow, TensorFlow Lite, and ONNX.
 Arm NN takes networks from these frameworks, translates them
 to the internal Arm NN format and then through the Arm Compute Library,
 deploys them efficiently on Cortex-A CPUs, and, if present, Mali GPUs.
 .
 This is the common shared library used by Arm Compute Library backends.

Package: libarmnn-cpuref-backend22
Architecture: amd64 arm64 armhf i386 mipsel mips64el ppc64el
Multi-Arch: same
Depends: libarmnn22 (= ${binary:Version}),
         libarmnnaclcommon22 (= ${binary:Version}) [arm64 armhf],
         ${shlibs:Depends},
         ${misc:Depends}
Description: Arm NN is an inference engine for CPUs, GPUs and NPUs
 Arm NN is a set of tools that enables machine learning workloads on
 any hardware. It provides a bridge between existing neural network
 frameworks and whatever hardware is available and supported. On arm
 architectures (arm64 and armhf) it utilizes the Arm Compute Library
 to target Cortex-A CPUs, Mali GPUs and Ethos NPUs as efficiently as
 possible. On other architectures/hardware it falls back to unoptimised
 functions.
 .
 This release supports Caffe, TensorFlow, TensorFlow Lite, and ONNX.
 Arm NN takes networks from these frameworks, translates them
 to the internal Arm NN format and then through the Arm Compute Library,
 deploys them efficiently on Cortex-A CPUs, and, if present, Mali GPUs.
 .
 This is the dynamically loadable Reference backend package.

Package: libarmnn-gpuacc-backend22
Architecture: armhf arm64
Multi-Arch: same
Depends: libarmnn22 (= ${binary:Version}),
         libarmnnaclcommon22 (= ${binary:Version}) [arm64 armhf],
         ${shlibs:Depends},
         ${misc:Depends}
Description: Arm NN is an inference engine for CPUs, GPUs and NPUs
 Arm NN is a set of tools that enables machine learning workloads on
 any hardware. It provides a bridge between existing neural network
 frameworks and whatever hardware is available and supported. On arm
 architectures (arm64 and armhf) it utilizes the Arm Compute Library
 to target Cortex-A CPUs, Mali GPUs and Ethos NPUs as efficiently as
 possible. On other architectures/hardware it falls back to unoptimised
 functions.
 .
 This release supports Caffe, TensorFlow, TensorFlow Lite, and ONNX.
 Arm NN takes networks from these frameworks, translates them
 to the internal Arm NN format and then through the Arm Compute Library,
 deploys them efficiently on Cortex-A CPUs, and, if present, Mali GPUs.
 .
 This is the dynamically loadable CL backend package.