File: control

package info
libhttp-async-perl 0.33-3
  • area: main
  • in suites: bookworm, forky, sid, trixie
  • size: 268 kB
  • sloc: perl: 691; makefile: 2
file content (46 lines, 2,076 bytes)
Source: libhttp-async-perl
Maintainer: Debian Perl Group <pkg-perl-maintainers@lists.alioth.debian.org>
Uploaders: Ernesto Hernández-Novich (USB) <emhn@usb.ve>,
           Florian Schlichting <fsfs@debian.org>
Section: perl
Testsuite: autopkgtest-pkg-perl
Priority: optional
Build-Depends: debhelper-compat (= 13)
Build-Depends-Indep: libnet-https-nb-perl,
                     libtest-fatal-perl,
                     libtest-http-server-simple-perl,
                     libtest-pod-perl,
                     libtest-pod-coverage-perl,
                     libtest-tcp-perl,
                     liburi-perl,
                     libwww-perl,
                     perl
Standards-Version: 3.9.8
Vcs-Browser: https://salsa.debian.org/perl-team/modules/packages/libhttp-async-perl
Vcs-Git: https://salsa.debian.org/perl-team/modules/packages/libhttp-async-perl.git
Homepage: https://metacpan.org/release/HTTP-Async

Package: libhttp-async-perl
Architecture: all
Depends: ${misc:Depends},
         ${perl:Depends},
         liburi-perl,
         libwww-perl
Suggests: libnet-https-nb-perl
Multi-Arch: foreign
Description: module for parallel non-blocking processing of multiple HTTP requests
 Although the conventional LWP::UserAgent is fast and easy to use, it has some
 drawbacks: code execution blocks until the request has completed, and only
 one request can be processed at a time. HTTP::Async attempts to address
 these limitations.
 .
 It gives you an 'Async' object that you can add requests to, and then
 retrieve the responses from as they complete. The actual sending and
 receiving of the requests is abstracted away. A request is transmitted as
 soon as it is added; if too many requests are already in progress, it is
 queued. There is no concept of starting or stopping - it runs continuously.
 .
 Whilst it is waiting to receive data, it returns control to the calling
 code, meaning that you can carry out other processing whilst fetching data
 from the network. All without forking or threading - it is actually done
 using select lists.
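
The workflow the description outlines - queue requests, then poll for finished
responses - can be sketched with HTTP::Async's documented add /
wait_for_next_response interface (a minimal illustration, not part of the
package; the example URLs are placeholders):

```perl
use strict;
use warnings;
use HTTP::Async;
use HTTP::Request;

my $async = HTTP::Async->new;

# Queue several requests; each is transmitted as soon as it is added,
# or held in the internal queue if too many are already in flight.
$async->add( HTTP::Request->new( GET => $_ ) )
    for qw( http://example.com/ http://example.org/ );

# Retrieve responses as they complete; between polls, control stays
# with the calling code rather than blocking on the network.
while ( my $response = $async->wait_for_next_response ) {
    printf "%s: %s\n", $response->request->uri, $response->code;
}
```

Note that liburi-perl and libwww-perl appear in Depends because HTTP::Async
builds on HTTP::Request/HTTP::Response objects from the LWP family.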