File: control

package info: python-w3lib 2.1.1-1
  • area: main
  • in suites: bookworm
  • size: 360 kB
  • sloc: python: 3,130; makefile: 127; sh: 8
Source: python-w3lib
Maintainer: Debian Python Team <team+python@tracker.debian.org>
Uploaders:
 Ignace Mouzannar <mouzannar@gmail.com>,
 Andrey Rakhmatullin <wrar@debian.org>
Section: python
Priority: optional
Build-Depends:
 debhelper-compat (= 13),
 dh-sequence-python3,
 python3-all,
 python3-pytest <!nocheck>,
 python3-setuptools,
 python3-six,
Standards-Version: 4.6.2
Vcs-Browser: https://salsa.debian.org/python-team/packages/python-w3lib
Vcs-Git: https://salsa.debian.org/python-team/packages/python-w3lib.git
Homepage: https://github.com/scrapy/w3lib
Testsuite: autopkgtest-pkg-python
Rules-Requires-Root: no

Package: python3-w3lib
Architecture: all
Depends:
 python3-six,
 ${misc:Depends},
 ${python3:Depends},
Description: Collection of web-related functions (Python 3)
 Python module with simple, reusable functions to work with URLs, HTML,
 forms, and HTTP that aren’t found in the Python standard library.
 .
 This module can be used, for example, to:
  - remove comments or tags from HTML snippets
  - extract the base url from HTML snippets
  - translate entities in HTML strings
  - encode multipart/form-data
  - convert raw HTTP headers to dicts and vice-versa
  - construct HTTP auth headers
  - join urls in an RFC-compliant way
  - sanitize urls (as browsers do)
  - extract arguments from urls
 .
 The code of w3lib was originally part of the Scrapy framework but was later
 split out, with the aim of making it more reusable and providing a useful
 library of web functions that does not depend on Scrapy.
 .
 This is the Python 3 version of the package.