Source: python-w3lib
Section: python
Priority: optional
Maintainer: Debian Python Modules Team <python-modules-team@lists.alioth.debian.org>
Uploaders: Ignace Mouzannar <mouzannar@gmail.com>
Build-Depends:
 debhelper (>= 9),
 dh-python,
 python-all (>= 2.6.6-3~),
 python-setuptools,
 python3-all,
 python3-setuptools,
 python-six,
 python3-six
X-Python-Version: >= 2.7
X-Python3-Version: >= 3.3
Standards-Version: 3.9.8
Homepage: http://pypi.python.org/pypi/w3lib
Vcs-Browser: https://anonscm.debian.org/viewvc/python-apps/packages/python-w3lib/trunk/

Package: python-w3lib
Architecture: all
Depends: ${misc:Depends}, ${python:Depends}, python-six (>= 1.6.1)
Description: Collection of web-related functions for Python 2
 Python module with simple, reusable functions to work with URLs, HTML,
 forms, and HTTP that are not found in the Python standard library.
 .
 This module can be used, for example, to:
  - remove comments or tags from HTML snippets
  - extract the base URL from HTML snippets
  - translate entities in HTML strings
  - encode multipart/form-data
  - convert raw HTTP headers to dicts and vice versa
  - construct HTTP auth headers
  - join URLs in an RFC-compliant way
  - sanitize URLs (like browsers do)
  - extract arguments from URLs
 .
 The code of w3lib was originally part of the Scrapy framework, but was later
 stripped out of Scrapy with the aim of making it more reusable and of
 providing a useful library of web functions without depending on Scrapy.
 .
 This is the Python 2 version of the package.

Package: python3-w3lib
Architecture: all
Depends: ${misc:Depends}, ${python3:Depends}, python3-six (>= 1.6.1)
Description: Collection of web-related functions for Python 3
 Python module with simple, reusable functions to work with URLs, HTML,
 forms, and HTTP that are not found in the Python standard library.
 .
 This module can be used, for example, to:
  - remove comments or tags from HTML snippets
  - extract the base URL from HTML snippets
  - translate entities in HTML strings
  - encode multipart/form-data
  - convert raw HTTP headers to dicts and vice versa
  - construct HTTP auth headers
  - join URLs in an RFC-compliant way
  - sanitize URLs (like browsers do)
  - extract arguments from URLs
 .
 The code of w3lib was originally part of the Scrapy framework, but was later
 stripped out of Scrapy with the aim of making it more reusable and of
 providing a useful library of web functions without depending on Scrapy.
 .
 This is the Python 3 version of the package.