File: control

Source: python-ftfy
Maintainer: Debian Python Team <team+python@tracker.debian.org>
Uploaders:
 Edward Betts <edward@4angle.com>,
Section: python
Priority: optional
Build-Depends:
 debhelper-compat (= 13),
 dh-sequence-python3,
 dh-sequence-sphinxdoc <!nodoc>,
 pybuild-plugin-pyproject,
 python3-all,
 python3-hatchling,
Build-Depends-Indep:
 furo <!nodoc>,
 python3-pytest <!nocheck>,
 python3-sphinx <!nodoc>,
 python3-wcwidth <!nocheck>,
Standards-Version: 4.7.2
Homepage: https://github.com/rspeer/python-ftfy
Vcs-Browser: https://salsa.debian.org/python-team/packages/python-ftfy
Vcs-Git: https://salsa.debian.org/python-team/packages/python-ftfy.git
Testsuite: autopkgtest-pkg-pybuild

Package: python3-ftfy
Architecture: all
Depends:
 ${misc:Depends},
 ${python3:Depends},
 ${sphinxdoc:Depends},
Built-Using:
 ${sphinxdoc:Built-Using},
Description: Fixes mojibake and other Unicode text problems
 This library automatically repairs text that has been corrupted by
 misapplied character encodings, a failure mode commonly known as mojibake.
 It analyzes strings to identify cases where characters were incorrectly
 decoded and reconstructs the intended Unicode text, including text mangled
 by multiple layers of encoding errors. It can also straighten curly
 quotation marks and decode HTML entities that appear outside of an HTML
 context, even when they use unusual capitalization. The library is designed
 to avoid making unnecessary or incorrect changes to text that is already
 correctly encoded. It restores readability to text that has been malformed
 by data handling and transfer processes, such as passing through databases,
 spreadsheets, or web pipelines. It does not attempt to detect arbitrary
 encodings from scratch; instead, it focuses on repairing commonly
 encountered forms of corrupted Unicode text.
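 .
 A minimal usage sketch, assuming fix_text, the entry point ftfy documents
 as its main interface (it takes a string and returns the repaired text):
  >>> import ftfy
  >>> ftfy.fix_text('âœ” No problems')
  '✔ No problems'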