Source: libstring-tokenizer-perl
Maintainer: Debian Perl Group <pkg-perl-maintainers@lists.alioth.debian.org>
Section: perl
Testsuite: autopkgtest-pkg-perl
Priority: optional
Build-Depends: debhelper-compat (= 13)
Build-Depends-Indep: perl
Standards-Version: 3.9.7
Vcs-Browser: https://salsa.debian.org/perl-team/modules/packages/libstring-tokenizer-perl
Vcs-Git: https://salsa.debian.org/perl-team/modules/packages/libstring-tokenizer-perl.git
Homepage: https://metacpan.org/release/String-Tokenizer

Package: libstring-tokenizer-perl
Architecture: all
Depends: ${misc:Depends},
 ${perl:Depends}
Multi-Arch: foreign
Description: simple string tokenizer
 String::Tokenizer is a simple string tokenizer which takes a string and
 splits it on whitespace. It can also optionally take a string of characters
 to use as delimiters, and will return those delimiters with the token set
 as well. This allows the string to be split in many different ways.
 .
 This is a very basic tokenizer, so more complex needs should be addressed
 either with a custom-written tokenizer or by post-processing the output
 generated by this module. It will not fill everyone's needs, but it spans
 the gap between a simple "split / /, $string" and the other options that
 involve much larger and more complex modules.
 .
 Also note that this is not a lexical analyser. Many people confuse
 tokenization with lexical analysis. A tokenizer merely splits its input
 into specific chunks; a lexical analyzer classifies those chunks. Sometimes
 these two steps are combined, but not here.
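As a quick illustration of the splitting behaviour the description covers, here is a minimal sketch using the module's documented constructor and getTokens method; the input string and delimiter set are arbitrary examples, not anything shipped with the package:

```perl
use strict;
use warnings;
use String::Tokenizer;

# Tokenize a small expression, treating the characters "()+*"
# as delimiters that are returned alongside the other tokens.
# Whitespace is used only to separate tokens and is discarded.
my $tokenizer = String::Tokenizer->new("(5 + 2) * 3", "()+*");

# Print the resulting token list, one token per line.
print "$_\n" for $tokenizer->getTokens();
```

With this input the delimiters "(", ")", "+", and "*" should each come back as their own token, interleaved with "5", "2", and "3", which is the "returned with the token set" behaviour the description refers to.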