File: ptb-tokenizer.Rd

% Generated by roxygen2: do not edit by hand
% Please edit documentation in R/ptb-tokenizer.R
\name{tokenize_ptb}
\alias{tokenize_ptb}
\title{Penn Treebank Tokenizer}
\usage{
tokenize_ptb(x, lowercase = FALSE, simplify = FALSE)
}
\arguments{
\item{x}{A character vector or a list of character vectors to be tokenized.
If \code{x} is a character vector, it can be of any length, and each element
will be tokenized separately. If \code{x} is a list of character vectors,
each element of the list should have a length of 1.}

\item{lowercase}{Should the tokens be made lower case?}

\item{simplify}{\code{FALSE} by default so that a consistent value is
returned regardless of length of input. If \code{TRUE}, then an input with
a single element will return a character vector of tokens instead of a
list.}
}
\value{
A list of character vectors containing the tokens, with one element
  in the list for each element that was passed as input. If \code{simplify =
  TRUE} and only a single element was passed as input, then the output is a
  character vector of tokens.
}
\description{
This function implements the Penn Treebank word tokenizer.
}
\details{
This tokenizer uses regular expressions to tokenize text in a manner
  similar to the tokenization used in the Penn Treebank. It assumes that
  text has already been split into sentences. The tokenizer does the
  following:

  \itemize{ \item{splits common English contractions, e.g. \verb{don't} is
  tokenized into \verb{do n't} and \verb{they'll} is tokenized into
  \verb{they 'll},} \item{handles punctuation characters as separate tokens,}
  \item{splits commas and single quotes off from words when they are
  followed by whitespace,} \item{splits off periods that occur at the end of
  the sentence.} }
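
As an illustrative sketch of the rules above (the tokens shown follow the
  documented splitting behavior and are indicative, not guaranteed verbatim
  output):

  \preformatted{
  tokenize_ptb("They don't know.")
  ## a list containing one character vector, roughly:
  ## "They" "do" "n't" "know" "."
  }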

This function is a port of the Python NLTK version of the Penn
  Treebank Tokenizer.
}
\examples{
song <- list(paste0("How many roads must a man walk down\n",
                    "Before you call him a man?"),
             paste0("How many seas must a white dove sail\n",
                    "Before she sleeps in the sand?\n"),
             paste0("How many times must the cannonballs fly\n",
                    "Before they're forever banned?\n"),
             "The answer, my friend, is blowin' in the wind.",
             "The answer is blowin' in the wind.")
tokenize_ptb(song)
tokenize_ptb(c("Good muffins cost $3.88\nin New York. Please buy me\ntwo of them.",
  "They'll save and invest more.",
  "Hi, I can't say hello."))
}
\references{
\href{https://www.nltk.org/_modules/nltk/tokenize/treebank.html#TreebankWordTokenizer}{NLTK
TreebankWordTokenizer}
}