File: introduction-to-tokenizers.Rmd

---
title: "Introduction to the tokenizers Package"
author: "Lincoln Mullen"
output: rmarkdown::html_vignette
vignette: >
  %\VignetteIndexEntry{Introduction to the tokenizers Package}
  %\VignetteEngine{knitr::rmarkdown}
  %\VignetteEncoding{UTF-8}
---

```{r setup, include = FALSE}
knitr::opts_chunk$set(
  collapse = TRUE,
  comment = "#>"
)
```

## Package overview

In natural language processing, tokenization is the process of breaking human-readable text into machine-readable components. The most obvious way to tokenize a text is to split it into words. But there are many other ways to tokenize a text, the most useful of which are provided by this package.

The tokenizers in this package have a consistent interface. They all take either a character vector of any length, or a list where each element is a character vector of length one. The idea is that each element comprises a text. Then each function returns a list with the same length as the input vector, where each element in the list contains the tokens generated by the function. If the input character vector or list is named, then the names are preserved, so that the names can serve as identifiers.

Using the following sample text, the rest of this vignette demonstrates the different kinds of tokenizers in this package.

```{r}
library(tokenizers)
options(max.print = 25)

james <- paste0(
  "The question thus becomes a verbal one\n",
  "again; and our knowledge of all these early stages of thought and feeling\n",
  "is in any case so conjectural and imperfect that farther discussion would\n",
  "not be worth while.\n",
  "\n",
  "Religion, therefore, as I now ask you arbitrarily to take it, shall mean\n",
  "for us _the feelings, acts, and experiences of individual men in their\n",
  "solitude, so far as they apprehend themselves to stand in relation to\n",
  "whatever they may consider the divine_. Since the relation may be either\n",
  "moral, physical, or ritual, it is evident that out of religion in the\n",
  "sense in which we take it, theologies, philosophies, and ecclesiastical\n",
  "organizations may secondarily grow.\n"
)
```
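
As noted above, the tokenizers share a consistent interface, so a named input produces a named output. Here is a minimal sketch of that behavior; the vector `named_texts` and its element names are made up purely for illustration.

```{r}
# A named character vector yields a named list of token vectors
named_texts <- c(first = "One sentence here.", second = "And another one.")
tokenize_words(named_texts)
```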

## Character and character-shingle tokenizers

The character tokenizer splits texts into individual characters. 

```{r}
tokenize_characters(james)[[1]] 
```

You can also tokenize into character shingles, that is, n-grams of characters.

```{r}
tokenize_character_shingles(james, n = 3, n_min = 3, 
                            strip_non_alphanum = FALSE)[[1]][1:20]
```

## Word and word-stem tokenizers

The word tokenizer splits texts into words. 

```{r}
tokenize_words(james)
```

Word stemming is provided by the [SnowballC](https://cran.r-project.org/package=SnowballC) package.

```{r}
tokenize_word_stems(james)
```

You can also provide a vector of stopwords, which will be omitted from the tokens. The [stopwords package](https://github.com/quanteda/stopwords), which contains stopwords for many languages from several sources, is recommended. This argument also works with the n-gram and skip n-gram tokenizers.

```{r}
library(stopwords)
tokenize_words(james, stopwords = stopwords::stopwords("en"))
```

An alternative word tokenizer often used in NLP is the Penn Treebank tokenizer, which preserves punctuation and separates common English contractions.

```{r}
tokenize_ptb(james)
```

## N-gram and skip n-gram tokenizers

An n-gram is a contiguous sequence of words. The n-gram tokenizer generates all n-grams containing at least `n_min` and at most `n` words, omitting stopwords if desired.

```{r}
tokenize_ngrams(james, n = 5, n_min = 2,
                stopwords = stopwords::stopwords("en"))
```

A skip n-gram is like an n-gram in that it takes the same `n` and `n_min` parameters. But rather than returning only contiguous sequences of words, the skip n-gram tokenizer also returns n-grams whose words are separated by gaps of between `0` and `k` skipped words. This function generates all such sequences, again omitting stopwords if desired. Note that the number of tokens returned can be very large.

```{r}
tokenize_skip_ngrams(james, n = 5, n_min = 2, k = 2,
                     stopwords = stopwords::stopwords("en"))
```
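
To make the note about output size concrete, the following sketch (not part of the original example) counts the skip n-grams generated from this short passage, using the same parameters as above.

```{r}
# Even for a short passage, the number of skip n-grams far exceeds the word count
length(tokenize_skip_ngrams(james, n = 5, n_min = 2, k = 2,
                            stopwords = stopwords::stopwords("en"))[[1]])
```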

## Sentence and paragraph tokenizers

Sometimes it is desirable to split texts into sentences or paragraphs prior to tokenizing into other forms.

```{r, collapse=FALSE}
tokenize_sentences(james) 
tokenize_paragraphs(james)
```
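
Because every tokenizer accepts a character vector, the sentences returned by `tokenize_sentences()` can be passed straight to another tokenizer. A brief sketch of this kind of chaining; the intermediate variable `sentences` is just for illustration.

```{r}
# Split into sentences first, then tokenize each sentence into words
sentences <- tokenize_sentences(james)[[1]]
tokenize_words(sentences)
```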

## Text chunking

When a document is very long, it is sometimes desirable to split it into smaller chunks of roughly equal length. The `chunk_text()` function splits a document into chunks of approximately `chunk_size` words and gives each chunk an ID that records its order. These chunks can then be further tokenized.

```{r}
chunks <- chunk_text(mobydick, chunk_size = 100, doc_id = "mobydick")
length(chunks)
chunks[5:6]
tokenize_words(chunks[5:6])
```
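
As a quick sanity check (a sketch added here, not part of the original code), the word counts per chunk should be close to the requested `chunk_size`.

```{r}
# Word counts per chunk should be roughly chunk_size = 100
lengths(tokenize_words(chunks[5:6]))
```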

## Counting words, characters, sentences

The package also offers functions for counting words, characters, and sentences. They take the same input as the tokenizers and return one count per document, so they work nicely with the rest of the functions.

```{r}
count_words(mobydick)
count_characters(mobydick)
count_sentences(mobydick)
```
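
Like the tokenizers, the counting functions are vectorized over documents, so they should also work on the chunks created above (a small sketch, one count per chunk).

```{r}
# One word count per chunk
count_words(chunks[5:6])
```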