.. -*- rst -*-

.. highlightlang:: none

.. groonga-command
.. database: tokenizers

.. _token-bigram:

``TokenBigram``
===============

Summary
-------

``TokenBigram`` is a bigram based tokenizer. It is the recommended
tokenizer for most cases.

The bigram tokenize method splits a text into tokens of two adjacent
characters. For example, ``Hello`` is tokenized into the following tokens:

  * ``He``
  * ``el``
  * ``ll``
  * ``lo``
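
You can reproduce the token list above with the ``tokenize`` command.
The following is a minimal sketch; the output is omitted here because
its exact format depends on your Groonga version::

  tokenize TokenBigram "Hello"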

The bigram tokenize method is good for recall because you can find all
texts that contain a query of two or more characters.

In general, a one character query can't find all texts because no one
character tokens exist. But in Groonga even a one character query can
find all texts, because Groonga uses predictive search to find tokens
that start with the query. For example, the query ``l`` finds the
tokens ``ll`` and ``lo``.
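
Here is a minimal sketch of how a one character query could be used
against a ``TokenBigram`` indexed column. The schema is hypothetical:
the names ``Entries``, ``content`` and ``Terms`` are placeholders, not
part of ``TokenBigram`` itself, and the output is omitted. No
normalizer is set on the lexicon so that the pure bigram tokens such as
``ll`` and ``lo`` described above are used::

  table_create Entries TABLE_NO_KEY
  column_create Entries content COLUMN_SCALAR ShortText
  table_create Terms TABLE_PAT_KEY ShortText --default_tokenizer TokenBigram
  column_create Terms entries_content COLUMN_INDEX|WITH_POSITION Entries content

  load --table Entries
  [
  {"content": "Hello World"}
  ]

  select Entries --match_columns content --query l

The one character query ``l`` can still match ``Hello World`` because
Groonga searches for tokens that start with ``l`` such as ``ll`` and
``lo``.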

The bigram tokenize method isn't good for precision because a query
also matches texts that only contain it inside a word. For example,
the query ``or`` finds ``world``. This is more noticeable for ASCII
only languages than for non-ASCII languages. ``TokenBigram`` has a
solution for this problem, described below.
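
You can see where this imprecision comes from by tokenizing the word
itself. In the following sketch (output omitted), ``or`` appears among
the bigram tokens of ``world`` (``wo``, ``or``, ``rl``, ``ld``), so a
search for ``or`` also reaches texts that only contain ``world``::

  tokenize TokenBigram "world"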

Syntax
------

``TokenBigram`` has no parameters::

  TokenBigram
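
Although the tokenizer itself takes no parameters, you typically select
it when creating a lexicon table. The table name ``Terms`` below is
just an example::

  table_create Terms TABLE_PAT_KEY ShortText --default_tokenizer TokenBigram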

Usage
-----

``TokenBigram``'s behavior changes when it is used together with any of
the :doc:`/reference/normalizers`.

If no normalizer is used, ``TokenBigram`` uses the pure bigram tokenize
method (every token except the last one has two characters):

.. groonga-command
.. include:: ../../example/reference/tokenizers/token-bigram-no-normalizer.log
.. tokenize TokenBigram "Hello World"

If a normalizer is used, ``TokenBigram`` uses a white-space-separation-like
tokenize method for ASCII characters and the bigram tokenize method for
non-ASCII characters.

This combined behavior may be confusing, but it is reasonable for most
use cases, such as English text (ASCII characters only) and Japanese
text (a mix of ASCII and non-ASCII characters).

Most languages that consist only of ASCII characters use white-space as
the word separator. The white-space-separation-like tokenize method is
suitable for that case.

Languages that consist of non-ASCII characters don't use white-space as
the word separator. The bigram tokenize method is suitable for that case.

The mixed tokenize method is suitable for the mixed language case.

If you want to use the bigram tokenize method for ASCII characters as
well, see the ``TokenBigramSplitXXX`` type tokenizers such as
:ref:`token-bigram-split-symbol-alpha`.
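
For example, the following sketch (output omitted) uses
``TokenBigramSplitSymbolAlpha``, which keeps the bigram tokenize method
for symbols and alphabet characters even when a normalizer is used::

  tokenize TokenBigramSplitSymbolAlpha "Hello World" NormalizerAuto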

Let's confirm ``TokenBigram``'s behavior with examples.

``TokenBigram`` uses one or more white-spaces as the token delimiter
for ASCII characters:

.. groonga-command
.. include:: ../../example/reference/tokenizers/token-bigram-ascii-and-white-space-with-normalizer.log
.. tokenize TokenBigram "Hello World" NormalizerAuto

``TokenBigram`` also uses a character type change as a token delimiter
for ASCII characters. The character type is one of the following:

  * Alphabet
  * Digit
  * Symbol (such as ``(``, ``)`` and ``!``)
  * Hiragana
  * Katakana
  * Kanji
  * Others

The following example shows two token delimiters:

  * between ``100`` (digits) and ``cents`` (alphabets)
  * between ``cents`` (alphabets) and ``!!!`` (symbols)

.. groonga-command
.. include:: ../../example/reference/tokenizers/token-bigram-ascii-and-character-type-change-with-normalizer.log
.. tokenize TokenBigram "100cents!!!" NormalizerAuto

Here is an example showing that ``TokenBigram`` uses the bigram
tokenize method for non-ASCII characters:

.. groonga-command
.. include:: ../../example/reference/tokenizers/token-bigram-non-ascii-with-normalizer.log
.. tokenize TokenBigram "日本語の勉強" NormalizerAuto
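
Finally, here is a sketch of the mixed language case (output omitted).
The example text is only illustrative: with a normalizer, the ASCII
part would be tokenized as a word and the non-ASCII part by the bigram
tokenize method::

  tokenize TokenBigram "Hello 世界" NormalizerAuto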