File: token_regexp.rst

.. -*- rst -*-

.. highlightlang:: none

.. groonga-command
.. database: tokenizers

.. _token-regexp:

``TokenRegexp``
===============

Summary
-------

.. versionadded:: 5.0.1

.. caution::

   This tokenizer is experimental. Its specification may change.

.. caution::

   This tokenizer can be used only with UTF-8. You can't use it with
   EUC-JP, Shift_JIS, and so on.

``TokenRegexp`` is a tokenizer that supports regular expression
search via an index.

Syntax
------

``TokenRegexp`` has no parameters::

  TokenRegexp

Usage
-----

In general, a regular expression search is evaluated as a sequential
search. But the following cases can be evaluated as an index search:

  * A literal-only pattern such as ``hello``
  * A literal anchored to the beginning of text, such as ``\A/home/alice``
  * A literal anchored to the end of text, such as ``\.txt\z``

In most cases, index search is faster than sequential search.
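
For example (a minimal sketch: ``Memos``, its ``content`` column, and
the ``RegexpLexicon`` lexicon are placeholder names, not part of this
reference), you can enable index search by creating a lexicon table
that uses ``TokenRegexp`` as its default tokenizer::

  table_create Memos TABLE_NO_KEY
  column_create Memos content COLUMN_SCALAR Text
  table_create RegexpLexicon TABLE_PAT_KEY ShortText --default_tokenizer TokenRegexp --normalizer NormalizerAuto
  column_create RegexpLexicon memos_content_index COLUMN_INDEX|WITH_POSITION Memos content

With such an index, the following search can be evaluated as an index
search because the pattern is a literal anchored to the end of text::

  select Memos --filter 'content @~ "\\.txt\\z"'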

``TokenRegexp`` is based on the bigram tokenization method. When you
index text, ``TokenRegexp`` adds a beginning-of-text mark (``U+FFEF``)
at the beginning of the text and an end-of-text mark (``U+FFF0``) at
the end of the text:

.. groonga-command
.. include:: ../../example/reference/tokenizers/token-regexp-add.log
.. tokenize TokenRegexp "/home/alice/test.txt" NormalizerAuto --mode ADD