File: mlt-query.asciidoc

[[query-dsl-mlt-query]]
=== More Like This Query

The More Like This Query (MLT Query) finds documents that are "like" a given
set of documents. To do so, MLT selects a set of representative terms from
these input documents, forms a query using these terms, executes the query and
returns the results. The user controls the input documents, how the terms are
selected and how the query is formed. `more_like_this` can be shortened to
`mlt`.

The simplest use case consists of asking for documents that are similar to a
provided piece of text. Here, we are asking for all movies that have some text
similar to "Once upon a time" in their "title" and in their "description"
fields, limiting the number of selected terms to 12.

[source,js]
--------------------------------------------------
{
    "more_like_this" : {
        "fields" : ["title", "description"],
        "like_text" : "Once upon a time",
        "min_term_freq" : 1,
        "max_query_terms" : 12
    }
}
--------------------------------------------------

Another use case consists of asking for similar documents to ones already
existing in the index. In this case, the syntax to specify a document is
similar to the one used in the <<docs-multi-get,Multi GET API>>.

[source,js]
--------------------------------------------------
{
    "more_like_this" : {
        "fields" : ["title", "description"],
        "docs" : [
        {
            "_index" : "imdb",
            "_type" : "movies",
            "_id" : "1"
        },
        {
            "_index" : "imdb",
            "_type" : "movies",
            "_id" : "2"
        }],
        "min_term_freq" : 1,
        "max_query_terms" : 12
    }
}
--------------------------------------------------

Finally, users can also provide documents not necessarily
present in the index, using a syntax similar to
<<docs-termvectors-artificial-doc,artificial documents>>.

[source,js]
--------------------------------------------------
{
    "more_like_this" : {
        "fields" : ["name.first", "name.last"],
        "docs" : [
        {
            "_index" : "marvel",
            "_type" : "quotes",
            "doc" : {
                "name": {
                    "first": "Ben",
                    "last": "Grimm"
                },
                "tweet": "You got no idea what I'd... what I'd give to be invisible."
              }
            }
        },
        {
            "_index" : "marvel",
            "_type" : "quotes",
            "_id" : "2"
        }
        ],
        "min_term_freq" : 1,
        "max_query_terms" : 12
    }
}
--------------------------------------------------

==== How it Works

Suppose we wanted to find all documents similar to a given input document.
Obviously, the input document itself should be the best match for that type of
query, and, according to the
link:https://lucene.apache.org/core/4_9_0/core/org/apache/lucene/search/similarities/TFIDFSimilarity.html[Lucene scoring formula],
this is mostly due to the terms with the highest tf-idf. Therefore, the terms
of the input document with the highest tf-idf are good representatives of that
document, and can be used within a disjunctive query (an `OR` of terms) to
retrieve similar documents. The MLT query simply extracts the text from the
input document, analyzes it, usually with the same analyzer as the field, and
then selects the top K terms with the highest tf-idf to form a disjunctive
query of these terms.
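
For illustration, an MLT query over a short piece of text behaves roughly like
a hand-written disjunction of its most significant terms. The following `bool`
query is only a sketch of that idea, not the exact query built internally, and
the terms shown are hypothetical top tf-idf picks for the "description" field:

[source,js]
--------------------------------------------------
{
    "bool" : {
        "should" : [
            { "term" : { "description" : "kingdom" } },
            { "term" : { "description" : "dragon" } },
            { "term" : { "description" : "princess" } }
        ]
    }
}
--------------------------------------------------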

IMPORTANT: The fields on which to perform MLT must be indexed and of type
`string`. Additionally, when specifying input documents with `docs` or `ids`,
either `_source` must be enabled or the fields must be `stored` or have
`term_vector` enabled. Storing term vectors at index time can speed up
analysis, but at the expense of disk usage.

For example, if we wish to perform MLT on the "title" and "tags.raw" fields,
we can explicitly store their `term_vector` at index time. We can still
perform MLT on the "description" and "tags" fields, as `_source` is enabled by
default, but there will be no speed-up in analysis for these fields.

[source,js]
--------------------------------------------------
curl -s -XPUT 'http://localhost:9200/imdb/' -d '{
  "mappings": {
    "movies": {
      "properties": {
        "title": {
          "type": "string",
          "term_vector": "yes"
        },
        "description": {
          "type": "string"
        },
        "tags": {
          "type": "string",
          "fields" : {
            "raw": {
              "type" : "string",
              "index" : "not_analyzed",
              "term_vector" : "yes"
            }
          }
        }
      }
    }
  }
}'
--------------------------------------------------
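
With such a mapping in place, and assuming a hypothetical `imdb` index already
populated with `movies` documents, an MLT query that benefits from the stored
term vectors on "title" and "tags.raw" might look like this:

[source,js]
--------------------------------------------------
curl -s -XGET 'http://localhost:9200/imdb/movies/_search' -d '{
  "query": {
    "more_like_this": {
      "fields": ["title", "tags.raw"],
      "docs": [
        { "_index": "imdb", "_type": "movies", "_id": "1" }
      ],
      "min_term_freq": 1,
      "max_query_terms": 12
    }
  }
}'
--------------------------------------------------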

==== Parameters

The only required parameter is one of `docs`, `ids` or `like_text`; all other
parameters have sensible defaults. There are three types of parameters: those
that specify the document input, those that control term selection, and those
that control query formation.

[float]
==== Document Input Parameters

[horizontal]
`docs`::
A list of documents whose content is used to find similar documents. The
syntax to specify documents is similar to the one used by the
<<docs-multi-get,Multi GET API>>. The text is fetched from `fields` unless
overridden in each document request. The text is analyzed by the analyzer
associated with the field, but the analyzer can also be overridden, using a
syntax similar to the `per_field_analyzer` parameter of the
<<docs-termvectors-per-field-analyzer,Term Vectors API>>. Additionally, to
provide documents not necessarily present in the index,
<<docs-termvectors-artificial-doc,artificial documents>> are also supported.

`ids`::
A list of document ids. This is a shortcut for `docs` when `_index` and
`_type` are the same as those of the search request (see the example after
this list).

`like_text`::
The text for which to find similar documents. *Required* if neither `ids` nor
`docs` is specified.

`fields`::
A list of the fields to run the more like this query against. Defaults to the
`_all` field for `like_text` and to all possible fields for `ids` or `docs`.
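
For instance, the second example at the top of this page, which references
documents `1` and `2` of the `imdb` index, could be written more compactly
with `ids`, assuming the search request itself already targets the `imdb`
index and `movies` type:

[source,js]
--------------------------------------------------
{
    "more_like_this" : {
        "fields" : ["title", "description"],
        "ids" : ["1", "2"],
        "min_term_freq" : 1,
        "max_query_terms" : 12
    }
}
--------------------------------------------------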

[float]
==== Term Selection Parameters

[horizontal]
`max_query_terms`::
The maximum number of query terms that will be selected. Increasing this value
gives greater accuracy at the expense of query execution speed. Defaults to
`25`.

`min_term_freq`::
The minimum term frequency below which terms from the input document are
ignored. Defaults to `2`.

`min_doc_freq`::
The minimum document frequency below which terms from the input document are
ignored. Defaults to `5`.

`max_doc_freq`::
The maximum document frequency above which terms from the input document are
ignored. This can be useful to ignore highly frequent words such as stop
words. Defaults to unbounded (`0`).

`min_word_length`::
The minimum word length below which terms are ignored. Defaults to `0`.

`max_word_length`::
The maximum word length above which terms are ignored. Defaults to unbounded (`0`).

`stop_words`::
An array of stop words. Any word in this set is considered "uninteresting" and
ignored. If the analyzer does not remove stop words itself, you might want to
tell MLT to ignore them explicitly, as for the purposes of document similarity
it seems reasonable to assume that "a stop word is never interesting".

`analyzer`::
The analyzer used to analyze the free form text. Defaults to the analyzer
associated with the first field in `fields`. A query combining several of
these term selection parameters is sketched after this list.
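
As a rough sketch of how these parameters combine, the following query looks
for movies similar to a piece of text while ignoring rare terms, overly
frequent terms and an explicit stop word; the field names, thresholds and
analyzer are illustrative only:

[source,js]
--------------------------------------------------
{
    "more_like_this" : {
        "fields" : ["title", "description"],
        "like_text" : "Once upon a time in a faraway kingdom",
        "min_term_freq" : 1,
        "min_doc_freq" : 2,
        "max_doc_freq" : 10000,
        "max_query_terms" : 25,
        "stop_words" : ["upon"],
        "analyzer" : "english"
    }
}
--------------------------------------------------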

[float]
==== Query Formation Parameters

[horizontal]
`minimum_should_match`::
After the disjunctive query has been formed, this parameter controls the
number of terms that must match. The syntax is the same as for
<<query-dsl-minimum-should-match,minimum should match>>. Defaults to `"30%"`.

`percent_terms_to_match`:: deprecated[1.5.0,Replaced by `minimum_should_match`]

`boost_terms`::
Each term in the formed query can be further boosted by its tf-idf score. This
parameter sets the boost factor to use when the feature is enabled. Defaults
to deactivated (`0`). Any other positive value activates term boosting with
the given boost factor.

`include`::
Specifies whether the input documents should also be included in the search
results returned. Defaults to `false`.

`boost`::
Sets the boost value of the whole query. Defaults to `1.0`. A query combining
these query formation parameters is sketched below.
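
To tie these together, here is a sketch of a query that requires at least half
of the selected terms to match, boosts terms by their tf-idf score and doubles
the weight of the whole query; the document id and fields are illustrative
only:

[source,js]
--------------------------------------------------
{
    "more_like_this" : {
        "fields" : ["title", "description"],
        "ids" : ["1"],
        "min_term_freq" : 1,
        "max_query_terms" : 12,
        "minimum_should_match" : "50%",
        "boost_terms" : 1,
        "include" : false,
        "boost" : 2.0
    }
}
--------------------------------------------------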