File: advanced-config.asciidoc

[[advanced-config]]
=== Advanced configuration

The client supports many configuration options for setting up and managing
connections, configuring logging, customizing the transport library, and so on.

[discrete]
[[setting-hosts]]
==== Setting hosts

To connect to a specific {es} host:

```ruby
Elasticsearch::Client.new(host: 'search.myserver.com')
```

To connect to a host on a specific port:

```ruby
Elasticsearch::Client.new(host: 'myhost:8080')
```

To connect to multiple hosts:

```ruby
Elasticsearch::Client.new(hosts: ['myhost1', 'myhost2'])
```

Instead of strings, you can pass host information as an array of Hashes:

```ruby
Elasticsearch::Client.new(hosts: [ { host: 'myhost1', port: 8080 }, { host: 'myhost2', port: 8080 } ])
```

NOTE: When specifying multiple hosts, you probably want to enable the 
`retry_on_failure` or `retry_on_status` options to perform a failed request on 
another node (refer to <<retry-failures>>).

Common URL parts – scheme, HTTP authentication credentials, URL prefixes, and so 
on – are handled automatically:

```ruby
Elasticsearch::Client.new(url: 'https://username:password@api.server.org:4430/search')
```

You can pass multiple URLs separated by a comma:

```ruby
Elasticsearch::Client.new(urls: 'http://localhost:9200,http://localhost:9201')
```

Another way to configure the URL is to export the `ELASTICSEARCH_URL` environment variable.
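
For example, a minimal sketch of relying on the environment variable (the URL below is a placeholder for your own endpoint):

[source,ruby]
------------------------------------
# Setting the variable from Ruby is equivalent to exporting it in your shell
# (`export ELASTICSEARCH_URL=...`) before the process starts.
ENV['ELASTICSEARCH_URL'] = 'http://localhost:9200'

client = Elasticsearch::Client.new # no explicit host needed
------------------------------------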

The client automatically uses a round-robin algorithm across the hosts (unless 
you select or implement a different <<connection-selector>>).


[discrete]
[[default-port]]
==== Default port

The default port is `9200`. Specify a port for your host(s) if they differ from 
this default.

If you are using Elastic Cloud, the default port is `9243`. You must supply 
your username and password separately, and optionally a port. Refer to 
<<auth-ec>>.
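
For illustration, a sketch of specifying a non-default port explicitly as part of the host details (the host name and credentials below are placeholders):

[source,ruby]
------------------------------------
# Hypothetical endpoint and credentials, shown for illustration only
Elasticsearch::Client.new(
  host: {
    host:     'mycluster.example.org',
    port:     9243,
    scheme:   'https',
    user:     'elastic',
    password: 'my_password'
  }
)
------------------------------------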


[discrete]
[[logging]]
==== Logging

To log requests and responses to standard output with the default logger (an 
instance of Ruby's `::Logger` class), set the `log` argument to `true`:

```ruby
Elasticsearch::Client.new(log: true)
```

You can also use https://github.com/elastic/ecs-logging-ruby[`ecs-logging`] 
which is a set of libraries that enables you to transform your application logs 
to structured logs that comply with the 
https://www.elastic.co/guide/en/ecs/current/ecs-reference.html[Elastic Common Schema (ECS)]:

[source,ruby]
------------------------------------
logger = EcsLogging::Logger.new($stdout)
Elasticsearch::Client.new(logger: logger)
------------------------------------

To trace requests and responses in the Curl format, set the `trace` argument to `true`:

```ruby
Elasticsearch::Client.new(trace: true)
```

You can customize the default logger or tracer:

[source,ruby]
------------------------------------
client.transport.logger.formatter = proc { |s, d, p, m| "#{s}: #{m}\n" }
client.transport.logger.level = Logger::INFO
------------------------------------

Or, you can use a custom `::Logger` instance:

```ruby
Elasticsearch::Client.new(logger: Logger.new(STDERR))
```

You can pass the client any conforming logger implementation:

[source,ruby]
------------------------------------
require 'logging' # https://github.com/TwP/logging/

log = Logging.logger['elasticsearch']
log.add_appenders Logging.appenders.stdout
log.level = :info

client = Elasticsearch::Client.new(logger: log)
------------------------------------


[discrete]
[[apm-integration]]
==== APM integration

This client integrates seamlessly with Elastic APM via the Elastic APM Agent. It 
automatically captures client requests if you are using the agent in your code. 
If you're using `elastic-apm` v3.8.0 or later, you can set 
`capture_elasticsearch_queries` to `true` in `config/elastic_apm.yml` to also 
capture the body of requests sent to {es}. Refer to 
https://github.com/elastic/elasticsearch-ruby/tree/master/docs/examples/apm[this example].
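
For illustration, the relevant setting in `config/elastic_apm.yml` would look like this (a minimal sketch; your configuration file will typically contain other settings as well):

[source,yaml]
------------------------------------
# config/elastic_apm.yml
capture_elasticsearch_queries: true
------------------------------------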


[discrete]
[[custom-http-headers]]
==== Custom HTTP Headers

You can set a custom HTTP header on the client's initializer:

[source,ruby]
------------------------------------
client = Elasticsearch::Client.new(
  transport_options: {
    headers:
      {user_agent: "My App"}
  }
)
------------------------------------

You can also pass in `headers` as a parameter to any of the API endpoints to set 
custom headers for the request:

```ruby
client.search(index: 'myindex', q: 'title:test', headers: {user_agent: "My App"})
```


[discrete]
[[x-opaque-id]]
==== Identifying running tasks with X-Opaque-Id

The `X-Opaque-Id` header allows you to track certain calls, or associate certain 
tasks with the client that started them (refer to 
https://www.elastic.co/guide/en/elasticsearch/reference/current/tasks.html#_identifying_running_tasks[the documentation]). 
To use this feature, set the `opaque_id` parameter to an ID on each request. 
Example:

[source,ruby]
------------------------------------
client = Elasticsearch::Client.new
client.search(index: 'myindex', q: 'title:test', opaque_id: '123456')
------------------------------------

The search request includes the following HTTP header:

```
X-Opaque-Id: 123456
```

You can also set a prefix for `X-Opaque-Id` when initializing the client. The 
prefix is prepended to the `opaque_id` you set on each request. 
Example:

[source,ruby]
------------------------------------
client = Elasticsearch::Client.new(opaque_id_prefix: 'eu-west1_')
client.search(index: 'myindex', q: 'title:test', opaque_id: '123456')
------------------------------------

The request includes the following HTTP header:

```
X-Opaque-Id: eu-west1_123456
```


[discrete]
[[setting-timeouts]]
==== Setting Timeouts

For many operations in {es}, the default timeouts of HTTP libraries are too low. 
To increase the timeout, you can use the `request_timeout` parameter:

```ruby
Elasticsearch::Client.new(request_timeout: 5*60)
```

You can also set the timeout through the `transport_options` argument.
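
For example, a sketch of setting the timeout via `transport_options`, which is passed to the underlying HTTP library (Faraday-style request options are assumed here):

[source,ruby]
------------------------------------
# Equivalent to request_timeout: 5*60, expressed via transport_options
Elasticsearch::Client.new(
  transport_options: { request: { timeout: 5*60 } }
)
------------------------------------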


[discrete]
[[randomizing-hosts]]
==== Randomizing Hosts

If you pass multiple hosts to the client, it rotates across them in a 
round-robin fashion by default. When the same client runs in multiple processes 
(for example, in a Ruby web server such as Thin), the processes might all keep 
connecting to the same nodes "at once". To prevent this, you can randomize the 
hosts collection on initialization and reloading:

```ruby
Elasticsearch::Client.new(hosts: ['localhost:9200', 'localhost:9201'], randomize_hosts: true)
```


[discrete]
[[retry-failures]]
==== Retrying on Failures

When the client is initialized with multiple hosts, it makes sense to retry a 
failed request on a different host:

```ruby
Elasticsearch::Client.new(hosts: ['localhost:9200', 'localhost:9201'], retry_on_failure: true)
```

By default, the client retries the request 3 times. You can specify how many 
times to retry before it raises an exception by passing a number to 
`retry_on_failure`:

```ruby
Elasticsearch::Client.new(hosts: ['localhost:9200', 'localhost:9201'], retry_on_failure: 5)
```

You can also use `retry_on_status` to retry when specific status codes are 
returned:

```ruby
Elasticsearch::Client.new(hosts: ['localhost:9200', 'localhost:9201'], retry_on_status: [502, 503])
```

These two parameters can also be used together:

```ruby
Elasticsearch::Client.new(hosts: ['localhost:9200', 'localhost:9201'], retry_on_status: [502, 503], retry_on_failure: 10)
```

You can also set a `delay_on_retry` value in milliseconds. This adds a delay between retries:

```ruby
Elasticsearch::Client.new(hosts: ['localhost:9200', 'localhost:9201'], retry_on_failure: 5, delay_on_retry: 1000)
```

[discrete]
[[reload-hosts]]
==== Reloading Hosts

{es} dynamically discovers new nodes in the cluster by default. You can leverage 
this in the client and periodically check for new nodes to spread the load.

To retrieve and use the information from the 
https://www.elastic.co/guide/en/elasticsearch/reference/current/cluster-nodes-info.html[Nodes Info API] 
on every 10,000th request:

```ruby
Elasticsearch::Client.new(hosts: ['localhost:9200', 'localhost:9201'], reload_connections: true)
```

You can pass a specific number of requests after which reloading should be 
performed:

```ruby
Elasticsearch::Client.new(hosts: ['localhost:9200', 'localhost:9201'], reload_connections: 1_000)
```

To reload connections on failures, use:

```ruby
Elasticsearch::Client.new(hosts: ['localhost:9200', 'localhost:9201'], reload_on_failure: true)
```

By default, reloading times out if it is not finished within 1 second. To change 
this setting:

```ruby
Elasticsearch::Client.new(hosts: ['localhost:9200', 'localhost:9201'], sniffer_timeout: 3)
```

NOTE: When using host reloading ("sniffing") together with authentication, pass 
the scheme, user, and password with the host info – or, for more clarity, in the 
`http` options:

[source,ruby]
------------------------------------
Elasticsearch::Client.new(
  host: 'localhost:9200',
  http: { scheme: 'https', user: 'U', password: 'P' },
  reload_connections: true,
  reload_on_failure: true
)
------------------------------------


[discrete]
[[connection-selector]]
==== Connection Selector

By default, the client rotates the connections in a round-robin fashion, using 
the `Elasticsearch::Transport::Transport::Connections::Selector::RoundRobin` 
strategy.

You can implement your own strategy to customize the behaviour. For example, 
consider a "rack aware" strategy that prefers nodes with a specific attribute 
and falls back to the other nodes only when those are unavailable:

[source,ruby]
------------------------------------
class RackIdSelector
  include Elasticsearch::Transport::Transport::Connections::Selector::Base

  def select(options={})
    connections.select do |c|
      # Try selecting the nodes with a `rack_id:x1` attribute first
      c.host[:attributes] && c.host[:attributes][:rack_id] == 'x1'
    end.sample || connections.to_a.sample
  end
end

Elasticsearch::Client.new hosts: ['x1.search.org', 'x2.search.org'], selector_class: RackIdSelector
------------------------------------


[discrete]
[[serializer-implementations]]
==== Serializer Implementations

By default, the https://rubygems.org/gems/multi_json[MultiJSON] library is used 
as the serializer implementation, and it picks up the "right" adapter based on 
the gems available.

The serialization component is pluggable, though, so you can write your own by 
including the `Elasticsearch::Transport::Transport::Serializer::Base` module, 
implementing the required contract, and passing it to the client as the 
`serializer_class` or `serializer` parameter.
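
A minimal sketch of such a custom serializer, assuming the https://rubygems.org/gems/oj[Oj] gem as the JSON backend (the class name and gem choice are illustrative, not part of the library):

[source,ruby]
------------------------------------
require 'oj' # illustrative choice of JSON library

class OjSerializer
  include Elasticsearch::Transport::Transport::Serializer::Base

  # The assumed contract: `load` parses a JSON string into Ruby objects,
  # `dump` serializes a Ruby object into a JSON string.
  def load(string, options = {})
    Oj.load(string, options)
  end

  def dump(object, options = {})
    Oj.dump(object, options)
  end
end

Elasticsearch::Client.new(serializer_class: OjSerializer)
------------------------------------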


[discrete]
[[exception-handling]]
==== Exception Handling

The library defines a 
https://github.com/elastic/elasticsearch-ruby/blob/master/elasticsearch-transport/lib/elasticsearch/transport/transport/errors.rb[number of exception classes] 
for various client and server errors, as well as unsuccessful HTTP responses, 
making it possible to rescue specific exceptions with desired granularity.

The highest-level exception is `Elasticsearch::Transport::Transport::Error`, 
which is raised for any generic client or server error.

`Elasticsearch::Transport::Transport::ServerError` is raised for server errors 
only.

As an example of a response-specific error, a 404 response status raises an 
`Elasticsearch::Transport::Transport::Errors::NotFound` exception.

Finally, `Elasticsearch::Transport::Transport::SnifferTimeoutError` is raised 
when connection reloading ("sniffing") times out.
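
For example, a sketch of rescuing these exceptions with different granularity (the index name and document ID are placeholders):

[source,ruby]
------------------------------------
client = Elasticsearch::Client.new

begin
  client.get(index: 'myindex', id: 'missing-id')
rescue Elasticsearch::Transport::Transport::Errors::NotFound
  # The document (or index) does not exist: handle the 404 specifically
  puts 'Not found'
rescue Elasticsearch::Transport::Transport::ServerError => e
  # Any other unsuccessful HTTP response from the server
  puts "Server error: #{e.message}"
rescue Elasticsearch::Transport::Transport::Error => e
  # Generic client or server error
  puts "Error: #{e.message}"
end
------------------------------------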