---
title: Optimizers
next: /docs/api-initializers
---

An optimizer essentially performs stochastic gradient descent. It takes
one-dimensional arrays for the weights and their gradients, along with an
optional identifier key. The optimizer is expected to update the weights and
zero the gradients in place. The optimizers are registered in the
[function registry](/docs/api-config#registry) and can also be used via Thinc's
[config mechanism](/docs/usage-config).
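
In practice you usually won't call the optimizer on raw arrays yourself: after
the backward pass, `Model.finish_update` calls the optimizer once per parameter,
passing the `(node_id, name)` key along with the weights and gradients. Below is
a minimal sketch of a single training step, using randomly generated placeholder
data and a small stand-in model.

```python
### Example
import numpy
from thinc.api import Adam, Relu, Softmax, chain

# Placeholder data: 8 examples, 4 features, 2 one-hot classes.
X = numpy.random.uniform(size=(8, 4)).astype("float32")
Y = numpy.zeros((8, 2), dtype="float32")
Y[numpy.arange(8), numpy.random.randint(0, 2, 8)] = 1.0

model = chain(Relu(nO=16), Softmax(nO=2))
model.initialize(X=X, Y=Y)
optimizer = Adam(0.001)

Yh, backprop = model.begin_update(X)
backprop(Yh - Y)                # gradient of the loss w.r.t. the output
model.finish_update(optimizer)  # calls the optimizer for each parameter
```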

## Optimizer functions

### SGD {#sgd tag="function"}

Function to create an SGD optimizer. Returns an instance of
[`Optimizer`](#optimizer). If a hyperparameter specifies a schedule,
the step that is passed to the schedule will be incremented on each call to
[`Optimizer.step_schedules`](#step-schedules).

<grid>

```python
### Example {small="true"}
from thinc.api import SGD

optimizer = SGD(
    learn_rate=0.001,
    L2=1e-6,
    grad_clip=1.0
)
```

```ini
### config.cfg {small="true"}
[optimizer]
@optimizers = SGD.v1
learn_rate = 0.001
L2 = 1e-6
L2_is_weight_decay = true
grad_clip = 1.0
use_averages = true
```

</grid>

| Argument             | Type                                          | Description                                                                                        |
| -------------------- | --------------------------------------------- | -------------------------------------------------------------------------------------------------- |
| `learn_rate`         | <tt>Union[float, List[float], Generator]</tt> | The initial learning rate.                                                                         |
| _keyword-only_       |                                               |                                                                                                    |
| `L2`                 | <tt>Union[float, List[float], Generator]</tt> | The L2 regularization term.                                                                        |
| `grad_clip`          | <tt>Union[float, List[float], Generator]</tt> | Gradient clipping.                                                                                 |
| `use_averages`       | <tt>bool</tt>                                 | Whether to track moving averages of the parameters.                                                |
| `L2_is_weight_decay` | <tt>bool</tt>                                 | Whether to interpret the L2 parameter as a weight decay term, in the style of the AdamW optimizer. |
| `ops`                | <tt>Optional[Ops]</tt>                        | A backend object. Defaults to the currently selected backend.                                      |
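
When used through the config system, the block above is resolved with the
function registry, which calls `SGD.v1` and returns the optimizer instance. A
minimal sketch of resolving such a config in code:

```python
### Example
from thinc.api import Config, registry

CONFIG = """
[optimizer]
@optimizers = SGD.v1
learn_rate = 0.001
L2 = 1e-6
grad_clip = 1.0
"""
config = Config().from_str(CONFIG)
resolved = registry.resolve(config)
optimizer = resolved["optimizer"]
```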

### Adam {#adam tag="function"}

Function to create an Adam optimizer. Returns an instance of
[`Optimizer`](#optimizer). If a hyperparameter specifies a schedule, the step
that is passed to the schedule will be incremented on each call to
[`Optimizer.step_schedules`](#step-schedules).

<grid>

```python
### Example {small="true"}
from thinc.api import Adam

optimizer = Adam(
    learn_rate=0.001,
    beta1=0.9,
    beta2=0.999,
    eps=1e-08,
    L2=1e-6,
    grad_clip=1.0,
    use_averages=True,
    L2_is_weight_decay=True
)
```

```ini
### config.cfg {small="true"}
[optimizer]
@optimizers = Adam.v1
learn_rate = 0.001
beta1 = 0.9
beta2 = 0.999
eps = 1e-08
L2 = 1e-6
L2_is_weight_decay = true
grad_clip = 1.0
use_averages = true
```

</grid>

| Argument             | Type                                          | Description                                                                                        |
| -------------------- | --------------------------------------------- | -------------------------------------------------------------------------------------------------- |
| `learn_rate`         | <tt>Union[float, List[float], Generator]</tt> | The initial learning rate.                                                                         |
| _keyword-only_       |                                               |                                                                                                    |
| `L2`                 | <tt>Union[float, List[float], Generator]</tt> | The L2 regularization term.                                                                        |
| `beta1`              | <tt>Union[float, List[float], Generator]</tt> | First-order momentum.                                                                              |
| `beta2`              | <tt>Union[float, List[float], Generator]</tt> | Second-order momentum.                                                                             |
| `eps`                | <tt>Union[float, List[float], Generator]</tt> | Epsilon term for the Adam update.                                                                  |
| `grad_clip`          | <tt>Union[float, List[float], Generator]</tt> | Gradient clipping.                                                                                 |
| `use_averages`       | <tt>bool</tt>                                 | Whether to track moving averages of the parameters.                                                |
| `L2_is_weight_decay` | <tt>bool</tt>                                 | Whether to interpret the L2 parameter as a weight decay term, in the style of the AdamW optimizer. |
| `ops`                | <tt>Optional[Ops]</tt>                        | A backend object. Defaults to the currently selected backend.                                      |
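
Any of the float hyperparameters can also be given as a schedule instead of a
constant. A small sketch using the built-in `warmup_linear` schedule: the
learning rate changes each time `Optimizer.step_schedules` is called.

```python
### Example
from thinc.api import Adam, warmup_linear

# Warm up over 1000 steps, then decay linearly towards step 10000.
optimizer = Adam(learn_rate=warmup_linear(0.001, 1000, 10000))
for _ in range(5):
    optimizer.step_schedules()
print(optimizer.learn_rate)  # the schedule's value at the current step
```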

### RAdam {#radam tag="function"}

Function to create an RAdam optimizer. Returns an instance of
[`Optimizer`](#optimizer). If a hyperparameter specifies a schedule, the step
that is passed to the schedule will be incremented on each call to
[`Optimizer.step_schedules`](#step-schedules).

<grid>

```python
### Example {small="true"}
from thinc.api import RAdam

optimizer = RAdam(
    learn_rate=0.001,
    beta1=0.9,
    beta2=0.999,
    eps=1e-08,
    weight_decay=1e-6,
    grad_clip=1.0,
    use_averages=True,
)
```

```ini
### config.cfg {small="true"}
[optimizer]
@optimizers = RAdam.v1
learn_rate = 0.001
beta1 = 0.9
beta2 = 0.999
eps = 1e-08
weight_decay = 1e-6
grad_clip = 1.0
use_averages = true
```

</grid>

| Argument       | Type                                          | Description                                                   |
| -------------- | --------------------------------------------- | ------------------------------------------------------------- |
| `learn_rate`   | <tt>Union[float, List[float], Generator]</tt> | The initial learning rate.                                    |
| _keyword-only_ |                                               |                                                               |
| `beta1`        | <tt>Union[float, List[float], Generator]</tt> | First-order momentum.                                         |
| `beta2`        | <tt>Union[float, List[float], Generator]</tt> | Second-order momentum.                                        |
| `eps`          | <tt>Union[float, List[float], Generator]</tt> | Epsilon term for the RAdam update.                            |
| `weight_decay` | <tt>Union[float, List[float], Generator]</tt> | Weight decay term.                                            |
| `grad_clip`    | <tt>Union[float, List[float], Generator]</tt> | Gradient clipping.                                            |
| `use_averages` | <tt>bool</tt>                                 | Whether to track moving averages of the parameters.           |
| `ops`          | <tt>Optional[Ops]</tt>                        | A backend object. Defaults to the currently selected backend. |

---

## Optimizer {tag="class"}

Performs various flavors of stochastic gradient descent, with first- and
second-order momentum. Currently supports "vanilla" SGD, Adam, and RAdam.

### Optimizer.\_\_init\_\_ {#init tag="method"}

Initialize an optimizer. If a hyperparameter specifies a schedule, the step that
is passed to the schedule will be incremented on each call to
[`Optimizer.step_schedules`](#step-schedules).

```python
### Example
from thinc.api import Optimizer

optimizer = Optimizer(learn_rate=0.001, L2=1e-6, grad_clip=1.0)
```

| Argument             | Type                                          | Description                                                                                        |
| -------------------- | --------------------------------------------- | -------------------------------------------------------------------------------------------------- |
| `learn_rate`         | <tt>Union[float, List[float], Generator]</tt> | The initial learning rate.                                                                         |
| _keyword-only_       |                                               |                                                                                                    |
| `L2`                 | <tt>Union[float, List[float], Generator]</tt> | The L2 regularization term.                                                                        |
| `beta1`              | <tt>Union[float, List[float], Generator]</tt> | First-order momentum.                                                                              |
| `beta2`              | <tt>Union[float, List[float], Generator]</tt> | Second-order momentum.                                                                             |
| `eps`                | <tt>Union[float, List[float], Generator]</tt> | Epsilon term for Adam etc.                                                                         |
| `grad_clip`          | <tt>Union[float, List[float], Generator]</tt> | Gradient clipping.                                                                                 |
| `use_averages`       | <tt>bool</tt>                                 | Whether to track moving averages of the parameters.                                                |
| `use_radam`          | <tt>bool</tt>                                 | Whether to use the RAdam optimizer.                                                                |
| `L2_is_weight_decay` | <tt>bool</tt>                                 | Whether to interpret the L2 parameter as a weight decay term, in the style of the AdamW optimizer. |
| `ops`                | <tt>Optional[Ops]</tt>                        | A backend object. Defaults to the currently selected backend.                                      |
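
The boolean flags select between the supported update rules. For instance, the
following sketch configures the generic `Optimizer` to use the RAdam update and
to apply the L2 term as decoupled weight decay, similar in spirit to the
[`RAdam`](#radam) helper above:

```python
### Example
from thinc.api import Optimizer

# A sketch: RAdam-style updates with L2 interpreted as weight decay.
optimizer = Optimizer(
    learn_rate=0.001,
    L2=1e-6,
    grad_clip=1.0,
    use_radam=True,
    L2_is_weight_decay=True,
)
```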

### Optimizer.\_\_call\_\_ {#call tag="method"}

Call the optimizer function, updating parameters using the current parameter
gradients. The `key` is the identifier for the parameter, usually the node ID
and parameter name.

| Argument       | Type                     | Description                                   |
| -------------- | ------------------------ | --------------------------------------------- |
| `key`          | <tt>Tuple[int, str]</tt> | The parameter identifier.                     |
| `weights`      | <tt>FloatsXd</tt>        | The model's current weights.                  |
| `gradient`     | <tt>FloatsXd</tt>        | The model's current gradient.                 |
| _keyword-only_ |                          |                                               |
| `lr_scale`     | <tt>float</tt>           | Rescale the learning rate. Defaults to `1.0`. |
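
The optimizer can also be called directly on a single parameter. In the sketch
below the key and arrays are made up for illustration; per the description
above, the weights are updated and the gradient is zeroed, and the pair is
assumed to be returned so it can be reassigned.

```python
### Example
from thinc.api import Adam, NumpyOps

ops = NumpyOps()
optimizer = Adam(0.001)
# Made-up parameter: key (0, "W"), ten weights and a constant gradient.
weights = ops.alloc1f(10)
gradient = ops.alloc1f(10) + 0.1
weights, gradient = optimizer((0, "W"), weights, gradient, lr_scale=1.0)
```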

### Optimizer.last_score {#last_score tag="property", new="9"}

Get or set the last evaluation score. The optimizer passes this score to the
learning rate schedule, so that the schedule can take training dynamics into
account (see e.g. the [`plateau`](/docs/api-schedules#plateau) schedule).

```python
### Example
from thinc.api import Optimizer, constant, plateau

schedule = plateau(2, 0.5, constant(1.0))
optimizer = Optimizer(learn_rate=schedule)
optimizer.last_score = (1000, 88.34)
```

| Argument    | Type                                 | Description                                |
| ----------- | ------------------------------------ | ------------------------------------------ |
| **RETURNS** | <tt>Optional[Tuple[int, float]]</tt> | The step and score of the last evaluation. |

### Optimizer.step_schedules {#step_schedules tag="method"}

Increase the current step of the optimizer. This step will be used by schedules
to determine their next value.

```python
### Example
from thinc.api import Optimizer, decaying

optimizer = Optimizer(learn_rate=decaying(0.001, 1e-4), grad_clip=1.0)
assert optimizer.learn_rate == 0.001
optimizer.step_schedules()
assert optimizer.learn_rate == 0.000999900009999  # using a schedule
assert optimizer.grad_clip == 1.0                 # not using a schedule
```

### Optimizer.to_gpu {#to_gpu tag="method"}

Transfer the optimizer's state to the GPU.

```python
### Example
optimizer.to_gpu()
```

### Optimizer.to_cpu {#to_cpu tag="method"}

Copy the optimizer to CPU.

```python
### Example
optimizer.to_cpu()
```
