Overview
=========

`onnxruntime-training`'s `ORTModule` offers a high-performance training engine for models defined using the `PyTorch` frontend. `ORTModule` is designed to accelerate the training of large models without requiring changes to either the model definition or the training code.

The aim of `ORTModule` is to provide a drop-in replacement for one or more `torch.nn.Module` objects in a user's `PyTorch` program, and to execute the forward and backward passes of those modules using ONNX Runtime (ORT).

As a result, users can accelerate their training scripts using ORT without having to modify their training loops.

Users can apply standard PyTorch debugging techniques to convergence issues, e.g. by probing the gradients computed for the model's parameters.
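
For example, once a backward pass has run, the gradients that ORT computed can be inspected directly. The following is a minimal sketch, assuming `model` is an `ORTModule`-wrapped module on which `loss.backward()` has already been called:

.. code-block:: python

    # Minimal sketch: inspect the gradients ORT computed for each
    # parameter after loss.backward(). Assumes `model` is an
    # ORTModule-wrapped module; parameter names come from the
    # wrapped module's definition.
    for name, param in model.named_parameters():
        if param.grad is not None:
            print(f"{name}: grad norm = {param.grad.norm().item():.6f}")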

The following code example illustrates how `ORTModule` would be used in a user's training script, in the simple case where the entire model can be offloaded to ONNX Runtime:

.. code-block:: python

    import torch
    from onnxruntime.training import ORTModule

    # Original PyTorch model (a minimal fully connected network)
    class NeuralNet(torch.nn.Module):
        def __init__(self, input_size, hidden_size, num_classes):
            super().__init__()
            self.fc1 = torch.nn.Linear(input_size, hidden_size)
            self.relu = torch.nn.ReLU()
            self.fc2 = torch.nn.Linear(hidden_size, num_classes)

        def forward(self, x):
            return self.fc2(self.relu(self.fc1(x)))

    model = NeuralNet(input_size=784, hidden_size=500, num_classes=10)
    model = ORTModule(model)  # the only change to the original PyTorch script
    criterion = torch.nn.CrossEntropyLoss()
    optimizer = torch.optim.SGD(model.parameters(), lr=1e-4)

    # Training loop is unchanged; data_loader yields (data, target) batches
    for data, target in data_loader:
        optimizer.zero_grad()
        output = model(data)
        loss = criterion(output, target)
        loss.backward()
        optimizer.step()
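
`ORTModule` can also replace just part of a model: because the wrapper is itself a `torch.nn.Module`, an individual submodule can be offloaded to ORT while the rest of the model keeps running in PyTorch. The following is a minimal sketch of that pattern; the `TwoPartModel` class and its `backbone`/`head` split are hypothetical:

.. code-block:: python

    import torch
    from onnxruntime.training import ORTModule

    # Hypothetical two-part model: only `backbone` is offloaded to
    # ONNX Runtime, while `head` keeps running in PyTorch.
    class TwoPartModel(torch.nn.Module):
        def __init__(self):
            super().__init__()
            self.backbone = torch.nn.Sequential(
                torch.nn.Linear(784, 500),
                torch.nn.ReLU(),
            )
            self.head = torch.nn.Linear(500, 10)

        def forward(self, x):
            return self.head(self.backbone(x))

    model = TwoPartModel()
    model.backbone = ORTModule(model.backbone)  # wrap only the submodule

Since `ORTModule` executes both the forward and backward passes of the wrapped submodule, gradients flow across the PyTorch/ORT boundary and a training loop like the one above works unchanged.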