File: overview.rst

Overview
=========

`On-Device Training` refers to training a model on an edge device, such as a mobile phone, an embedded device, a gaming console, or a web browser, rather than on a server or in the cloud. Training on the edge is useful when the data is sensitive and cannot be shared with a server or the cloud. It is also useful for personalization, where the model needs to be trained on the user's device with the user's own data.

`onnxruntime-training` offers an easy way to efficiently train a wide range of ONNX models on edge devices and run inference with them. The training process is divided into two phases:

- The offline phase: In this phase, training artifacts are prepared on a server, in the cloud, or on a desktop. These artifacts can be generated with `onnxruntime-training`'s :doc:`artifact generation python tools<training_artifacts>`.
- The training phase: Once these artifacts are generated, they can be deployed to an edge device, where `onnxruntime-training`'s :doc:`training API<training_api>` can be used to train the model.
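As a sketch of the offline phase, the artifacts can be generated with the `generate_artifacts` utility from `onnxruntime.training.artifacts`. The model file name and the parameter names below are placeholders for your own model:

.. code-block:: python

    import onnx
    from onnxruntime.training import artifacts

    # Load the forward-only ONNX model exported from your training framework.
    model = onnx.load("model.onnx")

    # Parameters that should receive gradients (hypothetical names; use the
    # initializer names from your own model).
    requires_grad = ["fc1.weight", "fc1.bias"]

    # Writes training_model.onnx, eval_model.onnx, optimizer_model.onnx, and a
    # checkpoint file into the given directory.
    artifacts.generate_artifacts(
        model,
        requires_grad=requires_grad,
        loss=artifacts.LossType.CrossEntropyLoss,
        optimizer=artifacts.OptimType.AdamW,
        artifact_directory="training_artifacts",
    )

The loss and optimizer choices here are examples; `generate_artifacts` also accepts a custom loss block if the built-in loss types do not fit the task.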

Once training on the edge device is complete, an inference-ready ONNX model can be generated on the device itself. This model can then be used with ONNX Runtime for inference.
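To illustrate the on-device phase, a minimal training loop with the training API might look like the following. The artifact paths match the sketch above, while `data_loader` and the graph output name `output` are assumptions standing in for your own data pipeline and model:

.. code-block:: python

    from onnxruntime.training.api import CheckpointState, Module, Optimizer

    # Load the trainable state produced during the offline phase.
    state = CheckpointState.load_checkpoint("training_artifacts/checkpoint")
    module = Module(
        "training_artifacts/training_model.onnx",
        state,
        "training_artifacts/eval_model.onnx",
    )
    optimizer = Optimizer("training_artifacts/optimizer_model.onnx", module)

    module.train()
    for inputs, labels in data_loader:  # user-supplied batches of numpy arrays
        loss = module(inputs, labels)   # forward + backward pass
        optimizer.step()                # apply gradients
        module.lazy_reset_grad()        # zero gradients before the next batch

    # Produce the inference-ready model on the device itself.
    module.export_model_for_inferencing("inference_model.onnx", ["output"])

`export_model_for_inferencing` takes the names of the graph outputs to keep, so the exported model contains only the forward computation needed at inference time.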