File: quantize.py

# flake8: noqa: F401
r"""
This file is in the process of migration to `torch/ao/quantization`, and
is kept here for compatibility while the migration process is ongoing.
If you are adding a new entry or functionality, please add it to
`torch/ao/quantization/quantize.py` and add a corresponding import
statement here.
"""

from torch.ao.quantization.quantize import (
    _add_observer_,
    _convert,
    _get_observer_dict,
    _get_unique_devices_,
    _is_activation_post_process,
    _observer_forward_hook,
    _propagate_qconfig_helper,
    _register_activation_post_process_hook,
    _remove_activation_post_process,
    _remove_qconfig,
    add_quant_dequant,
    convert,
    prepare,
    prepare_qat,
    propagate_qconfig_,
    quantize,
    quantize_dynamic,
    quantize_qat,
    swap_module,
)
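
A minimal usage sketch of the compatibility shim above, assuming a PyTorch build with quantization support: because this file re-exports the callables that now live in `torch.ao.quantization.quantize`, the legacy `torch.quantization` path and the migrated `torch.ao.quantization` path should resolve to the same function objects. The model below is illustrative only.

import torch
import torch.nn as nn
import torch.ao.quantization
import torch.quantization  # the legacy namespace served by this shim

# A small example model; dynamic quantization targets its Linear layers.
model = nn.Sequential(nn.Linear(16, 8), nn.ReLU(), nn.Linear(8, 4))

# The re-export means both namespaces expose the same callable
# (expected to hold while the migration shim is in place).
assert torch.quantization.quantize_dynamic is torch.ao.quantization.quantize_dynamic

# Dynamically quantize the Linear layers to int8 weights.
qmodel = torch.ao.quantization.quantize_dynamic(
    model, {nn.Linear}, dtype=torch.qint8
)
print(qmodel)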