File: pt_wrapper_module.py

Package: pytorch 1.13.1+dfsg-4

import torch

class WrapperModule(object):
    """Wraps an instance of wrapped_type.
    When graph_mode is enabled, traces the instance of wrapped_type with
    torch.jit.trace. Randomly initializes num_params tensors, each holding a
    single float element, to use as inputs.
    Args:
        wrapped_type:
            - Object type to be wrapped.
                Expects the wrapped_type to:
                   - be constructed with the pt_fn specified in module_config.
                   - provide a forward method that takes module_config.num_params args.
        module_config:
            - Specifies the pt_fn to construct wrapped_type with, whether
              graph_mode is enabled, and the number of parameters
              wrapped_type's forward method takes.
        debug:
            - Whether debug mode is enabled.
        save:
            - In graph mode, whether the traced graph is to be saved.
    """
    def __init__(self, wrapped_type, module_config, debug, save=False):
        pt_fn = module_config.pt_fn
        self.module = wrapped_type(pt_fn)
        self.tensor_inputs = []
        self.module_name = wrapped_type.__name__
        # Create num_params random single-element tensors to feed forward().
        for _ in range(module_config.num_params):
            self.tensor_inputs.append(torch.randn(1))
        if module_config.graph_mode:
            # Trace the wrapped module into a ScriptModule using the random inputs.
            self.module = torch.jit.trace(self.module, self.tensor_inputs)
            if save:
                file_name = self.module_name + "_" + pt_fn.__name__ + ".pt"
                torch.jit.save(self.module, file_name)
                print("Generated graph is saved in {}".format(file_name))
        print("Benchmarking module {} with fn {}: Graph mode:{}".format(
            self.module_name, pt_fn.__name__, module_config.graph_mode))
        if debug and isinstance(self.module, torch.jit.ScriptModule):
            # Debug mode: dump the traced graph IR and the generated TorchScript code.
            print(self.module.graph)
            print(self.module.code)

    def forward(self, niters):
        # Run niters iterations of the wrapped forward with autograd disabled.
        with torch.no_grad():
            for _ in range(niters):
                self.module.forward(*self.tensor_inputs)
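

# ----------------------------------------------------------------------------
# Illustrative usage sketch (an assumption, not part of the original module):
# SimpleModule and ModuleConfig below are hypothetical stand-ins for the
# wrapped_type and module_config objects the benchmark harness would normally
# supply; only the WrapperModule API above comes from this file.
if __name__ == "__main__":
    import collections

    # Hypothetical config type exposing the three fields WrapperModule reads.
    ModuleConfig = collections.namedtuple(
        "ModuleConfig", ["pt_fn", "graph_mode", "num_params"])

    class SimpleModule(torch.nn.Module):
        """Hypothetical wrapped_type: constructed with pt_fn, and its forward
        takes module_config.num_params tensor arguments."""

        def __init__(self, pt_fn):
            super(SimpleModule, self).__init__()
            self.pt_fn = pt_fn

        def forward(self, a, b):
            return self.pt_fn(a, b)

    # Benchmark torch.add in graph mode with two random single-element inputs.
    config = ModuleConfig(pt_fn=torch.add, graph_mode=True, num_params=2)
    wrapper = WrapperModule(SimpleModule, config, debug=False, save=False)
    wrapper.forward(niters=100)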