# MBR - MLIR Benchmark Runner
MBR is a tool to run benchmarks. It measures compilation and running times of
benchmark programs. It uses MLIR's python bindings for MLIR benchmarks.

## Installation
To build and enable MLIR benchmarks, pass `-DMLIR_ENABLE_PYTHON_BENCHMARKS=ON`
when building MLIR. If you change the `mbr` files themselves, rebuild with
`-DMLIR_ENABLE_PYTHON_BENCHMARKS=ON` again.

## Writing benchmarks
As mentioned in the intro, this tool measures compilation and running times.
An MBR benchmark is a Python function that returns two callables, a compiler
and a runner. Here's an outline of a benchmark; we explain how it works after
the example code.

```python
def benchmark_something():
    # Preliminary setup
    def compiler():
        # Compiles a program and creates an "executable object" that can be
        # called to invoke the compiled program.
        ...

    def runner(executable_object):
        # Sets up arguments for executable_object and calls it. The
        # executable_object is returned by the compiler.
        # Returns an integer representing running time in nanoseconds.
        ...

    return compiler, runner
```

The benchmark function's name must be prefixed with `benchmark_`, and
benchmarks must live in Python files whose names are also prefixed with
`benchmark_` for them to be discoverable. Both the file and function prefixes
are configurable via the configuration file `mbr/config.ini`, relative to this
README's directory.

A benchmark returns two functions, a `compiler` and a `runner`. The `compiler`
returns a callable, which the `runner` function accepts as an argument. The two
functions work like this:
1. `compiler`: configures and returns a callable.
2. `runner`: takes that callable as input, sets up its arguments, and calls
    it, returning an int representing the running time in nanoseconds.
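The contract above can be made concrete with a minimal, self-contained sketch. The benchmark body here (summing a list) is purely illustrative and stands in for a real MLIR compilation and execution; only the compiler/runner shape comes from this README:

```python
import time


def benchmark_vector_sum():
    # Preliminary setup: data shared by the compiler and the runner.
    data = list(range(1_000))

    def compiler():
        # Stand-in for a real compilation step: build and return an
        # "executable object", i.e. a callable that runs the compiled program.
        def executable_object():
            return sum(data)

        return executable_object

    def runner(executable_object):
        # Set up arguments (none here), invoke the executable object, and
        # return the running time in nanoseconds as an int.
        start = time.monotonic_ns()
        executable_object()
        return time.monotonic_ns() - start

    return compiler, runner
```

The framework calls `compiler()` once to measure compilation time, then passes the resulting callable to `runner` to measure running time.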

The `compiler` callable is optional when there is no compilation step, for
example, in benchmarks involving NumPy. In that case, a benchmark looks
like this:

```python
def benchmark_something():
    # Preliminary setup
    def runner():
        # Run the program and return the running time in nanoseconds.
        ...

    return None, runner
```
In this case, the runner takes no input, as there is no compiled object
to invoke.
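A concrete runner-only benchmark might look like the following sketch; as before, the timed work is illustrative, and only the `(None, runner)` return shape comes from this README:

```python
import time


def benchmark_python_sum():
    # Preliminary setup: no compilation step, just data for the runner.
    data = list(range(1_000))

    def runner():
        # Run the program directly and return the running time in nanoseconds.
        start = time.monotonic_ns()
        sum(data)
        return time.monotonic_ns() - start

    return None, runner
```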

## Running benchmarks
MLIR benchmarks can be run like this:

```bash
PYTHONPATH=<path_to_python_mlir_core> <other_env_vars> python <llvm-build-path>/bin/mlir-mbr --machine <machine_identifier> --revision <revision_string> --result-stdout <path_to_start_search_for_benchmarks>
```
For a description of the command line arguments, run

```bash
python mlir/utils/mbr/mbr/main.py -h
```
To learn more about the other arguments, check out LNT's
documentation page [here](https://llvm.org/docs/lnt/concepts.html).

If you want to run only specific benchmarks, use the positional argument
`top_level_path`:

1. To run the benchmarks in a specific directory or file, set
   `top_level_path` to that path.
2. To run a specific benchmark function, set `top_level_path` to the file
   containing that benchmark function, followed by `::` and then the
   benchmark function name. For example, `mlir/benchmark/python/benchmark_sparse.py::benchmark_sparse_mlir_multiplication`.

## Configuration
Various aspects of the framework can be configured using the configuration
file `mbr/config.ini`, relative to this README's directory.
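For illustration, a config that overrides the discovery prefixes mentioned above might look like the following. The section and key names here are hypothetical; the authoritative names are whatever the shipped `mbr/config.ini` defines:

```ini
; Hypothetical mbr/config.ini contents -- section and key names are
; illustrative, not taken from the actual framework.
[discovery]
function_prefix = benchmark_
filename_prefix = benchmark_
```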