File: README.md

# Building and Running

Benchmarks are implemented using [Google Benchmark](https://github.com/google/benchmark).

To build the benchmarks in this directory, enable the `benchmark` option
when configuring the build with meson:
```
meson build -Dbenchmark=enabled --buildtype=debugoptimized
```

The default build type is `debugoptimized`, which is good enough for
benchmarking, but you can get the fastest code with the `release`
build type:
```
meson build -Dbenchmark=enabled --buildtype=release
```

You should, of course, enable features you want to benchmark, like
`-Dfreetype`, `-Dfontations`, `-Dcoretext`, etc.
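For example, to benchmark the FreeType font functions, a configuration could
look like this (just a sketch; pick whichever features you actually care about):
```
meson build -Dbenchmark=enabled -Dfreetype=enabled --buildtype=release
```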

Then build a specific benchmark binary with ninja, e.g.:
```
ninja -Cbuild perf/benchmark-set
```
or just build the whole project:
```
ninja -Cbuild
```

Finally, to run one of the benchmarks:

```
./build/perf/benchmark-set
```

It's possible to filter the benchmarks being run and customize the output
via flags to the benchmark binary. See the
[Google Benchmark User Guide](https://github.com/google/benchmark/blob/main/docs/user_guide.md#user-guide) for more details.
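For instance, the standard `--benchmark_list_tests` flag prints the available
benchmark names, which helps when constructing a `--benchmark_filter` expression:
```
./build/perf/benchmark-set --benchmark_list_tests
```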

The most useful benchmark is `benchmark-font`, and you can also provide custom fonts to it.
For example, to run only the "paint" benchmarks against a given font, you can do:
```
./build/perf/benchmark-font NotoColorEmoji-Regular.ttf --benchmark_filter="paint"
```

Some useful options are: `--benchmark_repetitions=5` to run the benchmark 5 times,
`--benchmark_min_time=.1s` to run the benchmark for at least .1 seconds (defaults
to .5s), and `--benchmark_filter=...` to filter the benchmarks by regular expression.
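Combining these, a more thorough run of the paint benchmarks above might look
like this (the font file is just the earlier example):
```
./build/perf/benchmark-font NotoColorEmoji-Regular.ttf \
    --benchmark_filter="paint" \
    --benchmark_repetitions=5 \
    --benchmark_min_time=.1s
```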

To compare before/after benchmarks, you need to save the benchmark results in files
for both runs. Use `--benchmark_out=results.json` to output the results in JSON format.
Then you can use:
```
./subprojects/benchmark-1.8.4/tools/compare.py benchmarks before.json after.json
```
Substitute the version of Google Benchmark in your `subprojects/` directory for 1.8.4.
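
Putting it together, a before/after comparison could look like this sketch
(file names are illustrative; rebuild between the two runs):
```
./build/perf/benchmark-font --benchmark_out=before.json
# ...apply your change and rebuild with: ninja -Cbuild
./build/perf/benchmark-font --benchmark_out=after.json
./subprojects/benchmark-1.8.4/tools/compare.py benchmarks before.json after.json
```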

# Profiling

If you would like to disable optimizations and enable frame pointers for better profiling output,
you can do so with the following meson commands:
```
CXXFLAGS="-fno-omit-frame-pointer" meson --reconfigure build -Dbenchmark=enabled --buildtype=debug
ninja -Cbuild
```
However, this will slow down the benchmarks significantly and might give you inaccurate
information as to where to optimize. It's better to profile the `debugoptimized` build (the default).
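
If you do want frame pointers for cleaner call graphs while keeping near-release
performance, one option (a sketch; it assumes a fresh build directory so the new
`CXXFLAGS` takes effect) is to keep the default build type and add only the
frame-pointer flag:
```
rm -rf build
CXXFLAGS="-fno-omit-frame-pointer" meson build -Dbenchmark=enabled --buildtype=debugoptimized
ninja -Cbuild
```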

Then run the benchmark with perf:
```
perf record -g build/perf/benchmark-subset --benchmark_filter="BM_subset_codepoints/subset_notocjk/100000" --benchmark_repetitions=5
```
You probably want to filter to a specific benchmark of interest and set the number of
repetitions high enough to get a good sampling of profile data.

Finally view the profile with:

```
perf report
```

Another useful `perf` tool is `perf stat`, which gives a quick overview of a benchmark's
performance, including stalled cycles, cache misses, and mispredicted branches.
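For example, the following sketch uses `perf stat -d` for the more detailed counter set
(cache references and misses, and so on), reusing the same illustrative filter as above:
```
perf stat -d build/perf/benchmark-subset --benchmark_filter="BM_subset_codepoints/subset_notocjk/100000"
```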