[comment {-*- tcl -*- doctools manpage}]
[manpage_begin bench n 0.4]
[see_also bench_intro]
[see_also bench_lang_intro]
[see_also bench_lang_spec]
[see_also bench_read]
[see_also bench_wcsv]
[see_also bench_wtext]
[keywords benchmark]
[keywords merging]
[keywords normalization]
[keywords performance]
[keywords testing]
[copyright {2007-2008 Andreas Kupries <andreas_kupries@users.sourceforge.net>}]
[moddesc   {Benchmarking/Performance tools}]
[titledesc {bench - Processing benchmark suites}]
[category  {Benchmark tools}]
[require Tcl 8.2]
[require bench [opt 0.4]]
[description]

This package provides commands for the execution of benchmarks written
in the bench language, and for the processing of results generated by
such execution.

[para]

A reader interested in the bench language itself should start with the
[term {bench language introduction}] and proceed from there to the
formal [term {bench language specification}].

[para]

[section {PUBLIC API}]
[subsection {Benchmark execution}]

[list_begin definitions]

[call [cmd ::bench::locate] [arg pattern] [arg paths]]

This command locates Tcl interpreters and returns a list containing
their paths. It searches for them in the list of [arg paths] specified
by the caller, using the glob [arg pattern] to match executable names.

[para]

The command resolves soft links to find the actual executables
matching the pattern. Note that only interpreters which are marked as
executable and are actually executable on the current platform are put
into the result.
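
[para]

For example, assuming the hypothetical search paths shown below, the
following collects all tclsh executables found in them:

[example {
    set interps [bench::locate tclsh* {/usr/bin /usr/local/bin}]
    ;# => e.g. /usr/bin/tclsh8.5 /usr/local/bin/tclsh8.6
}]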

[call [cmd ::bench::run] [opt [arg "option value"]...] [arg interp_list] [arg file]...]

This command executes the benchmarks declared in the set of files,
once per Tcl interpreter specified via the [arg interp_list], and per
the configuration specified by the options, and then returns the
accumulated timing results. The format of this result is described in
section [sectref {Result format}].

[para]

It is assumed that the contents of the files are written in the bench
language.

[para]

The available options are

[list_begin options]
[opt_def -errors [arg flag]]

The argument is a boolean value. If set, errors in benchmarks are
propagated to the caller, aborting benchmark execution. Otherwise they
are recorded in the timing result via a special result code. The
default is to propagate and abort.

[opt_def -threads [arg n]]

The argument is a non-negative integer value declaring the number of
threads to use while executing the benchmarks. The default value is
[const 0], to not use threads.

[opt_def -match [arg pattern]]

The argument is a glob pattern. Only benchmarks whose description
matches the pattern are executed. The default is the empty string, to
execute all benchmarks.

[opt_def -rmatch [arg pattern]]

The argument is a regular expression pattern. Only benchmarks whose
description matches the pattern are executed. The default is the empty
string, to execute all benchmarks.

[opt_def -iters [arg n]]

The argument is a positive integer, the maximal number of
iterations for any benchmark. The default is [const 1000]. Individual
benchmarks can override this.

[opt_def -pkgdir [arg path]]

The argument is a path to an existing, readable directory. Multiple
paths can be specified by using the option multiple times, each time
with one of the paths to use.

[para]

If no paths are specified the system behaves as if the option had not
been used at all.
If one or more paths are specified, say [var N] of them, each of the
specified interpreters is invoked [var N] times, once per specified
path. The chosen path is put into the interpreter's [var auto_path],
thus allowing it to find a specific version of a package.

[para]

In this way the use of [option -pkgdir] allows the user to benchmark
several different versions of a package, against one or more interpreters.

[para]

[emph Note:] The empty string is allowed as a path and causes the
system to run the specified interpreters with an unmodified
[var auto_path], for the case that the package in question is
available there as well.

[list_end]
[para]
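A typical invocation, shown as a sketch with placeholder benchmark
files and an illustrative set of options, may look like this:

[example {
    set interps [bench::locate tclsh* {/usr/bin /usr/local/bin}]
    set results [bench::run \
        -iters 500 \
        -match {string *} \
        $interps stringops.bench listops.bench]
}]

[para]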

[call [cmd ::bench::versions] [arg interp_list]]

This command takes a list of Tcl interpreters, identified by their
path, and returns a dictionary mapping from the interpreters to their
versions. Interpreters which are not actually executable, or which
fail when interrogated, are not put into the result. I.e., the result
may contain fewer interpreters than the input list.

[para]

The command uses the builtin command [cmd {info patchlevel}] to determine
the version of each interpreter.
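
[para]

For example (the paths and versions shown are illustrative only):

[example {
    set iv [bench::versions {/usr/bin/tclsh8.4 /usr/bin/tclsh8.5}]
    ;# => {/usr/bin/tclsh8.4 8.4.19 /usr/bin/tclsh8.5 8.5.9}
}]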

[list_end]

[subsection {Result manipulation}]

[list_begin definitions]

[call [cmd ::bench::del] [arg bench_result] [arg column]]

This command removes a column, i.e. all benchmark results for a
specific Tcl interpreter, from the specified benchmark result and
returns the modified result.

[para]
The benchmark results are in the format described in section
[sectref {Result format}].
[para]
The column is identified by an integer number.

[call [cmd ::bench::edit] [arg bench_result] [arg column] [arg newvalue]]

This command renames a column in the specified benchmark result and
returns the modified result. This means that the path of the Tcl
interpreter in the identified column is changed to an arbitrary
string.

[para]
The benchmark results are in the format described in section
[sectref {Result format}].
[para]
The column is identified by an integer number.

[call [cmd ::bench::merge] [arg bench_result]...]

This command takes one or more benchmark results, merges them into
one big result, and returns that as its result.

[para]
All benchmark results are in the format described in section
[sectref {Result format}].
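
[para]

As a sketch tying the three manipulation commands together, with
[var resA] and [var resB] standing in for two previously obtained
results, and illustrative column indices:

[example {
    set merged [bench::merge $resA $resB]
    set merged [bench::del  $merged 0]           ;# remove one interpreter column
    set merged [bench::edit $merged 0 tcl-head]  ;# relabel another column
}]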

[call [cmd ::bench::norm] [arg bench_result] [arg column]]

This command normalizes the timing results in the specified benchmark
result and returns the modified result. This means that the cell
values are not times anymore, but factors showing how much faster or
slower the execution was relative to the baseline.

[para]

The baseline against which the command normalizes is the set of timing
results in the chosen column. This means that after the normalization
the values in this column are all [const 1], as these benchmarks are
neither faster nor slower than the baseline.

[para]

A factor less than [const 1] indicates a benchmark which was faster
than the baseline, whereas a factor greater than [const 1] indicates a
slower execution.

[para]
The benchmark results are in the format described in section
[sectref {Result format}].
[para]
The column is identified by an integer number.
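
[para]

For example, to express all timings relative to the interpreter in
column [const 1] (the index is illustrative):

[example {
    set relative [bench::norm $results 1]
    ;# the chosen column now contains only the factor 1
}]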

[call [cmd ::bench::out::raw] [arg bench_result]]

This command formats the specified benchmark result for output to a
file, socket, etc. This specific command does no formatting at all;
it passes the input through unchanged.

[para]

For other formatting styles see the packages [package bench::out::text]
and [package bench::out::csv] which provide commands to format
benchmark results for human consumption, or as CSV data importable by
spreadsheets, respectively.

[para]

Conversely, to read benchmark results from files, sockets, etc., look
at the package [package bench::in] and the commands provided by it.
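
[para]

A sketch of saving a result as CSV, assuming the [cmd ::bench::out::csv]
command of the package named above:

[example {
    package require bench::out::csv
    set chan [open results.csv w]
    puts $chan [bench::out::csv $results]
    close $chan
}]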

[list_end]

[subsection {Result format}]

After the execution of a set of benchmarks the raw result returned by
this package is a Tcl dictionary containing all the relevant
information.

The dictionary is a compact representation, i.e. serialization, of a
2-dimensional table which has Tcl interpreters as columns and
benchmarks as rows. The cells of the table contain the timing
results.

The Tcl interpreters / columns are identified by their paths.
The benchmarks / rows are identified by their description.

[para]

The possible keys are all valid Tcl lists of two or three elements and
have one of the following forms:

[list_begin definitions]

[def {{interp *}}]

The set of keys matching this glob pattern capture the information
about all the Tcl interpreters used to run the benchmarks. The second
element of the key is the path to the interpreter.

[para]

The associated value is the version of the Tcl interpreter.

[def {{desc *}}]

The set of keys matching this glob pattern capture the information
about all the benchmarks found in the executed benchmark suite. The
second element of the key is the description of the benchmark, which
has to be unique.

[para]

The associated value is irrelevant, and set to the empty string.

[def {{usec * *}}]

The set of keys matching this glob pattern capture the performance
information, i.e. timing results. The second element of the key is the
description of the benchmark, the third element the path of the Tcl
interpreter which was used to run it.

[para]

The associated value is either one of several special result codes, or
the time it took to execute the benchmark, in microseconds. The
possible special result codes are

[list_begin definitions]
[def ERR]
The benchmark could not be executed; it failed with a Tcl error.

[def BAD_RES]
The benchmark could be executed, but the result from its body did not
match the declared expectations.

[list_end]
[list_end]
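
[para]

Putting the above together, a minimal result covering one interpreter
and one benchmark could look like this (the path, version, description,
and timing are made up):

[example {
    {interp /usr/bin/tclsh8.5}          8.5.9
    {desc {fib 25}}                     {}
    {usec {fib 25} /usr/bin/tclsh8.5}   2376.2
}]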

[vset CATEGORY bench]
[include ../common-text/feedback.inc]
[manpage_end]