File: testsuite.xml

<chapter title="Pike Test Suite">

  <p><fixme>The goals of the test suite and an overview of it</fixme></p>

<section title="Running Tests">

  <p>The most common way of running tests from the test suite is to
  use the top-level make target <tt>verify</tt>, which installs a test
  Pike in the build directory and uses it to run the entire test
  suite. The following test-related make targets are defined in the
  top-level makefile.</p>

  <dl>

  <dt>tinstall</dt>
  <dd>Makes and installs a test Pike binary in the build directory. If
  a test Pike binary was already installed, it will first be
  removed.</dd>

  <dt>just_verify</dt>
  <dd>Runs the test suite with the flags "<tt>-F -a -v</tt>", without
  installing a new test Pike binary.</dd>

  <dt>testsuites</dt>
  <dd>Creates testsuite files in the build tree from the
  testsuite.in files in the src/ and lib/ trees.</dd>

  <dt>verify</dt>
  <dd>Runs the <tt>testsuites</tt>, <tt>tinstall</tt> and
  <tt>just_verify</tt> targets.</dd>

  <dt>verify_installed</dt>
  <dd>Runs the test suite with the flags "<tt>-F -a -v</tt>", using the
  Pike binary installed on the system.</dd>

  <dt>check</dt>
  <dd>Alias for the <tt>verify</tt> make target.</dd>

  <dt>sure</dt>
  <dd>Alias for the <tt>verify</tt> make target.</dd>

  <dt>verbose_verify</dt>
  <dd>Runs the <tt>tinstall</tt> make target and then runs the test
  suite with the flags "<tt>-F -v -v -a</tt>".</dd>

  <dt>gdb_verify</dt>
  <dd>Runs the test suite inside of gdb. The test suite is started
  with the flags "<tt>-F -v -v -a</tt>".</dd>

  <dt>valgrind_verify</dt>
  <dd>Runs the test suite inside of valgrind. The test suite is
  started with the flags "<tt>-F -v -a</tt>".</dd>

  <dt>valgrind_just_verify</dt>
  <dd>Runs the test suite inside of valgrind, without installing a
  test Pike. The test suite is started with the flags "<tt>-F -v
  -a</tt>".</dd>

  </dl>
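
  <p>For example, to run the test suite against a Pike that is
  already installed on the system, using one of the targets listed
  above:</p>

<example>
make verify_installed
</example>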

  <p>It is possible to alter the flags given to the test program by
  using the <tt>TESTARGS</tt> make variable.</p>

<example>
make verify TESTARGS="-a -v4 -l2 -t1 -c1 -m"
</example>

<subsection title="The Test Program">

  <p>The actual testing is done by the program
  <tt>bin/test_pike.pike</tt>, which can be run as a stand-alone
  application to test any Pike binary with any test suite or test
  suites. The Pike binary that executes the test program is the one
  that will be tested, and it will be tested with the test suites
  provided as arguments to the test program.</p>

<example>
/home/visbur/Pike/7.2/bin/pike bin/test_pike.pike testsuite1 testsuite2
</example>

  <p>The individual testsuite files are generated from testsuite.in files
  scattered about the lib/ and src/ trees. When you run the make targets
  described above, those are made for you automagically, but to do it by hand
  (i.e. if you added a test to one of them), cd to the top directory and run</p>

<example>
make testsuites
</example>

  <p>The testsuite files have now appeared in build/<i>arch</i> in
  locations corresponding to where they lived in the pike tree, except
  those from the lib/ hierarchy; those end up in
  build/<i>arch</i>/tlib.</p>
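
  <p>Once generated, an individual testsuite file can also be fed to
  the test program by hand. For instance (the architecture directory
  and the chosen module below are only illustrative):</p>

<example>
pike bin/test_pike.pike build/i686-pc-linux-gnu/tlib/modules/ADT.pmod/testsuite
</example>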

  <p>The test_pike.pike program takes the following arguments. An
  example invocation combining several of them is shown after the
  list.</p>

  <dl>

  <dt>-h, --help</dt>
  <dd>Displays a help message listing all possible arguments.</dd>

  <dt>-a, --auto</dt>
  <dd>Lets the test program find the testsuites itself. It will search
  for files named <tt>testsuite</tt> or <tt>module_testsuite</tt> in
  the current working directory and all subdirectories.</dd>

  <dt>--no-watchdog</dt>
  <dd>Normally the test program has a watchdog activated that
  aborts testing if a test takes more than 20 minutes to complete (or
  80 minutes if Pike is compiled with dmalloc). With this argument the
  watchdog will not be used.</dd>

  <dt>--watchdog=pid</dt>
  <dd>Run only the watchdog and monitor the process with the given pid.</dd>

  <dt>-v[level], --verbose[=level]</dt>

  <dd>Select the level of verbosity. Every verbose level includes the
  printouts from the levels below.
  <matrix>
  <r><c>0</c><c>No extra printouts.</c></r>
  <r><c>1</c><c>Some additional information printed out after every
                finished block of tests.</c></r>
  <r><c>2</c><c>Some extra information about tests that will or won't be
                run.</c></r>
  <r><c>3</c><c>Every test is printed out.</c></r>
  <r><c>4</c><c>Time spent in individual tests is printed out.</c></r>
  <r><c>10</c><c>The actual Pike code compiled, including wrappers, is
                 printed. Note that the code will be quoted.</c></r>
  </matrix>

<pre>
$ pike bin/test_pike.pike -v1 testsuite 
Doing tests in testsuite (1 tests)
Total tests: 1  (0 tests skipped)       
</pre>

<pre>
$ pike bin/test_pike.pike -v2 testsuite
Doing tests in testsuite (1 tests)
Doing test 1 (1 total) at /home/nilsson/Pike/7.3/lib/modules/ADT.pmod/testsuite.in:9
Failed tests: 0.                        
Total tests: 1  (0 tests skipped)
</pre>

<pre>
$ pike bin/test_pike.pike -v4 testsuite 
Doing tests in testsuite (1 tests)
Doing test 1 (1 total) at /home/nilsson/Pike/7.3/lib/modules/ADT.pmod/testsuite.in:9
  0: mixed a() { 
  1:   object s = ADT.Stack();
  2:   s-&gt;push(1);
  3:   return s-&gt;pop();
  4: ; }
  5: mixed b() { return 1; }

Time in a(): 0.000, Time in b(): 0.000000
Failed tests: 0.                        
Total tests: 1  (0 tests skipped)
</pre>

<pre>
$ pike bin/test_pike.pike -v10 testsuite 
Doing tests in testsuite (1 tests)
Doing test 1 (1 total) at /home/nilsson/Pike/7.3/lib/modules/ADT.pmod/testsuite.in:9
  0: mixed a() { 
  1:   object s = ADT.Stack();
  2:   s-&gt;push(1);
  3:   return s-&gt;pop();
  4: ; }
  5: mixed b() { return 1; }

  0: mixed a() { 
  1:   object s = ADT.Stack();
  2:   s-&gt;push(1);
  3:   return s-&gt;pop();
  4: ; }
  5: mixed b() { return 1; }
  6: int __cpp_line=__LINE__; int __rtl_line=[int]backtrace()[-1][1];
  7: 
  8: int \30306\30271\30310=0;
  9: 

Time in a(): 0.000, Time in b(): 0.000000
Failed tests: 0.                        
Total tests: 1  (0 tests skipped)
</pre>
  </dd>

  <dt>-p, --prompt</dt>
  <dd>The user will be asked before every test is run.</dd>

  <dt>-sX, --start-test=X</dt>
  <dd>Where in the testsuite testing should start, i.e. X tests are
  skipped in every testsuite.</dd>

  <dt>-eX, --end-after=X</dt>
  <dd>How many tests should be run.</dd>

  <dt>-f, --fail</dt>
  <dd>If set, the test program exits on first failure.</dd>

  <dt>-F, --fork</dt>
  <dd>If set, each testsuite will run in a separate process.</dd>

  <dt>-lX, --loop=X</dt>
  <dd>The number of times the testsuite should be run. Default is 1.</dd>

  <dt>-tX, --trace=X</dt>
  <dd>Run tests with trace level X.</dd>

  <dt>-c[X], --check[=X]</dt>
  <dd>The level of extra Pike consistency checks performed.
  <matrix>
  <r><c>1</c><c>_verify_internals is run before every test.</c></r>
  <r><c>2</c><c>_verify_internals is run after every compilation.</c></r>
  <r><c>3</c><c>_verify_internals is run after every test.</c></r>
  <r><c>4</c><c>An extra gc and _verify_internals are run before
    every test.</c></r>
  <r><c>X&lt;0</c><c>For values below zero, _verify_internals will be run
    before every nth test, where n=abs(X).</c></r>
  </matrix>
  </dd>

  <dt>-m, --mem, --memory</dt>
  <dd>Prints out memory allocations after the tests.</dd>

  <dt>-T, --notty</dt>
  <dd>Format output for non-tty.</dd>

  <dt>-d, --debug</dt>
  <dd>Opens a debug port.</dd>

  </dl>
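
  <p>As a sketch of how these arguments combine, the following
  invocation lets the test program find the testsuites itself, raises
  the verbosity one step, stops on the first failing test and runs
  each testsuite in a separate process:</p>

<example>
pike bin/test_pike.pike -a -v1 -f -F
</example>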

</subsection>

</section>

<section title="Writing New Tests">

  <p>Whenever you write a new function in a module or in Pike itself
  it is good to add a few test cases to the test suite, to ensure that
  regressions are spotted as soon as they appear or to aid in finding
  problems when porting Pike to another platform. Since you have
  written the code, you are the one best suited to come up with tricky
  test cases. A good test suite for a function includes both some
  trivial tests to ensure that the basic functionality works and some
  nasty tests that probe the borderlands of what the function is
  capable of, e.g. empty input parameters.</p>

  <p>Also, when a bug in Pike has been found, a minimized test case
  that triggers the bug should be added to the test suite. After
  all, such a test case has already proven itself useful once.</p>
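
  <p>As a small sketch, using the <tt>test_eq</tt> macro described
  below, a trivial test and an edge-case test for the built-in
  <tt>reverse</tt> function could look like this:</p>

<example>
test_eq(reverse("abc"), "cba")
test_eq(reverse(""), "")
</example>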

  <subsection title="test_any">
    <p>The test_any macro tests whether the results of two Pike
    expressions are equal, i.e. whether a==b. Technically, the actual
    test performed is !(a!=b). The first expression should be a
    complete block that returns a value, while the other should be a
    simple Pike expression.</p>
    <example>
test_any([[
  int f (int i) {i = 0; return i;};
  return f (1);
]],0)
</example>
  </subsection>

  <subsection title="test_any_equal">
    <p>The test_any_equal macro tests whether the results of two Pike
    expressions are identical, i.e. whether equal(a,b). The first
    expression should be a complete block that returns a value, while
    the other should be a simple Pike expression.</p>
    <example>
test_any_equal([[
  mixed a=({1,2,3});
  a[*] += 1;
  return a;
]], [[ ({2,3,4}) ]])
</example>
  </subsection>

  <subsection title="test_eq">
    <p>The test_eq macro tests whether the results of two Pike
    expressions are equal, i.e. whether a==b. Technically, the actual
    test performed is !(a!=b).</p>
<example>
test_eq(1e1,10.0);
</example>
  </subsection>

  <subsection title="test_equal">
    <p>The test_equal macro tests whether the results of two Pike
    expressions are identical, i.e. whether equal(a,b).</p>
<example>
test_equal([[ ({10,20})[*] + 30  ]], [[ ({40, 50}) ]])
</example>
  </subsection>

  <subsection title="test_do">
    <p>test_do simply executes its code. This test fails if there is
    any compilation error or if an error is thrown during
    execution.</p>
<example>
test_do([[
  int x;
  if (time())
    x = 1;
  else
    foo: break foo;
]])
</example>
  </subsection>

  <subsection title="test_true">
    <p>This test succeeds if the Pike expression evaluates to a
    non-zero value.</p>
<example>
test_true([[1.0e-40]]);
</example>
  </subsection>

  <subsection title="test_false">
    <p>This test succeeds if the Pike expression evaluates to a
    zero value.</p>
<example>
test_false(glob("*f","foo"))
</example>
  </subsection>

  <subsection title="test_compile">
    <p>The test_compile macro only tries to compile an expression. It
    fails upon compilation warnings or errors.</p>
<example>
test_compile([[Stdio.File foo=Stdio.File();]])
</example>
  </subsection>

  <subsection title="test_compile_any">
    <p>Tests whether the code compiles, just as <tt>test_compile</tt>,
    but the argument is a complete block of code and not just an
    expression.</p>
<example>
test_compile_any([[
  void foo() 
  {
    Stdio.File bar(int x, int y)
    { 
      return 0;
    }; 
  } 
]])
</example>
  </subsection>

  <subsection title="test_compile_error">
    <p>Does the inverse of <tt>test_compile</tt>; verifies that the
    code does not compile.</p>
<example>
test_compile_error([[ int a="a"; ]])
</example>
  </subsection>

  <subsection title="test_compile_error_any">
    <p>Does the inverse of <tt>test_compile_any</tt>; verifies that
    the code does not compile.</p>
<example>
test_compile_error_any([[
  int a=5;
  string b="a";
  a=b;
]])
</example>
  </subsection>

  <subsection title="test_compile_warning" />
  <subsection title="test_compile_warning_any" />
  <subsection title="test_eval_error" />
  <subsection title="test_define_program" />
  <subsection title="test_program" />
  <subsection title="cond" />
  <subsection title="ifefun" />
  <subsection title="nonregression" />

</section>

</chapter>