File: test-framework.rst

.. include:: replace.txt
.. highlight::  bash

Testing framework
-----------------

|ns3| consists of a simulation core engine, a set of models, example programs,
and tests.  Over time, new contributors add models, tests, and
examples.  A Python test program ``test.py`` serves as the test
execution manager; ``test.py`` can run test code and examples to
look for regressions, can output the results in a number of forms, and
can manage code coverage analysis tools.  On top of this, we layer
*buildslaves*, automated build robots that perform
robustness testing by running the test framework on different systems
and with different configuration options.

Buildslaves
***********

At the highest level of |ns3| testing are the buildslaves (build robots).
If you are unfamiliar with
this system, look at `<https://ns-buildmaster.ee.washington.edu:8010/>`_.
This is an open-source automated system that allows |ns3| to be rebuilt
and tested daily.  By running the buildbots on a number
of different systems we can ensure that |ns3| builds and executes
properly on all of its supported systems.

Users (and developers) typically will not interact with the buildslave system other
than to read its messages regarding test results.  If a failure is detected in
one of the automated build and test jobs, the buildbot will send an email to the
*ns-commits* mailing list.  This email will look something like

.. sourcecode::  text

  [Ns-commits] Build failed in Jenkins: daily-ubuntu-without-valgrind » Ubuntu-64-15.04 #926

  ...
  281 of 285 tests passed (281 passed, 3 skipped, 1 failed, 0 crashed, 0 valgrind errors)
  List of SKIPped tests:
    ns3-tcp-cwnd
    ns3-tcp-interoperability
    nsc-tcp-loss
  List of FAILed tests:
    random-variable-stream-generators
  + exit 1
  Build step 'Execute shell' marked build as failure

By following the full details URL shown in the email, one can find links to the detailed test output.

The buildslave system will do its job quietly if there are no errors, and the
system will undergo build and test cycles every day to verify that all is well.

Test.py
*******
The buildbots use a Python program, ``test.py``, that is responsible for
running all of the tests and collecting the resulting reports into a
human-readable form.  This program is also available for use by users and
developers.

``test.py`` is very flexible in allowing the user to specify the number
and kind of tests to run, as well as the amount and kind of output to generate.

Before running ``test.py``, make sure that the |ns3| examples and tests
have been built by doing the following

::

  $ ./ns3 configure --enable-examples --enable-tests
  $ ./ns3 build

By default, ``test.py`` will run all available tests and report status
back in a very concise form.  Running the command

::

  $ ./test.py

will result in a number of ``PASS``, ``FAIL``, ``CRASH`` or ``SKIP``
indications followed by the kind of test that was run and its display name.

.. sourcecode:: text

  Waf: Entering directory `/home/craigdo/repos/ns-3-allinone-test/ns-3-dev/build'
  Waf: Leaving directory `/home/craigdo/repos/ns-3-allinone-test/ns-3-dev/build'
  'build' finished successfully (0.939s)
  FAIL: TestSuite propagation-loss-model
  PASS: TestSuite object-name-service
  PASS: TestSuite pcap-file-object
  PASS: TestSuite ns3-tcp-cwnd
  ...
  PASS: TestSuite ns3-tcp-interoperability
  PASS: Example csma-broadcast
  PASS: Example csma-multicast

This mode is intended to be used by users who are interested in determining if
their distribution is working correctly, and by developers who are interested
in determining if changes they have made have caused any regressions.

There are a number of options available to control the behavior of ``test.py``.
If you run ``test.py --help``, you should see a command summary like:

.. sourcecode:: text

  usage: test.py [-h] [-b BUILDPATH] [-c CONSTRAIN] [-d] [-e EXAMPLE] [-u]
                 [-f {QUICK,EXTENSIVE,TAKES_FOREVER} | -of {QUICK,EXTENSIVE,TAKES_FOREVER}]
                 [-g] [-k] [-l] [-m] [-n] [-p PYEXAMPLE] [-r] [-s SUITE] [-t TEXT-FILE]
                 [-v] [--verbose-failed] [-w HTML-FILE] [-x XML-FILE] [--nocolor]
                 [--jobs PROCESS_LIMIT] [--rerun-failed]

  options:
    -h, --help            show this help message and exit
    -b, --buildpath BUILDPATH
                          specify the path where ns-3 was built (defaults to the build
                          directory for the current variant)
    -c, --constrain CONSTRAIN
                          constrain the test-runner by kind of test
    -d, --duration        print the duration of each test suite and example
    -e, --example EXAMPLE
                          specify a single example to run (no relative path is needed)
    -u, --update-data     If examples use reference data files, get them to re-generate
                          them
    -f, --fullness {QUICK,EXTENSIVE,TAKES_FOREVER}
                          choose the duration of tests to run: QUICK, EXTENSIVE, or
                          TAKES_FOREVER, where EXTENSIVE includes QUICK and TAKES_FOREVER
                          includes QUICK and EXTENSIVE (only QUICK tests are run by
                          default)
    -of, --only-fullness {QUICK,EXTENSIVE,TAKES_FOREVER}
                          choose the duration of tests to run: QUICK, EXTENSIVE, or
                          TAKES_FOREVER (only tests marked with fullness will be executed)
    -g, --grind           run the test suites and examples using valgrind
    -k, --kinds           print the kinds of tests available
    -l, --list            print the list of known tests
    -m, --multiple        report multiple failures from test suites and test cases
    -n, --no-build        do not build before starting testing
    -p, --pyexample PYEXAMPLE
                          specify a single python example to run (with relative path)
    -r, --retain          retain all temporary files (which are normally deleted)
    -s, --suite SUITE     specify a single test suite to run
    -t, --text TEXT-FILE  write detailed test results into TEXT-FILE.txt
    -v, --verbose         print progress and informational messages
    --verbose-failed      print progress and informational messages for failed jobs
    -w, --web, --html HTML-FILE
                          write detailed test results into HTML-FILE.html
    -x, --xml XML-FILE    write detailed test results into XML-FILE.xml
    --nocolor             do not use colors in the standard output
    --jobs PROCESS_LIMIT  limit number of worker threads
    --rerun-failed        rerun failed tests

If one specifies an optional output style, one can generate detailed descriptions
of the tests and status.  Available styles are ``text`` and ``HTML``.
The buildbots will select the HTML option to generate HTML test reports for the
nightly builds using

::

  $ ./test.py --html=nightly.html

In this case, an HTML file named ``nightly.html`` would be created with a pretty
summary of the testing done.  A human-readable text format is available for users
interested in the details.

::

  $ ./test.py --text=results.txt

In the example above, the test suite checking the |ns3| wireless
device propagation loss models failed.  By default no further information is
provided.

To further explore the failure, ``test.py`` allows a single test suite
to be specified.  Running the command

::

  $ ./test.py --suite=propagation-loss-model

or equivalently

::

  $ ./test.py -s propagation-loss-model

results in that single test suite being run.

.. sourcecode:: text

  FAIL: TestSuite propagation-loss-model

To find detailed information regarding the failure, one must specify the kind
of output desired.  For example, most people will probably be interested in
a text file::

  $ ./test.py --suite=propagation-loss-model --text=results.txt

This will result in that single test suite being run with the test status written to
the file ``results.txt``.

You should find something similar to the following in that file

.. sourcecode:: text

  FAIL: Test Suite "propagation-loss-model" (real 0.02 user 0.01 system 0.00)
  PASS: Test Case "Check ... Friis ... model ..." (real 0.01 user 0.00 system 0.00)
  FAIL: Test Case "Check ... Log Distance ... model" (real 0.01 user 0.01 system 0.00)
    Details:
      Message:   Got unexpected SNR value
      Condition: [long description of what actually failed]
      Actual:    176.395
      Limit:     176.407 +- 0.0005
      File:      ../src/test/ns3wifi/propagation-loss-models-test-suite.cc
      Line:      360

Notice that the Test Suite is composed of two Test Cases.  The first test case
checked the Friis propagation loss model and passed.  The second test case
failed checking the Log Distance propagation model.  In this case, an SNR of
176.395 was found, and the test expected a value of 176.407 correct to three
decimal places.  The file which implemented the failing test is listed as well
as the line of code which triggered the failure.
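
The ``Details`` block maps directly onto a test assertion in the suite's source
code.  A floating-point check like the one above is typically produced by the
tolerance form of the equality assertion; a minimal sketch (the variable name
``snr`` is illustrative):

.. sourcecode:: cpp

  // On failure, the framework records the message, the actual value, the
  // limit +- tolerance, and the file/line shown in the Details block above.
  NS_TEST_ASSERT_MSG_EQ_TOL(snr, 176.407, 0.0005, "Got unexpected SNR value");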

If you desire, you could just as easily have written an HTML file using the
``--html`` option as described above.

Typically a user will run all tests at least once after downloading
|ns3| to ensure that his or her environment has been built correctly
and is generating correct results according to the test suites.  Developers
will typically run the test suites before and after making a change to ensure
that they have not introduced a regression with their changes.  In this case,
developers may not want to run all tests, but only a subset.  For example,
the developer might only want to run the unit tests periodically while making
changes to a repository.  In this case, ``test.py`` can be told to constrain
the types of tests being run to a particular class of tests.  The following
command will result in only the unit tests being run::

  $ ./test.py --constrain=unit

To see a quick list of the legal kinds of constraints, you can ask for them
to be listed.  The following command

::

  $ ./test.py --kinds

will result in the following list being displayed:

.. sourcecode:: text

  Waf: Entering directory `/home/craigdo/repos/ns-3-allinone-test/ns-3-dev/build'
  Waf: Leaving directory `/home/craigdo/repos/ns-3-allinone-test/ns-3-dev/build'
  'build' finished successfully (0.939s)
  core:        Run all TestSuite-based tests (exclude examples)
  example:     Examples (to see if example programs run successfully)
  performance: Performance Tests (check to see if the system is as fast as expected)
  system:      System Tests (spans modules to check integration of modules)
  unit:        Unit Tests (within modules to check basic functionality)

Any of these kinds of tests can be provided as a constraint using the ``--constrain``
option.

To see a quick list of all of the test suites available, you can ask for them
to be listed.  The following command,

::

  $ ./test.py --list

will result in a list of the available test suites being displayed, similar to

.. sourcecode:: text

  Test Type    Test Name
  ---------    ---------
  performance  many-uniform-random-variables-one-get-value-call
  performance  one-uniform-random-variable-many-get-value-calls
  performance  type-id-perf
  system       buildings-pathloss-test
  system       buildings-shadowing-test
  system       devices-mesh-dot11s-regression
  system       devices-mesh-flame-regression
  system       epc-gtpu
  ...
  unit         wifi-primary-channels
  unit         wifi-ru-allocation
  unit         wifi-spectrum-wifi-phy
  unit         wifi-transmit-mask
  unit         wifi-txop
  example      adhoc-aloha-ideal-phy
  example      adhoc-aloha-ideal-phy-matrix-propagation-loss-model
  example      adhoc-aloha-ideal-phy-with-microwave-oven
  example      aodv
  ...

Any of these listed suites can be selected to be run by itself using the
``--suite`` option as shown above.

To run multiple test suites at once, it is possible to use Unix filename pattern
matching, e.g.,

::

  $ ./test.py -s 'ipv6*'

Note the use of quotes. The result is similar to

.. sourcecode:: text

  PASS: TestSuite ipv6-protocol
  PASS: TestSuite ipv6-packet-info-tag
  PASS: TestSuite ipv6-list-routing
  PASS: TestSuite ipv6-extension-header
  PASS: TestSuite ipv6-address-generator
  PASS: TestSuite ipv6-raw
  PASS: TestSuite ipv6-dual-stack
  PASS: TestSuite ipv6-fragmentation
  PASS: TestSuite ipv6-address-helper
  PASS: TestSuite ipv6-address
  PASS: TestSuite ipv6-forwarding
  PASS: TestSuite ipv6-ripng

Similarly to test suites, one can run a single C++ example program
using the ``--example`` option.  Note that the relative path for the
example does not need to be included and that the executables built
for C++ examples do not have extensions.  Furthermore, the example
must be registered as an example to the test framework; it is not
sufficient to create an example and run it through test.py; it must
be added to the relevant ``examples-to-run.py`` file, explained below.
Entering

::

  $ ./test.py --example=udp-echo

results in that single example being run.

.. sourcecode:: text

  PASS: Example examples/udp/udp-echo
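
The registration itself lives in the owning module's ``examples-to-run.py``
file.  As a sketch only (the tuple fields below follow the convention used by
existing modules; check a real file in the tree for the exact layout, and the
entries shown here are illustrative), a registration might look like:

.. sourcecode:: python

  # Sketch of a module's test/examples-to-run.py.
  # Each C++ tuple is (program with arguments, do_run condition, do_valgrind_run).
  cpp_examples = [
      ("udp-echo", "True", "True"),
  ]

  # Each Python tuple is (script with arguments, do_run condition).
  python_examples = [
      ("sample-simulator.py", "True"),
  ]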

You can also execute all instances of an example that is declared with different
parameters in ``examples-to-run.py``, by including a star at the end of the example name.

::

  $ ./test.py --example=wifi-phy-configuration*

results in that example being run with the following parameters:

.. sourcecode:: text

  [1/21] PASS: Example src/wifi/examples/wifi-phy-configuration --testCase=0
  [2/21] PASS: Example src/wifi/examples/wifi-phy-configuration --testCase=8
  [3/21] PASS: Example src/wifi/examples/wifi-phy-configuration --testCase=10
  [4/21] PASS: Example src/wifi/examples/wifi-phy-configuration --testCase=12
  [5/21] PASS: Example src/wifi/examples/wifi-phy-configuration --testCase=1
  [6/21] PASS: Example src/wifi/examples/wifi-phy-configuration --testCase=13
  [7/21] PASS: Example src/wifi/examples/wifi-phy-configuration --testCase=4
  [8/21] PASS: Example src/wifi/examples/wifi-phy-configuration --testCase=3
  [9/21] PASS: Example src/wifi/examples/wifi-phy-configuration --testCase=14
  [10/21] PASS: Example src/wifi/examples/wifi-phy-configuration --testCase=5
  [11/21] PASS: Example src/wifi/examples/wifi-phy-configuration --testCase=15
  [12/21] PASS: Example src/wifi/examples/wifi-phy-configuration --testCase=6
  [13/21] PASS: Example src/wifi/examples/wifi-phy-configuration --testCase=7
  [14/21] PASS: Example src/wifi/examples/wifi-phy-configuration --testCase=11
  [15/21] PASS: Example src/wifi/examples/wifi-phy-configuration --testCase=2
  [16/21] PASS: Example src/wifi/examples/wifi-phy-configuration --testCase=9
  [17/21] PASS: Example src/wifi/examples/wifi-phy-configuration --testCase=16
  [18/21] PASS: Example src/wifi/examples/wifi-phy-configuration --testCase=19
  [19/21] PASS: Example src/wifi/examples/wifi-phy-configuration --testCase=18
  [20/21] PASS: Example src/wifi/examples/wifi-phy-configuration --testCase=17
  [21/21] PASS: Example src/wifi/examples/wifi-phy-configuration --testCase=20
  21 of 21 tests passed (21 passed, 0 skipped, 0 failed, 0 crashed, 0 valgrind errors)

You can specify the directory where |ns3| was built using the
``--buildpath`` option as follows.

::

  $ ./test.py --buildpath=/home/craigdo/repos/ns-3-allinone-test/ns-3-dev/build/debug --example=wifi-simple-adhoc

One can run a single Python example program using the ``--pyexample``
option.  Note that the relative path for the example must be included
and that Python examples do need their extensions.  Entering

::

  $ ./test.py --pyexample=examples/tutorial/first.py

results in that single example being run.

.. sourcecode:: text

  PASS: Example examples/tutorial/first.py

Because Python examples are not built, you do not need to specify the
directory where |ns3| was built to run them.

Normally when example programs are executed, they write a large amount of trace
file data.  This is normally saved to the base directory of the distribution
(e.g., /home/user/ns-3-dev).  When ``test.py`` runs an example, it really
is completely unconcerned with the trace files.  It just wants to determine
if the example can be built and run without error.  Since this is the case, the
trace files are written into a ``/tmp/unchecked-traces`` directory.  If you
run the above example, you should be able to find the associated
``udp-echo.tr`` and ``udp-echo-n-1.pcap`` files there.

The list of available examples is defined by the contents of the ``examples``
directory in the distribution.  If you select an example for execution using
the ``--example`` option, ``test.py`` will not make any attempt to decide
if the example has been configured or not; it will just try to run it and
report the result of the attempt.

When ``test.py`` runs, by default it will first ensure that the system has
been completely built.  This can be defeated by selecting the ``--no-build``
option.

::

  $ ./test.py --list --no-build

will result in a list of the currently built test suites being displayed, similar to:

.. sourcecode:: text

  propagation-loss-model
  ns3-tcp-cwnd
  ns3-tcp-interoperability
  pcap-file
  object-name-service
  random-variable-stream-generators

Note the absence of the ``ns3`` build messages.

``test.py`` also supports running the test suites and examples under valgrind.
Valgrind is a flexible program for debugging and profiling Linux executables.  By
default, valgrind runs a tool called memcheck, which performs a range of
memory-checking functions, including detecting accesses to uninitialised memory,
misuse of allocated memory (double frees, access after free, etc.), and detecting
memory leaks.  This can be selected by using the ``--grind`` option.

::

  $ ./test.py --grind

As it runs, ``test.py`` and the programs that it runs indirectly generate large
numbers of temporary files.  Usually the content of these files is not interesting;
however, in some cases it can be useful (for debugging purposes) to view these files.
``test.py`` provides a ``--retain`` option which will cause these temporary
files to be kept after the run is completed.  The files are saved in a directory
named ``testpy-output`` under a subdirectory named according to the current Coordinated
Universal Time (also known as Greenwich Mean Time).

::

  $ ./test.py --retain

Finally, ``test.py`` provides a ``--verbose`` option which will print
large amounts of information about its progress.  It is not expected that this
will be terribly useful unless there is an error.  In this case, you can get
access to the standard output and standard error reported by running test suites
and examples.  Select verbose in the following way::

  $ ./test.py --verbose

All of these options can be mixed and matched.  For example, to run all of the
|ns3| core test suites under valgrind, in verbose mode, while generating an HTML
output file, one would do::

  $ ./test.py --verbose --grind --constrain=core --html=results.html

Test Taxonomy
*************

As mentioned above, tests are grouped into a number of broadly defined
classifications to allow users to selectively run tests to address the different
kinds of testing that need to be done.

* Build Verification Tests
* Unit Tests
* System Tests
* Examples
* Performance Tests

Moreover, each test is further classified according to the expected time needed to
run it. Tests are classified as:

* QUICK
* EXTENSIVE
* TAKES_FOREVER

Note that specifying EXTENSIVE fullness will also run tests in the QUICK category.
Specifying TAKES_FOREVER will run tests in the EXTENSIVE and QUICK categories.
By default, only QUICK tests are run.

Alternatively, one can specify the ``--only-fullness`` option, which runs tests
from just that particular category.  This allows for tiered execution: in
time-constrained jobs, starting from TAKES_FOREVER to give those tests more time,
then EXTENSIVE, and then QUICK; or progressively running more expensive tests
(QUICK, then EXTENSIVE, then TAKES_FOREVER).

As a rule of thumb, tests that must be run to ensure |ns3| coherence should be
QUICK (i.e., take a few seconds).  Tests that could be skipped, but are nice to
run, can be EXTENSIVE; these are tests that typically need minutes.  TAKES_FOREVER
is left for tests that take a really long time, on the order of several minutes.
The main classification goal is to be able to run the buildbots in a reasonable
time, and still be able to perform more extensive tests when needed.

Unit Tests
++++++++++

Unit tests are more involved tests that go into detail to make sure that a
piece of code works as advertised in isolation.  There is really no reason
for this kind of test to be built into an |ns3| module.  It turns out, for
example, that the unit tests for the object name service are about the same
size as the object name service code itself.  Unit tests check a single bit
of functionality; they are not built into the |ns3| code, but live in the
same directory as the code they test.  It is possible that these tests check
integration of multiple implementation files in a module as well.  The file
``src/core/test/names-test-suite.cc`` is an example of this kind of test.  The
file ``src/network/test/pcap-file-test-suite.cc`` is another example that uses a
known good pcap file as a test vector file.  This file is stored locally in the
``src/network`` directory.

System Tests
++++++++++++

System tests are those that involve more than one module in the system.  We
have some of this kind of test running in our current regression framework,
but they are typically overloaded examples.  We provide a new place
for this kind of test in the directory ``src/test``.  The file
``src/test/ns3tcp/ns3tcp-loss-test-suite.cc`` is an example of this kind of
test.  It uses NSC TCP to test the |ns3| TCP implementation.  Often there
will be test vectors required for this kind of test, and they are stored in
the directory where the test lives.  For example,
``ns3tcp-loss-NewReno0-response-vectors.pcap`` is a file consisting of a number of TCP
headers that are used as the expected responses of the |ns3| TCP under test.

Note that Unit Tests are often preferable to System Tests, as they are more
independent of small changes in the modules that are not the goal of the test.

Examples
++++++++

The examples are tested by the framework to make sure they build and
will run.  Limited checking is done on examples; currently the pcap
files are just written off into ``/tmp`` to be discarded.  If the example
runs (doesn't crash) and its exit status is zero, the example passes
the smoke test.
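
In other words, an example participates in this smoke test purely through its
exit status.  A minimal sketch of the convention (the program body is
illustrative, not an example from the tree):

.. sourcecode:: cpp

  #include "ns3/core-module.h"

  using namespace ns3;

  int
  main(int argc, char* argv[])
  {
    CommandLine cmd(__FILE__);
    cmd.Parse(argc, argv);

    Simulator::Run();
    Simulator::Destroy();

    // test.py treats a zero exit status as a passing smoke test; a nonzero
    // status (or a crash) causes the example to be reported as FAIL or CRASH.
    return 0;
  }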

Performance Tests
+++++++++++++++++

Performance tests are those which exercise a particular part of the system
and determine if the tests have executed to completion in a reasonable time.

Running Tests
*************

Tests are typically run using the high-level ``test.py`` program.  To get a list
of the available command-line options, run ``test.py --help``.

The test program ``test.py`` will run both tests and those examples that
have been added to the list to check.  The difference between tests
and examples is as follows.  Tests generally check that specific simulation
output or events conform to expected behavior.  In contrast, the output
of examples is not checked, and the test program merely checks the exit
status of the example program to make sure that it runs without error.

Briefly, to run all tests, one must first enable tests during the configuration
stage, and also (optionally) examples if examples are to be checked:

::

   $ ./ns3 configure --enable-examples --enable-tests

Then, build |ns3|, and after it is built, just run ``test.py``.  ``test.py -h``
will show a number of configuration options that modify the behavior
of test.py.

The program ``test.py`` invokes, for C++ tests and examples, a lower-level
C++ program called ``test-runner`` to actually run the tests.  As discussed
below, this ``test-runner`` can be a helpful way to debug tests.

There are two options to control the scope of tests that are run:

* ``--fullness`` (abbreviated ``-f``):  Runs all tests up to that classification

   * ``./test.py`` (runs only QUICK)
   * ``./test.py -f QUICK`` (runs only QUICK)
   * ``./test.py -f EXTENSIVE`` (runs QUICK and EXTENSIVE)
   * ``./test.py -f TAKES_FOREVER`` (runs QUICK, EXTENSIVE, and TAKES_FOREVER)

* ``--only-fullness`` (abbreviated ``-of``):  Runs only tests of that classification

   * ``./test.py -of QUICK`` (runs only QUICK)
   * ``./test.py -of EXTENSIVE`` (runs only EXTENSIVE)
   * ``./test.py -of TAKES_FOREVER`` (runs only TAKES_FOREVER)

The test scope can be further controlled by selecting individual
tests using the ``-s`` option.  For example, the following command
will cause all test cases of only one test suite to be executed:

::

  $ ./test.py -s wifi-primary-channels -f TAKES_FOREVER

Debugging Tests
***************

The debugging of the test programs is best performed by running the low-level
``test-runner`` program.  The test-runner is the bridge from generic Python
code to |ns3| code. It is written in C++ and uses the automatic test
discovery process in the |ns3| code to find and allow execution of all
of the various tests.

The main reason why ``test.py`` is not suitable for debugging is that it does
not allow logging to be turned on using the ``NS_LOG`` environment
variable when it runs.  This limitation does not apply to the test-runner
executable.  Hence, if you want to see logging output from your tests, you
have to run them using the test-runner directly.

In order to execute the test-runner, you run it like any other |ns3| executable
-- using ``ns3``.  To get a list of available options, you can type::

  $ ./ns3 run "test-runner --help"

You should see something like the following

.. sourcecode:: text

  Usage: /mnt/c/tools/sources/ns-3-dev/build/utils/ns3-dev-test-runner-debug [OPTIONS]

  Options:
    --help                   : print these options
    --print-test-name-list   : print the list of names of tests available
    --list                   : an alias for --print-test-name-list
    --print-test-types       : print the type of tests along with their names
    --print-test-type-list   : print the list of types of tests available
    --print-temp-dir         : print name of temporary directory before running
                               the tests
    --test-type=TYPE         : process only tests of type TYPE
    --test-name=NAME         : process only test whose name matches NAME
    --suite=NAME             : an alias (here for compatibility reasons only)
                               for --test-name=NAME
    --assert-on-failure      : when a test fails, crash immediately (useful
                               when running under a debugger)
    --stop-on-failure        : when a test fails, stop immediately
    --fullness=FULLNESS      : choose the duration of tests to run: QUICK,
                               EXTENSIVE, or TAKES_FOREVER, where EXTENSIVE
                               includes QUICK and TAKES_FOREVER includes
                               QUICK and EXTENSIVE (only QUICK tests are
                               run by default)
    --only-fullness=FULLNESS : choose the duration of tests to run: QUICK,
                               EXTENSIVE, TAKES_FOREVER (only tests marked
                               with fullness will be executed)
    --verbose                : print details of test execution
    --xml                    : format test run output as xml
    --tempdir=DIR            : set temp dir for tests to store output files
    --datadir=DIR            : set data dir for tests to read reference files
    --out=FILE               : send test result to FILE instead of standard output
    --append=FILE            : append test result to FILE instead of standard output


There are a number of things here which will be familiar to you if
you have looked at ``test.py``.  This should be expected since the
test-runner is just an interface between ``test.py`` and |ns3|.  You
may notice that example-related commands are missing here.  That is because
the examples are really not |ns3| tests.  ``test.py`` runs them
as if they were, to present a unified testing environment, but they are really
completely different and not to be found here.

The first new option that appears here, but not in ``test.py``, is the
``--assert-on-failure`` option.  This option is useful when debugging a test case
that is running under a debugger like ``gdb``.  When selected, this option tells the
underlying test case to cause a segmentation violation if an error is detected.  This
has the nice side effect of causing program execution to stop (break into the
debugger) when an error is detected.  If you are using gdb, you could use this
option something like,

::

  $ ./ns3 shell
  $ cd build/utils
  $ gdb ns3-dev-test-runner-debug
  $ run --suite=global-value --assert-on-failure

If an error is then found in the global-value test suite, a segfault would be
generated and the (source level) debugger would stop at the ``NS_TEST_ASSERT_MSG``
that detected the error.

To run one of the tests directly from the test-runner
using ``ns3``, you will need to specify the test suite to run.
So you could use the shell and do::

  $ ./ns3 run "test-runner --suite=pcap-file"

|ns3| logging is available when you run it this way, such as::

  $ NS_LOG="Packet" ./ns3 run "test-runner --suite=pcap-file"

Test output
+++++++++++

Many test suites need to write temporary files (such as pcap files)
in the process of running the tests.  The tests then need a temporary directory
to write to.  The Python test utility (test.py) will provide a temporary directory
automatically, but if the test-runner is run stand-alone, this temporary directory
must be provided.
It can be annoying to continually have to provide
a ``--tempdir``, so the test runner will figure one out for you if you don't
provide one.  It first looks for environment variables named ``TMP`` and
``TEMP`` and uses those.  If neither ``TMP`` nor ``TEMP`` are defined
it picks ``/tmp``.  The code then tacks on an identifier indicating what
created the directory (ns-3), then the time (hh.mm.ss), followed by a large random
number.  The test runner creates a directory of that name to be used as the
temporary directory.  Temporary files then go into a directory that will be
named something like

::

  /tmp/ns-3.10.25.37.61537845

The time is provided as a hint so that you can relatively easily reconstruct
what directory was used if you need to go back and look at the files that were
placed in that directory.

Another class of output is test output like pcap traces that are generated
to compare to reference output.  The test program will typically delete
these after the test suites all run.  To disable the deletion of test
output, run ``test.py`` with the "retain" option:

::

   $ ./test.py -r

and test output can be found in the ``testpy-output/`` directory.

Reporting of test failures
++++++++++++++++++++++++++

When you run a test suite using the test-runner, it will run the test
and report PASS or FAIL on standard output.
To run more quietly, specify an output file to which the tests will write their
status using the ``--out`` option.
Try,

::

  $ ./ns3 run "test-runner --suite=pcap-file --out=myfile.txt"


Debugging test suite failures
+++++++++++++++++++++++++++++

To debug test crashes, such as

.. sourcecode:: text

  CRASH: TestSuite wifi-interference

You can access the underlying test-runner program via gdb as follows, and
then pass the desired arguments (such as ``--suite``) to the ``run`` command::

  $ ./ns3 run "test-runner" --command-template="gdb %s"
  Waf: Entering directory `/home/tomh/hg/sep09/ns-3-allinone/ns-3-dev-678/build'
  Waf: Leaving directory `/home/tomh/hg/sep09/ns-3-allinone/ns-3-dev-678/build'
  'build' finished successfully (0.380s)
  GNU gdb 6.8-debian
  Copyright (C) 2008 Free Software Foundation, Inc.
  License GPLv3+: GNU GPL version 3 or later <http://gnu.org/licenses/gpl.html>
  This is free software: you are free to change and redistribute it.
  There is NO WARRANTY, to the extent permitted by law.  Type "show copying"
  and "show warranty" for details.
  This GDB was configured as "x86_64-linux-gnu"...
  (gdb) r --suite=wifi-interference
  Starting program: <..>/build/utils/ns3-dev-test-runner-debug --suite=wifi-interference
  [Thread debugging using libthread_db enabled]
  assert failed. file=../src/core/model/type-id.cc, line=138, cond="uid <= m_information.size() && uid != 0"
  ...

Here is another example of how to use valgrind to debug a memory problem
such as

.. sourcecode:: text

  VALGR: TestSuite devices-mesh-dot11s-regression

::

  $ ./ns3 run test-runner --command-template="valgrind %s --suite=devices-mesh-dot11s-regression"

Class TestRunner
****************

The executables that run dedicated test programs use a TestRunner class.  This
class provides for automatic test registration and listing, as well as a way to
execute the individual tests.  Individual test suites use C++ global
constructors
to add themselves to a collection of test suites managed by the test runner.
The test runner is used to list all of the available tests and to select a test
to be run.  This is a quite simple class that provides three static methods for
adding test suites to, and getting them from, a collection of tests.  See the
doxygen for class ``ns3::TestRunner`` for details.
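
For reference, the ``test-runner`` executable itself is essentially a thin
driver around this class.  A sketch in the spirit of ``utils/test-runner.cc``
(consult that file and the doxygen for the exact signature):

.. sourcecode:: cpp

  #include "ns3/test.h"

  // All test suites register themselves with the test runner via static
  // constructors, so the driver only needs to parse the command line and
  // dispatch to the selected suites.
  int
  main(int argc, char* argv[])
  {
    return ns3::TestRunner::Run(argc, argv);
  }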

Test Suite
**********

All |ns3| tests are classified into Test Suites and Test Cases.  A
test suite is a collection of test cases that completely exercise a given kind
of functionality.  As described above, test suites can be classified as:

* Build Verification Tests
* Unit Tests
* System Tests
* Examples
* Examples-as-Tests
* Performance Tests

This classification is exported from the TestSuite class.  This class is quite
simple, existing only as a place to export this type and to accumulate test
cases.  From a user perspective, in order to create a new TestSuite in the
system one only has to define a new class that inherits from class ``TestSuite``
and perform two duties: give the suite a name and type in the constructor, and
add its test cases.

The following code will define a new class that can be run by ``test.py``
as a unit test with the display name ``my-test-suite-name``.

.. sourcecode:: cpp

  class MyTestSuite : public TestSuite
  {
  public:
    MyTestSuite();
  };

  MyTestSuite::MyTestSuite()
    : TestSuite("my-test-suite-name", Type::UNIT)
  {
    AddTestCase(new MyTestCase, TestCase::Duration::QUICK);
  }

  static MyTestSuite myTestSuite;

The base class takes care of all of the registration and reporting required to
be a good citizen in the test framework.

Avoid putting initialization logic into the test suite or test case
constructors.  This is
because an instance of the test suite is created at run time
(due to the static variable above) regardless of whether the test is being
run or not.  Instead, the TestCase provides a virtual ``DoSetup`` method
that can be specialized to perform setup before ``DoRun`` is called.

Test Case
*********

Individual tests are created using a TestCase class.  Common models for the use
of a test case include "one test case per feature", and "one test case per method."
Mixtures of these models may be used.

In order to create a new test case in the system, all one has to do is to inherit
from the ``TestCase`` base class, provide a constructor to give the test
case a name, and override the ``DoRun`` method to run the test.  Optionally,
also override the ``DoSetup`` method.

.. sourcecode:: cpp

  class MyTestCase : public TestCase
  {
  public:
    MyTestCase();

  private:
    void DoSetup() override;
    void DoRun() override;
  };

  MyTestCase::MyTestCase()
    : TestCase("Check some bit of functionality")
  {
  }

  void
  MyTestCase::DoRun()
  {
    NS_TEST_ASSERT_MSG_EQ(true, true, "Some failure message");
  }
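
If per-test initialization is needed, the ``DoSetup`` body can be kept equally
small; a minimal sketch:

.. sourcecode:: cpp

  void
  MyTestCase::DoSetup()
  {
    // Called just before DoRun().  Put per-test initialization here rather
    // than in the constructor, which runs whenever the static suite object
    // is created, even if this test is never selected.
  }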

Utilities
*********

There are a number of utilities of various kinds that are also part of the
testing framework.  Examples include a generalized pcap file useful for
storing test vectors; a generic container useful for transient storage of
test vectors during test execution; and tools for generating presentations
based on validation and verification testing results.

These utilities are not documented here, but for example, please see
how the TCP tests found in ``src/test/ns3tcp/`` use pcap files and reference
output.