%
% How to write a testsuite for ns.
% Drafted by Xuan Chen (xuanc@usc.edu) 
% Fri Nov 10 11:14:57 PST 2000
%
\chapter{Test Suite Support}
\label{chap:testsuite}

The ns distribution contains many test suites under \nsf{tcl/test}, which
are used by the validation programs (\nsf{validate}, \nsf{validate-wired},
\nsf{validate-wireless}, and \nsf{validate.win32}) to verify that an
installation of ns is correct. If you modify ns or add new modules to it, you
are encouraged to run the validation programs to make sure that your changes
do not break other parts of ns.

\section{Test Suite Components}
\label{sec:testsuitecomponents}

Each test suite under \nsf{tcl/test} is written to verify the correctness
of a certain module in ns. It has three components:

\begin{itemize}
\item a shell script (test-all-xxx) that starts the test;
\item an ns Tcl script (test-suite-xxx.tcl) that actually runs through the
   tests defined; and
\item a subdirectory (test-output-xxx) under \nsf{tcl/test}, which contains
 the correct trace files generated by the test suite. These files are used to
 verify whether the test suite ran correctly with your ns.
\end{itemize}

(Note: xxx stands for the name of the test suite.)

\section{Write a Test Suite}
\label{sec:writeatestsuite}

You can take one of the test suites under \nsf{tcl/test} as a template when
writing your own, for example the test suite written for wireless LANs
(test-all-wireless-lan, test-suite-wireless-lan.tcl, and
test-output-wireless-lan).

To write a test suite, you first need to write the shell script (test-all-xxx).
In the shell script, you specify the module to be tested, the name of the ns
Tcl script, and the output subdirectory. You can also run this shell script in
quiet mode. Below is an example (test-all-wireless-lan):

\begin{program}
   \# To run in quiet mode:  "./test-all-wireless-lan quiet".

   f="wireless-lan"		\# Specify the name of the module to test.
   file="test-suite-\$f.tcl"	\# The name of the ns script.
   directory="test-output-\$f" 	\# Subdirectory to hold the test results
   version="v2"			\# Speficy the ns version.
   
   \# Pass the arguments to test-all-template1, which will run through
   \# all the test cases defined in test-suite-wireless-lan.tcl.
   ./test-all-template1 \$file \$directory \$version \$@
\end{program}
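
Running ./test-all-wireless-lan then executes every test case defined in
test-suite-wireless-lan.tcl in turn; appending quiet suppresses the per-test
output. Note that the common driver logic lives in test-all-template1, so
each per-module wrapper only needs to set these few variables.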


You also need to create several test cases in the ns Tcl script
(test-suite-xxx.tcl) by defining a subclass of TestSuite for each test. For
example, in test-suite-wireless-lan.tcl, each test case uses a different
ad hoc routing protocol. They are defined as:

\begin{program}	
   Class TestSuite

   \# wireless model using destination-sequenced distance-vector (DSDV) routing
   Class Test/dsdv -superclass TestSuite

   \# wireless model using dynamic source routing (DSR)
   Class Test/dsr -superclass TestSuite
   ... ...

\end{program}


Each test case is basically a simulation scenario. In the superclass
TestSuite, you can define functions such as init and finish to do the work
required by every test case, for example setting up the network topology and
ns tracing. Test-specific configurations are defined within the corresponding
subclass. Each subclass also has a run function that starts the simulation.

\begin{program}	
   TestSuite instproc init \{\} \{
     global opt tracefd topo chan prop 
     global node_ god_ 
     \$self instvar ns_ testName_
     set ns_         [new Simulator]
      ... ...
   \} 

   TestSuite instproc finish \{\} \{
     \$self instvar ns_
     global quiet

     \$ns_ flush-trace

     puts "finishing.."
     exit 0
   \}
        
   Test/dsdv instproc init \{\} \{
     global opt node_ god_
     \$self instvar ns_ testName_
     set testName_       dsdv
     ... ...    

     \$self next
     ... ...

     \$ns_ at \$opt(stop).1 "\$self finish"
   \}

   Test/dsdv instproc run \{\} \{
     \$self instvar ns_
     puts "Starting Simulation..."
     \$ns_ run
   \}
\end{program}
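
Putting these pieces together, a test case for a new module follows the same
pattern: declare a subclass of TestSuite, set the test-specific options in
init, invoke the superclass init (with \$self next), and schedule finish. The
sketch below is illustrative only; Test/myproto and the opt(rp) option are
assumed names, not part of the distributed test suites.

\begin{program}
   \# A hypothetical test case; "myproto" and opt(rp) are assumed names.
   Class Test/myproto -superclass TestSuite

   Test/myproto instproc init \{\} \{
     global opt node_ god_
     \$self instvar ns_ testName_
     set testName_  myproto
     set opt(rp)    myproto   ;\# select the (assumed) routing protocol
     \$self next               ;\# run TestSuite init: topology, trace, etc.
     \$ns_ at \$opt(stop).1 "\$self finish"
   \}

   Test/myproto instproc run \{\} \{
     \$self instvar ns_
     puts "Starting Simulation..."
     \$ns_ run
   \}
\end{program}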

All tests are started by the function runtest in the ns Tcl script; the
argument it receives names the Test/ subclass to instantiate (for example,
dsdv selects Test/dsdv).

\begin{program}
   proc runtest \{arg\} \{
     global quiet
     set quiet 0

     set b [llength \$arg]
     if \{\$b == 1\} \{
        set test \$arg
     \} elseif \{\$b == 2\} \{
        set test [lindex \$arg 0]
        if \{[lindex \$arg 1] == "QUIET"\} \{
           set quiet 1
        \}
     \} else \{
        usage
     \}
     set t [new Test/\$test]
     \$t run
   \}

global argv arg0
runtest \$argv
\end{program}
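
The usage procedure invoked above when the argument count is wrong is not
shown in this excerpt; a minimal sketch of what it might look like (the real
test suites may differ) is:

\begin{program}
   \# A minimal sketch of a usage procedure; the real scripts may differ.
   proc usage \{\} \{
     global argv0
     puts stderr "usage: ns \$argv0 <test-name> ?QUIET?"
     exit 1
   \}
\end{program}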


When you run the tests, trace files are generated and saved to the output
subdirectory. These trace files are compared against the correct trace files
that come with the test suite. If the comparison shows any difference, the
test fails.
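
Conceptually, the pass/fail decision made by test-all-template1 amounts to a
byte-wise comparison of the freshly generated trace against the stored
reference. The sketch below illustrates the idea in Tcl; the file names are
placeholders, and the real shell script may perform the comparison
differently.

\begin{program}
   \# Conceptual sketch of the comparison step; file names are placeholders.
   \# exec raises a Tcl error when cmp exits non-zero, i.e. the traces differ.
   if \{ [catch \{exec cmp temp.rands test-output-xxx/xxx.out\}] \} \{
     puts "Test failed: generated trace differs from the reference."
   \} else \{
     puts "Test passed."
   \}
\end{program}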