<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.1//EN" "http://www.w3.org/TR/xhtml11/DTD/xhtml11.dtd">
<html><head>





<meta http-equiv="Content-Type" content="text/html; charset=iso-8859-1" /><title>python-libtrace example programs</title>
<meta name="generator" content="KompoZer" />
<link rel="stylesheet" type="text/css" href="plt-doc.css" /></head><body>

<h2 style="text-align: left;">Test Suite</h2>The library comes with a
test suite which can be used to verify an installation. It is also
useful for developers who want to make changes to the library. <br />
The suite is located in the <span style="font-style: italic;">test</span> subdirectory and contains the following files and folders:<br />
<br />
test/run_test.py: This is the driver for the test suite.<br />
test/v2-test-cases: This folder contains a set of test cases for python2.<br />

test/v3-test-cases: This folder contains a set of test cases for python3.<br />
<br />
<span style="font-weight: bold; text-decoration: underline;">run_test.py</span><br />
<br />
<span style="font-weight: bold; font-style: italic;">run_test</span> is
a utility, implemented in python, which serves as the driver for the
test suite. Using run_test, one or more test programs can be executed. <br />
run_test executes each test script and compares its output with the
expected results. If they do not match, the test is considered <span style="font-style: italic;">Failed</span> and the differences
between the actual and expected results are printed out. If they match, the test is considered <span style="font-style: italic;">Passed</span>.<br />
<br />
Each test program is a python program which tests one or more features
of a module. The result of each test can be a sentence, a value or a
set of values, and should be written to standard output. Thus, running a
test program may generate one or more lines of text.&nbsp; Each result
(line) starts with a tag. If a test fails, the tag helps the tester
to easily find out which part of the test program has failed. The tag contains function names, line
numbers and, in some cases, other information which is helpful in locating the error. The results should not contain values which might change between
runs; in other words, a test program should always generate
the same result(s).<br />
<br />
For each test program, an expected result file should be created. This
is a text file containing the output which the test program is expected
to generate. It can be created by using run_test or
by redirecting the standard output of the test program to a file. The
result file has the same name as the test program, but with the
extension <span style="font-style: italic;">res</span>. Test program names start with <span style="font-style: italic;">test-</span>. For example, if the test program is named
test-tcp.py, the expected result file should be named test-tcp.res.<br />
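As an illustration of the tagged-output convention described above, here is a minimal sketch of what a test program might look like (the function name, tags and values are hypothetical, not taken from the actual suite):

```python
# Hypothetical test program sketch (illustrative names and tags only).
# Every output line starts with a tag that locates the check, and the
# values are deterministic so each run produces identical output.

def check_header_lengths():
    lengths = {"eth": 14, "ip": 20, "tcp": 20}  # fixed, run-independent values
    for name, size in sorted(lengths.items()):
        # Tag format "function:item:" helps locate a failing check
        print("check_header_lengths:%s: %d" % (name, size))

if __name__ == "__main__":
    check_header_lengths()
```

Redirecting such a program's standard output to a file with the same name and the .res extension would produce the expected result file described above.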
<br />
A set of command-line options/arguments can be passed to run_test to control its behaviour:<br />
<br />
<span style="font-style: italic;"><span style="font-weight: bold;">run_test</span> -d &lt;dir&gt;&nbsp; [-t|-g]&nbsp; [-r]&nbsp; [-n &lt;num&gt;]&nbsp; [-f &lt;file&gt;]&nbsp; [-w &lt;working-dir&gt;]</span><br />
<br />
<span style="font-weight: bold;">-d &lt;dir&gt;:</span> the folder which contains test program(s).<br />
<span style="font-weight: bold;">-g:</span> generates result file(s) and stores them in the same folder as the test program(s).<br />
<span style="font-weight: bold;">-t:</span> runs test program(s) and compares the result(s) with the expected result(s) (.res).<br />
<span style="font-weight: bold;">-r:</span> recursively goes through &lt;dir&gt; and traverses all subfolders.<br />
<span style="font-weight: bold;">-n &lt;num&gt;:</span> number of times
that the test(s) should run (can be used with -t only). Default is 1.
If num=0, it runs the test(s) in an infinite loop (suitable for
regression testing). <br />
<span style="font-weight: bold;">-f &lt;file&gt;:</span> runs a single test program <span style="font-style: italic;">&lt;dir&gt;/&lt;file&gt;</span>.<br />
<span style="font-weight: bold;">-w &lt;working-dir&gt;:</span> working directory where temporary files are created. Default is <span style="font-style: italic;">/tmp</span>.<br />
<br />
When run_test runs with -t, it runs the test program and stores the
output in a temporary file. Then it compares the temporary file with
the expected result file for the test program.
If there is no difference, the test is considered <span style="font-style: italic;">Passed</span>; otherwise the test is considered <span style="font-style: italic;">Failed</span> and the differences are shown on standard output, so that they can be redirected
to a file if the tester would like to examine them off-line.<br />
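The compare step can be sketched in a few lines of python (an illustration of the behaviour described above, not the actual run_test code):

```python
# Sketch of the pass/fail comparison a driver like run_test performs
# (illustrative only; not the actual run_test implementation).
import difflib

def compare_results(expected_lines, actual_lines):
    """Return (verdict, diff): 'Passed' with an empty diff on a match,
    'Failed' with a unified diff otherwise."""
    diff = list(difflib.unified_diff(
        expected_lines, actual_lines,
        fromfile="expected (.res)", tofile="actual", lineterm=""))
    return ("Passed" if not diff else "Failed", diff)

verdict, diff = compare_results(["tag: 1", "tag: 2"], ["tag: 1", "tag: 2"])
print(verdict)  # prints "Passed"
```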
<br />
<span style="font-weight: bold; text-decoration: underline;">v2-test-cases and v3-test-cases subdirectories</span><br />
<br />
These folders contain a set of test cases, result files and packet
traces. The trace files are used by the test cases. For each test case,
there is a result file which contains the expected
output for that test case. v2-test-cases contains test cases which can
be run with python2 and v3-test-cases contains test cases for python3.
The functionality of the test cases is the same
for python2 and python3; the only differences are in syntax. <br />
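The best-known syntactic difference of this kind is python2's print statement versus python3's print function; a test case line illustrating both (the tag shown is hypothetical):

```python
# python2 test cases write output with the print statement:
#     print "check_ip:version: 4"
# python3 test cases use the print function for the same check:
print("check_ip:version: 4")
```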
<br />
<span style="font-weight: bold; text-decoration: underline;">run_test examples</span><br />
<br />
The following examples show how run_test can be used:<br />
<span style="font-weight: bold;">
<br />&lt;</span><span style="font-style: italic;">plt-folder</span>&gt;/test$
python&nbsp; run_test&nbsp;&nbsp; -t&nbsp;&nbsp; -d&nbsp; v2-test-cases/
&nbsp;&nbsp;&nbsp; # runs all test cases that exist in v2-test-cases<br />
&lt;<span style="font-style: italic;">plt-folder</span>&gt;/test$
python&nbsp; run_test&nbsp;&nbsp; -t&nbsp;&nbsp; -d&nbsp; v2-test-cases/ -f
test-ip.py&nbsp; # runs test-ip.py and compares the actual output with
the expected results (test-ip.res)<br />
&lt;<span style="font-style: italic;">plt-folder</span>&gt;/test$ python&nbsp; 
run_test&nbsp;&nbsp; -g&nbsp; -d&nbsp; v2-test-cases/&nbsp; -f test-ip.py&nbsp; # re-generates test-ip.res<br />
&lt;<span style="font-style: italic;">plt-folder</span>&gt;/test$
python&nbsp; run_test&nbsp;&nbsp; -g&nbsp; -d&nbsp; v2-test-cases/ &nbsp; #
re-generates .res files for all test programs inside the v2-test-cases
folder<br />
&lt;<span style="font-style: italic;">plt-folder</span>&gt;/test$
python&nbsp; run_test&nbsp;&nbsp; -t&nbsp;&nbsp; -d&nbsp; v2-test-cases/ -n
0&nbsp;&nbsp; # runs all tests inside the v2-test-cases folder in an
infinite loop<br />
<br />
Similar tests can be run for v3-test-cases:<br />
<br />
&lt;<span style="font-style: italic;">plt-folder</span>&gt;/test$
python3&nbsp; run_test&nbsp;&nbsp; -t&nbsp;&nbsp; -d&nbsp; v3-test-cases/
&nbsp;&nbsp;&nbsp; # runs all test cases that exist in v3-test-cases<br />

&lt;<span style="font-style: italic;">plt-folder</span>&gt;/test$
python3&nbsp; run_test&nbsp;&nbsp; -t&nbsp;&nbsp; -d&nbsp; v3-test-cases/ -f
test-ip.py&nbsp; # runs test-ip.py and compares the actual output with
the expected results (test-ip.res)<br />

&lt;<span style="font-style: italic;">plt-folder</span>&gt;/test$ python3&nbsp; 
run_test&nbsp;&nbsp; -g&nbsp; -d&nbsp; v3-test-cases/&nbsp; -f test-ip.py&nbsp; # re-generates test-ip.res<br />

&lt;<span style="font-style: italic;">plt-folder</span>&gt;/test$
python3&nbsp; run_test&nbsp;&nbsp; -g&nbsp; -d&nbsp; v3-test-cases/ &nbsp; #
re-generates .res files for all test programs inside the v3-test-cases
folder<br />

&lt;<span style="font-style: italic;">plt-folder</span>&gt;/test$
python3&nbsp; run_test&nbsp;&nbsp; -t&nbsp;&nbsp; -d&nbsp; v3-test-cases/ -n
0&nbsp;&nbsp; # runs all tests inside the v3-test-cases folder in an
infinite loop<br />
<br />
<span style="font-style: italic;"><br />Habib Naderi<br />Tue, 10 Jul 14 (PDT)
</span></body></html>