File: tutorial.rst

First Steps with FunkLoad
==========================

A FunkLoad test is made of a typical unittest and a configuration
file. Let's look at a simple test script that comes with the
FunkLoad examples.

To get the demo examples you just need to run::

  fl-install-demo
  # Extract FunkLoad examples into ./funkload-demo : ...  done.
  cd funkload-demo/simple


The test case
----------------------

Here is an extract of the simple demo test case ``test_Simple.py``::

  import unittest
  from random import random
  from funkload.FunkLoadTestCase import FunkLoadTestCase
  
  class Simple(FunkLoadTestCase):
      """This test use a configuration file Simple.conf."""
      def setUp(self):
          """Setting up test."""
          self.server_url = self.conf_get('main', 'url')
  
      def test_simple(self):
          # The description should be set in the configuration file
          server_url = self.server_url
          # begin of test ---------------------------------------------
          nb_time = self.conf_getInt('test_simple', 'nb_time')
          for i in range(nb_time):
              self.get(server_url, description='Get url')
          # end of test -----------------------------------------------
    
  if __name__ in ('main', '__main__'):
      unittest.main()

The ``Simple`` test case extends ``FunkLoadTestCase`` and implements a
test named ``test_simple``, which loops on a GET request.

The ``FunkLoadTestCase`` class extends ``unittest.TestCase`` with methods:

* to send HTTP requests (GET, POST, PUT, DELETE or XML-RPC)
* to help build assertions on the response (``getBody``, ``getLastUrl``, ...)
* to customize the test by reading a configuration file (``conf_getInt``)
* ...

The target URL and the number of requests are defined in the
configuration file.

By convention the configuration file is named after the test case
class, with a ``.conf`` extension: in our case, ``Simple.conf``.
  
The configuration file
----------------------------

It is a plain text file with sections::

  # main section for the test case
  [main]
  title=Simple FunkLoad tests
  description=Simply testing a default static page
  url=http://localhost/index.html

  # a section for each test 
  [test_simple]
  description=Access %(nb_time)s times the main url
  nb_time=20
  
  <<snip>>
  # a section to configure the test mode
  [ftest]
  log_to = console file
  log_path = simple-test.log
  result_path = simple-test.xml
  sleep_time_min = 0
  sleep_time_max = 0

  # a section to configure the bench mode
  [bench]
  cycles = 50:75:100:125
  duration = 10
  startup_delay = 0.01
  sleep_time = 0.01
  cycle_time = 1
  log_to =
  log_path = simple-bench.log
  result_path = simple-bench.xml
  sleep_time_min = 0
  sleep_time_max = 0.5
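
Since these files are plain INI files, the ``conf_get`` and ``conf_getInt``
calls seen in the test map onto ordinary section lookups. Here is a minimal
sketch using Python's standard ``configparser`` (for illustration only, not
FunkLoad's own code); note how ``%(nb_time)s`` is interpolated within its
section:

```python
import configparser

# Illustration only: a FunkLoad *.conf file is a plain INI file, so
# conf_get / conf_getInt behave much like configparser lookups.
sample = """
[main]
title = Simple FunkLoad tests
url = http://localhost/index.html

[test_simple]
description = Access %(nb_time)s times the main url
nb_time = 20
"""

conf = configparser.ConfigParser()
conf.read_string(sample)

url = conf.get('main', 'url')                    # ~ self.conf_get('main', 'url')
nb_time = conf.getint('test_simple', 'nb_time')  # ~ self.conf_getInt('test_simple', 'nb_time')
print(url, nb_time)
# %(nb_time)s is expanded using the value from the same section:
print(conf.get('test_simple', 'description'))
```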

Running the test
------------------

Check that the URL in the ``main`` section is reachable, then invoke
``fl-run-test`` to run all the tests present in the ``test_Simple``
module::

  $ fl-run-test -dv test_Simple.py
  test_simple (test_Simple.Simple) ... test_simple: Starting -----------------------------------
          Access 20 times the main url
  test_simple: GET: http://localhost/index.html
          Page 1: Get url ...
  test_simple:  Done in 0.006s
  test_simple:  Load css and images...
  test_simple:   Done in 0.002s
  test_simple: GET: http://localhost/index.html
          Page 2: Get url ...
  <<snip>>
         Page 20: Get url ...
  test_simple:  Done in 0.000s
  test_simple:  Load css and images...
  test_simple:   Done in 0.000s
  Ok
  ----------------------------------------------------------------------
  Ran 1 test in 0.051s
  
  OK


Running a benchmark
--------------------

To run a benchmark, invoke ``fl-run-bench`` instead of the test
runner; you also need to select which test case to run.

The bench results are saved in a single XML file,
``simple-bench.xml``; the name of this result file is set in the
``bench`` section of the configuration file.

You can override the configuration file using command line options;
here we ask for 3 cycles with 1, 10 and 20 concurrent users (CUs).

::

  $ fl-run-bench -c 1:10:20 test_Simple.py Simple.test_simple
  ========================================================================
  Benching Simple.test_simple
  ========================================================================
  Access 20 times the main url
  ------------------------------------------------------------------------
  
  Configuration
  =============
  
  * Current time: 2011-01-26T23:22:51.267757
  * Configuration file: /tmp/funkload-demo/simple/Simple.conf
  * Log xml: /tmp/funkload-demo/simple/simple-bench.xml
  * Server: http://localhost/index.html
  * Cycles: [1, 10, 20]
  * Cycle duration: 10s
  * Sleeptime between request: from 0.0s to 0.5s
  * Sleeptime between test case: 0.01s
  * Startup delay between thread: 0.01s
  
  Benching
  ========
  
  * setUpBench hook: ... done.
  
  Cycle #0 with 1 virtual users
  -----------------------------
  
  * setUpCycle hook: ... done.
  * Start monitoring localhost: ... failed, server is down.
  * Current time: 2011-01-26T23:22:51.279718
  * Starting threads: . done.
  * Logging for 10s (until 2011-01-26T23:23:01.301664): .. done.
  * Waiting end of threads: . done.
  * Waiting cycle sleeptime 1s: ... done.
  * tearDownCycle hook: ... done.
  * End of cycle, 14.96s elapsed.
  * Cycle result: **SUCCESSFUL**, 2 success, 0 failure, 0 errors.
  
  Cycle #1 with 10 virtual users
  ------------------------------
  
  * setUpCycle hook: ... done.
  * Current time: 2011-01-26T23:23:06.234422
  * Starting threads: .......... done.
  * Logging for 10s (until 2011-01-26T23:23:16.360602): .............. done.
  * Waiting end of threads: .......... done.
  * Waiting cycle sleeptime 1s: ... done.
  * tearDownCycle hook: ... done.
  * End of cycle, 16.67s elapsed.
  * Cycle result: **SUCCESSFUL**, 14 success, 0 failure, 0 errors.
  
  Cycle #2 with 20 virtual users
  ------------------------------
    
  * setUpCycle hook: ... done.
  * Current time: 2011-01-26T23:23:06.234422
  * Starting threads: .......... done.
  * Logging for 10s (until 2011-01-26T23:23:16.360602): .............. done.
  * Waiting end of threads: .......... done.
  * Waiting cycle sleeptime 1s: ... done.
  * tearDownCycle hook: ... done.
  * End of cycle, 16.67s elapsed.
  * Cycle result: **SUCCESSFUL**, 14 success, 0 failure, 0 errors.
  
  * tearDownBench hook: ... done.
  
  Result
  ======
  
  * Success: 40
  * Failures: 0
  * Errors: 0
  
  Bench status: **SUCCESSFUL**
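
The ``-c 1:10:20`` option is simply a colon-separated list of
concurrent-user counts, one per cycle. A rough sketch of the parsing
(a hypothetical helper shown for illustration, not FunkLoad's actual code):

```python
def parse_cycles(option):
    # "1:10:20" -> [1, 10, 20], one entry per bench cycle
    return [int(part) for part in option.split(':')]

print(parse_cycles('1:10:20'))
```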
  

Generating a report
--------------------

The XML result file can be turned into an HTML report this way::

  $ fl-build-report --html simple-bench.xml
  Creating html report: ...done: 
  /tmp/funkload-demo/simple/test_simple-20110126T232251/index.html

It should generate something like this: 
   http://funkload.nuxeo.org/report-example/test_simple-20110126T232251/

Note that there was no monitoring in our simple benchmark.


Write your own test
-------------------

The process to write a new test is the following:

* Use the recorder_ to initialize the test case and the configuration
  file, and to record requests.

* Play the test and display each response in Firefox; this will help
  you add assertions and check the responses::

     fl-run-test -dV test_BasicNavigation.py


* Implement the dynamic part:

  - For each request, add an assertion to make sure the page is the one
    you expect. This can be done by checking that a term is present in
    the response::

       self.assert_('logout' in self.getBody(), "Login failure")


  - Generate random input; you can use the ``funkload.Lipsum`` module::

       from funkload.Lipsum import Lipsum
       ...
       lipsum = Lipsum()
       # Get a random title
       title = lipsum.getSubject()


  - Extract a token from a previous response::

       from funkload.utils import extract_token
       ...
       jsf_state = extract_token(self.getBody(), ' id="javax.faces.ViewState" value="', '"')

    	 
  - Use a credential_ server if you want to run a bench with different
    users or simply don't want to hard-code your login/password::

       from funkload.utils import xmlrpc_get_credential
       ...
       # get an admin user
       login, pwd = xmlrpc_get_credential(host, port, "admin")


* Configure the monitoring_ and automate your benchmark using a Makefile_.
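
As an illustration, the ``extract_token`` helper used above can be
approximated in a few lines of pure Python. This is a sketch of its
behaviour (return the text found between two markers, or ``None`` when a
marker is missing), not FunkLoad's exact implementation:

```python
def extract_token(text, tag_start, tag_end):
    """Return the substring of text between tag_start and tag_end,
    or None when a marker is not found (sketch of funkload.utils.extract_token)."""
    start = text.find(tag_start)
    if start < 0:
        return None
    start += len(tag_start)
    end = text.find(tag_end, start)
    if end < 0:
        return None
    return text[start:end]

# Hypothetical response body for the JSF ViewState example above:
body = '<input id="javax.faces.ViewState" value="j_id42" />'
print(extract_token(body, ' id="javax.faces.ViewState" value="', '"'))
```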


.. _recorder: recorder.html
.. _credential: credential.html
.. _monitoring: monitoring.html
.. _Makefile: makefile.html