GStreamer QA system (Insanity)
------------------------------
Goals of Insanity
-----------------
* Provide a modular QA/Testing system.
* Be able to run tests locally or remotely.
* Be able to separate the various testing, monitoring, scheduling
and reporting steps.
* Be able to run ANY kind of test, including non-GStreamer tests
and non-Python tests.
* Provide, when requested, as much information as possible for
post-mortem or offsite analysis.
Glossary
--------
* Test:
Standalone process validating its execution against a given
checklist.
* Scenario:
A Test that runs several other tests, optionally with programmatic
choices and processing of sub-test results. A Scenario can add
sub-tests at runtime.
* TestRun:
A run of several tests/scenarios with various arguments.
This will also collect the state of the environment.
The smallest testrun is a single test with a single argument.
It will instantiate each of the tests with the proper arguments and
monitors. It also provides each test instance access to the
DataStorage instance.
* Arguments:
Parameters for one test.
* Generators:
Objects that create lists of arguments (see the sketch after this
list).
* Monitor:
Script or application that retrieves extra information about a
running test. Monitors can be applied to any Test or only to a
certain variant (such as a GStreamer debug monitor for GStreamer
tests).
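
To make these terms concrete, here is a hypothetical sketch of a
Generator producing Arguments; the class and method names are
illustrative only, not the actual insanity API:

    import itertools

    class Generator:
        """Produces the argument sets for one test."""
        def __init__(self, **axes):
            self._axes = axes  # e.g. uri=[...], loops=[...]

        def generate(self):
            keys = list(self._axes)
            for combo in itertools.product(*(self._axes[k] for k in keys)):
                yield dict(zip(keys, combo))

    gen = Generator(uri=["file:///a.ogg", "file:///b.ogg"], loops=[1, 2])
    for arguments in gen.generate():
        print(arguments)  # one Arguments set per Test instance
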
Modular system
--------------
The full-fledged QA system consists of 4 parts:
* A testing client (insanity.client.Client); this is the only
component required to run tests. The client can be
a standalone CLI, a daemon, a network-controlled client, a GUI,
...
* A Storage system. Currently we have MySQL/SQLite support, but
this could also be extended to a remote/offline/JSON storage system.
* A reporting system, allowing the production of HTML/text/PDF/XXX
reports based on the test results.
* A Central Scheduler, providing control over the remote clients,
aggregation of the testrun results, and a search interface to the
database. This part is not yet implemented.
Most of the logic is contained in a Python module called 'insanity'.
The goal is to make the creation of the various parts of the full QA
system as lightweight as possible, putting most of the logic in base
classes contained in that module.
Client
------
The client is an event-based Python application, which is meant to
be executed on the host environment (embedded device, testing
server, ...). It is therefore kept as lightweight as possible while
still providing the basic requirements for proper test execution.
The majority of the logic is contained in the client.py:Client base
class, which can be subclassed with minimal code (as sketched below)
to provide:
* a CLI client
* a daemon for remote control
* a UI client for end-user testing
* ...
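
As an illustration, a minimal CLI client could look like the
following sketch. The Client stand-in and its run() hook are
assumptions, not the real insanity.client.Client interface:

    import sys

    class Client:
        """Stand-in for insanity.client.Client."""
        def run(self, testrun):
            print("running", testrun)

    class CLIClient(Client):
        """CLI front-end: parse argv, then delegate to the base class."""
        def main(self, argv):
            testrun = argv[1] if len(argv) > 1 else "default-testrun"
            self.run(testrun)

    if __name__ == "__main__":
        CLIClient().main(sys.argv)
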
Client -- TestRun
-----------------
A TestRun is a batch of tests, arguments and (optional) monitors.
For each TestRun, the environment will be captured and associated
with all the test results.
A TestRun is responsible for spawning and controlling instances of
each Test with the different argument(s) and (optional) monitors.
By default the TestRun runs one instance at a time, unless
otherwise specified (to make best use of multi-core/CPU machines).
When possible, it can also re-use existing subprocesses instead of
re-spawning a new one for each Test.
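
This scheduling behaviour can be sketched with a process pool, which
conveniently matches both requirements: the pool size bounds how many
tests run at once, and worker subprocesses are re-used across tests.
All names apart from the standard library are hypothetical:

    from concurrent.futures import ProcessPoolExecutor

    def run_test(test_class, arguments):
        # executed inside a worker subprocess
        return test_class(**arguments).run()

    class TestRun:
        def __init__(self, test_class, argument_sets, workers=1):
            self.test_class = test_class        # the Test to instantiate
            self.argument_sets = argument_sets  # one arguments dict each
            self.workers = workers              # >1 on multi-core machines

        def execute(self):
            with ProcessPoolExecutor(max_workers=self.workers) as pool:
                futures = [pool.submit(run_test, self.test_class, args)
                           for args in self.argument_sets]
                return [f.result() for f in futures]
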
Client -- Test
--------------
A Test is the smallest unit of testing.
Tests are run in separate subprocesses for safety reasons. If
specified, a new subprocess is created for each Test instance in
order to capture a completely fresh environment.
Each test provides a checklist of steps that are validated during
the test's execution.
The steps are not sequential, so a test can validate some steps even
if a previous step was not validated.
Each test can also store information based on its analysis. That
information doesn't (in)validate the test. For example, a
typefinding test can store the information it detects about the
file being analyzed, the duration of the test, or the tags emitted
by a GStreamer pipeline, ...
Python base classes will be provided to quickly create tests. Two
main subclasses are available (a sketch follows the list):
* GstreamerTest : For all tests that use GStreamer.
* CmdLineTest : For spawning any command-line application.
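
A hypothetical test subclass, sketching how a checklist and extra
information might be expressed (the attribute and method names are
assumptions, not the real base-class API):

    class Test:
        __checklist__ = {}  # step name -> description

        def __init__(self, **arguments):
            self.arguments = arguments
            self._validated = dict.fromkeys(self.__checklist__, False)
            self._extra_info = {}

        def validate_step(self, step):
            # steps are independent: any step may pass even if an
            # earlier one did not
            self._validated[step] = True

        def extra_info(self, key, value):
            # stored for reporting only; never (in)validates the test
            self._extra_info[key] = value

    class TypefindTest(Test):
        __checklist__ = {
            "file-opened":   "the input file could be opened",
            "type-detected": "a media type was detected",
        }

        def run(self):
            self.validate_step("file-opened")
            self.extra_info("mimetype", "application/ogg")  # made-up value
            self.validate_step("type-detected")
            return self._validated
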
Client -- Monitor
-----------------
Monitors allow the retrieval of extra information during each test
execution.
Examples of information they can retrieve:
* GST_DEBUG
* CPU/memory usage over time
* valgrind report
* remote process
* ...
Monitors can modify the test instance's properties. For example, a
monitor that knowingly makes the test run slower can extend the
timeout duration of the test.
Monitors will offer the possibility to post-process the information
they retrieve. This allows collecting vast quantities of information
without hindering the performance of the underlying test, while also
offering information that is easier and faster to process in the
reporting system (e.g. the number of leaks found when running a
valgrind monitor).
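
In code, the timeout adjustment and the post-processing could look
roughly like the following sketch, written in the spirit of a
valgrind monitor; none of these names come from the actual insanity
API:

    class Monitor:
        def setup(self, test):
            pass

        def process_results(self, raw_log):
            return {}

    class ValgrindMonitor(Monitor):
        def setup(self, test):
            # valgrind slows everything down considerably, so
            # stretch the timeout of the monitored test
            test.timeout *= 20

        def process_results(self, raw_log):
            # reduce the raw log to a cheap summary so the reporting
            # system never has to re-parse the full output
            return {"definitely-lost-records":
                    raw_log.count("definitely lost")}
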
Storage System
--------------
Depending on the usage scenario, the result data can be stored in
various ways.
It can be written to a standalone file, for single-run end-user
reporting. That file can then be used for standalone reporting, or
sent later on to a central database.
We can also have a locally accessible database in which everything
is stored, for a complete local system.
Finally, in the case of the fully distributed system, we might need
to send the data over the network to a central database.
This requires an abstraction of the storage system for:
* Reading/Writing test results and progress
* Storing/Accessing monitor data
The two minimal required subclasses, sketched below, are:
* LocalFileSystemStorage: Stores everything on the local file system.
* DatabaseStorage: Stores everything in a database management system
(local or remote).
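
A sketch of what that abstraction could look like, with the two
subclasses named above; the method names are assumptions:

    import json
    import sqlite3

    class Storage:
        def save_test_result(self, testrun_id, result):
            raise NotImplementedError

    class LocalFileSystemStorage(Storage):
        def __init__(self, path):
            self.path = path

        def save_test_result(self, testrun_id, result):
            # append one JSON line per result to a standalone file
            with open(self.path, "a") as f:
                f.write(json.dumps({"run": testrun_id,
                                    "result": result}) + "\n")

    class DatabaseStorage(Storage):
        def __init__(self, uri=":memory:"):
            self.db = sqlite3.connect(uri)
            self.db.execute("CREATE TABLE IF NOT EXISTS results "
                            "(run INTEGER, result TEXT)")

        def save_test_result(self, testrun_id, result):
            self.db.execute("INSERT INTO results VALUES (?, ?)",
                            (testrun_id, json.dumps(result)))
            self.db.commit()
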
Reporting System (NOT DONE)
---------------------------
Reporting is the process of taking the test results/information
stored and producing a task-centric report.
The basic functionality of the reporting system should be a 1-to-1
HTML (or other) representation of each of the following classes:
* TestRun
* Tests (and subclasses)
* Monitors
In addition to those classes, some templates will allow showing the
differences between one or more TestRuns/Tests (thereby enabling
easy visualisation of regressions).
The reporting system can work in a standalone fashion, allowing it
to be fed test results from the database and to produce the
graphical/HTML reports.
The modules in the reporting system can also be used to provide
dynamically generated HTML content based on search results in the
Storage System. This feature will be used in the full-fledged
central QA system.
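
Since the reporting system is not implemented yet, the following is
only a hypothetical sketch of the 1-to-1 mapping, turning one test's
checklist results into an HTML fragment:

    def report_test(checklist_results):
        rows = "".join("<tr><td>%s</td><td>%s</td></tr>"
                       % (step, "OK" if ok else "FAIL")
                       for step, ok in checklist_results.items())
        return "<table>%s</table>" % rows

    print(report_test({"file-opened": True, "type-detected": False}))
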
Central Scheduler (NOT DONE)
----------------------------
Based on buildbot (http://buildbot.net/), which offers a
widely-used system for remote execution of processes, the central
scheduler will allow executing/scheduling TestRuns on remote
machines. Apart from the use of buildbot, this involves the creation
of a special Client subclass for the remote machines.
TODO
----
- Add the notion of expected failures for Tests and checklists.