File: testing.rst

.. _testing:

Testing
=======

Our aim is that all new functionality is adequately tested. Adding tests for
existing functionality is highly recommended before any major reimplementation
(refactoring, etc.).

We use `pytest`_ for (unit) testing.

To run tests:

.. code-block:: console

    $ make test  # runs all tests with coverage
    $ uv run pytest  # runs all tests
    $ uv run pytest tests/translate/storage/test_dtd.py  # runs just a single test module
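
The usual pytest selection options apply here as well; for example, ``-k``
filters tests by keyword and ``-x`` stops at the first failure:

.. code-block:: console

    $ uv run pytest -k dtd  # runs only tests whose names match "dtd"
    $ uv run pytest -x      # stops after the first failing test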

We use several pytest features to simplify testing and to suppress errors in
circumstances where the tests cannot possibly succeed (because of test
limitations or missing dependencies).


Skipping tests
--------------

Pytest allows tests, test classes, and modules to be skipped or marked as
"expected to fail" (xfail).
Generally you should *skip* only if the test cannot run at all (e.g. it throws
an uncaught exception); otherwise *xfail* is preferred, as it provides more
test coverage.

importorskip
^^^^^^^^^^^^

.. the ~ in this :func: reference suppresses all but the last component

Use the builtin :func:`~pytest:pytest.importorskip` function
to skip a test module if a dependency cannot be imported:

.. code-block:: python

    from pytest import importorskip
    importorskip("vobject", exc_type=ImportError)

If *vobject* can be imported, the call returns the module; otherwise it raises
a skip exception that causes pytest to skip the entire module rather than fail
it.
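
``importorskip`` also accepts a ``minversion`` argument, so a module can be
skipped when an installed dependency is too old; the version number below is
purely illustrative:

.. code-block:: python

    from pytest import importorskip

    # skip the whole module unless vobject >= 0.9 is installed
    vobject = importorskip("vobject", minversion="0.9")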

skipif
^^^^^^

Use the ``skipif`` decorator to :ref:`mark tests to be skipped <pytest:skipif>`
unless certain criteria are met.  The following skips a test if the version of
*mymodule* is too old:

.. code-block:: python

    import pytest

    import mymodule

    @pytest.mark.skipif("mymodule.__version__ < '1.2'")
    def test_function():
        ...

You can apply this decorator to classes as well as functions and methods.
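
For example, the boolean-condition form (which requires an explicit
``reason``) can skip a whole test class; the class below is hypothetical:

.. code-block:: python

    import sys

    import pytest

    @pytest.mark.skipif(sys.platform == "win32", reason="requires POSIX paths")
    class TestPosixPaths:
        def test_join(self):
            ...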

It is also possible to skip an entire test module by assigning to a
module-level ``pytestmark`` variable:

.. code-block:: python

    # mark the entire module as skipped for pytest if no indexer is available
    pytestmark = pytest.mark.skipif("noindexer")
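
``pytestmark`` may also be a list when several marks apply to the whole
module; the ``slow`` marker below is a hypothetical project-specific marker:

.. code-block:: python

    import pytest

    # apply several marks to every test in this module
    pytestmark = [
        pytest.mark.skipif("noindexer"),
        pytest.mark.slow,  # hypothetical custom marker
    ]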

xfail
^^^^^

Use the ``xfail`` decorator to :ref:`mark tests as expected to fail
<pytest:xfail>`. This allows you to do the following:

* Build tests for functionality that we haven't implemented yet
* Mark tests that will fail on certain platforms or Python versions
* Mark tests that we should fix but haven't got round to fixing yet

The simplest form is the following:

.. code-block:: python

    from pytest import mark

    @mark.xfail
    def test_function():
        ...

You can also pass parameters to the decorator to mark expected failure only
under some condition (like *skipif*), to document the reason failure is
expected, or to actually skip the test:

.. code-block:: python

    @mark.xfail("sys.version_info >= (3,0)")  # only expect failure for Python 3
    @mark.xfail(..., reason="Not implemented")  # provide a reason for the xfail
    @mark.xfail(..., run=False)  # skip the test but still regard it as xfailed
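
Putting these together, a complete (hypothetical) test might look like this;
``strict=True`` additionally turns an unexpected pass into a test failure:

.. code-block:: python

    import sys

    from pytest import mark

    @mark.xfail(
        sys.version_info >= (3, 13),  # illustrative version cut-off
        reason="parser not yet ported",
        strict=True,
    )
    def test_new_parser():
        ...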


Testing for Warnings
--------------------

deprecated_call
^^^^^^^^^^^^^^^

The builtin :func:`~pytest:pytest.deprecated_call` function checks that a
function we run raises a ``DeprecationWarning``:

.. code-block:: python

    from pytest import deprecated_call

    def test_something():
        deprecated_call(function_to_run, arguments_for_function)
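
``deprecated_call`` also works as a context manager, which reads more
naturally when the function takes arguments:

.. code-block:: python

    import pytest

    def test_something():
        with pytest.deprecated_call():
            function_to_run(arguments_for_function)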

recwarn
^^^^^^^

The |recwarn plugin|_ allows us to test for other warnings. Note that
``recwarn`` is a fixture, so you request it by naming it in your test
function's parameters:

.. code-block:: python

    def test_example(recwarn):
        # do something
        w = recwarn.pop()
        # w.{message,category,filename,lineno}
        assert 'something' in str(w.message)
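
If you only need to assert that one specific warning is raised,
``pytest.warns`` is usually more direct; a minimal sketch:

.. code-block:: python

    import warnings

    import pytest

    def test_example_warns():
        with pytest.warns(UserWarning, match="something"):
            warnings.warn("something happened", UserWarning)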


.. _pytest: https://docs.pytest.org/

.. _recwarn plugin: https://docs.pytest.org/en/latest/recwarn.html
.. |recwarn plugin| replace:: *recwarn plugin*
.. we use |recwarn plugin| here and in ref above for italics like :ref:


Command Line Functional Testing
-------------------------------

Functional tests allow us to validate the operation of the tools on the command
line.  The execution by a user is simulated using reference data files and the
results are captured for comparison.

The tests are simple to craft and use some naming magic to make it easy to
refer to test files, stdout and stderr.

File name magic
---------------

We use a special naming convention to make writing tests quick and easy. Thus,
in the case of testing the following command:

.. code-block:: console

   $ moz2po -t template.dtd translations.po translated.dtd

Our test would be written like this:

.. code-block:: console

   $ moz2po -t $one $two $out

Where ``$one`` and ``$two`` are the input files and ``$out`` is the result file
that the test framework will validate.

The files would be called:

=============================  ===========  ========  =====================================
File                           Function     Variable  File naming conventions
=============================  ===========  ========  =====================================
test_moz2po_help.sh            Test script  -         test_${command}_${description}.sh
test_moz2po_help/one.dtd       Input        $one      ${testname}/${variable}.${extension}
test_moz2po_help/two.po        Input        $two      ${testname}/${variable}.${extension}
test_moz2po_help/out.dtd       Output       $out      ${testname}/${variable}.${extension}
test_moz2po_help/stdout.txt    Output       $stdout   ${testname}/${variable}.${extension}
test_moz2po_help/stderr.txt    Output       $stderr   ${testname}/${variable}.${extension}
=============================  ===========  ========  =====================================

.. note:: A test filename must start with ``test_`` and end in ``.sh``.  The
   rest of the name may only use ASCII alphanumeric characters and underscore
   ``_``.

The test file is placed in the ``tests/`` directory while data files are placed
in the ``tests/data/${testname}`` directory.
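
For the ``moz2po`` example above, that gives the following layout:

.. code-block:: text

    tests/
    ├── test_moz2po_help.sh
    └── data/
        └── test_moz2po_help/
            ├── one.dtd
            ├── two.po
            ├── out.dtd
            ├── stdout.txt
            └── stderr.txt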

There are three standard output files:

1. ``$out`` - the output from the command
2. ``$stdout`` - any output given to the user
3. ``$stderr`` - any error output

The output files are available for checking at the end of the test execution,
and a test will fail if there are differences between the reference output and
the output produced during the test run.

You do not need to define reference output for all three; if one is missing,
the check will be made against ``/dev/null``.

There can be any number of input files.  They need to be named using only ASCII
characters without any punctuation.  While you can give them any name, we
recommend using numbered positions such as ``one``, ``two``, ``three``.  These
are converted into variables in the test framework, so make sure that none of
your choices clash with existing bash commands or variables.

Your test script can access variables for all of your files, so e.g.
``moz2po_conversion/one.dtd`` is referenced as ``$one`` and the output
``moz2po_conversion/out.dtd`` as ``$out``.


Writing
-------

The tests are normal bash scripts so they can be executed on their own.  A
template for a test is as follows:

.. literalinclude:: ../../tests/cli/example_test.sh
   :language: bash

For simple tests, where diffing the output files against the reference data is
all the checking that is needed, simply use ``check_results``.  More complex
tests need to wrap their checks in ``start_checks`` and ``end_checks``:

.. code-block:: bash

   start_checks
   has $out
   containsi_stdout "Parsed:"
   end_checks

You can make use of the following commands between ``start_checks`` and
``end_checks``:

===========================  =========================================
Command                      Description
===========================  =========================================
has $file                    $file was output and is not empty
has_stdout                   stdout is not empty
has_stderr                   stderr is not empty
startswith $file "String"    $file starts with "String"
startswithi $file "String"   $file starts with "String" ignoring case
startswith_stdout "String"   stdout starts with "String"
startswithi_stdout "String"  stdout starts with "String" ignoring case
startswith_stderr "String"   stderr starts with "String"
startswithi_stderr "String"  stderr starts with "String" ignoring case
contains $file "String"      $file contains "String"
containsi $file "String"     $file contains "String" ignoring case
contains_stdout "String"     stdout contains "String"
containsi_stdout "String"    stdout contains "String" ignoring case
contains_stderr "String"     stderr contains "String"
containsi_stderr "String"    stderr contains "String" ignoring case
endswith $file "String"      $file ends with "String"
endswithi $file "String"     $file ends with "String" ignoring case
endswith_stdout "String"     stdout ends with "String"
endswithi_stdout "String"    stdout ends with "String" ignoring case
endswith_stderr "String"     stderr ends with "String"
endswithi_stderr "String"    stderr ends with "String" ignoring case
===========================  =========================================
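
For example, a conversion test might combine several of these checks; the
strings being checked for here are purely illustrative:

.. code-block:: bash

   start_checks
   has $out                    # the output file exists and is not empty
   startswith_stdout "moz2po"  # illustrative: stdout begins with the tool name
   containsi_stderr "warning"  # illustrative: stderr mentions a warning, any case
   end_checks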


--prep
^^^^^^

If you pass the ``--prep`` option to any test, the test changes behaviour:
instead of validating the results against your reference data, it creates the
reference data.  This makes it easy to generate your expected result files
when you are setting up a test.
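
Since each test is an ordinary bash script, preparing reference data for the
``moz2po`` example above might look like this (a sketch, assuming the script
is run directly from the ``tests/`` directory):

.. code-block:: console

    $ ./test_moz2po_help.sh --prep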