File: overview.rst

Overview
========

``pytest-regressions`` provides fixtures that make it easy to maintain tests that
generate lots of data or specific data files, such as images.

This plugin uses a *data directory* (courtesy of `pytest-datadir <https://github.com/gabrielcnr/pytest-datadir>`_) to
store expected data files, which serve as the baseline for future test runs.
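
With the default ``pytest-datadir`` layout, the expected files live in a directory named
after the test module, next to it. For a test module ``tests/test_grids.py`` (the name
used in the example that follows), the files end up roughly like this::

    tests/
    ├── test_grids.py
    └── test_grids/
        └── test_grids2.yml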

Example
-------

Let's use ``data_regression`` as an example, but the workflow is the same for the other ``*_regression`` fixtures.

Suppose we have a ``summary_grids`` function which outputs a dictionary containing information about discrete grids
for simulation. In practice the function would return some computed or read value, but here it returns an inline
result for the sake of the example:

.. code-block:: python

    def summary_grids():
        return {
            "Main Grid": {
                "id": 0,
                "cell_count": 1000,
                "active_cells": 300,
                "properties": [
                    {"name": "Temperature", "min": 75, "max": 85},
                    {"name": "Porosity", "min": 0.3, "max": 0.4},
                ],
            },
            "Refin1": {
                "id": 1,
                "cell_count": 48,
                "active_cells": 44,
                "properties": [
                    {"name": "Temperature", "min": 78, "max": 81},
                    {"name": "Porosity", "min": 0.36, "max": 0.39},
                ],
            },
        }


We could test the results of this function like this:


.. code-block:: python

    def test_grids():
        data = summary_grids()
        assert data["Main Grid"]["id"] == 0
        assert data["Main Grid"]["cell_count"] == 1000
        assert data["Main Grid"]["active_cells"] == 300
        assert data["Main Grid"]["properties"] == [
            {"name": "Temperature", "min": 75, "max": 85},
            {"name": "Porosity", "min": 0.3, "max": 0.4},
        ]
        ...


But this presents a number of problems:

* It gets old quickly.
* It is error-prone.
* If a check fails, we don't know what else might be wrong with the obtained data.
* It does not scale to large data.
* **Maintenance burden**: if the data changes in the future (and it will), updating the values
  will be a major headache, especially if there are a lot of similar tests like this one.


Using data_regression
---------------------

The ``data_regression`` fixture provides a method to check general dictionary data like the one in the previous example.

There is no need to import anything: just declare the ``data_regression`` fixture in your test's
arguments and call its ``check`` method in the test:

.. code-block:: python

    def test_grids2(data_regression):
        data = summary_grids()
        data_regression.check(data)


The first time you run this test, it will *fail* with a message like this::


    >           pytest.fail(msg)
    E           Failed: File not found in data directory, created:
    E           - C:\Users\bruno\pytest-regressions\tests\test_grids\test_grids2.yml

The fixture will generate a ``test_grids2.yml`` file (same name as the test) in the *data directory* with the contents of the dictionary:

.. code-block:: yaml

    Main Grid:
      active_cells: 300
      cell_count: 1000
      id: 0
      properties:
      - max: 85
        min: 75
        name: Temperature
      - max: 0.4
        min: 0.3
        name: Porosity
    Refin1:
      active_cells: 44
      cell_count: 48
      id: 1
      properties:
      - max: 81
        min: 78
        name: Temperature
      - max: 0.39
        min: 0.36
        name: Porosity

This file should be committed to version control.

The next time you run this test, it will compare the results of ``summary_grids()`` with the contents of the YAML file.
If they match, the test passes. If they don't match, the test fails, showing a nice diff of the text differences.
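
For instance, if ``cell_count`` of the main grid changed from 1000 to 1200, the failure
output would include a diff along these lines (illustrative only; the exact formatting
depends on your pytest and pytest-regressions versions)::

    -  cell_count: 1000
    +  cell_count: 1200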

``--force-regen``
~~~~~~~~~~~~~~~~~

If the test fails because the new data is correct (the implementation might be returning more information about the
grids for example), then you can use the ``--force-regen`` flag to update the expected file::

    $ pytest --force-regen


This will fail the same test but with a different message saying that the file has been updated. Commit the new file.

This workflow makes it very simple to keep the files up to date and to check all the information we need.
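
A typical update cycle then looks like this (paths follow the earlier example)::

    $ pytest --force-regen    # regenerates the expected file; the test is reported as failed
    $ git add tests/test_grids/test_grids2.yml
    $ pytest                  # now passes against the updated file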

``--regen-all``
~~~~~~~~~~~~~~~

If a single change will fail several regression tests, you can also use the ``--regen-all`` command-line flag::

    $ pytest --regen-all


With this flag, the regression fixtures will regenerate all files but will not fail the tests themselves. This makes it very
easy to update all regression files in a single pytest run when individual tests contain multiple regressions.
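
This matters most when a single test performs several checks. A minimal sketch, assuming a
hypothetical ``render_report`` helper that turns the summary into text:

.. code-block:: python

    def test_grid_report(data_regression, file_regression):
        data = summary_grids()
        # Each check keeps its own expected file. With --force-regen, the first
        # mismatching check fails the test immediately, so the second file would
        # not be regenerated; --regen-all updates both files in one run.
        data_regression.check(data)
        file_regression.check(render_report(data), extension=".txt")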


Parametrized tests
------------------

When using parametrized tests, pytest will give each parametrization of your test a unique name.
This means that ``pytest-regressions`` will create a new file for each parametrization too.

Suppose we have an additional function ``summary_grids_2`` that generates longer data. We can
reuse the same test with the ``@pytest.mark.parametrize`` decorator:

.. code-block:: python

    @pytest.mark.parametrize('data', [summary_grids(), summary_grids_2()])
    def test_grids3(data_regression, data):
        data_regression.check(data)

Pytest will automatically name these as ``test_grids3[data0]`` and ``test_grids3[data1]``, so files
``test_grids3_data0.yml`` and ``test_grids3_data1.yml`` will be created.

The names of these files can be controlled using the ``ids`` `keyword for parametrize
<https://docs.pytest.org/en/stable/example/parametrize.html#different-options-for-test-ids>`_, so
instead of ``data0``, you can define more useful names such as ``short`` and ``long``:

.. code-block:: python

    @pytest.mark.parametrize('data', [summary_grids(), summary_grids_2()], ids=['short', 'long'])
    def test_grids3(data_regression, data):
        data_regression.check(data)

which creates ``test_grids3_short.yml`` and ``test_grids3_long.yml`` respectively.