File: 2016-08-09-hypothesis-pytest-fixtures.md

---
tags: technical, faq, python
date: 2016-08-09 10:00
title: How do I use pytest fixtures with Hypothesis?
author: drmaciver
---

[pytest](http://doc.pytest.org/en/latest/) is a great test runner, and is the one
Hypothesis itself uses for testing (though Hypothesis works fine with other test
runners too).

It has a fairly elaborate [fixture system](http://doc.pytest.org/en/latest/fixture.html),
and people are often unsure how that interacts with Hypothesis. In this article we'll
go over the details of how to use the two together.

<!--more-->

Mostly, Hypothesis and py.test fixtures don't interact: each just ignores the other's
presence.

When using the @given decorator, any arguments that are not bound in the @given
call remain visible in the signature of the resulting function:

```python
from inspect import signature

from hypothesis import given, strategies as st


@given(a=st.none(), c=st.none())
def test_stuff(a, b, c, d):
    pass


print(signature(test_stuff))
```

This then outputs the following:

```
<Signature (b, d)>
```

We've hidden the arguments 'a' and 'c', but the unspecified arguments 'b' and 'd'
are still left to be passed in. In particular, they can be provided as py.test
fixtures:

```python
from pytest import fixture

from hypothesis import given, strategies as st


@fixture
def stuff():
    return "kittens"


@given(a=st.none())
def test_stuff(a, stuff):
    assert a is None
    assert stuff == "kittens"
```

This also works if we want to use @given with positional arguments:

```python
from pytest import fixture

from hypothesis import given, strategies as st


@fixture
def stuff():
    return "kittens"


@given(st.none())
def test_stuff(stuff, a):
    assert a is None
    assert stuff == "kittens"
```

The positional argument fills in from the right, replacing the 'a'
argument and leaving us with 'stuff' to be provided by the fixture.
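
If you repeat the signature check from the first example on this version, you should
find that only 'stuff' is left to be supplied (the exact output may vary a little
between versions):

```python
from inspect import signature

# Expected to print something like: <Signature (stuff)>
print(signature(test_stuff))
```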

Personally I don't usually do this because I find it gets a bit
confusing: if I'm going to use fixtures then I always use the named
variant of @given. There's no reason you *can't* do it this way if
you prefer, though.

@given also works fine in combination with parametrized tests:

```python
import pytest

from hypothesis import given, strategies as st


@pytest.mark.parametrize("stuff", [1, 2, 3])
@given(a=st.none())
def test_stuff(a, stuff):
    assert a is None
    assert 1 <= stuff <= 3
```

This will run 3 tests, one for each value of 'stuff'.
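
The fixture and parametrization mechanisms compose, too. Here's a sketch combining
the pieces from the earlier examples (nothing here beyond what we've already seen):

```python
import pytest
from pytest import fixture

from hypothesis import given, strategies as st


@fixture
def stuff():
    return "kittens"


@pytest.mark.parametrize("number", [1, 2, 3])
@given(a=st.none())
def test_stuff(a, number, stuff):
    # 'a' comes from Hypothesis, 'number' from parametrize, 'stuff' from the fixture.
    assert a is None
    assert 1 <= number <= 3
    assert stuff == "kittens"
```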

There is one unfortunate feature of how this interaction works, though: in pytest
you can declare fixtures that perform setup and teardown for each test function. These
will "work" with Hypothesis, but they will run once for the entire test function
rather than once for each example that given runs it with. So the following test
will fail:

```python
from pytest import fixture

from hypothesis import given, strategies as st

counter = 0


@fixture(scope="function")
def stuff():
    global counter
    counter = 0


@given(a=st.none())
def test_stuff(a, stuff):
    global counter
    counter += 1
    assert counter == 1
```

The counter is not reset at the beginning of each call to the test function,
so it is incremented each time and the test starts failing after the
first example.

There currently aren't any great ways around this, unfortunately. The best you can
really do is handle setup and teardown manually inside the tests that use
Hypothesis (e.g. by implementing a version of your fixture as a context manager).
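
For example, here's a minimal sketch of that approach, reworking the counter example
above ('fresh_counter' is just an illustrative helper, not part of pytest or Hypothesis):

```python
from contextlib import contextmanager

from hypothesis import given, strategies as st

counter = 0


@contextmanager
def fresh_counter():
    # Reset the state on entry; any teardown would go after the yield.
    global counter
    counter = 0
    yield


@given(a=st.none())
def test_stuff(a):
    global counter
    # Because the context manager is entered inside the test body, its setup
    # runs once per example rather than once per test function.
    with fresh_counter():
        counter += 1
        assert counter == 1
```

It's a bit more boilerplate per test, but it keeps the setup right next to the code
that depends on it and runs it at the right time.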

Long-term, I'd like to resolve this by providing a mechanism for running fixtures
once per example (it's probably not correct to have *every* function-scoped
fixture run for each example). For now, though, this is stalled because it [requires changes
on the py.test side as well as the Hypothesis side](https://github.com/pytest-dev/pytest/issues/916)
and we haven't quite managed to find the time and place to collaborate on figuring
out how to fix it yet.