Replaying failed tests
======================
Replaying failures found by your Hypothesis tests is almost as important as finding them in the first place. Hypothesis therefore contains several ways to replay failures: they are automatically saved to (and replayed from) a local |ExampleDatabase|, and can be manually replayed via |@example| or |@reproduce_failure|.
The Hypothesis database
-----------------------
When a test fails, Hypothesis automatically saves the failure so it can be replayed later. For instance, the first time you run the following code, it will take up to a few seconds to fail:
.. code-block:: python

    import time

    from hypothesis import given, strategies as st

    @given(st.integers())
    def f(n):
        assert n < 50
        time.sleep(0.1)

    f()
But the next time you run this code, it will fail instantly. When Hypothesis saw the failure the first time, it automatically saved that failing input. On future runs, Hypothesis replays any saved failing inputs (in |Phase.reuse|) before generating new inputs (in |Phase.generate|).
Hypothesis saves failures to the database configured by the |settings.database| setting. By default, this is a |DirectoryBasedExampleDatabase| in the local ``.hypothesis`` directory.
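If you want failures stored somewhere other than ``.hypothesis``, you can point the setting at a different directory. A minimal sketch (the path here is an arbitrary choice for illustration):

.. code-block:: python

    from hypothesis import given, settings, strategies as st
    from hypothesis.database import DirectoryBasedExampleDatabase

    @given(st.integers())
    # store failing examples in a custom directory instead of .hypothesis
    @settings(database=DirectoryBasedExampleDatabase("/tmp/hypothesis-db"))
    def f(n):
        assert n < 50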
Disabling the Hypothesis database
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
You can disable the Hypothesis database by passing ``None`` to |settings.database|:
.. code-block:: python

    import time

    from hypothesis import given, settings, strategies as st

    @given(st.integers())
    @settings(database=None)
    def f(n):
        assert n < 50
        time.sleep(0.1)

    f()
Always run a specific input
---------------------------
If you want Hypothesis to always run a specific input, you can use the |@example| decorator. |@example| adds an explicit input which Hypothesis will run every time, in addition to the randomly generated examples. You can think of explicit examples as combining unit testing with property-based testing.
For instance, suppose we write a test using |st.integers|, but want to make sure we try a few special prime numbers every time we run the test. We can add these inputs with an explicit |@example|:
.. code-block:: python

    from hypothesis import example, given, strategies as st

    # two Mersenne primes
    @example(2**17 - 1)
    @example(2**19 - 1)
    @given(st.integers())
    def test_integers(n):
        pass

    test_integers()
Hypothesis runs all explicit examples first, in the |Phase.explicit| phase, before generating additional random examples in the |Phase.generate| phase.
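If you ever want to run only the explicit examples, for instance as a fast smoke test, you can restrict |settings.phases| accordingly. A minimal sketch:

.. code-block:: python

    from hypothesis import Phase, example, given, settings, strategies as st

    @example(2**17 - 1)
    @example(2**19 - 1)
    @given(st.integers())
    # run only the explicit examples, skipping random generation entirely
    @settings(phases=[Phase.explicit])
    def test_integers(n):
        pass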
Explicit examples do not shrink
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Note that unlike examples generated by Hypothesis, examples provided using |@example| do not shrink. We can see this by adding a failing assertion:
.. code-block:: python

    from hypothesis import example, given, strategies as st

    @example(2**17 - 1)
    @given(st.integers())
    def test_something_with_integers(n):
        assert n < 100
Hypothesis will print ``Falsifying explicit example: test_something_with_integers(n=131071)``, instead of shrinking to ``n=100``.
Prefer |@example| over the database for correctness
---------------------------------------------------
.. TODO_DOCS: link to /explanation/database-keys
While the database is useful for quick local iteration, Hypothesis may invalidate it when upgrading (because e.g. the internal format may have changed). Changes to the source code of a test function may also change its database key, invalidating its stored entries. We therefore recommend against relying on the database for the correctness of your tests. If you want to ensure an input is run every time, use |@example|.
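For instance, once the failing test from earlier has been shrunk to ``n=50``, you might pin that input explicitly so it is retried on every run, with or without the database:

.. code-block:: python

    from hypothesis import example, given, strategies as st

    # pin the previously-failing (shrunk) input so it always runs,
    # even if the local database is deleted or invalidated
    @example(50)
    @given(st.integers())
    def f(n):
        assert n < 50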
Replaying examples with |@reproduce_failure|
--------------------------------------------
If |settings.print_blob| is set to ``True`` (the default in the ``ci`` settings profile) and a test fails, Hypothesis will print an |@reproduce_failure| decorator containing an opaque blob as part of the error message:
.. code-block:: pycon

    >>> from hypothesis import settings, given
    >>> import hypothesis.strategies as st
    >>> @given(st.floats())
    ... @settings(print_blob=True)
    ... def test(f):
    ...     assert f == f
    ...
    >>> test()
    ...
    Falsifying example: test(
        f=nan,
    )
    You can reproduce this example by temporarily adding @reproduce_failure('6.131.23', b'ACh/+AAAAAAAAA==') as a decorator on your test case
You can add this decorator to your test to reproduce the failure. This can be useful for locally replaying failures found by CI. Note that the binary blob is not stable across Hypothesis versions, so you should not leave this decorator on your tests permanently. Use |@example| with an explicit input instead.
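For example, using the blob printed above (the version string must match your installed Hypothesis version):

.. code-block:: python

    from hypothesis import given, reproduce_failure, strategies as st

    # temporarily added to replay the failure printed above; remove once fixed
    @reproduce_failure("6.131.23", b"ACh/+AAAAAAAAA==")
    @given(st.floats())
    def test(f):
        assert f == f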
Sharing failures using the database
-----------------------------------
If you work with multiple developers, or want to share failures across environments (such as locally replaying a failure found by CI), another option is to share the Hypothesis database.
For instance, by setting |settings.database| to an instance of a networked database like |RedisExampleDatabase|, any developer connecting to that networked database will automatically replay any failures found by other developers.
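A minimal sketch, assuming a Redis server reachable by every developer (the hostname here is a placeholder, and the ``redis`` package must be installed):

.. code-block:: python

    from redis import Redis

    from hypothesis import settings
    from hypothesis.extra.redis import RedisExampleDatabase

    # every developer loading this profile shares one failure database
    settings.register_profile(
        "shared",
        database=RedisExampleDatabase(Redis(host="redis.example.com")),
    )
    settings.load_profile("shared")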
Similarly, setting |settings.database| to a |GitHubArtifactDatabase| will automatically replay any failures found in CI and uploaded as a GitHub Actions artifact.
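A minimal sketch for local development, combining the usual local database with a read-only view of failures uploaded by CI (``"myorg"`` and ``"myrepo"`` are placeholders):

.. code-block:: python

    from hypothesis import settings
    from hypothesis.database import (
        DirectoryBasedExampleDatabase,
        GitHubArtifactDatabase,
        MultiplexedDatabase,
        ReadOnlyDatabase,
    )

    # read failures downloaded from the CI artifact, but never write to it
    ci = ReadOnlyDatabase(GitHubArtifactDatabase("myorg", "myrepo"))
    local = DirectoryBasedExampleDatabase(".hypothesis/examples")

    settings.register_profile("dev", database=MultiplexedDatabase(local, ci))
    settings.load_profile("dev")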