.. _pytest-plugins:

pytest plugin
=============

We use a special ``pytest`` plugin to improve the testing side of this project.
For example, it is a popular request to ensure
that your container does have its error track handled,
because otherwise developers might forget to do it properly.
This is impossible to enforce with types, but is really simple to check with tests.

Installation
------------

You will need to install ``pytest`` separately.
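
For example, with a standard ``pip`` setup (assuming the package names
as published on PyPI), the installation might look like:

.. code:: shell

  pip install returns pytest
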

Usage
-----

There's no need to install anything special:
``pytest`` will automatically find and use this plugin.

To use it in your tests, request the ``returns`` fixture like so:

.. code:: python

  def test_my_container(returns):
      ...

assert_equal
~~~~~~~~~~~~

We have a special helper to compare containers for equality.
It is an easy task for two ``Result`` or ``Maybe`` containers,
but it is not so easy for two ``ReaderResult`` or ``FutureResult`` instances.
Take a look:

.. code:: python

  >>> from returns.result import Result
  >>> from returns.context import Reader
  >>> assert Result.from_value(1) == Result.from_value(1)
  >>> Reader.from_value(1) == Reader.from_value(1)
  False
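
The reason is that ``Reader`` wraps a function, and in Python two separately
defined functions never compare equal, even when they behave identically.
A minimal illustration (plain functions, no ``returns`` involved):

.. code:: python

  # Two functions with identical behavior are still distinct objects,
  # so ``==`` falls back to identity comparison and returns False.
  def first(deps):
      return 1

  def second(deps):
      return 1

  assert first(None) == second(None)  # same result when called
  assert first != second              # but the functions are not equal
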

So, we can use :func:`~returns.primitives.asserts.assert_equal`
or the ``assert_equal`` method of our ``pytest`` fixture:

.. code:: python

  >>> from returns.result import Success
  >>> from returns.context import Reader
  >>> from returns.contrib.pytest import ReturnsAsserts

  >>> def test_container_equality(returns: ReturnsAsserts):
  ...     returns.assert_equal(Success(1), Success(1))
  ...     returns.assert_equal(Reader.from_value(1), Reader.from_value(1))

  >>> # We only run these tests manually, because it is a doc example:
  >>> returns_fixture = getfixture('returns')
  >>> test_container_equality(returns_fixture)

is_error_handled
~~~~~~~~~~~~~~~~

The first helper we define is the ``is_error_handled`` function.
It tests whether a container has its error track handled.

.. code:: python

  >>> from returns.result import Failure, Success
  >>> from returns.contrib.pytest import ReturnsAsserts

  >>> def test_error_handled(returns: ReturnsAsserts):
  ...     assert not returns.is_error_handled(Failure(1))
  ...     assert returns.is_error_handled(
  ...         Failure(1).lash(lambda _: Success('default value')),
  ...     )

  >>> # We only run these tests manually, because it is a doc example:
  >>> returns_fixture = getfixture('returns')
  >>> test_error_handled(returns_fixture)

We recommend unit testing big chunks of code this way.
This is helpful for big pipelines, where you need
at least one error handler at the very end.

This is how it works internally:

- Methods like ``fix`` and ``lash`` mark errors
  inside the container as handled
- Methods like ``map`` and ``alt`` just copy
  the error handling state from the old container to the new one,
  so there's no need to re-handle the error after these methods
- Methods like ``bind`` create new containers with unhandled errors

.. note::

  We use monkeypatching of containers inside tests to make this check possible.
  They are still purely functional inside.
  It does not affect production code.
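
To make the bookkeeping above concrete, here is a toy model of the
handled-state tracking. This is an illustration only, not the plugin's
real implementation; the ``TracedResult`` class and ``handled`` flag
are made up for this sketch:

.. code:: python

  # Toy model (NOT the real implementation) of tracking whether
  # a container's error track has been handled.
  class TracedResult:
      def __init__(self, value, is_failure, handled=False):
          self.value = value
          self.is_failure = is_failure
          self.handled = handled  # the flag the plugin would track

      def lash(self, fn):
          # ``lash`` handles the error track: mark the error as handled.
          if self.is_failure:
              new = fn(self.value)
              return TracedResult(new.value, new.is_failure, handled=True)
          return self

      def alt(self, fn):
          # ``alt`` only maps the error value: copy the handled state.
          if self.is_failure:
              return TracedResult(fn(self.value), True, handled=self.handled)
          return self

      def bind(self, fn):
          # ``bind`` produces a fresh container: errors start unhandled.
          if not self.is_failure:
              new = fn(self.value)
              return TracedResult(new.value, new.is_failure, handled=False)
          return self

The real plugin attaches a similar flag to real containers via
monkeypatching, so your production code never sees it.
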

assert_trace
~~~~~~~~~~~~

Sometimes we have to know if a container is created correctly at a specific
point of our flow.
``assert_trace`` helps us to check exactly this by identifying
when a container is created and looking for the desired function.

.. code:: python

  >>> from returns.result import Result, Success, Failure
  >>> from returns.contrib.pytest import ReturnsAsserts

  >>> def desired_function(arg: str) -> Result[int, str]:
  ...     if arg.isnumeric():
  ...         return Success(int(arg))
  ...     return Failure('"{0}" is not a number'.format(arg))

  >>> def test_if_failure_is_created_at_convert_function(
  ...     returns: ReturnsAsserts,
  ... ):
  ...     with returns.assert_trace(Failure, desired_function):
  ...         Success('not a number').bind(desired_function)

  >>> def test_if_success_is_created_at_convert_function(
  ...     returns: ReturnsAsserts,
  ... ):
  ...     with returns.assert_trace(Success, desired_function):
  ...         Success('42').bind(desired_function)

  >>> # We only run these tests manually, because it is a doc example:
  >>> returns_fixture = getfixture('returns')
  >>> test_if_failure_is_created_at_convert_function(returns_fixture)
  >>> test_if_success_is_created_at_convert_function(returns_fixture)
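
To see how a trace-based assertion like this *could* work, here is a toy
sketch built on :func:`sys.settrace`. This is not the real plugin
internals; ``toy_assert_trace`` is a made-up name for illustration:

.. code:: python

  import sys
  from contextlib import contextmanager

  # Toy sketch of a trace-based assertion (NOT the real plugin code).
  @contextmanager
  def toy_assert_trace(expected_type, expected_fn):
      created = []

      def tracer(frame, event, arg):
          # ``return`` events carry the returned value in ``arg``:
          # record it when the desired function returns the desired type.
          if (
              event == 'return'
              and frame.f_code is expected_fn.__code__
              and isinstance(arg, expected_type)
          ):
              created.append(arg)
          return tracer

      previous = sys.gettrace()
      sys.settrace(tracer)
      try:
          yield
      finally:
          sys.settrace(previous)
      assert created, 'container was not created by the desired function'

For example, ``with toy_assert_trace(int, some_fn): some_fn()`` passes
only if ``some_fn`` actually returned an ``int`` while the block ran.
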

markers
~~~~~~~

We also ship a bunch of pre-defined markers with ``returns``:

- ``returns_lawful`` is used to mark all tests generated
  by our :ref:`hypothesis-plugins`
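
Since these are regular ``pytest`` markers, you can select or deselect
them with the standard ``-m`` option, for example:

.. code:: shell

  # Run everything except the auto-generated law tests:
  pytest -m "not returns_lawful"
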

Further reading
---------------

- `pytest docs <https://docs.pytest.org/en/latest/contents.html>`_

API Reference
-------------

.. autoclasstree:: returns.contrib.pytest.plugin
   :strict:

.. automodule:: returns.contrib.pytest.plugin
   :members:

.. automodule:: returns.primitives.asserts
   :members: