# -*- coding: utf-8 -*-
from setuptools import setup
packages = \
['hypothesis_auto']
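# package_data note (added for clarity): the '' key applies the patterns below
# to every listed package, and '*' bundles any data files found alongside the
# Python modules.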
package_data = \
{'': ['*']}
install_requires = \
['hypothesis>=4.36,<6.0.0', 'pydantic>=0.32.2,<2.0.0']
extras_require = \
{'pytest': ['pytest>=4.0.0']}
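# The pytest integration is exposed as a setuptools "extra"; installing it
# pulls in pytest as well, e.g.: pip install "hypothesis-auto[pytest]"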
setup_kwargs = {
'name': 'hypothesis-auto',
'version': '1.1.5',
'description': 'Extends Hypothesis to add fully automatic testing of type annotated functions',
'long_description': '[hypothesis-auto](https://timothycrosley.github.io/hypothesis-auto/)\n_________________\n\n[PyPI version](http://badge.fury.io/py/hypothesis-auto)\n[Build Status](https://travis-ci.org/timothycrosley/hypothesis-auto)\n[Code Coverage](https://codecov.io/gh/timothycrosley/hypothesis-auto)\n[Join the chat at Gitter](https://gitter.im/timothycrosley/hypothesis-auto?utm_source=badge&utm_medium=badge&utm_campaign=pr-badge&utm_content=badge)\n[PyPI](https://pypi.python.org/pypi/hypothesis-auto/)\n[Downloads](https://pepy.tech/project/hypothesis-auto)\n_________________\n\n[Read Latest Documentation](https://timothycrosley.github.io/hypothesis-auto/) - [Browse GitHub Code Repository](https://github.com/timothycrosley/hypothesis-auto/)\n_________________\n\n**hypothesis-auto** is an extension for the [Hypothesis](https://hypothesis.readthedocs.io/en/latest/) project that enables fully automatic tests for type annotated functions.\n\n[Demo](https://github.com/timothycrosley/hypothesis-auto/blob/master/art/demo.gif)\n\nKey Features:\n\n* **Type Annotation Powered**: Utilize your function\'s existing type annotations to build dozens of test cases automatically.\n* **Low Barrier**: Start utilizing property-based testing in the lowest barrier way possible. Just run `auto_test(FUNCTION)` to run dozens of tests.\n* **pytest Compatible**: Like Hypothesis itself, hypothesis-auto has built-in compatibility with the popular [pytest](https://docs.pytest.org/en/latest/) testing framework. This means that you can turn your automatically generated tests into individual pytest test cases with one line.\n* **Scales Up**: As you find yourself needing to customize your auto_test cases, you can easily utilize all the features of [Hypothesis](https://hypothesis.readthedocs.io/en/latest/), including custom strategies per parameter.\n\n## Installation:\n\nTo get started - install `hypothesis-auto` into your project\'s virtual environment:\n\n`pip3 install hypothesis-auto`\n\nOR\n\n`poetry add hypothesis-auto`\n\nOR\n\n`pipenv install hypothesis-auto`\n\n## Usage Examples:\n\n!!! warning\n In old usage examples you will see `_`-prefixed parameters like `_auto_verify=`.\n This was done to avoid conflicting with existing function parameters.\n Based on community feedback the project switched to `_` suffixes, such as `auto_verify_=`, to keep the likelihood of conflicts low while avoiding the connotation of private parameters.\n\n### Framework independent usage\n\n#### Basic `auto_test` usage:\n\n```python3\nfrom hypothesis_auto import auto_test\n\n\ndef add(number_1: int, number_2: int = 1) -> int:\n return number_1 + number_2\n\n\nauto_test(add) # 50 property based scenarios are generated and run against add\nauto_test(add, auto_runs_=1_000) # Let\'s make that 1,000\n```\n\n#### Adding an allowed exception:\n\n```python3\nfrom hypothesis_auto import auto_test\n\n\ndef divide(number_1: int, number_2: int) -> int:\n return number_1 / number_2\n\nauto_test(divide)\n\n-> 1012 raise the_error_hypothesis_found\n 1013\n 1014 for attrib in dir(test):\n\n<ipython-input-2-65a3aa66e9f9> in divide(number_1, number_2)\n 1 def divide(number_1: int, number_2: int) -> int:\n----> 2 return number_1 / number_2\n 3\n\n0/0\n\nZeroDivisionError: division by zero\n\n\nauto_test(divide, auto_allow_exceptions_=(ZeroDivisionError, ))\n```\n\n#### Using `auto_test` with a custom verification method:\n\n```python3\nfrom hypothesis_auto import Scenario, auto_test\n\n\ndef add(number_1: int, number_2: int = 1) -> int:\n return number_1 + number_2\n\n\ndef my_custom_verifier(scenario: Scenario):\n if scenario.kwargs["number_1"] > 0 and scenario.kwargs["number_2"] > 0:\n assert scenario.result > scenario.kwargs["number_1"]\n assert scenario.result > scenario.kwargs["number_2"]\n elif scenario.kwargs["number_1"] < 0 and scenario.kwargs["number_2"] < 0:\n assert scenario.result < scenario.kwargs["number_1"]\n assert scenario.result < scenario.kwargs["number_2"]\n else:\n assert scenario.result >= min(scenario.kwargs.values())\n assert scenario.result <= max(scenario.kwargs.values())\n\n\nauto_test(add, auto_verify_=my_custom_verifier)\n```\n\nCustom verification methods should take a single [Scenario](https://timothycrosley.github.io/hypothesis-auto/reference/hypothesis_auto/tester/#scenario) and raise an exception to signify errors.\n\nFor the full set of parameters you can pass into auto_test, see its [API reference documentation](https://timothycrosley.github.io/hypothesis-auto/reference/hypothesis_auto/tester/).\n\n### pytest usage\n\n#### Using `auto_pytest_magic` to auto-generate dozens of pytest test cases:\n\n```python3\nfrom hypothesis_auto import auto_pytest_magic\n\n\ndef add(number_1: int, number_2: int = 1) -> int:\n return number_1 + number_2\n\n\nauto_pytest_magic(add)\n```\n\n#### Using `auto_pytest` to run dozens of test cases within a temporary directory:\n\n```python3\nfrom hypothesis_auto import auto_pytest\n\n\ndef add(number_1: int, number_2: int = 1) -> int:\n return number_1 + number_2\n\n\n@auto_pytest()\ndef test_add(test_case, tmpdir):\n tmpdir.mkdir().chdir()\n test_case()\n```\n\n#### Using `auto_pytest_magic` with a custom verification method:\n\n```python3\nfrom hypothesis_auto import Scenario, auto_pytest_magic\n\n\ndef add(number_1: int, number_2: int = 1) -> int:\n return number_1 + number_2\n\n\ndef my_custom_verifier(scenario: Scenario):\n if scenario.kwargs["number_1"] > 0 and scenario.kwargs["number_2"] > 0:\n assert scenario.result > scenario.kwargs["number_1"]\n assert scenario.result > scenario.kwargs["number_2"]\n elif scenario.kwargs["number_1"] < 0 and scenario.kwargs["number_2"] < 0:\n assert scenario.result < scenario.kwargs["number_1"]\n assert scenario.result < scenario.kwargs["number_2"]\n else:\n assert scenario.result >= min(scenario.kwargs.values())\n assert scenario.result <= max(scenario.kwargs.values())\n\n\nauto_pytest_magic(add, auto_verify_=my_custom_verifier)\n```\n\nCustom verification methods should take a single [Scenario](https://timothycrosley.github.io/hypothesis-auto/reference/hypothesis_auto/tester/#scenario) and raise an exception to signify errors.\n\nFor the full reference of the pytest integration API see the [API reference documentation](https://timothycrosley.github.io/hypothesis-auto/reference/hypothesis_auto/pytest/).\n\n## Why Create hypothesis-auto?\n\nI wanted a no/low resistance way to start incorporating property-based tests across my projects. A solution that also encouraged the use of type hints was a win/win for me.\n\nI hope you too find `hypothesis-auto` useful!\n\n~Timothy Crosley\n',
'author': 'Timothy Crosley',
'author_email': 'timothy.crosley@gmail.com',
'maintainer': None,
'maintainer_email': None,
'url': None,
'packages': packages,
'package_data': package_data,
'install_requires': install_requires,
'extras_require': extras_require,
'python_requires': '>=3.6',
}
setup(**setup_kwargs)
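# This script is executed by pip when installing from a source distribution,
# e.g. `pip install .` from an unpacked sdist.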