=======================
Advanced testing topics
=======================
The request factory
===================
.. currentmodule:: django.test
.. class:: RequestFactory
The :class:`~django.test.RequestFactory` shares the same API as
the test client. However, instead of behaving like a browser, the
RequestFactory provides a way to generate a request instance that can
be used as the first argument to any view. This means you can test a
view function the same way as you would test any other function -- as
a black box, with exactly known inputs, testing for specific outputs.
The API for the :class:`~django.test.RequestFactory` is a slightly
restricted subset of the test client API:
* It only has access to the HTTP methods :meth:`~Client.get()`,
:meth:`~Client.post()`, :meth:`~Client.put()`,
:meth:`~Client.delete()`, :meth:`~Client.head()`,
:meth:`~Client.options()`, and :meth:`~Client.trace()`.
* These methods accept all the same arguments *except* for
``follow``. Since this is just a factory for producing
requests, it's up to you to handle the response.
* It does not support middleware. Session and authentication
attributes must be supplied by the test itself if required
for the view to function properly.
Example
-------
The following is a unit test using the request factory::
from django.contrib.auth.models import AnonymousUser, User
from django.test import RequestFactory, TestCase
from .views import MyView, my_view
class SimpleTest(TestCase):
def setUp(self):
# Every test needs access to the request factory.
self.factory = RequestFactory()
self.user = User.objects.create_user(
username='jacob', email='jacob@…', password='top_secret')
def test_details(self):
# Create an instance of a GET request.
request = self.factory.get('/customer/details')
# Recall that middleware are not supported. You can simulate a
# logged-in user by setting request.user manually.
request.user = self.user
# Or you can simulate an anonymous user by setting request.user to
# an AnonymousUser instance.
request.user = AnonymousUser()
# Test my_view() as if it were deployed at /customer/details
response = my_view(request)
# Use this syntax for class-based views.
response = MyView.as_view()(request)
self.assertEqual(response.status_code, 200)
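Because middleware isn't run, a view that relies on a session won't find one
on a factory-built request. You can attach one manually; a minimal sketch,
assuming the standard ``SessionMiddleware`` (the helper name is illustrative)::

    from django.contrib.sessions.middleware import SessionMiddleware

    def add_session_to_request(request):
        """Annotate a RequestFactory request with a session."""
        middleware = SessionMiddleware(lambda req: None)  # get_response is unused here
        middleware.process_request(request)
        request.session.save()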
AsyncRequestFactory
-------------------
``RequestFactory`` creates WSGI-like requests. If you want to create ASGI-like
requests, including having a correct ASGI ``scope``, you can instead use
``django.test.AsyncRequestFactory``.
This class is directly API-compatible with ``RequestFactory``, with the only
difference being that it returns ``ASGIRequest`` instances rather than
``WSGIRequest`` instances. All of its methods are still synchronous callables.
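For example, a minimal sketch of exercising a hypothetical async view with
``AsyncRequestFactory`` (the view and URL are illustrative)::

    from django.http import HttpResponse
    from django.test import AsyncRequestFactory, TestCase

    async def my_async_view(request):
        # Hypothetical async view used only for this example.
        return HttpResponse('Hello async world!')

    class AsyncViewTest(TestCase):
        async def test_my_async_view(self):
            # The factory methods themselves are synchronous; only the view is awaited.
            request = AsyncRequestFactory().get('/some-url/')
            response = await my_async_view(request)
            self.assertEqual(response.status_code, 200)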
Testing class-based views
=========================
In order to test class-based views outside of the request/response cycle you
must ensure that they are configured correctly, by calling
:meth:`~django.views.generic.base.View.setup` after instantiation.
For example, assuming the following class-based view:
.. code-block:: python
:caption: ``views.py``
from django.views.generic import TemplateView
class HomeView(TemplateView):
template_name = 'myapp/home.html'
def get_context_data(self, **kwargs):
kwargs['environment'] = 'Production'
return super().get_context_data(**kwargs)
You may directly test the ``get_context_data()`` method by first instantiating
the view, then passing a ``request`` to ``setup()``, before proceeding with
your test's code:
.. code-block:: python
:caption: ``tests.py``
from django.test import RequestFactory, TestCase
from .views import HomeView
class HomePageTest(TestCase):
def test_environment_set_in_context(self):
request = RequestFactory().get('/')
view = HomeView()
view.setup(request)
context = view.get_context_data()
self.assertIn('environment', context)
.. _topics-testing-advanced-multiple-hosts:
Tests and multiple host names
=============================
The :setting:`ALLOWED_HOSTS` setting is validated when running tests. This
allows the test client to differentiate between internal and external URLs.
Projects that support multitenancy or otherwise alter business logic based on
the request's host and use custom host names in tests must include those hosts
in :setting:`ALLOWED_HOSTS`.
The first option to do so is to add the hosts to your settings file. For
example, the test suite for docs.djangoproject.com includes the following::
from django.test import TestCase
class SearchFormTestCase(TestCase):
def test_empty_get(self):
response = self.client.get('/en/dev/search/', HTTP_HOST='docs.djangoproject.dev:8000')
self.assertEqual(response.status_code, 200)
and the settings file includes a list of the domains supported by the project::
ALLOWED_HOSTS = [
'www.djangoproject.dev',
'docs.djangoproject.dev',
...
]
Another option is to add the required hosts to :setting:`ALLOWED_HOSTS` using
:meth:`~django.test.override_settings()` or
:meth:`~django.test.SimpleTestCase.modify_settings()`. This option may be
preferable in standalone apps that can't package their own settings file or
for projects where the list of domains is not static (e.g., subdomains for
multitenancy). For example, you could write a test for the domain
``http://otherserver/`` as follows::
from django.test import TestCase, override_settings
class MultiDomainTestCase(TestCase):
@override_settings(ALLOWED_HOSTS=['otherserver'])
def test_other_domain(self):
response = self.client.get('http://otherserver/foo/bar/')
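If you'd rather extend the existing list of hosts than replace it, a roughly
equivalent sketch using ``modify_settings()`` could look like this::

    from django.test import TestCase

    class MultiDomainTestCase(TestCase):
        def test_other_domain(self):
            # Append to ALLOWED_HOSTS instead of replacing it.
            with self.modify_settings(ALLOWED_HOSTS={'append': 'otherserver'}):
                response = self.client.get('http://otherserver/foo/bar/')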
Disabling :setting:`ALLOWED_HOSTS` checking (``ALLOWED_HOSTS = ['*']``) when
running tests prevents the test client from raising a helpful error message if
you follow a redirect to an external URL.
.. _topics-testing-advanced-multidb:
Tests and multiple databases
============================
.. _topics-testing-primaryreplica:
Testing primary/replica configurations
--------------------------------------
If you're testing a multiple database configuration with primary/replica
(referred to as master/slave by some databases) replication, this strategy of
creating test databases poses a problem.
When the test databases are created, there won't be any replication,
and as a result, data created on the primary won't be seen on the
replica.
To compensate for this, Django allows you to define that a database is
a *test mirror*. Consider the following (simplified) example database
configuration::
DATABASES = {
'default': {
'ENGINE': 'django.db.backends.mysql',
'NAME': 'myproject',
'HOST': 'dbprimary',
# ... plus some other settings
},
'replica': {
'ENGINE': 'django.db.backends.mysql',
'NAME': 'myproject',
'HOST': 'dbreplica',
'TEST': {
'MIRROR': 'default',
},
# ... plus some other settings
}
}
In this setup, we have two database servers: ``dbprimary``, described
by the database alias ``default``, and ``dbreplica`` described by the
alias ``replica``. As you might expect, ``dbreplica`` has been configured
by the database administrator as a read replica of ``dbprimary``, so in
normal activity, any write to ``default`` will appear on ``replica``.
If Django created two independent test databases, this would break any
tests that expected replication to occur. However, the ``replica``
database has been configured as a test mirror (using the
:setting:`MIRROR <TEST_MIRROR>` test setting), indicating that under
testing, ``replica`` should be treated as a mirror of ``default``.
When the test environment is configured, a test version of ``replica``
will *not* be created. Instead the connection to ``replica``
will be redirected to point at ``default``. As a result, writes to
``default`` will appear on ``replica`` -- but because they are actually
the same database, not because there is data replication between the
two databases.
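For example, a test can write through ``default`` and read through
``replica``; under the mirror configuration both aliases reach the same test
database. A minimal sketch, assuming a hypothetical ``Author`` model::

    from django.test import TestCase

    from .models import Author  # hypothetical model

    class ReplicaReadTest(TestCase):
        databases = {'default', 'replica'}

        def test_replica_sees_primary_writes(self):
            Author.objects.create(name='Jane')
            # 'replica' is redirected to 'default' under test, so the write is visible.
            self.assertTrue(
                Author.objects.using('replica').filter(name='Jane').exists()
            )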
.. _topics-testing-creation-dependencies:
Controlling creation order for test databases
---------------------------------------------
By default, Django will assume all databases depend on the ``default``
database and therefore always create the ``default`` database first.
However, no guarantees are made on the creation order of any other
databases in your test setup.
If your database configuration requires a specific creation order, you
can specify the dependencies that exist using the :setting:`DEPENDENCIES
<TEST_DEPENDENCIES>` test setting. Consider the following (simplified)
example database configuration::
DATABASES = {
'default': {
# ... db settings
'TEST': {
'DEPENDENCIES': ['diamonds'],
},
},
'diamonds': {
# ... db settings
'TEST': {
'DEPENDENCIES': [],
},
},
'clubs': {
# ... db settings
'TEST': {
'DEPENDENCIES': ['diamonds'],
},
},
'spades': {
# ... db settings
'TEST': {
'DEPENDENCIES': ['diamonds', 'hearts'],
},
},
'hearts': {
# ... db settings
'TEST': {
'DEPENDENCIES': ['diamonds', 'clubs'],
},
}
}
Under this configuration, the ``diamonds`` database will be created first,
as it is the only database alias without dependencies. The ``default`` and
``clubs`` aliases will be created next (although the order of creation of this
pair is not guaranteed), then ``hearts``, and finally ``spades``.
If there are any circular dependencies in the :setting:`DEPENDENCIES
<TEST_DEPENDENCIES>` definition, an
:exc:`~django.core.exceptions.ImproperlyConfigured` exception will be raised.
Advanced features of ``TransactionTestCase``
============================================
.. attribute:: TransactionTestCase.available_apps
.. warning::
This attribute is a private API. It may be changed or removed without
a deprecation period in the future, for instance to accommodate changes
in application loading.
It's used to optimize Django's own test suite, which contains hundreds
of models but no relations between models in different applications.
By default, ``available_apps`` is set to ``None``. After each test, Django
calls :djadmin:`flush` to reset the database state. This empties all tables
and emits the :data:`~django.db.models.signals.post_migrate` signal, which
recreates one content type and four permissions for each model. This
operation gets expensive proportionally to the number of models.
Setting ``available_apps`` to a list of applications instructs Django to
behave as if only the models from these applications were available. The
behavior of ``TransactionTestCase`` changes as follows:
- :data:`~django.db.models.signals.post_migrate` is fired before each
test to create the content types and permissions for each model in
available apps, in case they're missing.
- After each test, Django empties only tables corresponding to models in
available apps. However, at the database level, truncation may cascade to
related models in unavailable apps. Furthermore
:data:`~django.db.models.signals.post_migrate` isn't fired; it will be
fired by the next ``TransactionTestCase``, after the correct set of
applications is selected.
Since the database isn't fully flushed, if a test creates instances of
models not included in ``available_apps``, they will leak and they may
cause unrelated tests to fail. Be careful with tests that use sessions;
the default session engine stores them in the database.
Since :data:`~django.db.models.signals.post_migrate` isn't emitted after
flushing the database, its state after a ``TransactionTestCase`` isn't the
same as after a ``TestCase``: it's missing the rows created by listeners
to :data:`~django.db.models.signals.post_migrate`. Considering the
:ref:`order in which tests are executed <order-of-tests>`, this isn't an
issue, provided either all ``TransactionTestCase`` in a given test suite
declare ``available_apps``, or none of them.
``available_apps`` is mandatory in Django's own test suite.
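For example, a transaction test case that only touches a couple of
applications might declare the following (a sketch; the app labels are
illustrative)::

    from django.test import TransactionTestCase

    class OrderProcessingTests(TransactionTestCase):
        available_apps = [
            'django.contrib.auth',
            'django.contrib.contenttypes',
            'shop',  # hypothetical application under test
        ]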
.. attribute:: TransactionTestCase.reset_sequences
Setting ``reset_sequences = True`` on a ``TransactionTestCase`` will make
sure sequences are always reset before the test run::
class TestsThatDependsOnPrimaryKeySequences(TransactionTestCase):
reset_sequences = True
def test_animal_pk(self):
lion = Animal.objects.create(name="lion", sound="roar")
# lion.pk is guaranteed to always be 1
self.assertEqual(lion.pk, 1)
Unless you are explicitly testing primary key sequence numbers, it is
recommended that you do not hard code primary key values in tests.
Using ``reset_sequences = True`` will slow down the test, since the primary
key reset is a relatively expensive database operation.
.. _topics-testing-enforce-run-sequentially:
Enforce running test classes sequentially
=========================================
If you have test classes that cannot be run in parallel (e.g. because they
share a common resource), you can use ``django.test.testcases.SerializeMixin``
to run them sequentially. This mixin uses a filesystem ``lockfile``.
For example, you can use ``__file__`` to determine that all test classes in the
same file that inherit from ``SerializeMixin`` will run sequentially::
import os
from django.test import TestCase
from django.test.testcases import SerializeMixin
class ImageTestCaseMixin(SerializeMixin):
lockfile = __file__
def setUp(self):
self.filename = os.path.join(temp_storage_dir, 'my_file.png')
self.file = create_file(self.filename)
class RemoveImageTests(ImageTestCaseMixin, TestCase):
def test_remove_image(self):
os.remove(self.filename)
self.assertFalse(os.path.exists(self.filename))
class ResizeImageTests(ImageTestCaseMixin, TestCase):
def test_resize_image(self):
resize_image(self.file, (48, 48))
self.assertEqual(get_image_size(self.file), (48, 48))
.. _testing-reusable-applications:
Using the Django test runner to test reusable applications
==========================================================
If you are writing a :doc:`reusable application </intro/reusable-apps>`
you may want to use the Django test runner to run your own test suite
and thus benefit from the Django testing infrastructure.
A common practice is a *tests* directory next to the application code, with the
following structure::
runtests.py
polls/
__init__.py
models.py
...
tests/
__init__.py
models.py
test_settings.py
tests.py
Let's take a look inside a couple of those files:
.. code-block:: python
:caption: ``runtests.py``
#!/usr/bin/env python
import os
import sys
import django
from django.conf import settings
from django.test.utils import get_runner
if __name__ == "__main__":
os.environ['DJANGO_SETTINGS_MODULE'] = 'tests.test_settings'
django.setup()
TestRunner = get_runner(settings)
test_runner = TestRunner()
failures = test_runner.run_tests(["tests"])
sys.exit(bool(failures))
This is the script that you invoke to run the test suite. It sets up the
Django environment, creates the test database and runs the tests.
For the sake of clarity, this example contains only the bare minimum
necessary to use the Django test runner. You may want to add
command-line options for controlling verbosity, passing in specific test
labels to run, etc.
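A hypothetical variation that accepts test labels and a verbosity option on
the command line might look like this::

    #!/usr/bin/env python
    import argparse
    import os
    import sys

    import django
    from django.conf import settings
    from django.test.utils import get_runner

    if __name__ == "__main__":
        parser = argparse.ArgumentParser()
        parser.add_argument('labels', nargs='*', default=['tests'])
        parser.add_argument('-v', '--verbosity', type=int, default=1)
        options = parser.parse_args()

        os.environ['DJANGO_SETTINGS_MODULE'] = 'tests.test_settings'
        django.setup()
        TestRunner = get_runner(settings)
        test_runner = TestRunner(verbosity=options.verbosity)
        failures = test_runner.run_tests(options.labels)
        sys.exit(bool(failures))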
.. code-block:: python
:caption: ``tests/test_settings.py``
SECRET_KEY = 'fake-key'
INSTALLED_APPS = [
"tests",
]
This file contains the :doc:`Django settings </topics/settings>`
required to run your app's tests.
Again, this is a minimal example; your tests may require additional
settings to run.
Since the *tests* package is included in :setting:`INSTALLED_APPS` when
running your tests, you can define test-only models in its ``models.py``
file.
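For example, ``tests/models.py`` might contain a hypothetical model used only
by the test suite::

    from django.db import models

    class PollStub(models.Model):
        # Exists only for the tests; it is never installed in a real project.
        question = models.CharField(max_length=200)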
.. _other-testing-frameworks:
Using different testing frameworks
==================================
Clearly, :mod:`unittest` is not the only Python testing framework. While Django
doesn't provide explicit support for alternative frameworks, it does provide a
way to invoke tests constructed for an alternative framework as if they were
normal Django tests.
When you run ``./manage.py test``, Django looks at the :setting:`TEST_RUNNER`
setting to determine what to do. By default, :setting:`TEST_RUNNER` points to
``'django.test.runner.DiscoverRunner'``. This class defines the default Django
testing behavior. This behavior involves:
#. Performing global pre-test setup.
#. Looking for tests in any file below the current directory whose name matches
the pattern ``test*.py``.
#. Creating the test databases.
#. Running ``migrate`` to install models and initial data into the test
databases.
#. Running the :doc:`system checks </topics/checks>`.
#. Running the tests that were found.
#. Destroying the test databases.
#. Performing global post-test teardown.
If you define your own test runner class and point :setting:`TEST_RUNNER` at
that class, Django will execute your test runner whenever you run
``./manage.py test``. In this way, it is possible to use any test framework
that can be executed from Python code, or to modify the Django test execution
process to satisfy whatever testing requirements you may have.
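As an illustration, the following sketch wraps pytest as a Django test runner;
the class name and option handling are illustrative, not an official
integration::

    class PytestTestRunner:
        """Run the test suite with pytest instead of unittest."""

        def __init__(self, verbosity=1, failfast=False, keepdb=False, **kwargs):
            self.verbosity = verbosity
            self.failfast = failfast
            self.keepdb = keepdb

        def run_tests(self, test_labels, **kwargs):
            import pytest

            argv = list(test_labels)
            if self.verbosity == 0:
                argv.append('--quiet')
            if self.failfast:
                argv.append('--exitfirst')
            # pytest.main() returns an exit status; any non-zero value is
            # treated as a failure by the test management command.
            return pytest.main(argv)

Pointing :setting:`TEST_RUNNER` at a dotted path such as
``'myproject.runners.PytestTestRunner'`` (hypothetical) would then make
``./manage.py test`` delegate to pytest.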
.. _topics-testing-test_runner:
Defining a test runner
----------------------
.. currentmodule:: django.test.runner
A test runner is a class defining a ``run_tests()`` method. Django ships
with a ``DiscoverRunner`` class that defines the default Django testing
behavior. This class defines the ``run_tests()`` entry point, plus a
selection of other methods that are used by ``run_tests()`` to set up, execute
and tear down the test suite.
.. class:: DiscoverRunner(pattern='test*.py', top_level=None, verbosity=1, interactive=True, failfast=False, keepdb=False, reverse=False, debug_mode=False, debug_sql=False, parallel=0, tags=None, exclude_tags=None, test_name_patterns=None, pdb=False, buffer=False, enable_faulthandler=True, timing=True, **kwargs)
``DiscoverRunner`` will search for tests in any file matching ``pattern``.
``top_level`` can be used to specify the directory containing your
top-level Python modules. Usually Django can figure this out automatically,
so it's not necessary to specify this option. If specified, it should
generally be the directory containing your ``manage.py`` file.
``verbosity`` determines the amount of notification and debug information
that will be printed to the console; ``0`` is no output, ``1`` is normal
output, and ``2`` is verbose output.
If ``interactive`` is ``True``, the test suite has permission to ask the
user for instructions when the test suite is executed. An example of this
behavior would be asking for permission to delete an existing test
database. If ``interactive`` is ``False``, the test suite must be able to
run without any manual intervention.
If ``failfast`` is ``True``, the test suite will stop running after the
first test failure is detected.
If ``keepdb`` is ``True``, the test suite will use the existing database,
or create one if necessary. If ``False``, a new database will be created,
prompting the user to remove the existing one, if present.
If ``reverse`` is ``True``, test cases will be executed in the opposite
order. This could be useful to debug tests that aren't properly isolated
and have side effects. :ref:`Grouping by test class <order-of-tests>` is
preserved when using this option.
``debug_mode`` specifies what the :setting:`DEBUG` setting should be
set to prior to running tests.
``parallel`` specifies the number of processes. If ``parallel`` is greater
than ``1``, the test suite will run in ``parallel`` processes. If there are
fewer test cases than configured processes, Django will reduce the number
of processes accordingly. Each process gets its own database. This option
requires the third-party ``tblib`` package to display tracebacks correctly.
``tags`` can be used to specify a set of :ref:`tags for filtering tests
<topics-tagging-tests>`. May be combined with ``exclude_tags``.
``exclude_tags`` can be used to specify a set of
:ref:`tags for excluding tests <topics-tagging-tests>`. May be combined
with ``tags``.
If ``debug_sql`` is ``True``, failing test cases will output SQL queries
logged to the :ref:`django.db.backends logger <django-db-logger>` as well
as the traceback. If ``verbosity`` is ``2``, then queries in all tests are
output.
``test_name_patterns`` can be used to specify a set of patterns for
filtering test methods and classes by their names.
If ``pdb`` is ``True``, a debugger (``pdb`` or ``ipdb``) will be spawned at
each test error or failure.
If ``buffer`` is ``True``, outputs from passing tests will be discarded.
If ``enable_faulthandler`` is ``True``, :py:mod:`faulthandler` will be
enabled.
If ``timing`` is ``True``, test timings, including database setup and total
run time, will be shown.
Django may, from time to time, extend the capabilities of the test runner
by adding new arguments. The ``**kwargs`` declaration allows for this
expansion. If you subclass ``DiscoverRunner`` or write your own test
runner, ensure it accepts ``**kwargs``.
Your test runner may also define additional command-line options.
Create or override an ``add_arguments(cls, parser)`` class method and add
custom arguments by calling ``parser.add_argument()`` inside the method, so
that the :djadmin:`test` command will be able to use those arguments.
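For example, a sketch of a ``DiscoverRunner`` subclass adding a hypothetical
``--slow-report`` flag::

    from django.test.runner import DiscoverRunner

    class MyTestRunner(DiscoverRunner):
        def __init__(self, *args, slow_report=False, **kwargs):
            super().__init__(*args, **kwargs)
            self.slow_report = slow_report

        @classmethod
        def add_arguments(cls, parser):
            super().add_arguments(parser)
            parser.add_argument(
                '--slow-report', action='store_true',
                help='Report the slowest tests after the run.',
            )

The parsed value is passed to the runner's ``__init__()`` as the
``slow_report`` keyword argument when the :djadmin:`test` command instantiates
the runner.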
.. versionadded:: 3.1
The ``buffer`` argument was added.
.. versionadded:: 3.2
The ``enable_faulthandler`` and ``timing`` arguments were added.
Attributes
~~~~~~~~~~
.. attribute:: DiscoverRunner.test_suite
The class used to build the test suite. By default it is set to
``unittest.TestSuite``. This can be overridden if you wish to implement
different logic for collecting tests.
.. attribute:: DiscoverRunner.test_runner
This is the class of the low-level test runner which is used to execute
the individual tests and format the results. By default it is set to
``unittest.TextTestRunner``. Despite the unfortunate similarity in
naming conventions, this is not the same type of class as
``DiscoverRunner``, which covers a broader set of responsibilities. You
can override this attribute to modify the way tests are run and reported.
.. attribute:: DiscoverRunner.test_loader
This is the class that loads tests, whether from TestCases or modules or
otherwise and bundles them into test suites for the runner to execute.
By default it is set to ``unittest.defaultTestLoader``. You can override
this attribute if your tests are going to be loaded in unusual ways.
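For instance, a sketch of a runner that swaps in a custom low-level test
runner class (both names are illustrative)::

    import unittest

    from django.test.runner import DiscoverRunner

    class TimedTextTestRunner(unittest.TextTestRunner):
        # Customize how individual tests are executed and reported here.
        resultclass = unittest.TextTestResult

    class TimedDiscoverRunner(DiscoverRunner):
        test_runner = TimedTextTestRunner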
Methods
~~~~~~~
.. method:: DiscoverRunner.run_tests(test_labels, extra_tests=None, **kwargs)
Run the test suite.
``test_labels`` allows you to specify which tests to run and supports
several formats (see :meth:`DiscoverRunner.build_suite` for a list of
supported formats).
``extra_tests`` is a list of extra ``TestCase`` instances to add to the
suite that is executed by the test runner. These extra tests are run
in addition to those discovered in the modules listed in ``test_labels``.
This method should return the number of tests that failed.
.. classmethod:: DiscoverRunner.add_arguments(parser)
Override this class method to add custom arguments accepted by the
:djadmin:`test` management command. See
:py:meth:`argparse.ArgumentParser.add_argument()` for details about adding
arguments to a parser.
.. method:: DiscoverRunner.setup_test_environment(**kwargs)
Sets up the test environment by calling
:func:`~django.test.utils.setup_test_environment` and setting
:setting:`DEBUG` to ``self.debug_mode`` (defaults to ``False``).
.. method:: DiscoverRunner.build_suite(test_labels=None, extra_tests=None, **kwargs)
Constructs a test suite that matches the test labels provided.
``test_labels`` is a list of strings describing the tests to be run. A test
label can take one of four forms:
* ``path.to.test_module.TestCase.test_method`` -- Run a single test method
in a test case.
* ``path.to.test_module.TestCase`` -- Run all the test methods in a test
case.
* ``path.to.module`` -- Search for and run all tests in the named Python
package or module.
* ``path/to/directory`` -- Search for and run all tests below the named
directory.
If ``test_labels`` has a value of ``None``, the test runner will search for
tests in all files below the current directory whose names match its
``pattern`` (see above).
``extra_tests`` is a list of extra ``TestCase`` instances to add to the
suite that is executed by the test runner. These extra tests are run
in addition to those discovered in the modules listed in ``test_labels``.
Returns a ``TestSuite`` instance ready to be run.
.. method:: DiscoverRunner.setup_databases(**kwargs)
Creates the test databases by calling
:func:`~django.test.utils.setup_databases`.
.. method:: DiscoverRunner.run_checks(databases)
Runs the :doc:`system checks </topics/checks>` on the test ``databases``.
.. versionadded:: 3.1
The ``databases`` parameter was added.
.. method:: DiscoverRunner.run_suite(suite, **kwargs)
Runs the test suite.
Returns the result produced by running the test suite.
.. method:: DiscoverRunner.get_test_runner_kwargs()
Returns the keyword arguments to instantiate the
``DiscoverRunner.test_runner`` with.
.. method:: DiscoverRunner.teardown_databases(old_config, **kwargs)
Destroys the test databases, restoring pre-test conditions by calling
:func:`~django.test.utils.teardown_databases`.
.. method:: DiscoverRunner.teardown_test_environment(**kwargs)
Restores the pre-test environment.
.. method:: DiscoverRunner.suite_result(suite, result, **kwargs)
Computes and returns a return code based on a test suite, and the result
from that test suite.
Testing utilities
-----------------
``django.test.utils``
~~~~~~~~~~~~~~~~~~~~~
.. module:: django.test.utils
:synopsis: Helpers to write custom test runners.
To assist in the creation of your own test runner, Django provides a number of
utility methods in the ``django.test.utils`` module.
.. function:: setup_test_environment(debug=None)
Performs global pre-test setup, such as installing instrumentation for the
template rendering system and setting up the dummy email outbox.
If ``debug`` isn't ``None``, the :setting:`DEBUG` setting is updated to its
value.
.. function:: teardown_test_environment()
Performs global post-test teardown, such as removing instrumentation from
the template system and restoring normal email services.
.. function:: setup_databases(verbosity, interactive, *, time_keeper=None, keepdb=False, debug_sql=False, parallel=0, aliases=None, **kwargs)
Creates the test databases.
Returns a data structure that provides enough detail to undo the changes
that have been made. This data will be provided to the
:func:`teardown_databases` function at the conclusion of testing.
The ``aliases`` argument determines which :setting:`DATABASES` aliases test
databases should be set up for. If it's not provided, it defaults to all of
:setting:`DATABASES` aliases.
.. versionchanged:: 3.2
The ``time_keeper`` kwarg was added, and all kwargs were made
keyword-only.
.. function:: teardown_databases(old_config, parallel=0, keepdb=False)
Destroys the test databases, restoring pre-test conditions.
``old_config`` is a data structure defining the changes in the database
configuration that need to be reversed. It's the return value of the
:func:`setup_databases` function.
``django.db.connection.creation``
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
.. currentmodule:: django.db.connection.creation
The creation module of the database backend also provides some utilities that
can be useful during testing.
.. function:: create_test_db(verbosity=1, autoclobber=False, serialize=True, keepdb=False)
Creates a new test database and runs ``migrate`` against it.
``verbosity`` has the same behavior as in ``run_tests()``.
``autoclobber`` describes the behavior that will occur if a
database with the same name as the test database is discovered:
* If ``autoclobber`` is ``False``, the user will be asked to
approve destroying the existing database. ``sys.exit`` is
called if the user does not approve.
* If autoclobber is ``True``, the database will be destroyed
without consulting the user.
``serialize`` determines if Django serializes the database into an
in-memory JSON string before running tests (used to restore the database
state between tests if you don't have transactions). You can set this to
``False`` to speed up creation time if you don't have any test classes
with :ref:`serialized_rollback=True <test-case-serialized-rollback>`.
If you are using the default test runner, you can control this with the
:setting:`SERIALIZE <TEST_SERIALIZE>` entry in the :setting:`TEST
<DATABASE-TEST>` dictionary.
``keepdb`` determines if the test run should use an existing
database, or create a new one. If ``True``, the existing
database will be used, or created if not present. If ``False``,
a new database will be created, prompting the user to remove
the existing one, if present.
Returns the name of the test database that it created.
``create_test_db()`` has the side effect of modifying the value of
:setting:`NAME` in :setting:`DATABASES` to match the name of the test
database.
.. function:: destroy_test_db(old_database_name, verbosity=1, keepdb=False)
Destroys the database whose name is the value of :setting:`NAME` in
:setting:`DATABASES`, and sets :setting:`NAME` to the value of
``old_database_name``.
The ``verbosity`` argument has the same behavior as for
:class:`~django.test.runner.DiscoverRunner`.
If the ``keepdb`` argument is ``True``, then the connection to the
database will be closed, but the database will not be destroyed.
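A minimal sketch of driving these utilities by hand, for example from a custom
script (this assumes Django has already been configured, e.g. via
``django.setup()``)::

    from django.db import connection
    from django.test.utils import setup_test_environment, teardown_test_environment

    setup_test_environment()
    old_name = connection.settings_dict['NAME']
    # Creates the test database and repoints connection.settings_dict['NAME'] at it.
    connection.creation.create_test_db(verbosity=1)

    # ... run whatever needs the test database ...

    # Drops the test database and restores the original NAME.
    connection.creation.destroy_test_db(old_name, verbosity=1)
    teardown_test_environment()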
.. _topics-testing-code-coverage:
Integration with ``coverage.py``
================================
Code coverage describes how much source code has been tested. It shows which
parts of your code are being exercised by tests and which are not. It's an
important part of testing applications, so it's strongly recommended to check
the coverage of your tests.
Django can be easily integrated with `coverage.py`_, a tool for measuring code
coverage of Python programs. First, `install coverage.py`_. Next, run the
following from your project folder containing ``manage.py``::
coverage run --source='.' manage.py test myapp
This runs your tests and collects coverage data of the executed files in your
project. You can see a report of this data by typing the following command::
coverage report
Note that some Django code was executed while running tests, but it is not
listed here because of the ``source`` flag passed to the previous command.
For more options like annotated HTML listings detailing missed lines, see the
`coverage.py`_ docs.
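For instance, an annotated HTML report (written to an ``htmlcov/`` directory
by default) can be generated with::

    coverage html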
.. _coverage.py: https://coverage.readthedocs.io/
.. _install coverage.py: https://pypi.org/project/coverage/