When I run my test cases by typing
python manage.py test myapp
the test database is deleted by default by the Django test runner after the tests complete. I don't want it to be deleted.
I can use any database.
I want to preserve the test database because it contains the buggy data the tests produced; I want to look at that data so I can pinpoint the bugs.
You can prevent the test databases from being destroyed by using the test --keepdb option.
https://docs.djangoproject.com/en/dev/topics/testing/overview/#the-test-database
While passing -k to manage.py test will retain the test database, it will still delete the records that were created in your test cases. This is because Django's TestCase classes still reset your database after every test case (django.test.TransactionTestCase does a flush, while django.test.TestCase wraps each of your test cases in a transaction and rolls it back when the test case is done).
The only real solution to making Django retain test data is to extend the TestCase class and override the code that resets your database.
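As a rough sketch of that idea (not an officially supported approach): the hook that performs the reset is the private _fixture_teardown() method, so overriding it is version-dependent and may break on upgrade.
from django.test import TransactionTestCase


class KeepDataTestCase(TransactionTestCase):
    """Hypothetical base class that skips the per-test database flush.

    WARNING: _fixture_teardown() is a private Django API, so treat this as a
    sketch only. Data written by one test stays visible to later tests.
    """

    def _fixture_teardown(self):
        # Deliberately skip TransactionTestCase's flush so rows created by the
        # test survive; combine with --keepdb to keep the database itself.
        pass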
However, if you do not have the time to do this, you can also make your test case pause execution before it finishes, giving you time to inspect your database before it gets reset. There are several ways of achieving this, but (fair warning: this is a hack) asking for user input in your Python code will make Python pause execution and wait for input.
from django.test import TestCase


class MyTestCase(TestCase):
    def test_something_does_something(self):
        result = do_something_with_the_database()
        self.assertTrue(result)

        # Ask for `input` so execution will pause and wait for input.
        input(
            'Execution is paused and you can now inspect the database.\n'
            'Press return/enter key to continue:')
Alternatively, you can also use pdb's set_trace function, which will likewise pause execution and wait for input, and at the same time lets you debug the environment at that point of the code.
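For example, reusing the test above (a minimal sketch):
import pdb

from django.test import TestCase


class MyTestCase(TestCase):
    def test_something_does_something(self):
        result = do_something_with_the_database()
        pdb.set_trace()  # execution stops here, while the database is still populated
        self.assertTrue(result)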
Just make sure that you remove the input() (or pdb.set_trace()) call before you send your code to your automated build system or else it will wait for user input and time out.
According to the docs, you can preserve the test database after running tests with:
$ python manage.py test -k
or
$ python manage.py test --keepdb
(Note that in recent Django versions -k is used to select tests by name pattern, so --keepdb is the unambiguous spelling.)
To preserve the whole database state after test execution (not only the table structure):
1. Make sure your test class is based on django.test.SimpleTestCase (not TestCase or TransactionTestCase).
2. Take one of your tests for which you want to preserve the database state.
3. Add the following code to your test class to prevent the database tables from being cleaned after the test runs:
def tearDown(self) -> None:
    pass

@classmethod
def tearDownClass(cls):
    pass
4. Run the test with the --keepdb parameter, e.g. ./manage.py test app.test --keepdb, to prevent the whole DB from being cleaned after test execution.
5. Wait for the test to finish.
6. Profit! Take a snapshot of / inspect your test database (do not forget that by default Django adds the prefix test_ to your default database name).
Example command for the test test_copy:
./manage.py test --noinput --keepdb api.tests.SomeTests.test_copy
from django.test import SimpleTestCase


class SomeTests(SimpleTestCase):
    allow_database_queries = True

    def setUp(self):
        super(SomeTests, self).setUp()
        self.huge_set_up_operations()

    @classmethod
    def setUpClass(cls):
        super().setUpClass()
        cls.huge_init_database()

    def tearDown(self):
        pass

    @classmethod
    def tearDownClass(cls):
        pass

    def test_copy(self):
        SubscriptionFactory()
For anyone in a pytest environment, I use the following pytest.ini for testing:
[pytest]
DJANGO_SETTINGS_MODULE = myapp.settings.test
python_files = tests.py test_*.py *_tests.py
addopts =
    --ds=myapp.settings.test
    --reuse-db
    --nomigrations
Note the --reuse-db argument in addopts.
According to the docs:
Regardless of whether the tests pass or fail, the test databases are destroyed when all the tests have been executed.
Fixtures might help in your situation, though. Just create the initial data you want to be present when the test starts as a fixture, and make the test load it.
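For instance (a sketch with hypothetical names myapp and initial_data.json):
# Dump the rows you want available at the start of every test (shell):
#   python manage.py dumpdata myapp --indent 2 > myapp/fixtures/initial_data.json

from django.test import TestCase


class MyAppTests(TestCase):
    # Django loads these fixtures into the test database before each test.
    fixtures = ['initial_data.json']

    def test_initial_rows_present(self):
        ...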
Related
I am using the Python coverage package in combination with the Django testing framework, and sometimes I want to test only one app/directory/package, stated in the coverage --source option.
coverage run --source='custom_auth' manage.py test custom_auth.tests.TestAuth.test_authentication --keepdb
Is this command the correct way to run only one test? I am also using the --keepdb option to avoid recreating the database every time.
The test itself executes in 0.147s, but something happens before/behind the test that takes about 3-5 minutes before the test starts executing.
The other way, which may be easier to remember, is to tag your tests
from django.test import TestCase, tag


class Tests(TestCase):

    @tag('eu')
    def test_001_default(self):
        ...

    @tag('eu', 'invoice')
    def test_001_invoice(self):
        ...

    @tag('eu', 'shipment', 'slow')
    def test_001_shipment(self):
        ...
Then
./manage.py test --keepdb whatever.tests --tag=eu
or
./manage.py test --keepdb whatever.tests.name.Tests --tag=eu --exclude-tag=slow
etc.
OK, this is definitely my fault, but I need to clean it up. One of my test scripts fairly consistently (but not always) updates my database in a way that causes problems for the others (basically, it takes away the test user's access rights to the test database).
I could easily find out which script is causing this by running a simple query, either after each individual test, or after each test script completes.
i.e. pytest, or nose2, would do the following:
run test_aaa.py
run check_db_access.py #ideal if I could induce a crash/abort
run test_bbb.py
run check_db_access.py
...
You get the idea. Is there a built-in option or plugin that I can use? The test suite currently works on both pytest and nose2 so either is an option.
Edit: this is not a test db, or a fixture-loaded db. This is a snapshot of any of a number of extremely complex live databases and the test suite, as per its design, is supposed to introspect the database(s) and figure out how to run its tests (almost all access is read-only). This works fine and has many beneficial aspects at least in my particular context, but it also means there is no tearDown or fixture-load for me to work with.
import pytest


@pytest.fixture(autouse=True)
def wrapper(request):
    print('\nbefore: {}'.format(request.node.name))
    yield
    print('\nafter: {}'.format(request.node.name))


def test_a():
    assert True


def test_b():
    assert True
Example output:
$ pytest -v -s test_foo.py
test_foo.py::test_a
before: test_a
PASSED
after: test_a
test_foo.py::test_b
before: test_b
PASSED
after: test_b
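Applied to the original problem, the access check can go after the yield. The check_db_access helper and the pytest.exit call below are assumptions about how you might wire it up, not part of the answer above:
import pytest


def check_db_access():
    # Hypothetical helper: run the simple query mentioned in the question and
    # return False as soon as the test user has lost access to the database.
    return True


@pytest.fixture(autouse=True)
def verify_db_access(request):
    yield
    # Runs after every test; abort the whole session once access is lost.
    if not check_db_access():
        pytest.exit('DB access lost after {}'.format(request.node.name))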
I would like to perform some sanity check when I run tests using pytest. Typically, I want to check that some executables are accessible to the tests, and that the options provided by the user on the command-line are valid.
The closest thing I found was to use a fixture such as:
@pytest.fixture(scope="session", autouse=True)
def sanity_check(request):
    if not good:
        sys.exit(0)
But this still runs all the tests. I'd like for the script to fail before attempting to run the tests.
You shouldn't need to validate the command line options explicitly; this is done by the argument parser, which will abort execution early if necessary. As for the condition checking, you are not far from the solution. Use:
pytest.exit to do an immediate abort
pytest.skip to skip all tests
pytest.xfail to fail all tests (this is an expected failure though, so it won't mark the whole execution as failed)
Example fixture:
import shutil

import pytest


@pytest.fixture(scope='session', autouse=True)
def precondition():
    if not shutil.which('spam'):
        # immediate shutdown
        pytest.exit('Install spam before running this test suite.')
        # or skip each test
        # pytest.skip('Install spam before running this test suite.')
        # or make it an expected failure
        # pytest.xfail('Install spam before running this test suite.')
xdist compatibility
Invoking pytest.exit() in a test run with xdist will only crash the current worker and will not abort the main process. You have to move the check to a hook that is invoked before the test loop starts (i.e. anything that runs before the pytest_runtestloop hook), for example:
# conftest.py
import shutil

import pytest


def pytest_sessionstart(session):
    if not shutil.which('spam'):
        # immediate shutdown
        pytest.exit('Install spam before running this test suite.')
If you want to run a sanity check before the whole test scenario, you can use a conftest.py file - https://docs.pytest.org/en/2.7.3/plugins.html?highlight=re
Just add your function with the same scope and autouse option to conftest.py:
@pytest.fixture(scope="session", autouse=True)
def sanity_check(request):
    if not good:
        pytest.exit("Error message here")
For some time now, my unit tests have been taking longer than expected. I have tried to debug this a couple of times without much success, as the delay occurs before my tests even begin to run. This has affected my ability to do anything remotely close to test-driven development (maybe my expectations are too high), so I want to see if I can fix it once and for all.
When I run a test, there is a 70 to 80 second delay between the start and the actual beginning of the test. For example, if I run a test for a small module (using time python manage.py test myapp), I get:
<... bunch of unimportant print messages I print from my settings>
Creating test database for alias 'default'...
......
----------------------------------------------------------------
Ran 6 tests in 2.161s
OK
Destroying test database for alias 'default'...
real 1m21.612s
user 1m17.170s
sys 0m1.400s
About 1m18s of the 1m21s is spent between the
Creating test database for alias 'default'...
line and the
......
line. In other words, the tests take under 3 seconds, but the database initialization seems to take about 1 minute 18 seconds.
I have about 30 apps, most with 1 to 3 database models, which should give an idea of the project size. I use SQLite for unit testing and have implemented some of the suggested improvements. I cannot post my whole settings file, but I'm happy to add any information that is required.
I do use a runner
from django.test.runner import DiscoverRunner
from django.conf import settings


class ExcludeAppsTestSuiteRunner(DiscoverRunner):
    """Override the default django 'test' command, exclude from testing
    apps which we know will fail."""

    def run_tests(self, test_labels, extra_tests=None, **kwargs):
        if not test_labels:
            # No appnames specified on the command line, so we run all
            # tests, but remove those which we know are troublesome.
            test_labels = (
                'app1',
                'app2',
                # ...
            )
        print('Testing: ' + str(test_labels))
        return super(ExcludeAppsTestSuiteRunner, self).run_tests(
            test_labels, extra_tests, **kwargs)
and in my settings:
TEST_RUNNER = 'config.test_runner.ExcludeAppsTestSuiteRunner'
I have also tried using django-nose with django-nose-exclude
I have read a lot about how to speed up the tests themselves, but have not found any leads on how to optimize or avoid the database initialization. I have seen suggestions to avoid testing against the database at all, but I cannot, or don't know how to, avoid that completely.
Please let me know if
This is normal and expected
Not expected (and hopefully a fix or lead on what to do)
Again, I don't need help speeding up the tests themselves, only the initialization (or overhead). I want the example above to take 10 seconds instead of 80.
Many thanks
I ran the test (for a single app) with --verbosity 3 and discovered this is all related to migrations:
Rendering model states... DONE (40.500s)
Applying authentication.0001_initial... OK (0.005s)
Applying account.0001_initial... OK (0.022s)
Applying account.0002_email_max_length... OK (0.016s)
Applying contenttypes.0001_initial... OK (0.024s)
Applying contenttypes.0002_remove_content_type_name... OK (0.048s)
Applying s3video.0001_initial... OK (0.021s)
Applying s3picture.0001_initial... OK (0.052s)
... Many more like this
I squashed all my migrations, but it was still slow.
The final solution that fixed my problem was to force Django to disable migrations during testing, which can be done from the settings like this:
import sys

TESTING = 'test' in sys.argv[1:]

if TESTING:
    print('=========================')
    print('In TEST Mode - Disabling Migrations')
    print('=========================')

    class DisableMigrations(object):

        def __contains__(self, item):
            return True

        def __getitem__(self, item):
            return None

    MIGRATION_MODULES = DisableMigrations()
or use https://pypi.python.org/pypi/django-test-without-migrations
My whole test suite now takes about 1 minute, and a small app takes 5 seconds.
In my case, migrations are not needed for testing, as I update the tests as I migrate and don't use migrations to add data. This won't work for everybody.
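As a side note (an alternative sketch, not part of the original answer): recent Django versions also let you disable migrations per app by mapping the app label to None in MIGRATION_MODULES, which avoids the dummy class; the app labels below are placeholders.
# settings used only when running tests; app labels are placeholders.
if TESTING:
    MIGRATION_MODULES = {
        'myapp': None,           # Django treats the app as having no migrations
        'authentication': None,  # and creates its tables from the current models
    }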
Summary
Use pytest!
Operations
pip install pytest-django
pytest --nomigrations instead of ./manage.py test
Result
./manage.py test takes 2 min 11.86 sec
pytest --nomigrations takes 2.18 sec
Hints
You can create a file called pytest.ini in your project root directory, and specify default command line options and/or Django settings there.
# content of pytest.ini
[pytest]
addopts = --nomigrations
DJANGO_SETTINGS_MODULE = yourproject.settings
Now you can simply run tests with pytest and save you a bit of typing.
You can speed up the subsequent tests even further by adding --reuse-db to the default command line options.
[pytest]
addopts = --nomigrations --reuse-db
However, as soon as your database model is changed, you must run pytest --create-db once to force re-creation of the test database.
If you need to enable gevent monkey patching during testing, you can create a file called pytest in your project root directory with the following content, set its executable bit (chmod +x pytest), and run ./pytest for testing instead of pytest:
#!/usr/bin/env python
# -*- coding: utf-8 -*-
# content of pytest
from gevent import monkey
monkey.patch_all()

import os
os.environ.setdefault("DJANGO_SETTINGS_MODULE", "yourproject.settings")

from django.db import connection
connection.allow_thread_sharing = True

import re
import sys

from pytest import main

if __name__ == '__main__':
    sys.argv[0] = re.sub(r'(-script\.pyw|\.exe)?$', '', sys.argv[0])
    sys.exit(main())
You can create a test_gevent.py file for testing whether gevent monkey patching is successful:
# -*- coding: utf-8 -*-
# content of test_gevent.py
import time

from django.test import TestCase
from django.db import connection
import gevent


def f(n):
    cur = connection.cursor()
    cur.execute("SELECT SLEEP(%s)", (n,))
    cur.execute("SELECT %s", (n,))
    cur.fetchall()
    connection.close()


class GeventTestCase(TestCase):
    longMessage = True

    def test_gevent_spawn(self):
        timer = time.time()
        d1, d2, d3 = 1, 2, 3
        t1 = gevent.spawn(f, d1)
        t2 = gevent.spawn(f, d2)
        t3 = gevent.spawn(f, d3)
        gevent.joinall([t1, t2, t3])
        cost = time.time() - timer
        self.assertAlmostEqual(cost, max(d1, d2, d3), delta=1.0,
                               msg='gevent spawn not working as expected')
References
pytest-django documentation
pytest documentation
Use ./manage.py test --keepdb when there are no changes in the migration files.
Database initialization indeed takes too long...
I have a project with about the same number of models/tables (about 77) and approximately 350 tests, and it takes 1 minute in total to run everything, developing in a Vagrant machine with 2 CPUs and 2 GB of RAM allocated. I also use py.test with the pytest-xdist plugin to run tests in parallel.
Another thing you can do is tell Django to reuse the test database and only re-create it when you have schema changes. You can also use SQLite so that the tests run against an in-memory database. Both approaches are explained here:
https://docs.djangoproject.com/en/dev/topics/testing/overview/#the-test-database
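A minimal sketch of the SQLite approach in your settings (the 'test' in sys.argv guard is one common convention, not the only way to do it):
import sys

# Swap the database only when running `manage.py test`.
if 'test' in sys.argv:
    DATABASES = {
        'default': {
            'ENGINE': 'django.db.backends.sqlite3',
            'NAME': ':memory:',
            # With SQLite, Django keeps the test database in memory,
            # so creation and teardown are close to instantaneous.
        }
    }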
EDIT: in case none of the options above work, one more option is to have your unit tests inherit from Django's SimpleTestCase, or to use a custom test runner that doesn't create a database, as explained in this answer: django unit tests without a db.
Then you can just mock Django's calls to the database using a library like this one (which, admittedly, I wrote): https://github.com/stphivos/django-mock-queries
This way you can run your unit tests locally fast and let your CI server worry about running integration tests that require a database, before merging your code to some stable dev/master branch that isn't the production one.
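As a rough illustration of that idea (using only the standard library's unittest.mock rather than that package; the Widget model and total_price function are hypothetical):
from unittest import mock

from django.test import SimpleTestCase

from myapp.models import Widget  # hypothetical model with a `price` field


def total_price(**filters):
    # Hypothetical code under test: normally this would hit the database.
    return sum(w.price for w in Widget.objects.filter(**filters))


class TotalPriceTests(SimpleTestCase):
    @mock.patch.object(Widget.objects, 'filter')
    def test_total_price_without_a_database(self, mock_filter):
        # The queryset is replaced by unsaved instances, so no DB is touched.
        mock_filter.return_value = [Widget(price=2), Widget(price=3)]
        self.assertEqual(total_price(active=True), 5)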
I also ran into this issue. One solution I used was to subclass django.test.TestCase and override the following method:
@classmethod
def _databases_support_transactions(cls):
    return True
The backend DB is Apache Cassandra.
We are trying to write an automated test for the behavior of the AppConfig.ready function, which we are using as an initialization hook to run code when the Django app has loaded. Our ready method implementation uses a Django setting that we need to override in our test, and naturally we're trying to use the override_settings decorator to achieve this.
There is a snag, however - when the test runs, at the point the ready function is executed, the setting override hasn't kicked in (it is still using the original value from settings.py). Is there a way to override the setting so that the override applies when the ready function is called?
Some code to demonstrate this behavior:
settings.py
MY_SETTING = 'original value'
dummy_app/__init__.py
default_app_config = 'dummy_app.apps.DummyAppConfig'
dummy_app/apps.py
from django.apps import AppConfig
from django.conf import settings


class DummyAppConfig(AppConfig):
    name = 'dummy_app'

    def ready(self):
        print('settings.MY_SETTING in app config ready function: {0}'.format(settings.MY_SETTING))
dummy_app/tests.py
from django.conf import settings
from django.test import TestCase
from django.test.utils import override_settings


@override_settings(MY_SETTING='overridden value')
@override_settings(INSTALLED_APPS=('dummy_app',))
class AppConfigTests(TestCase):

    def test_to_see_where_overridden_settings_value_is_available(self):
        print('settings.MY_SETTING in test function: {0}'.format(settings.MY_SETTING))
        self.fail('Trigger test output')
Output
======================================================================
FAIL: test_to_see_where_overridden_settings_value_is_available (dummy_app.tests.AppConfigTests)
----------------------------------------------------------------------
Traceback (most recent call last):
File "/Users/labminds/venv/labos/src/dom-base/dummy_app/tests.py", line 12, in test_to_see_where_overridden_settings_value_is_available
self.fail('Trigger test output')
AssertionError: Trigger test output
-------------------- >> begin captured stdout << ---------------------
settings.MY_SETTING in app config ready function: original value
settings.MY_SETTING in test function: overridden value
--------------------- >> end captured stdout << ----------------------
It is important to note that we only want to override this setting for the tests that are asserting the behavior of ready, which is why we aren't considering changing the setting in settings.py, or using a separate version of this file used just for running our automated tests.
One option already considered - we could simply initialize the AppConfig class in our test, call ready and test the behavior that way (at which point the setting would be overridden by the decorator). However, we would prefer to run this as an integration test, and rely on the natural behavior of Django to call the function for us - this is key functionality for us and we want to make sure the test fails if Django's initialization behavior changes.
Some ideas (requiring different amounts of effort and providing different levels of automated assurance):
Don't integration test; rely on reading the release notes/commits before upgrading the Django version, and/or on a single round of manual testing.
Assuming a test - stage deploy - prod deploy pipeline, unit test the special cases in isolation and add an integration check as a deployment smoke test (e.g. by exposing this settings value through a management command or an internal-only URL endpoint) - verify only that in staging it has the value it should have for staging. This gives slightly delayed feedback compared to unit tests.
Test it through a test framework outside Django's own - i.e. write the unittest (or py.test) tests and bootstrap Django inside each test (though you need a way to import and manipulate the settings).
Use a combination of overriding settings via the OS environment (we've used envdir, à la the 12-factor app) and a management command that performs the test(s) - e.g.: MY_SETTING='overridden value' INSTALLED_APPS='dummy_app' EXPECTED_OUTCOME='whatever' python manage.py ensure_app_config_initialized_as_expected
Looking at Django's own app-initialization tests, something like apps.clear_cache() and
with override_settings(INSTALLED_APPS=['test_app']):
    config = apps.get_app_config('test_app')
    assert config....
could work, though I've never tried it.
You appear to have hit a documented limitation of ready in Django (scroll down to the warning). You can see the discussion in the ticket that prompted the edit. The ticket specifically refers to database interactions, but the same limitation would apply to any effort to test the ready function -- i.e. that production (not test) settings are used during ready.
Based on the ticket, "don't use ready" sounds like the official answer, but I don't find that attitude useful unless they direct me to a functionally equivalent place to run this kind of initialization code. ready seems to be the most official place to run once on startup.
Rather than (re)calling ready, I suggest having ready call a second method. Import and use that second method in your test cases. Not only will your tests be cleaner, but it isolates the test case from any other ready logic, like attaching signals. There's also a context manager that can be used to simplify the test:
@override_settings(SOME_SETTING='some-data')
def test(self):
    ...
or
def test(self):
    with override_settings(SOME_SETTING='some-data'):
        ...
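A sketch of the "ready calls a second method" idea (the startup module, the initialize() function, and its contents are assumptions for illustration, not part of the answer):
# dummy_app/startup.py (hypothetical module)
from django.conf import settings


def initialize():
    # Whatever ready() is supposed to do, driven by the setting under test.
    return settings.MY_SETTING


# dummy_app/apps.py
from django.apps import AppConfig


class DummyAppConfig(AppConfig):
    name = 'dummy_app'

    def ready(self):
        from dummy_app.startup import initialize
        initialize()


# dummy_app/tests.py
from django.test import TestCase, override_settings

from dummy_app.startup import initialize


class StartupTests(TestCase):
    @override_settings(MY_SETTING='overridden value')
    def test_initialize_uses_overridden_setting(self):
        self.assertEqual(initialize(), 'overridden value')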
P.S. We work around several possible issues in ready by checking the migration status of the system:
def ready(self):
    # imports have to be delayed for ready
    from django.db.migrations.executor import MigrationExecutor
    from django.conf import settings
    from django.db import connections, DEFAULT_DB_ALIAS

    executor = MigrationExecutor(connections[DEFAULT_DB_ALIAS])
    plan = executor.migration_plan(executor.loader.graph.leaf_nodes())
    if plan:
        # not healthy (possibly setup for a migration)
        return
    ...
Perhaps something similar could be done to prevent execution during tests. Somehow the system knows to (eventually) switch to test settings. I assume you could skip execution under the same conditions.