We are trying to write an automated test for the behavior of the AppConfig.ready function, which we are using as an initialization hook to run code when the Django app has loaded. Our ready method implementation uses a Django setting that we need to override in our test, and naturally we're trying to use the override_settings decorator to achieve this.
There is a snag however - when the test runs, at the point the ready function is executed, the setting override hasn't kicked in (it is still using the original value from settings.py). Is there a way that we can still override the setting in a way where the override will apply when the ready function is called?
Some code to demonstrate this behavior:
settings.py
MY_SETTING = 'original value'
dummy_app/__init__.py
default_app_config = 'dummy_app.apps.DummyAppConfig'
dummy_app/apps.py
from django.apps import AppConfig
from django.conf import settings
class DummyAppConfig(AppConfig):
    name = 'dummy_app'

    def ready(self):
        print('settings.MY_SETTING in app config ready function: {0}'.format(settings.MY_SETTING))
dummy_app/tests.py
from django.conf import settings
from django.test import TestCase
from django.test.utils import override_settings
@override_settings(MY_SETTING='overridden value')
# @override_settings(INSTALLED_APPS=('dummy_app',))
class AppConfigTests(TestCase):

    def test_to_see_where_overridden_settings_value_is_available(self):
        print('settings.MY_SETTING in test function: {0}'.format(settings.MY_SETTING))
        self.fail('Trigger test output')
Output
======================================================================
FAIL: test_to_see_where_overridden_settings_value_is_available (dummy_app.tests.AppConfigTests)
----------------------------------------------------------------------
Traceback (most recent call last):
File "/Users/labminds/venv/labos/src/dom-base/dummy_app/tests.py", line 12, in test_to_see_where_overridden_settings_value_is_available
self.fail('Trigger test output')
AssertionError: Trigger test output
-------------------- >> begin captured stdout << ---------------------
settings.MY_SETTING in app config ready function: original value
settings.MY_SETTING in test function: overridden value
--------------------- >> end captured stdout << ----------------------
It is important to note that we only want to override this setting for the tests that are asserting the behavior of ready, which is why we aren't considering changing the setting in settings.py, or using a separate version of this file used just for running our automated tests.
One option already considered - we could simply initialize the AppConfig class in our test, call ready and test the behavior that way (at which point the setting would be overridden by the decorator). However, we would prefer to run this as an integration test, and rely on the natural behavior of Django to call the function for us - this is key functionality for us and we want to make sure the test fails if Django's initialization behavior changes.
Some ideas (with different levels of effort required and of automated assurance):
Don't integration test; rely on reading the release notes/commits before upgrading the Django version and/or on a single round of manual testing.
Assuming a test → stage deploy → prod deploy pipeline, unit test the special cases in isolation and add an integration check as a deployment smoke test (e.g. by exposing this setting's value through a management command or an internal-only URL endpoint), verifying only that on staging it has the value it should have for staging. This gives slightly delayed feedback compared to unit tests.
Test it through a test framework outside of Django's own, i.e. write plain unittest (or py.test) tests and bootstrap Django inside each of them (though you need a way to import and manipulate the settings).
Use a combination of overriding settings via the OS environment (we've used envdir, à la the 12-factor app) and a management command that performs the test(s), e.g.: MY_SETTING='overridden value' INSTALLED_APPS='dummy_app' EXPECTED_OUTCOME='whatever' python manage.py ensure_app_config_initialized_as_expected
Looking at Django's own app-init tests, a combination of apps.clear_cache() and
with override_settings(INSTALLED_APPS=['test_app']):
    config = apps.get_app_config('test_app')
    assert config....
could work, though I've never tried it.
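To make that idea concrete, here is a rough, untested sketch. It relies on the assumption that overriding INSTALLED_APPS makes Django re-populate the app registry, which calls ready() again while the setting override is active; the nesting order matters so that MY_SETTING is already patched when that happens:
from django.apps import apps
from django.test import TestCase
from django.test.utils import override_settings

class AppConfigReadyTests(TestCase):

    def test_ready_sees_overridden_setting(self):
        # Patch the setting first, then override INSTALLED_APPS so the app
        # registry is re-populated (and ready() re-run) with the patch active.
        with override_settings(MY_SETTING='overridden value'):
            with override_settings(INSTALLED_APPS=['dummy_app']):
                config = apps.get_app_config('dummy_app')
                # assert on whatever observable side effect ready() produced
                self.assertIsNotNone(config)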
You appear to have hit a documented limitation of ready in Django (scroll down to the warning). You can see the discussion in the ticket that prompted the edit. The ticket specifically refers to database interactions, but the same limitation would apply to any effort to test the ready function -- i.e. that production (not test) settings are used during ready.
Based on the ticket, "don't use ready" sounds like the official answer, but I don't find that attitude useful unless they direct me to a functionally equivalent place to run this kind of initialization code. ready seems to be the most official place to run code once at startup.
Rather than (re)calling ready, I suggest having ready call a second method. Import and use that second method in your test cases. Not only will your tests be cleaner, but it isolates the test case from any other ready logic, like attaching signals. There's also a context manager that can be used to simplify the test:
@override_settings(SOME_SETTING='some-data')
def test(self):
    ...
or
def test(self):
    with override_settings(SOME_SETTING='some-data'):
        ...
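A minimal sketch of the "ready calls a second method" suggestion (the method name init_my_setting is an assumption, not anything Django defines):
# dummy_app/apps.py
from django.apps import AppConfig
from django.conf import settings

class DummyAppConfig(AppConfig):
    name = 'dummy_app'

    def ready(self):
        # Keep ready() thin: delegate the real work to a method that
        # tests can import and call directly.
        self.init_my_setting()

    def init_my_setting(self):
        print('MY_SETTING during initialization: {0}'.format(settings.MY_SETTING))

# dummy_app/tests.py
from django.apps import apps
from django.test import TestCase
from django.test.utils import override_settings

class InitMySettingTests(TestCase):

    @override_settings(MY_SETTING='overridden value')
    def test_init_uses_overridden_setting(self):
        # The override is active by the time the test body runs, so the
        # delegated method sees the overridden value.
        apps.get_app_config('dummy_app').init_my_setting()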
P.S. We work around several possible issues in ready by checking the migration status of the system:
def ready(self):
    # imports have to be delayed for ready
    from django.db.migrations.executor import MigrationExecutor
    from django.conf import settings
    from django.db import connections, DEFAULT_DB_ALIAS

    executor = MigrationExecutor(connections[DEFAULT_DB_ALIAS])
    plan = executor.migration_plan(executor.loader.graph.leaf_nodes())
    if plan:
        # not healthy (possibly setup for a migration)
        return
    ...
Perhaps something similar could be done to prevent execution during tests. Somehow the system knows to (eventually) switch to test settings. I assume you could skip execution under the same conditions.
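One hedged way to do that, assuming tests are always launched via manage.py test, is to check sys.argv in ready(); this is a heuristic rather than an official API:
import sys

from django.apps import AppConfig

class DummyAppConfig(AppConfig):
    name = 'dummy_app'

    def ready(self):
        # Heuristic guard: skip the startup work when the test runner is
        # driving the process (assumes tests are run via `manage.py test`).
        if 'test' in sys.argv:
            return
        # ... normal initialization ...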
I've read some conflicting advice on the use of assert in the setUp method of a Python unit test. I can't see the harm in failing a test if a precondition that test relies on fails.
For example:
import unittest
class MyProcessor():
    """
    This is the class under test
    """
    def __init__(self):
        pass

    def ProcessData(self, content):
        return ['some', 'processed', 'data', 'from', 'content']  # Imagine this could actually pass

class Test_test2(unittest.TestCase):

    def LoadContentFromTestFile(self):
        return None  # Imagine this is actually doing something that could pass.

    def setUp(self):
        self.content = self.LoadContentFromTestFile()
        self.assertIsNotNone(self.content, "Failed to load test data")
        self.processor = MyProcessor()

    def test_ProcessData(self):
        results = self.processor.ProcessData(self.content)
        self.assertGreater(len(results), 0, "No results returned")

if __name__ == '__main__':
    unittest.main()
This seems like a reasonable thing to do to me, i.e. make sure the test is able to run. When it fails because of the setup condition, we get:
F
======================================================================
FAIL: test_ProcessData (__main__.Test_test2)
----------------------------------------------------------------------
Traceback (most recent call last):
File "C:\Projects\Experiments\test2.py", line 21, in setUp
self.assertIsNotNone(self.content, "Failed to load test data")
AssertionError: unexpectedly None : Failed to load test data
----------------------------------------------------------------------
Ran 1 test in 0.000s
FAILED (failures=1)
The purpose of setUp is to reduce the boilerplate code that is shared between the tests in the test class during the Arrange phase.
In the Arrange phase you set up everything needed to run the tested code. This includes any initialization of dependencies, mocks, and data needed for the test to run.
Based on the above, you should not assert anything in your setUp method.
So, as mentioned earlier: if you can't create the test precondition, then your test is broken. To help avoid situations like this, Roy Osherove wrote a great book called The Art Of Unit Testing. (For full disclosure: Lior Friedman, who was Roy's boss, is a friend of mine and I worked closely with them for more than 2 years, so I am a little bit biased...)
Basically, there are only a few reasons to have an interaction with external resources during the Arrange phase (or with things which may cause an exception), and most of them (if not all) are related to integration tests.
Back to your example: there is a pattern for structuring the tests where you need to load an external resource (for all/most of them). Just a side note: before you decide to apply this pattern, make sure you can't have this content as a static resource in your UT's class; if other test classes need to use this resource, extract it into a module.
The following pattern decreases the possibility of failure, since you make fewer calls to the external resource:
class TestClass(unittest.TestCase):

    @classmethod
    def setUpClass(cls):
        # since external resources such as other servers can provide bad content,
        # you can verify here that the content is valid
        # and prevent the tests from running if it isn't;
        # however, in most cases you shouldn't.
        cls.externalResourceContent = loadContentFromExternalResource()

    def setUp(self):
        self.content = self.copyContentForTest()
Pros:
fewer chances of failure
prevents inconsistent behavior (1. something/someone has edited the external resource; 2. you failed to load the external resource in some of your tests)
faster execution
Cons:
the code is more complex
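For completeness, copyContentForTest is not defined above; a plausible sketch is a per-test deep copy of the class-level content (the deep copy is an assumption, any copying strategy that isolates the tests would do):
import copy
import unittest

class TestClass(unittest.TestCase):
    # ... setUpClass / setUp as above ...

    def copyContentForTest(self):
        # Each test works on its own copy, so mutations in one test cannot
        # leak into the others via the shared class-level content.
        return copy.deepcopy(self.externalResourceContent)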
setUp is not for asserting preconditions but creating them. If your test is unable to create the necessary fixture, it is broken, not failing.
From the Python Standard Library Documentation:
"If the setUp() method raises an exception while the test is running,
the framework will consider the test to have suffered an error, and
the runTest() method will not be executed. If setUp() succeeded, the
tearDown() method will be run whether runTest() succeeded or not. Such
a working environment for the testing code is called a fixture."
An assertion exception in the setUp() method would be considered as an error by the unittest framework. The test will not be executed.
There isn't a right or wrong answer here; it depends on what you are testing and how expensive setting up your tests is. Some tests are too dangerous to allow attempted runs if the data isn't as expected; some need to work with that data.
You can use assertions in setUp if you need to check between tests for particular conditions; this can help reduce repeated code in your tests.
However, it also makes moving test methods between classes or files a bit trickier, as they will rely on having the equivalent setUp. It can also push the limits of complexity for less code-savvy testers.
It is a bit cleaner to have a test that checks these startup conditions individually and runs first; they might not be needed between each test. If you define it as test_01_check_preconditions it will run before any of the other test methods, even if the rest are in random order.
You can also then use unittest2.skip decorators for certain conditions.
A better approach is to use addCleanup to ensure that state is reset. The advantage here is that it still gets run even if the test fails, and you can make the cleanup more aware of the specific situation, since you define it in the context of your test method.
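A small sketch of the addCleanup pattern, with hypothetical create_record and delete_record helpers:
import unittest

class WidgetTests(unittest.TestCase):

    def test_widget_creates_record(self):
        record = create_record()  # hypothetical helper that touches shared state
        # Registered cleanups run even if the assertion below fails,
        # so the state is always reset for the next test.
        self.addCleanup(delete_record, record)
        self.assertTrue(record.is_valid())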
There is also nothing to stop you defining methods that do common checks in the unittest class and calling them in setUp or in test methods; this can help keep complexity contained in defined and managed areas.
Also, don't be tempted to subclass unittest2 beyond a simple test definition; I've seen people try to do that to make tests simpler and actually introduce totally unexpected behaviour.
I guess the real take-home is: if you do it, know why you want to use it and make sure you document your reasons, and it's probably OK. If you are unsure, go for the simplest, easiest-to-understand option, because tests are useless if they are not easy to understand.
There is one reason why you want to avoid assertions in a setUp().
If setUp fails, your tearDown will not be executed.
If you set up a set of database records, for instance, and your tearDown deletes these records, then these records will not be deleted.
With this snippet:
import unittest

class Test_test2(unittest.TestCase):

    def setUp(self):
        print 'setup'
        assert False

    def test_ProcessData(self):
        print 'testing'

    def tearDown(self):
        print 'teardown'

if __name__ == '__main__':
    unittest.main()
When you run it, only setUp() is executed:
$ python t.py
setup
E
======================================================================
ERROR: test_ProcessData (__main__.Test_test2)
----------------------------------------------------------------------
Traceback (most recent call last):
File "t.py", line 7, in setUp
assert False
AssertionError
----------------------------------------------------------------------
Ran 1 test in 0.000s
FAILED (errors=1)
For some time now, my unit testing has been taking longer than expected. I have tried to debug it a couple of times without much success, as the delays occur before my tests even begin to run. This has affected my ability to do anything remotely close to test-driven development (maybe my expectations are too high), so I want to see if I can fix this once and for all.
When I run a test, there is a 70 to 80 second delay between the start and the actual beginning of the test. For example, if I run a test for a small module (using time python manage.py test myapp), I get
<... bunch of unimportant print messages I print from my settings>
Creating test database for alias 'default'...
......
----------------------------------------------------------------
Ran 6 tests in 2.161s
OK
Destroying test database for alias 'default'...
real 1m21.612s
user 1m17.170s
sys 0m1.400s
About 1m18s of the 1m21s is spent between the
Creating test database for alias 'default'...
and the
.......
line. In other words, the tests take under 3 seconds, but the database initialization seems to be taking about 1 minute 18 seconds.
I have about 30 apps, most with 1 to 3 database models, so this should give an idea of the project size. I use SQLite for unit testing and have implemented some of the suggested improvements. I cannot post my whole settings file, but I'm happy to add any information that is required.
I do use a custom runner:
from django.test.runner import DiscoverRunner
from django.conf import settings

class ExcludeAppsTestSuiteRunner(DiscoverRunner):
    """Override the default django 'test' command, exclude from testing
    apps which we know will fail."""

    def run_tests(self, test_labels, extra_tests=None, **kwargs):
        if not test_labels:
            # No appnames specified on the command line, so we run all
            # tests, but remove those which we know are troublesome.
            test_labels = (
                'app1',
                'app2',
                ....
            )
        print ('Testing: ' + str(test_labels))
        return super(ExcludeAppsTestSuiteRunner, self).run_tests(
            test_labels, extra_tests, **kwargs)
and in my settings:
TEST_RUNNER = 'config.test_runner.ExcludeAppsTestSuiteRunner'
I have also tried using django-nose with django-nose-exclude
I have read a lot about how to speed up the tests themselves, but have not found any leads on how to optimize or avoid the database initialization. I have seen suggestions about trying not to test with the database, but I cannot, or don't know how to, avoid that completely.
Please let me know if
This is normal and expected
Not expected (and hopefully a fix or lead on what to do)
Again, I don't need help speeding up the tests themselves, but the initialization (or overhead). I want the example above to take 10 seconds instead of 80.
Many thanks
I ran the test (for a single app) with --verbosity 3 and discovered this is all related to migrations:
Rendering model states... DONE (40.500s)
Applying authentication.0001_initial... OK (0.005s)
Applying account.0001_initial... OK (0.022s)
Applying account.0002_email_max_length... OK (0.016s)
Applying contenttypes.0001_initial... OK (0.024s)
Applying contenttypes.0002_remove_content_type_name... OK (0.048s)
Applying s3video.0001_initial... OK (0.021s)
Applying s3picture.0001_initial... OK (0.052s)
... Many more like this
I squashed all my migrations, but it was still slow.
The final solution that fixed my problem was to force Django to disable migrations during testing, which can be done from the settings like this:
import sys

TESTING = 'test' in sys.argv[1:]

if TESTING:
    print('=========================')
    print('In TEST Mode - Disabling Migrations')
    print('=========================')

    class DisableMigrations(object):

        def __contains__(self, item):
            return True

        def __getitem__(self, item):
            return None

    MIGRATION_MODULES = DisableMigrations()
or use https://pypi.python.org/pypi/django-test-without-migrations
My whole test suite now takes about 1 minute, and a small app takes 5 seconds.
In my case, migrations are not needed for testing because I update the tests as I migrate and don't use migrations to add data. This won't work for everybody.
Summary
Use pytest !
Operations
pip install pytest-django
pytest --nomigrations instead of ./manage.py test
Result
./manage.py test costs 2 min 11.86 sec
pytest --nomigrations costs 2.18 sec
Hints
You can create a file called pytest.ini in your project root directory, and specify default command line options and/or Django settings there.
# content of pytest.ini
[pytest]
addopts = --nomigrations
DJANGO_SETTINGS_MODULE = yourproject.settings
Now you can simply run tests with pytest and save you a bit of typing.
You can speed up the subsequent tests even further by adding --reuse-db to the default command line options.
[pytest]
addopts = --nomigrations --reuse-db
However, as soon as your database model is changed, you must run pytest --create-db once to force re-creation of the test database.
If you need to enable gevent monkey patching during testing, you can create a file called pytest in your project root directory with the following content, set the executable bit on it (chmod +x pytest) and run ./pytest for testing instead of pytest:
#!/usr/bin/env python
# -*- coding: utf-8 -*-
# content of pytest
from gevent import monkey
monkey.patch_all()
import os
os.environ.setdefault("DJANGO_SETTINGS_MODULE", "yourproject.settings")
from django.db import connection
connection.allow_thread_sharing = True
import re
import sys
from pytest import main
if __name__ == '__main__':
    sys.argv[0] = re.sub(r'(-script\.pyw|\.exe)?$', '', sys.argv[0])
    sys.exit(main())
You can create a test_gevent.py file for testing whether gevent monkey patching is successful:
# -*- coding: utf-8 -*-
# content of test_gevent.py
import time
from django.test import TestCase
from django.db import connection
import gevent
def f(n):
    cur = connection.cursor()
    cur.execute("SELECT SLEEP(%s)", (n,))
    cur.execute("SELECT %s", (n,))
    cur.fetchall()
    connection.close()

class GeventTestCase(TestCase):
    longMessage = True

    def test_gevent_spawn(self):
        timer = time.time()
        d1, d2, d3 = 1, 2, 3
        t1 = gevent.spawn(f, d1)
        t2 = gevent.spawn(f, d2)
        t3 = gevent.spawn(f, d3)
        gevent.joinall([t1, t2, t3])
        cost = time.time() - timer
        self.assertAlmostEqual(cost, max(d1, d2, d3), delta=1.0,
                               msg='gevent spawn not working as expected')
References
pytest-django documentation
pytest documentation
Use ./manage.py test --keepdb when there are no changes in the migration files.
Database initialization indeed takes too long...
I have a project with about the same number of models/tables (about 77) and approximately 350 tests, and it takes 1 minute total to run everything, developing in a Vagrant machine with 2 CPUs and 2 GB of RAM allocated. I also use py.test with the pytest-xdist plugin for running multiple tests in parallel.
Another thing you can do is tell Django to reuse the test database and only re-create it when you have schema changes. You can also use SQLite so that the tests use an in-memory database. Both approaches are explained here:
https://docs.djangoproject.com/en/dev/topics/testing/overview/#the-test-database
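A minimal sketch of the in-memory SQLite idea, switched on only for test runs (the 'test' in sys.argv check assumes tests are launched via manage.py test):
# settings.py (sketch)
import sys

if 'test' in sys.argv:
    DATABASES = {
        'default': {
            'ENGINE': 'django.db.backends.sqlite3',
            'NAME': ':memory:',  # in-memory database, recreated for each test run
        }
    }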
EDIT: In case none of the options above works, one more option is to have your unit tests inherit from Django's SimpleTestCase or use a custom test runner that doesn't create a database, as explained in this answer: django unit tests without a db.
Then you can just mock Django's calls to the database using a library like this one (which admittedly I wrote): https://github.com/stphivos/django-mock-queries
This way you can run your unit tests locally fast and let your CI server worry about running integration tests that require a database, before merging your code to some stable dev/master branch that isn't the production one.
I also ran into this issue. One solution that worked for me was to create a subclass of django.test.TestCase and override the method like so:
@classmethod
def _databases_support_transactions(cls):
    return True
The backend DB is Apache Cassandra.
I am working on a Python app that uses the default Python logging system. Part of this system is the ability to define handlers in a logging config file. One of the handlers for this app is the Django admin email handler, "django.utils.log.AdminEmailHandler". When the app is initializing the logging system, it makes a call to logging.config.fileConfig. This is done on a background thread, and it attempts to reload the config file periodically. I believe that is important.
I have traced through the python logging source code down to the method:
def _resolve(name):
    """Resolve a dotted name to a global object."""
    name = name.split('.')
    used = name.pop(0)
    found = __import__(used)
    for n in name:
        used = used + '.' + n
        try:
            found = getattr(found, n)
        except AttributeError:
            __import__(used)
            found = getattr(found, n)
    return found
in the file python2.7/logging/config.py
When this function is given the parameter "django.utils.log.AdminEmailHandler" in order to create that handler, my app hangs on the call
__import__(used)
where used is "django".
I did a little research and have seen some mentions of __import__ not being thread-safe and advice to avoid its use in background threads. Is this accurate? And knowing that __import__("django") does cause a deadlock, is there anything I could do to prevent it?
I suggest using the default Django LOGGING setting to control logging. For development, starting the server with manage.py runserver will automatically reload Django if any files are changed, including the settings file with the logging configuration. In practice it works quite well!
https://docs.djangoproject.com/en/dev/topics/logging/#examples
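For illustration, a minimal LOGGING setting that wires up the same handler through Django's dictConfig-based setup instead of a file re-read on a background thread (the logger names and levels are just an example):
# settings.py
LOGGING = {
    'version': 1,
    'disable_existing_loggers': False,
    'handlers': {
        'mail_admins': {
            'level': 'ERROR',
            'class': 'django.utils.log.AdminEmailHandler',
        },
    },
    'loggers': {
        'django.request': {
            'handlers': ['mail_admins'],
            'level': 'ERROR',
            'propagate': True,
        },
    },
}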
With this question I worked out how to break my tests over multiple files. So now in each file/module I have a series of TestCase classes.
I can still invoke individual TestCases by explicitly naming them from the command line like:
./manage.py test api.TokenGeneratorTestCase api.ViewsTestCase
Rather than invoking related TestCases individually, I'm now thinking it would be nice to group the related TestCases into suites and then invoke a whole suite from the command line, hopefully without losing the ability to invoke all the suites in the app at once.
I've seen this python stuff about suites, and also this django stuff about suites, but working out how to do what I want is elusive. I think I'm looking to be able to say things like:
./manage.py test api.NewSeedsImportTestCase api.NewSeedsExportTestCase
./manage.py test api.NewSeedsSuite
./manage.py test api.NewRoomsSuite
./manage.py test api
Has anyone out there arranged their Django TestCases into Suites and can show me how?
One possible approach is to write a custom runner that would extend django.test.simple.DjangoTestSuiteRunner and override the build_suite method. That's where Django generates the suite used by the test command.
It gets an argument test_labels which corresponds to the command line arguments passed to the command. You can extend its functionality by allowing passing extra module paths from where tests should be loaded. Something like this should do the trick (this is just to demonstrate the approach, I haven't tested the code):
from django.test.simple import DjangoTestSuiteRunner
from django.utils import unittest
from django.utils.importlib import import_module

class MyTestSuiteRunner(DjangoTestSuiteRunner):

    def build_suite(self, test_labels, extra_tests=None, *args, **kwargs):
        if test_labels:
            extra_test_modules = [label[len('module:'):]
                                  for label in test_labels
                                  if label.startswith('module:')]
            extra_tests = extra_tests or []
            for module_path in extra_test_modules:
                # A better way to load the tests here would probably be to use
                # `django.test.simple.build_suite`, as it does some extra stuff
                # like looking for doctests.
                extra_tests += unittest.defaultTestLoader.loadTestsFromModule(
                    import_module(module_path))
            # Remove the 'module:*' labels
            test_labels = [label for label in test_labels
                           if not label.startswith('module:')]
        # Let Django do the rest
        return super(MyTestSuiteRunner, self).build_suite(
            test_labels, extra_tests, *args, **kwargs)
Now you should be able to run the test command exactly as before, except that any label that looks like module:api.test.extra will result in all the tests/suites from that module being added to the final suite.
Note that the 'module:' labels are not app labels, so each must be a full Python path to the module.
You will also need to point your TEST_RUNNER setting to your new test runner.
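For example (the project and module path below are hypothetical):
# settings.py
TEST_RUNNER = 'myproject.test_runner.MyTestSuiteRunner'
After that, something like ./manage.py test module:api.test.extra api.TokenGeneratorTestCase should mix whole-module labels with the usual app/TestCase labels.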
When I run test cases by typing
python manage.py test myapp
After the test cases have completed, the test database is deleted by default by the Django test runner. I don't want it to be deleted.
I can use any database!
I want to preserve my database because there are bugs I want to see in the database that was created, so that I can pinpoint them!
You can prevent the test databases from being destroyed by using the test --keepdb option.
https://docs.djangoproject.com/en/dev/topics/testing/overview/#the-test-database
While passing -k to manage.py test will retain the test database, it will still delete the records that were created in your test cases. This is because Django's TestCase classes still reset your database after every test case (django.test.TransactionTestCase does a flush, while django.test.TestCase wraps each of your test cases in a transaction and does a rollback when the test case is done).
The only real solution to making Django retain test data is to extend the TestCase class and override the code that resets your database.
However, if you do not have the time to do this, you can also make your test case pause execution before it finishes, giving you time to inspect your database before it gets reset. There are several ways of achieving this, but (and this is a hack) asking for user input in your Python code will make Python pause execution and wait for input.
from django.test import TestCase

class MyTestCase(TestCase):

    def test_something_does_something(self):
        result = do_something_with_the_database()
        self.assertTrue(result)

        # Ask for `input` so execution will pause and wait for input.
        input(
            'Execution is paused and you can now inspect the database.\n'
            'Press return/enter key to continue:')
Alternatively, you can also use pdb's set_trace function, which will also pause execution and wait for input, and at the same time lets you debug the environment at that point of code execution.
Just make sure that you remove the input() (or pdb.set_trace()) call before you send your code to your automated build system or else it will wait for user input and time out.
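The pdb variant looks much the same (do_something_with_the_database is the same hypothetical helper as above):
import pdb

from django.test import TestCase

class MyTestCase(TestCase):

    def test_something_does_something(self):
        result = do_something_with_the_database()
        # Execution stops here: inspect the test database from the debugger,
        # then type `c` (continue) to let the test finish and clean up.
        pdb.set_trace()
        self.assertTrue(result)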
According to the docs, you can preserve the database after running tests by:
$ python manage.py test -k
or
$ python manage.py test --keepdb
To preserve the whole database state after test execution (not only the table structure):
Make sure your test class is based on django.test.SimpleTestCase (not TestCase or TransactionTestCase)
Take one of your tests for which you want to preserve database state
Add the following code to your test class to prevent the database tables from being cleaned after test execution:
def tearDown(self) -> None:
    pass

@classmethod
def tearDownClass(cls):
    pass
Run the test with the --keepdb parameter, like ./manage.py test app.test --keepdb, to prevent the whole DB from being cleaned after test execution
Wait for the test to finish
Profit! Take a snapshot of / inspect your test database (do not forget that Django by default adds the prefix test_ to your default database name)
Example command for the test test_copy:
./manage.py test --noinput --keepdb api.tests.SomeTests.test_copy
class SomeTests(SimpleTestCase):
    allow_database_queries = True

    def setUp(self):
        super(SomeTests, self).setUp()
        self.huge_set_up_operations()

    @classmethod
    def setUpClass(cls):
        super().setUpClass()
        cls.huge_init_database()

    def tearDown(self):
        pass

    @classmethod
    def tearDownClass(cls):
        pass

    def test_copy(self):
        SubscriptionFactory()
For anyone in a pytest environment, I use the following pytest.ini for testing:
[pytest]
DJANGO_SETTINGS_MODULE=myapp.settings.test
python_files = tests.py test_*.py *_tests.py
addopts =
    --ds=myapp.settings.test
    --reuse-db
    --nomigrations
Note the --reuse-db argument in addopts.
According to the docs:
Regardless of whether the tests pass or fail, the test databases are destroyed when all the tests have been executed.
Fixtures might help in your situation, though. Just create the initial data you want to be present when the test starts as a fixture, and make the test load it.
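A minimal sketch of that approach (the fixture file name is an assumption; Django loads the listed fixtures into the test database before each test):
from django.test import TestCase

class MyFixtureTests(TestCase):
    # Hypothetical fixture file living in one of your apps' fixtures/ directories.
    fixtures = ['initial_data.json']

    def test_initial_data_is_loaded(self):
        # The records from the fixture are available to every test method.
        ...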