Django Unit Testing taking a very long time to create test database - python

For some time now, my unit testing has been taking longer than expected. I have tried to debug it a couple of times without much success, as the delays occur before my tests even begin to run. This has affected my ability to do anything remotely close to test driven development (maybe my expectations are too high), so I want to see if I can fix this once and for all.
When I run a test, there is a 70 to 80 second delay between the start and the actual beginning of the tests. For example, if I run a test for a small module (using time python manage.py test myapp), I get
<... bunch of unimportant print messages I print from my settings>
Creating test database for alias 'default'...
......
----------------------------------------------------------------
Ran 6 tests in 2.161s
OK
Destroying test database for alias 'default'...
real 1m21.612s
user 1m17.170s
sys 0m1.400s
About 1m18s of the 1m21s is spent between the
Creating test database for alias 'default'...
and the
.......
line. In other words, the tests take under 3 seconds, but the database initialization seems to take about 1 minute 18 seconds.
I have about 30 apps, most with 1 to 3 database models so this should give an idea of the project size. I use SQLite for unit testing, and have implemented some of the suggested improvements. I cannot post my whole setting file, but happy to add any information that is required.
I do use a custom test runner:
from django.test.runner import DiscoverRunner
from django.conf import settings


class ExcludeAppsTestSuiteRunner(DiscoverRunner):
    """Override the default django 'test' command, exclude from testing
    apps which we know will fail."""

    def run_tests(self, test_labels, extra_tests=None, **kwargs):
        if not test_labels:
            # No app names specified on the command line, so we run all
            # tests, but remove those which we know are troublesome.
            test_labels = (
                'app1',
                'app2',
                ...
            )
        print('Testing: ' + str(test_labels))
        return super(ExcludeAppsTestSuiteRunner, self).run_tests(
            test_labels, extra_tests, **kwargs)
and in my settings:
TEST_RUNNER = 'config.test_runner.ExcludeAppsTestSuiteRunner'
I have also tried using django-nose with django-nose-exclude
I have read a lot about how to speed up the tests themselves, but have not found any leads on how to optimize or avoid the database initialization. I have seen suggestions about trying not to test with the database, but I cannot, or don't know how to, avoid that completely.
Please let me know whether:
This is normal and expected, or
Not expected (and hopefully give me a fix or a lead on what to do)
Again, I don't need help speeding up the tests themselves, but with the initialization (or overhead). I want the example above to take 10 seconds instead of 80.
Many thanks
I ran the test (for a single app) with --verbosity 3 and discovered this is all related to migrations:
Rendering model states... DONE (40.500s)
Applying authentication.0001_initial... OK (0.005s)
Applying account.0001_initial... OK (0.022s)
Applying account.0002_email_max_length... OK (0.016s)
Applying contenttypes.0001_initial... OK (0.024s)
Applying contenttypes.0002_remove_content_type_name... OK (0.048s)
Applying s3video.0001_initial... OK (0.021s)
Applying s3picture.0001_initial... OK (0.052s)
... Many more like this
I squashed all my migrations, but it is still slow.

The final solution that fixed my problem was to force Django to disable migrations during testing, which can be done from the settings like this:
import sys

TESTING = 'test' in sys.argv[1:]

if TESTING:
    print('=========================')
    print('In TEST Mode - Disabling Migrations')
    print('=========================')

    class DisableMigrations(object):

        def __contains__(self, item):
            return True

        def __getitem__(self, item):
            return None

    MIGRATION_MODULES = DisableMigrations()
or use https://pypi.python.org/pypi/django-test-without-migrations
My whole test suite now takes about 1 minute, and a small app takes 5 seconds.
In my case, migrations are not needed for testing because I update tests as I migrate and don't use migrations to add data. This won't work for everybody.

Summary
Use pytest!
Operations
pip install pytest-django
pytest --nomigrations instead of ./manage.py test
Result
./manage.py test takes 2 min 11.86 sec
pytest --nomigrations takes 2.18 sec
Hints
You can create a file called pytest.ini in your project root directory, and specify default command line options and/or Django settings there.
# content of pytest.ini
[pytest]
addopts = --nomigrations
DJANGO_SETTINGS_MODULE = yourproject.settings
Now you can simply run tests with pytest and save yourself a bit of typing.
You can speed up the subsequent tests even further by adding --reuse-db to the default command line options.
[pytest]
addopts = --nomigrations --reuse-db
However, as soon as your database model is changed, you must run pytest --create-db once to force re-creation of the test database.
If you need to enable gevent monkey patching during testing, you can create a file called pytest in your project root directory with the following content, set the execution bit on it (chmod +x pytest), and run ./pytest for testing instead of pytest:
#!/usr/bin/env python
# -*- coding: utf-8 -*-
# content of pytest
from gevent import monkey
monkey.patch_all()

import os
os.environ.setdefault("DJANGO_SETTINGS_MODULE", "yourproject.settings")

from django.db import connection
connection.allow_thread_sharing = True

import re
import sys

from pytest import main

if __name__ == '__main__':
    sys.argv[0] = re.sub(r'(-script\.pyw|\.exe)?$', '', sys.argv[0])
    sys.exit(main())
You can create a test_gevent.py file for testing whether gevent monkey patching is successful:
# -*- coding: utf-8 -*-
# content of test_gevent.py
import time

from django.test import TestCase
from django.db import connection
import gevent


def f(n):
    cur = connection.cursor()
    cur.execute("SELECT SLEEP(%s)", (n,))
    cur.execute("SELECT %s", (n,))
    cur.fetchall()
    connection.close()


class GeventTestCase(TestCase):
    longMessage = True

    def test_gevent_spawn(self):
        timer = time.time()
        d1, d2, d3 = 1, 2, 3
        t1 = gevent.spawn(f, d1)
        t2 = gevent.spawn(f, d2)
        t3 = gevent.spawn(f, d3)
        gevent.joinall([t1, t2, t3])
        cost = time.time() - timer
        self.assertAlmostEqual(cost, max(d1, d2, d3), delta=1.0,
                               msg='gevent spawn not working as expected')
References
pytest-django documentation
pytest documentation

Use ./manage.py test --keepdb when there are no changes in the migration files.

Database initialization indeed takes too long...
I have a project with about the same number of models/tables (about 77) and approximately 350 tests, and it takes 1 minute total to run everything. I develop in a Vagrant machine with 2 CPUs allocated and 2 GB of RAM, and I use py.test with the pytest-xdist plugin to run multiple tests in parallel.
Another thing you can do is tell Django to reuse the test database and only re-create it when you have schema changes. You can also use SQLite so that the tests run against an in-memory database. Both approaches are explained here:
https://docs.djangoproject.com/en/dev/topics/testing/overview/#the-test-database
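For reference, a minimal sketch of the SQLite approach, assuming you only want to switch the engine while the test command is running (the condition and naming here are illustrative, not the only way to do it):
import sys

if 'test' in sys.argv:
    DATABASES = {
        'default': {
            'ENGINE': 'django.db.backends.sqlite3',
            # With the SQLite backend, Django runs tests against a fast
            # in-memory database; naming it explicitly makes the intent clear.
            'NAME': ':memory:',
        }
    }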
EDIT: In case none of the options above work, one more option is to have your unit tests inherit from Django's SimpleTestCase, or use a custom test runner that doesn't create a database, as explained in this answer: django unit tests without a db.
Then you can just mock Django calls to the database using a library like this one (which admittedly I wrote): https://github.com/stphivos/django-mock-queries
This way you can run your unit tests locally fast and let your CI server worry about running integration tests that require a database, before merging your code to some stable dev/master branch that isn't the production one.
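As a rough illustration of the DB-free approach, here is a sketch using SimpleTestCase with a plain unittest.mock patch standing in for the ORM call; the module path, model, and helper function are hypothetical, not part of any real project or of django-mock-queries itself:
from unittest import mock

from django.test import SimpleTestCase


class WeightDisplayTests(SimpleTestCase):
    # SimpleTestCase never creates or touches a test database.

    @mock.patch('myapp.services.Weight.objects')  # hypothetical model manager path
    def test_latest_weight_is_formatted(self, mock_objects):
        from myapp.services import format_latest_weight  # hypothetical helper
        mock_objects.latest.return_value = mock.Mock(weight=99.8)
        self.assertEqual(format_latest_weight(), '99.8 kg')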

I also ran into this issue. The solution that worked for me was to create a subclass of django.test.TestCase
and override this method:
@classmethod
def _databases_support_transactions(cls):
    return True
The backend DB is Apache Cassandra.

Related

pytest - is it possible to run a script/command between all test scripts?

OK, this is definitely my fault, but I need to clean it up. One of my test scripts fairly consistently (but not always) updates my database in a way that causes problems for the others (basically, it takes away the test user's access rights to the test database).
I could easily find out which script is causing this by running a simple query, either after each individual test, or after each test script completes.
i.e. pytest, or nose2, would do the following:
run test_aaa.py
run check_db_access.py #ideal if I could induce a crash/abort
run test_bbb.py
run check_db_access.py
...
You get the idea. Is there a built-in option or plugin that I can use? The test suite currently works on both pytest and nose2 so either is an option.
Edit: this is not a test db, or a fixture-loaded db. This is a snapshot of any of a number of extremely complex live databases and the test suite, as per its design, is supposed to introspect the database(s) and figure out how to run its tests (almost all access is read-only). This works fine and has many beneficial aspects at least in my particular context, but it also means there is no tearDown or fixture-load for me to work with.
import pytest


@pytest.fixture(autouse=True)
def wrapper(request):
    print('\nbefore: {}'.format(request.node.name))
    yield
    print('\nafter: {}'.format(request.node.name))


def test_a():
    assert True


def test_b():
    assert True
Example output:
$ pytest -v -s test_foo.py
test_foo.py::test_a
before: test_a
PASSED
after: test_a
test_foo.py::test_b
before: test_b
PASSED
after: test_b
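If the goal is to abort as soon as the access check fails (as the question suggests), a conftest.py fixture along these lines might work; check_db_access is a hypothetical stand-in for the simple query mentioned in the question:
# content of conftest.py (sketch)
import pytest


def check_db_access():
    """Hypothetical: run the simple query and return False if access was lost."""
    return True


@pytest.fixture(autouse=True)
def abort_if_db_access_lost():
    yield  # run the test first
    if not check_db_access():
        # Stop the whole session at the first offending test.
        pytest.exit('test user lost access rights to the test database')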

PyCharm + cProfile + py.test --> pstat snapshot view + call graph are empty

In PyCharm, I set up py.test as the default test runner.
I have a simple test case:
import unittest
import time


def my_function():
    time.sleep(0.42)


class MyTestCase(unittest.TestCase):
    def test_something(self):
        my_function()
Now I run the test by right-clicking the file and choosing Profile 'py.test in test_profile.py'.
I see the test running successfully in the console (it says collected 1 items). However, the Statistics/Call Graph view showing the generated pstat file is empty and says Nothing to show.
I would expect to see profiling information for the test_something and my_function. What am I doing wrong?
Edit 1:
If I change the name of the file to something which does not start with test_, remove the unittest.TestCase, and add a __main__ block calling my_function, I can finally run cProfile without py.test and I see results.
However, I am working on a large project with tons of tests. I would like to directly profile these tests instead of writing extra profiling scripts. Is there a way to call the py.test test-discovery module so I can retrieve all tests of the project recursively? (the unittest discovery will not suffice since we yield a lot of parametrized tests in generator functions which are not recognized by unittest). This way I could at least solve the problem with only 1 additional script.
Here is a work-around. Create an additional python script with the following contents (adapt the path to the tests-root accordingly):
import os
import pytest
if __name__ == '__main__':
source_dir = os.path.dirname(os.path.abspath(__file__))
test_dir = os.path.abspath(os.path.join(source_dir, "../"))
pytest.main(test_dir, "setup.cfg")
The script filename must not start with test_, or else PyCharm will force you to run it with py.test. Then right-click the file and run it with Profile.
This also comes in handy for running it with Coverage.

Django: Override Setting used in AppConfig Ready Function

We are trying to write an automated test for the behavior of the AppConfig.ready function, which we are using as an initialization hook to run code when the Django app has loaded. Our ready method implementation uses a Django setting that we need to override in our test, and naturally we're trying to use the override_settings decorator to achieve this.
There is a snag however - when the test runs, at the point the ready function is executed, the setting override hasn't kicked in (it is still using the original value from settings.py). Is there a way that we can still override the setting in a way where the override will apply when the ready function is called?
Some code to demonstrate this behavior:
settings.py
MY_SETTING = 'original value'
dummy_app/__init__.py
default_app_config = 'dummy_app.apps.DummyAppConfig'
dummy_app/apps.py
from django.apps import AppConfig
from django.conf import settings


class DummyAppConfig(AppConfig):
    name = 'dummy_app'

    def ready(self):
        print('settings.MY_SETTING in app config ready function: {0}'.format(settings.MY_SETTING))
dummy_app/tests.py
from django.conf import settings
from django.test import TestCase
from django.test.utils import override_settings


@override_settings(MY_SETTING='overridden value')
@override_settings(INSTALLED_APPS=('dummy_app',))
class AppConfigTests(TestCase):

    def test_to_see_where_overridden_settings_value_is_available(self):
        print('settings.MY_SETTING in test function: {0}'.format(settings.MY_SETTING))
        self.fail('Trigger test output')
Output
======================================================================
FAIL: test_to_see_where_overridden_settings_value_is_available (dummy_app.tests.AppConfigTests)
----------------------------------------------------------------------
Traceback (most recent call last):
File "/Users/labminds/venv/labos/src/dom-base/dummy_app/tests.py", line 12, in test_to_see_where_overridden_settings_value_is_available
self.fail('Trigger test output')
AssertionError: Trigger test output
-------------------- >> begin captured stdout << ---------------------
settings.MY_SETTING in app config ready function: original value
settings.MY_SETTING in test function: overridden value
--------------------- >> end captured stdout << ----------------------
It is important to note that we only want to override this setting for the tests that are asserting the behavior of ready, which is why we aren't considering changing the setting in settings.py, or using a separate version of this file used just for running our automated tests.
One option already considered - we could simply initialize the AppConfig class in our test, call ready and test the behavior that way (at which point the setting would be overridden by the decorator). However, we would prefer to run this as an integration test, and rely on the natural behavior of Django to call the function for us - this is key functionality for us and we want to make sure the test fails if Django's initialization behavior changes.
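For completeness, the "call ready() directly" option mentioned above might look roughly like this (a sketch, not the integration test we are after):
from django.apps import apps
from django.test import TestCase
from django.test.utils import override_settings


class AppConfigReadyTests(TestCase):

    @override_settings(MY_SETTING='overridden value')
    def test_ready_sees_overridden_setting(self):
        config = apps.get_app_config('dummy_app')
        config.ready()  # invoked manually, so the override is already active
        # ...assert on whatever side effect ready() produces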
Some ideas (with differing effort required and automated assurance):
Don't integration test, and rely on reading the release notes/commits before upgrading the Django version, and/or rely on a single round of manual testing
Assuming a test - stage deploy - prod deploy pipeline, unit test the special cases in isolation and add an integration check as a deployment smoke test (e.g.: by exposing this settings value through a management command or internal only url endpoint) - only verify that for staging it has the value it should be for staging. Slightly delayed feedback compared to unit tests
test it through a test framework outside of Django's own - i.e.: write the unittests (or py.tests) and inside those tests bootstrap django in each test (though you need a way to import & manipulate the settings)
use a combination of overriding settings via the OS's environment (we've used envdir a'la 12 factor app) and a management command that would do the test(s) - e.g.: MY_SETTING='overridden value' INSTALLED_APPS='dummy_app' EXPECTED_OUTCOME='whatever' python manage.py ensure_app_config_initialized_as_expected
looking at Django's own app-init tests, apps.clear_cache() and
with override_settings(INSTALLED_APPS=['test_app']):
    config = apps.get_app_config('test_app')
    assert config....
could work, though I've never tried it
You appear to have hit a documented limitation of ready in Django (scroll down to the warning). You can see the discussion in the ticket that prompted the edit. The ticket specifically refers to database interactions, but the same limitation would apply to any effort to test the ready function -- i.e. that production (not test) settings are used during ready.
Based on the ticket, "don't use ready" sounds like the official answer, but I don't find that attitude useful unless they direct me to a functionally equivalent place to run this kind of initialization code. ready seems to be the most official place to run once on startup.
Rather than (re)calling ready, I suggest having ready call a second method. Import and use that second method in your test cases (a sketch of this follows the examples below). Not only will your tests be cleaner, but it isolates the test case from any other ready logic, like attaching signals. There's also a context manager that can be used to simplify the test:
@override_settings(SOME_SETTING='some-data')
def test(self):
    ...
or
def test(self):
    with override_settings(SOME_SETTING='some-data'):
        ...
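A sketch of the "ready() delegates to a second method" idea, so tests can import and call the initialization directly while override_settings is active (initialize is a name made up for this example):
# dummy_app/apps.py (sketch)
from django.apps import AppConfig
from django.conf import settings


def initialize():
    # The real startup work that depends on settings.MY_SETTING goes here,
    # so tests can call it directly under override_settings.
    value = settings.MY_SETTING
    ...


class DummyAppConfig(AppConfig):
    name = 'dummy_app'

    def ready(self):
        initialize()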
P.S. We work around several possible issues in ready by checking the migration status of the system:
def ready(self):
    # imports have to be delayed for ready
    from django.db.migrations.executor import MigrationExecutor
    from django.conf import settings
    from django.db import connections, DEFAULT_DB_ALIAS

    executor = MigrationExecutor(connections[DEFAULT_DB_ALIAS])
    plan = executor.migration_plan(executor.loader.graph.leaf_nodes())
    if plan:
        # not healthy (possibly setup for a migration)
        return
    ...
Perhaps something similar could be done to prevent execution during tests. Somehow the system knows to (eventually) switch to test settings. I assume you could skip execution under the same conditions.
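One possible variant, assuming you are willing to key off the command line the way the accepted answer to the first question does, is to skip the startup work whenever the test runner is active:
import sys

from django.apps import AppConfig


class DummyAppConfig(AppConfig):
    name = 'dummy_app'

    def ready(self):
        if 'test' in sys.argv:
            # Let the tests drive initialization explicitly instead.
            return
        ...  # normal startup work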

How to preserve django test database after running test cases

When I run test cases by typing
python manage.py test myapp
After the test cases complete, the test database is deleted by default by the Django test runner. I don't want it to be deleted.
I can use any database!
I want to preserve my database because there are bugs I want to see in the database that was created, so that I can pinpoint them!
You can prevent the test databases from being destroyed by using the test --keepdb option.
https://docs.djangoproject.com/en/dev/topics/testing/overview/#the-test-database
While passing -k to manage.py test will retain the test database, it will still delete the records that were created in your test cases. This is because Django's TestCase classes will still reset your database after every test case (django.test.TransactionTestCase will do a flush, while django.test.TestCase will wrap each of your test cases in a transaction and do a rollback when the test case is done).
The only real solution to making Django retain test data is to extend the TestCase class and override the code that resets your database.
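A sketch of that idea, relying on a private Django hook (_fixture_teardown) that may change between versions, and intended to be combined with --keepdb:
from django.test import TransactionTestCase


class KeepDataTestCase(TransactionTestCase):
    """Skip the per-test database flush so inserted rows survive the run."""

    def _fixture_teardown(self):
        # TransactionTestCase normally flushes every table here; do nothing.
        pass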
However, if you do not have the time to do this, you can also make your test case pause execution before it finishes, giving you the time to inspect your database before it gets reset. There are several ways of achieving this but, be warned, THIS IS A HACK: asking for user input in your Python code will make Python pause execution and wait for user input.
from django.test import TestCase


class MyTestCase(TestCase):
    def test_something_does_something(self):
        result = do_something_with_the_database()
        self.assertTrue(result)

        # Ask for `input` so execution will pause and wait for input.
        input(
            'Execution is paused and you can now inspect the database.\n'
            'Press return/enter key to continue:')
Alternatively, you can also use pdb's set_trace function, which will also make the execution pause and wait for input, and at the same time lets you debug the environment in that point of code execution.
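The pdb variant of the same hack might look like this (do_something_with_the_database is the same placeholder as above):
import pdb

from django.test import TestCase


class MyTestCase(TestCase):
    def test_something_does_something(self):
        result = do_something_with_the_database()
        self.assertTrue(result)
        # Drops into the debugger; inspect the database, then type 'c' to continue.
        pdb.set_trace()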
Just make sure that you remove the input() (or pdb.set_trace()) call before you send your code to your automated build system or else it will wait for user input and time out.
According to the docs, you can preserve the database after running tests by:
$ python manage.py test -k
or
$ python manage.py test --keepdb
To preserve the whole database state after test execution (not only the table structure):
Make sure your test class is based on django.test.SimpleTestCase (not TestCase or TransactionTestCase)
Take one of your tests for which you want to preserve database state
Add the following code to your test class to prevent the database tables from being cleaned after test execution
def tearDown(self) -> None:
    pass

@classmethod
def tearDownClass(cls):
    pass
Run the test with the --keepdb parameter, like ./manage.py test app.test --keepdb, to prevent the whole DB from being cleaned after test execution
Wait for the test to finish
Profit! Take a snapshot of / explore your test database (do not forget that Django by default adds the prefix test_ to your default database name)
Example command for the test test_copy:
./manage.py test --noinput --keepdb api.tests.SomeTests.test_copy
class SomeTests(SimpleTestCase):
    allow_database_queries = True

    def setUp(self):
        super(SomeTests, self).setUp()
        self.huge_set_up_operations()

    @classmethod
    def setUpClass(cls):
        super().setUpClass()
        cls.huge_init_database()

    def tearDown(self):
        pass

    @classmethod
    def tearDownClass(cls):
        pass

    def test_copy(self):
        SubscriptionFactory()
For anyone in a pytest environment, I use the following pytest.ini for testing:
[pytest]
DJANGO_SETTINGS_MODULE=myapp.settings.test
python_files = tests.py test_*.py *_tests.py
addopts =
    --ds=myapp.settings.test
    --reuse-db
    --nomigrations
note the "--resuse-db" command argument/addpots
According to the docs:
Regardless of whether the tests pass or fail, the test databases are destroyed when all the tests have been executed.
However, fixtures might help in your situation. Just create the initial data you want to be there when the test starts as a fixture, and make the test load it.
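A minimal sketch of the fixture approach, assuming a fixture file exported with manage.py dumpdata (the file name here is made up):
from django.test import TestCase


class ReportTests(TestCase):
    # Loaded into the test database before each test in this class.
    fixtures = ['initial_data.json']

    def test_report_uses_preloaded_rows(self):
        ...  # the rows from the fixture are available here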

Django test runner not finding tests

I am new to both Python and Django, and I'm learning by creating a diet management site, but I've been completely defeated by getting my unit tests to run. All the docs and blogs I've found say that as long as it's discoverable from tests.py, tests.py is in the same folder as models.py, and your test class subclasses TestCase, it should all get picked up automatically. This isn't working for me: when I run manage.py test <myapp> it doesn't find any tests.
I started with all my tests in their own package but have simplified it down to all tests just being in my tests.py file. The current tests.py looks like:
import unittest

from pyDietTracker.models import Weight
from pyDietTracker.weight.DisplayDataAdapters import DisplayWeight


class TestDisplayWeight(unittest.TestCase):
    def setUp(self):
        pass

    def tearDown(self):
        pass

    def testGetWeightInStone_KG_Correctly_Converted(self):
        weight = Weight()
        weight.weight = 99.8

        testAdapter = DisplayWeight(weight)
        self.assertEquals(testAdapter.GetWeightInStone(), '15 st 10 lb')
I have tried it by subclassing the Django TestCase class as well but this didn't work either. I'm using Django 1.1.1, Python 2.6 and I'm running Snow Leopard.
I'm sure I am missing something very basic and obvious but I just can't work out what. Any ideas?
Edit: Just a quick update after a comment
INSTALLED_APPS = (
    'django.contrib.auth',
    'django.contrib.contenttypes',
    'django.contrib.sessions',
    'django.contrib.sites',
    'django.contrib.admin',
    'pyDietTracker',
)
To get the tests to run I am running manage.py test pyDietTracker
I had the same issue but my root cause was different.
I was getting Ran 0 tests, like the OP.
But it turns out the test methods inside your test class must start with the keyword test in order to run.
Example:
from django.test import TestCase


class FooTest(TestCase):
    def setUp(self):
        pass

    def tearDown(self):
        pass

    def this_wont_run(self):
        print 'Fail'

    def test_this_will(self):
        print 'Win'
Also, the files containing your TestCases have to start with test.
If you're using a yourapp/tests package/style for unittests, make sure there's a __init__.py in your tests folder (since that's what makes it a Python module!).
I could run tests for a specific app, e.g.
python project/manage.py test app_name
but when I ran
python project/manage.py test
0 tests were found.
It turned out I needed to run this in the same directory as manage.py,
so the solution is to cd to the project directory and run
python manage.py test
In my case, the app folder itself was missing an __init__.py. This results in the behaviour that the test will be run with python manage.py test project.app_name but not with python manage.py test.
project/
    app_name/
        __init__.py  # this was missing
In my case, I typed def instead of class. Instead of
class TestDisplayWeight(TestCase): # correct!
I had
def TestDisplayWeight(TestCase): # wrong!
This may also happen when you are using a tests module instead of a tests.py. In this case you need to import all the test classes into the __init__.py of your tests module, e.g.
tests/
    __init__.py
    somemodule.py
In your __init__.py you now need to import the somemodule like this:
from .somemodule import *
This also happens if you have a syntax error in your tests.py.
Worked it out.
It turns out I had done django-admin.py startproject pyDietTracker but not python manage.py startapp myApp. After going back and doing this, it did work as documented. It would appear I have a lot to learn about reading and the difference between a site and an app in Django.
Thank you for your help S.Lott and Emil Stenström. I wish I could accept both your answers because they both helped a lot.
Most important lesson: tests only work at the app level, not the site level.
Here's another one that I've just hit: check that your test files are not executable. My VirtualBox auto-mounted them as executable, so test discovery missed them completely. As a workaround I had imported them into the relevant __init__.py files before someone told me what the issue was; now those imports are removed, the files are non-executable, and everything just works.
In my case, I missed starting my function names with test_,
and when I ran my tests with:
python manage.py test myapp
the result was:
Creating test database for alias 'default'...
System check identified no issues (0 silenced).
----------------------------------------------------------------------
Ran 0 tests in 0.000s
OK
Destroying test database for alias 'default'...
It seems Django could not recognize my tests!
Then I changed the myproject/myapp/test.py file like this:
from django.test import TestCase
# Create your tests here.
class apitest(TestCase):
    def test_email(self):
        pass

    def test_secend(self):
        pass
After that, the result is:
Creating test database for alias 'default'...
System check identified no issues (0 silenced).
..
----------------------------------------------------------------------
Ran 2 tests in 2.048s
OK
Destroying test database for alias 'default'...
I know I am late to this, but I also had trouble with:
Found 0 test(s).
System check identified no issues (1 silenced).
----------------------------------------------------------------------
Ran 0 tests in 0.000s
OK
I had followed all the steps but was still facing the same issue. My fix: I had missed the __init__.py file in the test directory. Adding the file and re-running the command solved my issue.
To highlight it a bit:
Make sure you have an __init__.py file in your tests directory.
I had this happen when I had a test.py file and a test/ subdirectory in the same Django app directory. I guess this confuses Python or the test runner about whether it should look for a test module (test.py) or a test package (the test/ subdirectory).
If you are trying to run tests in your main app, such as my_app/my_app/, make sure you have checked the following:
The app name is listed in INSTALLED_APPS inside settings.py
DATABASES['default'] inside settings.py is set up properly
The app has a models.py (even if you are not using one, at least an empty one is required to be there)
Using this syntax
python manage.py test
instead of ./manage.py test solved this problem for me.
See https://docs.djangoproject.com/en/1.11/topics/testing/overview/
The most common reason for tests not running is that your settings aren't right, and your module is not in INSTALLED_APPS.
We use django.test.TestCase instead of unittest.TestCase. It has the Client bundled in.
https://docs.djangoproject.com/en/1.11/topics/testing/tools/#django.test.TestCase
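For example, a minimal sketch of a django.test.TestCase using the bundled client (the URL is illustrative):
from django.test import TestCase


class HomePageTests(TestCase):
    def test_home_page_responds(self):
        response = self.client.get('/')  # self.client comes with django.test.TestCase
        self.assertEqual(response.status_code, 200)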
I had the same problem; it turns out I had saved the __init__ file without the .py extension. I added .py to the end of the file name and it was fine afterwards
(in other words, I had created __init__ instead of __init__.py).
In the same file, I had two test classes with the SAME NAME, and of course this prevented all tests from running.
I created a method called run in my test class which turned out to be a very bad idea. Python could see that I wanted to run tests, but was unable to. This problem is slightly different, but the result is the same - it made it seem as if the tests couldn't be found.
Note that the following message was displayed:
You want to run the existing test: <unittest.runner.TextTestResult run=0 errors=0 failures=0>
Run --help and look for verbose. Crank it to max.
I ran manage.py test --verbose and found this debug output right at the top:
>nosetests --with-spec --spec-color --verbose --verbosity=2.
Oh look! I had installed and forgotten about nosetests. And it says --verbosity=2. I figured out that 3 is the max and running it with 3 I found lots of these:
nose.selector: INFO: /media/sf_C_DRIVE/Users/me/git/django/app/tests/test_processors.py is executable; skipped
That gave me the right hint: it indeed has problems with files that have the x-bit set. However, I was thrown off track because it had run SOME of the tests - even though it explicitly said it would skip them. Changing the bits is not possible, as I run the tests in a VM sharing my Windows NTFS disk, so adding --exe fixed it.
Had the same issue, and it was because my filename had a - character in it.
My filename was route-tests.py and I changed it to route_tests.py.
If you encounter this error after upgrading to Django 3, it might be because the -k parameter changed meaning from:
-k, --keepdb Preserves the test DB between runs.
to
-k TEST_NAME_PATTERNS Only run test methods and classes that match the pattern or substring. Can be used multiple times. Same as unittest -k option.
So just replace -k with --keepdb to make it work again.
The Django test runner searches for files and folders with the test_ prefix (inside a tests folder). In my case it was a simple solution.
So, be sure to check that your file/folder names start with it.
I had the same problem; it was caused by an __init__.py at the project root. I deleted that and all tests ran fine again.
This is late, but you can simply add your app name in front when importing models, like:
from myapp.models import something
This works for me.
In Django, methods in test classes must start with the "test" keyword, for example test_is_true(). Methods named like is_true() will not execute.
