How to disable Django's Test Discovery? - python

This is an unusual situation - most Django users want Django's test runner to find all of their tests.
I'm working on a python library with several test suites that are run with different configurations, so I don't want the discovery to find and run tests from the wrong configuration. How do I disable discovery entirely and rely on the pre-1.6 behavior of only running the tests for apps explicitly declared in INSTALLED_APPS?
My library structure:
library/ # django app used by others
tests/ # custom test suites here
core/ # tests of core functionality
custom/ # tests of a custom feature requiring separate config
contrib/ # tests for assorted contrib features, also requiring separate config
manage_core.py # separate manage.py files for each "project"
manage_custom.py # these specify settings file to use.
manage_contrib.py
settings.py # base settings for all tests
settings_core.py # settings for 'core' tests including unique INSTALLED_APPS
settings_custom.py # settings for 'custom' tests; different INSTALLED_APPS
settings_contrib.py # settings for 'contrib' tests; different INSTALLED_APPS
The problem is that this command, which should only run tests for the 'contrib' test suite, is also finding and running tests for 'core':
./manage_contrib.py test contrib.tests

It's missing from the Django docs, but the test command has an option, which you can find via ./manage.py help test:
-t TOP_LEVEL, --top-level-directory TOP_LEVEL
Top level of project for unittest discovery.
Confusingly, specifying the module to test doesn't appear to prevent test discovery, but specifying a sub-directory does, like this:
./manage_contrib.py test contrib.tests -t ./contrib/
That appears to prevent the discovery of tests located outside of contrib.
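Assuming the other suites follow the same layout, the equivalent invocations would be:
./manage_core.py test core.tests -t ./core/
./manage_custom.py test custom.tests -t ./custom/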

Hmm, I'm unfortunately not aware of a settings parameter that would let you tell unittest to run tests only from individual apps (à la settings.TEST_DIRECTORIES = settings.INSTALLED_APPS), but if you can give your tests a unique naming convention, you could use the --pattern= option when running the test suite.
For example, if you have
/myapp/tests/test_a_models.py
/myapp/tests/test_b_models.py
You could run only the a tests with ./manage.py test --pattern='*_a_*' and the b tests with ./manage.py test --pattern='*_b_*'.
Definitely not ideal, but might get the job done depending on how much flexibility you have with the test naming conventions in your own app.
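If you would rather bake the pattern into settings instead of passing --pattern every time, one option (my own sketch, not something Django ships; the module, class, and settings file names are hypothetical) is a thin DiscoverRunner subclass referenced from the TEST_RUNNER setting:
# runners.py - hypothetical sketch: a DiscoverRunner that hard-codes the file
# pattern, so a plain `./manage.py test` only collects the "a" tests.
from django.test.runner import DiscoverRunner

class PatternARunner(DiscoverRunner):
    def __init__(self, *args, **kwargs):
        super().__init__(*args, **kwargs)
        # Same glob as --pattern='*_a_*'; matches e.g. test_a_models.py
        self.pattern = "*_a_*"

# In a hypothetical settings_a.py for the "a" suite:
# TEST_RUNNER = "myapp.runners.PatternARunner"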

Related

How to properly test a Django library

I frequently run into practical problems when I want to test a library I develop that is itself a Django app. Django apps can be developed independently of a project, like DRF, etc.
For testing, you either need a Django project, or at least a settings.py file or the like, see here.
But I don't get how to do that properly for a "standalone" library. I need to generate migrations for that library (if it contains models), so I need a manage.py file to invoke makemigrations, and I need to check that the lib integrates into the project properly (in my case apps are full-fledged plugins; I am using GDAPS).
What is the best approach here?
should I create a separate "test project" in a separate repo that uses my library and tests it?
should I create a project within my library's tests directory and use that? (beware of Python path issues...)
should I not use a project at all and mimic everything myself - how?
Please give me at least a hint in some direction.
I was googling the same question, and found that the "Using the Django test runner to test reusable applications" section of the Advanced testing topics in the Django documentation gives a suitable common practice. It's a mixture of all three of your approaches.
Basically you should create a tests package next to the application code, with a test_settings.py containing all the settings your library needs, and at least set the SECRET_KEY and INSTALLED_APPS variables.
SECRET_KEY = 'fake-key'
INSTALLED_APPS = [
    "tests",
    "your-library",
]
Where "your-library" is the name of the libray you need to test.
In my case, I had to also add 'django.contrib.auth', 'django.contrib.contenttypes', as they where needed by my library.
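Putting those pieces together, a fuller test_settings.py might look like the sketch below; the DATABASES block is my own addition (an in-memory SQLite database), which you will need as soon as the library defines models, and "your-library" again stands for your library's importable package name:
# tests/test_settings.py - sketch; names are placeholders
SECRET_KEY = "fake-key"

INSTALLED_APPS = [
    "django.contrib.contenttypes",
    "django.contrib.auth",
    "tests",
    "your-library",  # replace with the actual package name
]

# In-memory SQLite keeps the test run self-contained (my addition).
DATABASES = {
    "default": {
        "ENGINE": "django.db.backends.sqlite3",
        "NAME": ":memory:",
    }
}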
Then you put all your tests into that package, any additional models in a models.py file, and run them with a runtests.py that lives at the root of your repo:
#!/usr/bin/env python
import os
import sys

import django
from django.conf import settings
from django.test.utils import get_runner

if __name__ == "__main__":
    os.environ['DJANGO_SETTINGS_MODULE'] = 'tests.test_settings'
    django.setup()
    TestRunner = get_runner(settings)
    test_runner = TestRunner()
    failures = test_runner.run_tests(["tests"])
    sys.exit(bool(failures))
It will use test_settings.py as its settings module (os.environ['DJANGO_SETTINGS_MODULE'] = 'tests.test_settings') and run all the tests contained in the tests package.
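The question also asks about generating migrations without a manage.py; a similar tiny script can reuse the same settings module (my own sketch, not from the linked docs; the app label is a placeholder):
#!/usr/bin/env python
# makemigrations.py - generate migrations for the library itself,
# reusing tests.test_settings just like runtests.py does.
import os

import django
from django.core.management import call_command

if __name__ == "__main__":
    os.environ["DJANGO_SETTINGS_MODULE"] = "tests.test_settings"
    django.setup()
    call_command("makemigrations", "your_library")  # placeholder app label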

Pytest Not Deselecting Certain Tests (but should be in pytest.ini)

I've got a test suite set up and have been using pytest and pytest-django.
To give some background: I am doing integration testing with a headless browser, and I also have tests that use a standard (non-headless) browser for local development. I'd like to deselect the tests that use a real, visual browser so that they don't trigger in my CI/CD pipeline.
Given the example pytest.ini below, if I run pytest launcher it shows, as I would expect, that 1 test is being deselected (the StandardBrowserTestCases class only has 1 test in it).
However, if I run pytest other_app (which also has a StandardBrowserTestCases class), nothing shows as deselected, and StandardBrowserTestCases is run along with the other tests.
[pytest]
addopts =
    --nomigrations
    --cov-config=.coveragerc
    --cov=my_project
    --cov=other_app
    --cov=launcher
    --cov=people
    ; # Ignore the StandardBrowserTestCases - these are only used for local
    ; # development / visual debug and contain no real valuable tests
    --deselect=other_app/tests/test_integrations.py::StandardBrowserTestCases
    --deselect=launcher/tests/test_integrations.py::StandardBrowserTestCases
    --deselect=people/tests/test_integrations.py::StandardBrowserTestCases
    --junitxml=./test-results/junit.xml
    --cov-report html:./test-results/htmlcov
    --html=./test-results/test_results.html
    --self-contained-html
DJANGO_SETTINGS_MODULE = my_project.unit-test-settings
python_files = tests.py test_*.py *_tests.py
Am I using --deselect right? How can I troubleshoot why it works on one app (launcher) but not the other (other_app)? How can I get pytest to deselect those tests in every app instead of just one?
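One general way to inspect this is pytest's --collect-only flag (a standard pytest option, not specific to this setup), which lists what gets collected and reports how many tests were deselected for each invocation, so the two apps can be compared directly:
pytest launcher --collect-only -q
pytest other_app --collect-only -q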

running unittests \ integration tests in python

I have a Django project with multiple apps. Each app has a set of unit tests. I'm using pytest as my test runner. We have reached the point where we want to start writing integration tests. I was wondering if there is any way to keep the naming convention, and thus pytest's auto-discovery, but still be able (via a flag maybe?) to run the different test types separately. The most intuitive solution that comes to mind is some sort of decorator on test methods or even TestCase classes (something like Category in JUnit).
something like:
@testtype('unittest')
def test_my_test(self):
    # do some testing
    ...

@testtype('integration')
def test_my_integration_test(self):
    # do some integration testing
    ...
and then I could run the tests like:
py.test --type=integration
py.test --type=unittest
Is there such a thing?
If not, the only other solution I can think of is to add a Django command that "manually" builds a test suite and runs it with pytest... I would prefer not to use this option. Is there any other solution that can help me?
Thanks
You can mark test functions.
import pytest

@pytest.mark.unittest
def test_my_test():
    # do some testing
    ...

@pytest.mark.integration
def test_my_integration_test():
    # do some integration testing
    ...
These custom markers must be registered in your pytest.ini file.
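A minimal registration, assuming the two marker names used above, would be:
[pytest]
markers =
    unittest: marks a test as a unit test
    integration: marks a test as an integration test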
Then use the -m flag to run the marked tests
py.test -v -m unittest
Another option would be to split your tests into unittest and integration directories. Then you can run tests in a specific directory with:
py.test -v unittest
or
py.test -v integration
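For instance, the layout could look something like this (the file names are just placeholders), with the commands above run from the directory containing the two folders:
myapp/
    tests/
        unittest/
            test_models.py
        integration/
            test_checkout_flow.py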
Another way to do this (without any config or code)
pytest -o "python_functions=*_integration_test"
You can also do this at module/class level, e.g.:
python_files = integration_test_*.py
python_classes = IntegrationTest
Ref:
https://docs.pytest.org/en/latest/example/pythoncollection.html#changing-naming-conventions

How can I make the application package available to the tests, when using py.test?

Suppose I'm writing a test. Obviously it is testing my app, so I need to import the app package somehow into the test script. The directory structure is this:
root/
    app/
        __init__.py
        somemodule.py
    tests/
        my_test.py
And I run the tests like so:
cd tests
py.test # runs all the tests in the current directory
The question is: How should I import the application modules in my test modules?
In my_test.py, I tried doing from .. import app. This gives me an error Parent module '' not loaded, cannot perform relative import.
What is the standard way to accomplish this?
EDIT: Please note I edited the question to refer specifically to the py.test tool.
You should be able to make it work by configuring py.test properly.
Add the following line to your app/__init__.py:
from .somemodule import MyClass # or whatever your class is called
Create a file called conftest.py in your main folder. You can leave it empty, but it is used by py.test to find the project root path. Inside it you can also run some py.test initialization, like adding fixtures.
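If you do want to put something in it, here is a small sketch (the fixture is just an illustration; MyClass comes from the app package set up above):
# conftest.py - can stay empty; shown here with one optional shared fixture
import pytest

from app import MyClass  # importable because py.test puts the project root on sys.path

@pytest.fixture
def my_instance():
    # Provide a ready-made MyClass instance to any test that asks for it
    return MyClass()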
In your my_test.py you will now be able to write:
from app import MyClass
Now from your main folder you can finally:
py.test tests/test.py
This has worked for me. I think py.test has its own way of including modules, since you probably would not be able to achieve the same without it. At least if I did not use py.test, I would stick to modifying my PYTHONPATH to point to my application path.
EDIT:
Just to clarify: py.test manipulates sys.path for the testing session to include the root directory. py.test identifies the root path by using the conftest.py file. The root path is then added to the system path and used for testing.
You are indeed able to run:
py.test tests/test.py
and this would also work:
cd ..
py.test rootTest/tests/test.py
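If you prefer the PYTHONPATH route mentioned above instead of relying on conftest.py, that also works with py.test (a sketch, not from the original answer):
cd root
PYTHONPATH=. py.test tests/    # '.' puts root/ on sys.path, so `from app import MyClass` resolves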

Django testing external script

I want to perform testing on a script that interacts with my Django application, namely the database. Normally, when we want to test something in Django, we simply fire up the built-in test suite. With this test suite we even get nice command-line switches, such as overriding the built-in settings.py with a different settings file:
python manage.py test myApp --settings='settings_test'
Here is the problem:
1) I want to test said script which is not part of an app, so I know of no way to invoke the test suite using manage.py. Is this possible? I.e.:
python manage.py test /path/myScript.py --settings='settings_test'
I would suggest using a different test runner.
You can do pip install django-nose and then set the following setting in your test_settings.py:
TEST_RUNNER = 'django_nose.NoseTestSuiteRunner'
Now you can run the tests with
./manage.py test --settings=yourproject.test_settings
and the nose test runner will search all subfolders for folders called tests, and in those folders it will look for files ending with _tests.py (and in those files for classes that derive from TestCase, as usual).
So your project structure should look something like this:
- Project-Root/
- Your-Non-App-Code/
- __init__.py
- non_app_code.py
- tests/
- __init__.py
- non_app_code_tests.py
For more info on how to install django-nose, check their Github repo: https://github.com/django-nose/django-nose
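To make it concrete, the test module itself can be a plain django.test.TestCase that imports the script under test; every name below is a placeholder for whatever your script actually defines:
# tests/non_app_code_tests.py - sketch with placeholder names
from django.test import TestCase

from your_non_app_code import non_app_code  # placeholder importable package


class NonAppCodeTests(TestCase):
    # django.test.TestCase wraps each test in a transaction against the
    # test database, so the script's database work is rolled back afterwards.

    def test_script_runs_against_the_database(self):
        result = non_app_code.main()  # placeholder entry point
        self.assertIsNotNone(result)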
