With this question I worked out how to break my tests over multiple files. So now in each file/module I have a series of TestCase classes.
I can still invoke individual TestCases by explicitly naming them from the command line like:
./manage.py test api.TokenGeneratorTestCase api.ViewsTestCase
Rather than invoking related TestCases individually, now I'm thinking it would be nice to group the related TestCases into Suites, and then invoke the whole Suite from the command line; hopefully without losing the ability to invoke all the Suites in the app at once.
I've seen this python stuff about suites, and also this django stuff about suites, but working out how to do what I want is elusive. I think I'm looking to be able to say things like:
./manage.py test api.NewSeedsImportTestCase api.NewSeedsExportTestCase
./manage.py test api.NewSeedsSuite
./manage.py test api.NewRoomsSuite
./manage.py test api
Has anyone out there arranged their Django TestCases into Suites and can show me how?
One possible approach is to write a custom runner that would extend django.test.simple.DjangoTestSuiteRunner and override the build_suite method. That's where Django generates the suite used by the test command.
It gets an argument test_labels which corresponds to the command line arguments passed to the command. You can extend its functionality by allowing passing extra module paths from where tests should be loaded. Something like this should do the trick (this is just to demonstrate the approach, I haven't tested the code):
from django.test.simple import DjangoTestSuiteRunner
from django.utils import unittest
from django.utils.importlib import import_module


class MyTestSuiteRunner(DjangoTestSuiteRunner):
    def build_suite(self, test_labels, extra_tests=None, *args, **kwargs):
        if test_labels:
            # Labels of the form 'module:full.python.path' name extra test modules.
            extra_test_modules = [label[len('module:'):]
                                  for label in test_labels
                                  if label.startswith('module:')]
            extra_tests = extra_tests or []
            for module_path in extra_test_modules:
                # A better way to load the tests here would probably be to use
                # `django.test.simple.build_suite`, as it does some extra work
                # such as looking for doctests.
                extra_tests += unittest.defaultTestLoader.loadTestsFromModule(
                    import_module(module_path))
            # Remove the 'module:*' labels
            test_labels = [label for label in test_labels
                           if not label.startswith('module:')]
        # Let Django do the rest
        return super(MyTestSuiteRunner, self).build_suite(
            test_labels, extra_tests, *args, **kwargs)
Now you should be able to run the test command exactly as before, except that any label that looks like this module:api.test.extra will result in all the tests/suites from the module being added to the final suite.
Note that 'module:' labels are not app labels, so they must be full Python paths to the module.
You will also need to point your TEST_RUNNER setting to your new test runner.
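For example, assuming the runner above lives in a module like myproject/test_runner.py (an illustrative path), the settings entry would be:
TEST_RUNNER = 'myproject.test_runner.MyTestSuiteRunner'
after which ordinary labels and module labels can be mixed on the command line:
./manage.py test api.ViewsTestCase module:api.test.extra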
In PyCharm, I set up py.test as the default test runner.
I have a simple test case:
import unittest
import time


def my_function():
    time.sleep(0.42)


class MyTestCase(unittest.TestCase):
    def test_something(self):
        my_function()
Now I run the test by right-clicking the file and choosing Profile 'py.test in test_profile.py'.
I see the test running successfully in the console (it says collected 1 items). However, the Statistics/Call Graph view showing the generated pstat file is empty and says Nothing to show.
I would expect to see profiling information for the test_something and my_function. What am I doing wrong?
Edit 1:
If I change the name of the file to something which does not start with test_, remove the unittest.TestCase and insert a __main__ method calling my_function, I can finally run cProfile without py.test and I see results.
However, I am working on a large project with tons of tests. I would like to directly profile these tests instead of writing extra profiling scripts. Is there a way to call the py.test test-discovery module so I can retrieve all tests of the project recursively? (the unittest discovery will not suffice since we yield a lot of parametrized tests in generator functions which are not recognized by unittest). This way I could at least solve the problem with only 1 additional script.
Here is a work-around. Create an additional python script with the following contents (adapt the path to the tests-root accordingly):
import os

import pytest

if __name__ == '__main__':
    source_dir = os.path.dirname(os.path.abspath(__file__))
    test_dir = os.path.abspath(os.path.join(source_dir, "../"))
    # pytest.main takes its arguments as a list; -c points it at the config file.
    pytest.main([test_dir, "-c", "setup.cfg"])
The script filename must not start with test_, otherwise PyCharm will force you to run it with py.test. Then right-click the file and run it with Profile.
This also comes in handy for running it with Coverage.
We are trying to write an automated test for the behavior of the AppConfig.ready function, which we are using as an initialization hook to run code when the Django app has loaded. Our ready method implementation uses a Django setting that we need to override in our test, and naturally we're trying to use the override_settings decorator to achieve this.
There is a snag however - when the test runs, at the point the ready function is executed, the setting override hasn't kicked in (it is still using the original value from settings.py). Is there a way that we can still override the setting in a way where the override will apply when the ready function is called?
Some code to demonstrate this behavior:
settings.py
MY_SETTING = 'original value'
dummy_app/__init__.py
default_app_config = 'dummy_app.apps.DummyAppConfig'
dummy_app/apps.py
from django.apps import AppConfig
from django.conf import settings


class DummyAppConfig(AppConfig):
    name = 'dummy_app'

    def ready(self):
        print('settings.MY_SETTING in app config ready function: {0}'.format(settings.MY_SETTING))
dummy_app/tests.py
from django.conf import settings
from django.test import TestCase
from django.test.utils import override_settings


@override_settings(MY_SETTING='overridden value')
@override_settings(INSTALLED_APPS=('dummy_app',))
class AppConfigTests(TestCase):

    def test_to_see_where_overridden_settings_value_is_available(self):
        print('settings.MY_SETTING in test function: {0}'.format(settings.MY_SETTING))
        self.fail('Trigger test output')
Output
======================================================================
FAIL: test_to_see_where_overridden_settings_value_is_available (dummy_app.tests.AppConfigTests)
----------------------------------------------------------------------
Traceback (most recent call last):
File "/Users/labminds/venv/labos/src/dom-base/dummy_app/tests.py", line 12, in test_to_see_where_overridden_settings_value_is_available
self.fail('Trigger test output')
AssertionError: Trigger test output
-------------------- >> begin captured stdout << ---------------------
settings.MY_SETTING in app config ready function: original value
settings.MY_SETTING in test function: overridden value
--------------------- >> end captured stdout << ----------------------
It is important to note that we only want to override this setting for the tests that are asserting the behavior of ready, which is why we aren't considering changing the setting in settings.py, or using a separate version of this file used just for running our automated tests.
One option already considered - we could simply initialize the AppConfig class in our test, call ready and test the behavior that way (at which point the setting would be overridden by the decorator). However, we would prefer to run this as an integration test, and rely on the natural behavior of Django to call the function for us - this is key functionality for us and we want to make sure the test fails if Django's initialization behavior changes.
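For reference, that unit-style option would look roughly like this (a sketch only; it re-runs ready() by hand on the already-registered config):

from django.apps import apps
from django.test import TestCase
from django.test.utils import override_settings


class ReadyUnitTests(TestCase):

    @override_settings(MY_SETTING='overridden value')
    def test_ready_uses_overridden_setting(self):
        config = apps.get_app_config('dummy_app')
        # Call ready() again manually, now that the override is active.
        config.ready()
        # ...assert on whatever side effects ready() should have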
Some ideas (with different levels of effort and automated assurance):
Don't integration test; rely on reading the release notes/commits before upgrading the Django version and/or on a single round of manual testing
Assuming a test - stage deploy - prod deploy pipeline, unit test the special cases in isolation and add an integration check as a deployment smoke test (e.g. by exposing this settings value through a management command or an internal-only URL endpoint), verifying only that staging has the value it should have for staging. Slightly delayed feedback compared to unit tests
test it through a test framework outside of Django's own - i.e. write plain unittests (or py.tests) and bootstrap Django inside each test (though you need a way to import and manipulate the settings)
use a combination of overriding settings via the OS's environment (we've used envdir à la 12-factor app) and a management command that does the test(s) - e.g.: MY_SETTING='overridden value' INSTALLED_APPS='dummy_app' EXPECTED_OUTCOME='whatever' python manage.py ensure_app_config_initialized_as_expected
looking at Django's own app init tests, apps.clear_cache() and
with override_settings(INSTALLED_APPS=['test_app']):
    config = apps.get_app_config('test_app')
    assert config....
could work, though I've never tried it
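Expanding that last idea into something concrete, an untested sketch using the dummy_app from the question (populating a fresh Apps registry re-runs each AppConfig.ready()):

from django.apps.registry import Apps
from django.test import TestCase
from django.test.utils import override_settings


class AppConfigIntegrationTests(TestCase):

    def test_ready_runs_under_overridden_settings(self):
        with override_settings(MY_SETTING='overridden value'):
            # Populating a separate registry triggers DummyAppConfig.ready()
            # again, this time with the override in place.
            test_apps = Apps(['dummy_app'])
            config = test_apps.get_app_config('dummy_app')
            self.assertEqual(config.name, 'dummy_app')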
You appear to have hit a documented limitation of ready in Django (scroll down to the warning). You can see the discussion in the ticket that prompted the edit. The ticket specifically refers to database interactions, but the same limitation would apply to any effort to test the ready function -- i.e. that production (not test) settings are used during ready.
Based on the ticket, "don't use ready" sounds like the official answer, but I don't find that attitude useful unless they direct me to a functionally equivalent place to run this kind of initialization code. ready seems to be the most official place to run once on startup.
Rather than (re)calling ready, I suggest having ready call a second method. Import and use that second method in your test cases. Not only will your tests be cleaner, but it isolates the test case from any other ready logic like attaching signals. There's also a context manager that can be used to simplify the test:
@override_settings(SOME_SETTING='some-data')
def test(self):
    ...
or
def test(self):
    with override_settings(SOME_SETTING='some-data'):
        ...
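A sketch of the delegation pattern itself (the initialize name is just a placeholder):

# apps.py
from django.apps import AppConfig


class DummyAppConfig(AppConfig):
    name = 'dummy_app'

    def ready(self):
        # Keep ready() thin: wire up signals, then delegate.
        self.initialize()

    def initialize(self):
        # The settings-dependent logic lives here so tests can call it
        # directly while override_settings is active.
        ...

# tests.py
from django.apps import apps
from django.test import TestCase
from django.test.utils import override_settings


class InitializeTests(TestCase):

    @override_settings(MY_SETTING='overridden value')
    def test_initialize(self):
        apps.get_app_config('dummy_app').initialize()
        # ...assert on whatever side effects initialize() should have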
P.S. We work around several possible issues in ready by checking the migration status of the system:
def ready(self):
    # imports have to be delayed for ready
    from django.db.migrations.executor import MigrationExecutor
    from django.conf import settings
    from django.db import connections, DEFAULT_DB_ALIAS

    executor = MigrationExecutor(connections[DEFAULT_DB_ALIAS])
    plan = executor.migration_plan(executor.loader.graph.leaf_nodes())
    if plan:
        # not healthy (possibly set up for a migration)
        return
    ...
Perhaps something similar could be done to prevent execution during tests. Somehow the system knows to (eventually) switch to test settings. I assume you could skip execution under the same conditions.
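For example, one crude heuristic (an assumption on my part, not an official Django mechanism) is to inspect the command line inside ready:

import sys


def ready(self):
    # Assumed heuristic: skip startup work when invoked via `manage.py test`.
    if len(sys.argv) > 1 and sys.argv[1] == 'test':
        return
    ...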
Say I've got a test suite like this:
class SafeTests(unittest.TestCase):
    # snip 20 test functions

class BombTests(unittest.TestCase):
    # snip 10 different test cases
I am currently doing the following:
suite = unittest.TestSuite()
loader = unittest.TestLoader()
safetests = loader.loadTestsFromTestCase(SafeTests)
suite.addTests(safetests)
if TARGET != 'prod':
    unsafetests = loader.loadTestsFromTestCase(BombTests)
    suite.addTests(unsafetests)
unittest.TextTestRunner().run(suite)
I have one major problem, and one interesting point:
I would like to be using nose or py.test (doesn't really matter which).
I have a large number of different applications that are exposing these test suites via entry points (a sketch of that wiring is included after this list).
I would like to be able to aggregate these custom tests across all installed
applications so I can't just use a clever naming convention. I don't
particularly care about these being exposed through entry points, but I
do care about being able to run tests across applications in
site-packages. (Without just importing... every module.)
I do not care about maintaining the current dependency on
unittest.TestCase, trashing that dependency is practically a goal.
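For what it's worth, the entry-point wiring mentioned above looks roughly like this (the group name 'myproject.test_suites' and the load_suite callable are made-up examples):

# setup.py of one application (hypothetical names)
from setuptools import setup

setup(
    name='someapp',
    packages=['someapp'],
    entry_points={
        'myproject.test_suites': [
            'someapp = someapp.tests:load_suite',
        ],
    },
)

# aggregator, e.g. in a runtests script
import unittest

import pkg_resources


def collect_entry_point_suites():
    suite = unittest.TestSuite()
    for ep in pkg_resources.iter_entry_points('myproject.test_suites'):
        # Each entry point resolves to a callable that returns a TestSuite.
        suite.addTests(ep.load()())
    return suite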
EDIT: This is to confirm that @Oleksiy's point about passing args to
nose.run does in fact work, with some caveats.
Things that do not work:
passing all the files that one wants to execute (which, weird)
passing all the modules that one wants to execute. (This either executes
nothing, the wrong thing, or too many things. Interesting case of 0, 1 or
many, perhaps?)
Passing in the modules before the directories: the directories have to come
first, or else you will get duplicate tests.
This fragility is absurd; if you've got ideas for improving it I welcome
comments. I set up a github repo with my experiments trying to get this to work.
All that aside, the following works, including picking up multiple projects
installed into site-packages:
#!/usr/bin/env python
import importlib
import os
import sys

import nose


def runtests():
    modnames = []
    dirs = set()
    for modname in sys.argv[1:]:
        modnames.append(modname)
        mod = importlib.import_module(modname)
        fname = mod.__file__
        dirs.add(os.path.dirname(fname))
    # Directories must come before the module names, or nose collects duplicates.
    modnames = list(dirs) + modnames
    nose.run(argv=modnames)


if __name__ == '__main__':
    runtests()
which, if saved into a runtests.py file, does the right thing when run as:
runtests.py project.tests otherproject.tests
For nose you can keep both sets of tests in place and select which ones to run using the attrib plugin. I would keep both test cases and assign attributes to them:
from nose.plugins.attrib import attr

@attr("safe")
class SafeTests(unittest.TestCase):
    # snip 20 test functions

class BombTests(unittest.TestCase):
    # snip 10 different test cases
For your production code I would just call nose with nosetests -a safe, or set NOSE_ATTR=safe in your production test environment, or call the run method on the nose object to run it natively in Python with -a command line options based on your TARGET:
import sys

import nose

if __name__ == '__main__':
    module_name = sys.modules[__name__].__file__

    argv = [sys.argv[0], module_name]
    if TARGET == 'prod':
        # Only run the tests tagged as safe in production.
        argv.extend(['-a', 'safe'])

    result = nose.run(argv=argv)
Finally, if for some reason your tests are not discovered you can explicitly mark them as tests with the @istest decorator (from nose.tools import istest).
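For example, a minimal sketch (the function names are made up):

from nose.tools import istest, nottest


@istest
def check_bomb_is_disarmed():
    # Collected by nose even though the name doesn't match its default pattern.
    assert True


@nottest
def test_fixture_helper():
    # Ignored by nose even though the name looks like a test.
    pass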
This turned out to be a mess: nose pretty much exclusively uses the
TestLoader.loadTestsFromNames function (it's the only function tested in
unit_tests/test_loader), so since I wanted to actually load things from an
arbitrary Python object I needed to write my own logic to figure out which
load function to use. Then, in addition, to get things to work like the
nosetests script I needed to import a large number of things. I'm not at all
certain that this is the best way to do things, not even kind of. But this is
a stripped down example (no error checking, less verbosity) that is working
for me:
import logging
import sys
import types
import unittest

from nose.config import Config, all_config_files
from nose.core import run
from nose.loader import TestLoader
from nose.suite import ContextSuite
from nose.plugins.manager import PluginManager

from myapp import find_test_objects

log = logging.getLogger(__name__)


def load_tests(config, obj):
    """Load tests from an object.

    Requires an already configured nose.config.Config object.

    Returns a nose.suite.ContextSuite so that nose can actually give
    formatted output.
    """
    loader = TestLoader()
    kinds = [
        (unittest.TestCase, loader.loadTestsFromTestCase),
        (types.ModuleType, loader.loadTestsFromModule),
        (object, loader.loadTestsFromTestClass),
    ]
    tests = None
    for kind, load in kinds:
        # issubclass only applies when obj is itself a class.
        if isinstance(obj, kind) or (isinstance(obj, type) and issubclass(obj, kind)):
            log.debug("found tests for %s as %s", obj, kind)
            tests = load(obj)
            break
    return ContextSuite(tests=tests, context=obj, config=config)


def main():
    "Actually configure the nose config object and run the tests"
    config = Config(files=all_config_files(), plugins=PluginManager())
    config.configure(argv=sys.argv)
    tests = []
    for group in find_test_objects():
        tests.append(load_tests(config, group))
    run(suite=tests)
If your question is, "How do I get pytest to 'see' a test?", you'll need to prepend 'test_' to each test file and each test case (i.e. function). Then, just pass the directories you want to search on the pytest command line and it will recursively search for files that match 'test_XXX.py', collect the 'test_XXX' functions from them and run them.
As for the docs, you can try starting here.
If you don't like the default pytest test collection method, you can customize it using the directions here.
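A minimal sketch of what that discovery expects (file and function names here are just illustrative):

# tests/test_math.py
def test_addition():
    assert 1 + 1 == 2

Running py.test tests/ would then collect test_addition and execute it.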
If you are willing to change your code to generate a py.test "suite" (my definition) instead of a unittest suite (tech term), you may do so easily. Create a file called conftest.py like the following stub
import pytest


def pytest_collect_file(parent, path):
    if path.basename == "foo":
        return MyFile(path, parent)


class MyFile(pytest.File):
    def collect(self):
        myname = "foo"
        yield MyItem(myname, self)
        yield MyItem(myname, self)


class MyItem(pytest.Item):
    SUCCEEDED = False

    def __init__(self, name, parent):
        super(MyItem, self).__init__(name, parent)

    def runtest(self):
        if not MyItem.SUCCEEDED:
            MyItem.SUCCEEDED = True
            print("good job, buddy")
            return
        else:
            print("you sucker, buddy")
            raise Exception()

    def repr_failure(self, excinfo):
        return ""
Where you will be generating/adding your code into your MyFile and MyItem classes (as opposed to unittest.TestSuite and unittest.TestCase). I kept the naming convention of the MyFile class that way because it is intended to represent something that you read from a file, but of course you can basically decouple it from the file contents (as I've done here). See here for an official example of that. The only limit is that, in the way I've written this, a file named foo must exist in your tree, but you can decouple that too, e.g. by keying off conftest.py or whatever other file name exists in your tree -- and only once, otherwise the items will be generated for every file that matches (and, if you don't do the if path.basename test, for every file that exists in your tree!).
You can run this from command line with
py.test -whatever -options
or programmatically from any code you wish:
import pytest
pytest.main("-whatever -options")
The nice thing with py.test is that you unlock many very powerful plugins, such as the HTML report plugin.
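For example, assuming the pytest-html plugin is installed, a report can be generated with:
py.test --html=report.html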
In nosetests, I know that you can specify which tests you want to run via a nosetests config file as such:
[nosetests]
tests=testIWT_AVW.py:testIWT_AVW.tst_bynd1,testIWT_AVW.py:testIWT_AVW.tst_bynd3
However, the above just looks messy and becomes harder to maintain when a lot of tests are added, especially without being able to use linebreaks. I found it a lot more convenient to be able to specify which tests I want to run using unittests TestSuite feature. e.g.
def custom_suite():
    suite = unittest.TestSuite()
    suite.addTest(testIWT_AVW('tst_bynd1'))
    suite.addTest(testIWT_AVW('tst_bynd3'))
    return suite

if __name__ == "__main__":
    runner = unittest.TextTestRunner()
    runner.run(custom_suite())
Question: How do I specify which tests should be run by nosetests within my .py file? Thanks.
P.S. If there is a way to specify tests via a nosetest config file that doesn't force all tests to be written on one line I would be open to it as well, as a second alternative
I'm not entirely sure whether you want to run the tests programmatically or from the command line. Either way this should cover both:
import itertools

from nose.loader import TestLoader
from nose import run
from nose.suite import LazySuite

paths = ("/path/to/my/project/module_a",
         "/path/to/my/project/module_b",
         "/path/to/my/project/module_c")


def run_my_tests():
    all_tests = ()
    for path in paths:
        all_tests = itertools.chain(all_tests, TestLoader().loadTestsFromDir(path))
    suite = LazySuite(all_tests)
    run(suite=suite)


if __name__ == '__main__':
    run_my_tests()
Note that the nose.loader.TestLoader object has a number of different methods available for loading tests.
You can call the run_my_tests function from other code, or you can run this file from the command line with a Python interpreter rather than through nose. If you have other nose configuration, you may need to pass that in programmatically as well.
If I'm correctly understanding your question, you have several options here:
you can mark your tests with special nose decorators: istest and nottest. See docs
you can mark tests with tags
you can join test cases in test suites. I haven't used it myself, but it seems that you have to override nose's default test discovery to respect your test suites (see docs)
Hope that helps.
Let's say I have the following test cases in different files:
TestOne.py {tags: One, Two}
TestTwo.py {tags: Two}
TestThree.py {tags: Three}
Each of which inherits from unittest.TestCase. Is there any way in Python to embed metadata within these files, so that I can have a main.py script search for those tags and execute only those test cases?
For example: if I want to execute test cases with {tags: Two}, then only TestOne.py and TestTwo.py should be executed.
The py.test testing framework has support for meta data, via what they call markers.
For py.test, test cases are functions with names starting with "test", in modules with names starting with "test". The tests themselves are simple assert statements. py.test can also run tests for the unittest library, and IIRC nose tests.
The meta data consists of dynamically generated decorators for the test functions. The decorators have the form @pytest.mark.my_meta_name. You can choose anything for my_meta_name. There are a few predefined markers that you can see with py.test --markers.
Here is an adapted snippet from their documentation:
# content of test_server.py
import pytest


@pytest.mark.webtest
def test_send_http():
    pass  # perform some webtest test for your app


def test_always_succeeds():
    assert 2 == 3 - 1


def test_will_always_fail():
    assert 4 == 5
You select marked tests with the -m command line option of the test runner. To selectively run test_send_http() you enter this into a shell:
py.test -v -m webtest
Of course it's easier to define tags in the main module, but if it's important for you to keep them with the test files, it could be a good solution to define them in the test files like this:
In TestOne.py:
test_tags = ['One', 'Two']
...
Then you can read all tags in the initialize function of your main module in this way:
from importlib import import_module

test_modules = ['TestOne', 'TestTwo', 'TestThree']
test_tags_dict = {}


def initialize():
    for module_name in test_modules:
        module = import_module(module_name)
        if hasattr(module, 'test_tags'):
            for tag in module.test_tags:
                if tag not in test_tags_dict:
                    test_tags_dict[tag] = []
                test_tags_dict[tag].append(module)
So you can implement a run_test_with_tag function to run all tests for a specific tag:
def run_test_with_tag(tag):
    for module in test_tags_dict.get(tag, []):
        # Run module tests here ...
        ...
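A minimal sketch of filling in that placeholder with the standard unittest machinery (assuming, as in the question, that each tagged module contains unittest.TestCase classes):

import unittest


def run_test_with_tag(tag):
    suite = unittest.TestSuite()
    for module in test_tags_dict.get(tag, []):
        # Collect every TestCase defined in the tagged module.
        suite.addTests(unittest.defaultTestLoader.loadTestsFromModule(module))
    unittest.TextTestRunner().run(suite)

For example, run_test_with_tag('Two') would then execute the test cases from TestOne.py and TestTwo.py.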