I've got a functional test suite using pytest-dependency to skip tests when other tests they depend on fail. That way, for example, if the login page is broken, I get one test failure saying "The login page is broken" instead of a slew of test failures saying "I couldn't log into user X", "I couldn't log into user Y", etc.
This works great for running the entire suite, but I'm trying to shorten my edit-compile-test loop, and right now the slowest point is testing my tests. If the test I'm working on depends on a bunch of other tests, they all have to succeed for the test I'm trying to test not to be skipped. So I either have to run the entire dependency tree, or comment out my @pytest.mark.dependency(...) decorators (which is an additional thing that I, as a human, have to remember to do). Technically there's nothing these depended-on tests do that enables their dependers to run; the only reason I want these dependencies at all is to make it easier for me to triage test failures.
Is there a command-line argument that would tell pytest-dependency to not skip things on account of dependents, or to tell pytest to not use the pytest-dependency plugin on this run (and this run only)?
The -p option allows disabling a particular plugin:
pytest -p no:dependency
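For example, given a file like this (the test names are made up for illustration), running pytest -p no:dependency makes the dependency markers inert, so test_user_login runs even when test_login_page fails instead of being skipped:

# test_auth.py -- illustrative only, not from the original suite
import pytest

@pytest.mark.dependency()
def test_login_page():
    assert False  # pretend the login page is broken

@pytest.mark.dependency(depends=["test_login_page"])
def test_user_login():
    # With the plugin enabled this would be skipped because test_login_page
    # failed; with `pytest -p no:dependency` the marker is ignored and this
    # test runs on its own.
    assert True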
Hello, I know that it's possible to run Django tests in parallel via the --parallel flag, e.g. python manage.py test --parallel 10. It really speeds up testing in the project I'm working on, which is really nice. But developers in the company have different hardware setups, so ideally I would like to put the parallel argument in ./app_name/settings.py so that every developer would use at least 4 processes, or the number of cores reported by the multiprocessing library.
I know that I can make another script, say run_test.py, in which I make use of --parallel, but I would love to make parallel testing 'invisible'.
To sum up, my question is: can I put the number of parallel test runs in the settings of a Django app?
And if the answer is yes, there is a second question: would a command-line argument (manage.py test --parallel X) override the setting from ./app_name/settings.py?
Any help is much appreciated.
There is no setting for this, but you can override the test command to set a different default value. In one of your installed apps, create a management.commands submodule (i.e. app_name/management/commands/, with an __init__.py in each directory), and add a test.py file in it. In there you need to subclass the existing test command:
from django.conf import settings
from django.core.management.commands.test import Command as TestCommand


class Command(TestCommand):
    def add_arguments(self, parser):
        super().add_arguments(parser)
        if hasattr(settings, 'TEST_PARALLEL_PROCESSES'):
            parser.set_defaults(parallel=settings.TEST_PARALLEL_PROCESSES)
This adds a new default to the --parallel flag. Running python manage.py test --parallel=1 will still override the default.
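In settings.py you could then set the default, for example based on the machine's core count (TEST_PARALLEL_PROCESSES is simply the setting name the command above looks for):

# settings.py
import multiprocessing

# Default picked up by the overridden test command above; an explicit
# `--parallel N` on the command line still wins.
TEST_PARALLEL_PROCESSES = max(4, multiprocessing.cpu_count())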
I'm writing a pytest plugin that needs to warn the user about anomalies encountered during the collection phase, but I don't find any way to consistently send output to the console from inside my pytest_generate_tests function.
Output from print and from the logging module only appears in the console when adding the -s option. All logging-related documentation I found refers to logging inside tests, not from within a plugin.
In the end I used the pytest-warning infrastructure via the undocumented _warn() method of the pytest config object, which is passed to (or otherwise accessible from) the various hooks. For example:
def pytest_generate_tests(metafunc):
    [...]
    if warning_condition:
        metafunc.config._warn("Warning condition encountered.")
    [...]
This way you get additional pytest-warnings in the one-line summary if any were reported, and you can see the warning details by adding the -r w option to the pytest command line.
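Note that config._warn() is undocumented and has been removed in newer pytest releases in favour of the standard warnings system; there, a plain warnings.warn() during collection is captured by pytest and shown in the warnings summary. A minimal sketch (warning_condition is a placeholder, as above):

import warnings

def pytest_generate_tests(metafunc):
    # `warning_condition` stands in for whatever anomaly check the plugin does.
    if warning_condition:
        # Captured by pytest and reported in the warnings summary at the end
        # of the run.
        warnings.warn(UserWarning("Warning condition encountered."))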
I'm looking for a way to run a full Celery setup during Django tests, as asked in this other SO question.
After thinking about it, I think I could settle for running a unittest (it's more of an integration test) in which I run the test script against the main Django (development) database. Is there a way to write unittests, run them with Nose and do so against the main database? I imagine it would be a matter of telling Nose (or whatever other framework) about the django settings.
I've looked at django-nose but wasn't able to find a way to tell it to use the main DB and not a test one.
I don't know about nose, but here is how to run against an existing db with Django (1.6) unit tests.
from django.test.runner import DiscoverRunner
from django.db import transaction


class ExistingDBTestRunner(DiscoverRunner):
    def run_tests(self, test_labels, extra_tests=None, **kwargs):
        self.setup_test_environment()
        suite = self.build_suite(test_labels, extra_tests)
        # old_config = self.setup_databases()
        result = self.run_suite(suite)
        # self.teardown_databases(old_config)
        self.teardown_test_environment()
        return self.suite_result(suite, result)
Then in settings.py
import sys

if 'test' in sys.argv:
    TEST_RUNNER = '<?>.ExistingDBTestRunner'
    # alternative db settings?
It will be a little different in older versions of Django. Also, you may need to override _fixture_setup and _fixture_teardown in your test cases to just pass (do nothing).
The above code will connect to a preexisting database but since each test is wrapped in a transaction the changes won't be available to other connections (like the celery worker). The easiest way to disable transactions is to subclass from unittest.TestCase instead of django.test.TestCase.
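A minimal sketch of such a test case, using django.contrib.auth's User model purely as a convenient example:

import unittest

from django.contrib.auth.models import User


class CeleryIntegrationTest(unittest.TestCase):
    """Plain unittest.TestCase, not django.test.TestCase: no per-test
    transaction is opened, so rows written here are committed to the
    existing database and visible to other connections (e.g. the celery
    worker)."""

    def test_created_row_is_visible_to_other_connections(self):
        user = User.objects.create(username='integration-probe')
        try:
            # Nothing wrapped this test in a transaction, so the row is
            # already committed and another connection can see it.
            self.assertTrue(User.objects.filter(pk=user.pk).exists())
        finally:
            user.delete()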
Have you had a look at django-nose? It seems like it would be the right tool for the job.
What is the latest way to write Python tests? What modules/frameworks to use?
And another question: are doctest tests still of any value? Or should all the tests be written in a more modern testing framework?
Thanks, Boda Cydo.
The usual way is to use the builtin unittest module for creating unit tests and bundling them into test suites that can be run independently. unittest is very similar to (and inspired by) JUnit and thus very easy to use.
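A minimal sketch of the usual pattern (the module and function names are made up for illustration):

# test_math_utils.py -- illustrative only
import unittest


def add(a, b):
    return a + b


class AddTest(unittest.TestCase):
    def test_adds_two_numbers(self):
        self.assertEqual(add(2, 3), 5)


if __name__ == '__main__':
    unittest.main()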
If you're interested in the very latest changes, take a look at the new PyCon talk by Michael Foord:
PyCon 2010: New and Improved: Coming changes to unittest
Using the built-in unittest module is as relevant and easy as ever. The other unit testing options, py.test, nose, and twisted.trial, are mostly compatible with unittest.
Doctests are of the same value they always were—they are great for testing your documentation, not your code. If you are going to put code examples in your docstrings, doctest can assure you keep them correct and up to date. There's nothing worse than trying to reproduce an example and failing, only to later realize it was actually the documentation's fault.
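For example, a docstring example like the following is checked by doctest, so the documentation can't silently drift out of date (the function is made up for illustration):

def slugify(title):
    """Turn a title into a URL slug.

    >>> slugify("Hello, World")
    'hello-world'
    """
    return "-".join(title.lower().replace(",", "").split())


if __name__ == "__main__":
    import doctest
    doctest.testmod()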
I don't know much about doctests, but at my university, nose testing is taught and encouraged.
Nose can be installed by following this procedure (I'm assuming you're using a PC - Windows OS):
install setuptools
Run DOS Command Prompt (Start -> All Programs -> Accessories -> Command Prompt)
For this step to work, you must be connected to the internet. In DOS, type: C:\Python25\Scripts\easy_install nose
If you are on a different OS, check this site
EDIT:
It's been two years since I originally wrote this post. Since then, I've learned of a programming principle called Design by Contract. This lets a programmer define preconditions, postconditions and invariants (called contracts) for all functions in their code. The effect is that an error is raised if any of these contracts is violated.
The DbC framework that I would recommend for Python is called PyContract. I have successfully used it in my evolutionary programming framework.
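To illustrate the idea, here is a hand-rolled sketch of contracts using plain assertions (this is not PyContract's actual syntax, just the concept):

def sqrt_newton(x, tolerance=1e-9):
    # Precondition: the input must be a non-negative number.
    assert x >= 0, "precondition violated: x must be non-negative"

    guess = x or 1.0
    while abs(guess * guess - x) > tolerance:
        guess = (guess + x / guess) / 2.0

    # Postcondition: the result squared must be within tolerance of x.
    assert abs(guess * guess - x) <= tolerance, "postcondition violated"
    return guess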
In my current project I'm using unittest, minimock, nose. In the past I've made heavy use of doctests, but in large projects some tests can get kinda unwieldy, so I tend to reserve usage of doctests for simpler functions.
If you are using setuptools or distribute (you should be switching to distribute), you can set up nose as the default test collector so that you can run your tests with "python setup.py test"
setup(name='foo',
      ...
      test_suite='nose.collector',
      ...
      )
Now running "python setup.py test" will invoke nose, which will crawl your project for things that look like tests and run them, accumulating the results. If you also have doctests in your project, you can run nosetests with the --with-doctest option to enable the doctest plugin.
nose also has integration with coverage:
nosetests --with-coverage
You can also use the --cover-html --cover-html-dir options to generate an HTML coverage report for each module, with each line of code that is not under test highlighted. I wouldn't get too obsessed with getting coverage to report 100% test coverage for all modules. Some code is better left for integration tests, which I'll cover at the end.
I have become a huge fan of minimock, as it makes testing code with a lot of external dependencies really easy. While it works really well when paired with doctest, it can be used with any testing framework via the minimock.TraceTracker class. I would encourage you to avoid using it to test all of your code though, since you should still try to write your code so that each unit can be tested in isolation without mocking. Sometimes that's not possible, though.
Here is an (untested) example of such a test using minimock and unittest:
# tests/test_foo.py
import minimock
import unittest

import foo


class FooTest(unittest.TestCase):

    def setUp(self):
        # Track all calls into our mock objects. If we don't use a
        # TraceTracker then all output will go to stdout, but we want to
        # capture it.
        self.tracker = minimock.TraceTracker()

    def tearDown(self):
        # Restore all objects in global module state that minimock had
        # replaced.
        minimock.restore()

    def test_bar(self):
        # foo.bar invokes urllib2.urlopen, and then calls read() on the
        # resulting file object, so we'll use minimock to create a mocked
        # urllib2.
        urlopen_result = minimock.Mock('urlobj', tracker=self.tracker)
        urlopen_result.read = minimock.Mock(
            'urlobj.read', tracker=self.tracker, returns='OMG')
        foo.urllib2.urlopen = minimock.Mock(
            'urllib2.urlopen', tracker=self.tracker, returns=urlopen_result)

        # Now when we call foo.bar(URL) and it invokes
        # urllib2.urlopen(URL).read(), it will not actually send a request
        # to URL, but will instead give us back the dummy response body
        # 'OMG', which it then returns.
        self.assertEqual(foo.bar('http://example.com/foo'), 'OMG')

        # Now we can get trace info from minimock to verify that our mocked
        # urllib2 was used as intended. self.tracker has traced our calls to
        # urllib2.urlopen() and the read() on its result.
        minimock.assert_same_trace(self.tracker, """\
Called urllib2.urlopen('http://example.com/foo')
Called urlobj.read()""")
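For completeness, the foo module that this test assumes would look something like the following (a sketch inferred from the comments in the test; it's Python 2, hence urllib2):

# foo.py -- hypothetical module under test; bar() fetches a URL and returns
# the response body.
import urllib2


def bar(url):
    return urllib2.urlopen(url).read()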
Unit tests shouldn't be the only kinds of tests you write, though. They are certainly useful and IMO extremely important if you plan on maintaining this code for any extended period of time. They make refactoring easier and help catch regressions, but they don't really test the interaction between the various components (and if you're writing them right, they shouldn't).
When I start getting to the point where I have a mostly finished product with decent test coverage that I intend to release, I like to write at least one integration test that runs the complete program in an isolated environment.
I've had a lot of success with this on my current project. I had about 80% unit test coverage, and the rest of the code was stuff like argument parsing, command dispatch and top level application state, which is difficult to cover in unit tests. This program has a lot of external dependencies, hitting about a dozen different web services and interacting with about 6,000 machines in production, so running this in isolation proved kinda difficult.
I ended up writing an integration test which spawns a WSGI server written with eventlet and webob that simulates all of the services my program interacts with in production. Then the integration test monkey patches our web service client library to intercept all HTTP requests and send them to the WSGI application. After doing that, it loads a state file that contains a serialized snapshot of the state of the cluster, and invokes the application by calling its main() function. Now all of the external services my program interacts with are simulated, so that I can run my program as it would be run in production in a repeatable manner.
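The shape of that setup, in a heavily stripped-down and hypothetical form (the real version used eventlet and webob and the project's own client library; here everything is a standard-library stand-in), looks roughly like this:

# integration_sketch.py -- self-contained sketch of the pattern described above
import unittest


def fake_services(environ, start_response):
    """WSGI app that simulates the external web services."""
    body = b'{"status": "ok"}'
    start_response('200 OK', [('Content-Type', 'application/json')])
    return [body]


class Client(object):
    """Stand-in for the real web service client library."""
    def get(self, url):
        raise RuntimeError("would hit the network in production")


client = Client()


def application_main():
    """Stand-in for the program's main(); makes one call through the client."""
    return client.get('/cluster/status')


class IntegrationTest(unittest.TestCase):
    def setUp(self):
        # Monkey patch the client so every request is routed straight into
        # the in-process WSGI app instead of going over the network.
        def fake_get(path):
            captured = {}

            def start_response(status, headers):
                captured['status'] = status

            environ = {'PATH_INFO': path, 'REQUEST_METHOD': 'GET'}
            body = b''.join(fake_services(environ, start_response))
            return captured['status'], body

        self._original_get = client.get
        client.get = fake_get

    def tearDown(self):
        client.get = self._original_get

    def test_main_runs_against_fake_services(self):
        status, body = application_main()
        self.assertEqual(status, '200 OK')
        self.assertIn(b'ok', body)


if __name__ == '__main__':
    unittest.main()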
The important thing to remember about doctests is that the tests are based on string comparisons, and the way that numbers are rendered as strings will vary on different platforms and even in different python interpreters.
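Floating-point output is a common victim. One way to cope, sketched below, is doctest's ELLIPSIS directive, which lets the expected output stop before the platform-dependent digits:

def third():
    """Return one third.

    >>> third()  # doctest: +ELLIPSIS
    0.333333333333...
    """
    return 1.0 / 3.0


if __name__ == "__main__":
    import doctest
    doctest.testmod()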
Most of my work deals with computations, so I use doctests only to test my examples and my version string. I put a few in the __init__.py since that will show up as the front page of my epydoc-generated API documentation.
I use nose for testing, although I'm very interested in checking out the latest changes to py.test.