Python nose: enable and disable multi-processing at the class/package level

So I have a directory with sub-directories of acceptance tests.
Most of my tests have no dependencies on each other, except for one suite.
Is there a way I can tell nose to execute the tests sequentially when it reaches this class, and then to re-enable multi-processing once it hits the next class?
This has nothing to do with fixtures; the tests in this suite simply can't run concurrently, because they exercise APIs that affect other tests running at the same time.
Thanks in advance.

I would use the nose attrib plugin to explicitly mark the tests that require multiprocessing to be disabled, and then run two nose commands: one with multiprocessing enabled, excluding the sensitive tests, and one with multiprocessing disabled, including only the sensitive tests. You would have to rely on your CI framework to combine the two sets of test results (see the note after the example output below). Something like:
from unittest import TestCase

from nose.plugins.attrib import attr


@attr('sequential')
class MySequentialTestCase(TestCase):
    def test_in_seq_1(self):
        pass

    def test_in_seq_2(self):
        pass


class MyMultiprocessingTestCase(TestCase):
    def test_in_parallel_1(self):
        pass

    def test_in_parallel_2(self):
        pass
And run it like:
> nosetests -a '!sequential' --processes=10
test_in_parallel_1 (ms_test.MyMultiprocessingTestCase) ... ok
test_in_parallel_2 (ms_test.MyMultiprocessingTestCase) ... ok
----------------------------------------------------------------------
Ran 2 tests in 0.071s
OK
> nosetests -a sequential
test_in_seq_1 (ms_test.MySequentialTestCase) ... ok
test_in_seq_2 (ms_test.MySequentialTestCase) ... ok
----------------------------------------------------------------------
Ran 2 tests in 0.000s
OK
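If your CI server needs to merge the two runs into a single report, one option (a sketch, not from the original answer; the XML file names are just placeholders) is to have each run write an xunit-style report via nose's built-in xunit plugin and let the CI job collect both files:
> nosetests -a '!sequential' --processes=10 --with-xunit --xunit-file=parallel.xml
> nosetests -a sequential --with-xunit --xunit-file=sequential.xml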

Related

pytest - is it possible to run a script/command between all test scripts?

OK, this is definitely my fault but I need to clean it up. One of my test scripts fairly consistently (but not always) updates my database in a way that causes problems for the others (basically, it takes away the test user's access rights to the test database).
I could easily find out which script is causing this by running a simple query, either after each individual test, or after each test script completes.
i.e. pytest, or nose2, would do the following:
run test_aaa.py
run check_db_access.py #ideal if I could induce a crash/abort
run test_bbb.py
run check_db_access.py
...
You get the idea. Is there a built-in option or plugin that I can use? The test suite currently works on both pytest and nose2 so either is an option.
Edit: this is not a test db, or a fixture-loaded db. This is a snapshot of any of a number of extremely complex live databases and the test suite, as per its design, is supposed to introspect the database(s) and figure out how to run its tests (almost all access is read-only). This works fine and has many beneficial aspects at least in my particular context, but it also means there is no tearDown or fixture-load for me to work with.
You can use an autouse fixture that wraps every test:
import pytest


@pytest.fixture(autouse=True)
def wrapper(request):
    print('\nbefore: {}'.format(request.node.name))
    yield
    print('\nafter: {}'.format(request.node.name))


def test_a():
    assert True


def test_b():
    assert True
Example output:
$ pytest -v -s test_foo.py
test_foo.py::test_a
before: test_a
PASSED
after: test_a
test_foo.py::test_b
before: test_b
PASSED
after: test_b
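To get the crash/abort asked about in the question, a hedged variation of the same wrapper: assuming a hypothetical check_db_access() helper that runs your permissions query and returns False once access is broken, run the check after each test and abort the whole session the moment it fails.
import pytest


# hypothetical helper -- replace with the real query against your database
def check_db_access():
    return True


@pytest.fixture(autouse=True)
def wrapper(request):
    yield
    # runs after every test; stop the session as soon as access has been lost
    if not check_db_access():
        pytest.exit('DB access broken after {}'.format(request.node.name))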

Perform sanity check before running tests with pytest

I would like to perform some sanity check when I run tests using pytest. Typically, I want to check that some executables are accessible to the tests, and that the options provided by the user on the command-line are valid.
The closest thing I found was to use a fixture such as:
@pytest.fixture(scope="session", autouse=True)
def sanity_check(request):
    if not good:
        sys.exit(0)
But this still runs all the tests. I'd like for the script to fail before attempting to run the tests.
You shouldn't need to validate the command-line options explicitly; this will be done by the arg parser, which will abort the execution early if necessary. As for checking the preconditions, you are not far from the solution. Use:
pytest.exit to do an immediate abort
pytest.skip to skip all tests
pytest.xfail to fail all tests (this is an expected failure though, so it won't mark the whole execution as failed)
Example fixture:
import shutil

import pytest


@pytest.fixture(scope='session', autouse=True)
def precondition():
    if not shutil.which('spam'):
        # immediate shutdown
        pytest.exit('Install spam before running this test suite.')
        # or skip each test
        # pytest.skip('Install spam before running this test suite.')
        # or make it an expected failure
        # pytest.xfail('Install spam before running this test suite.')
xdist compatibility
Invoking pytest.exit() in the test run with xdist will only crash the current worker and will not abort the main process. You have to move the check to a hook that is invoked before the runtestloop starts (so anything before the pytest_runtestloop hook), for example:
# conftest.py
import shutil

import pytest


def pytest_sessionstart(session):
    if not shutil.which('spam'):
        # immediate shutdown
        pytest.exit('Install spam before running this test suite.')
If you want to run a sanity check before the whole test scenario, you can use a conftest.py file - https://docs.pytest.org/en/2.7.3/plugins.html?highlight=re
Just add your function with the same scope and autouse option to conftest.py:
@pytest.fixture(scope="session", autouse=True)
def sanity_check(request):
    if not good:
        pytest.exit("Error message here")

How to run setup and tear down function for all the tests in sub directories in py.test?

I have unit tests that run with py.test on Python 2.7 and py.test 3.0. My test directory is like this:
tests
    dir1
        test1.py
        sub-dir1-1
            test-1-1.py
        sub-dir1-2
            test-1-2.py
    dir2
        test2.py
        sub-dir2-1
            test-2-1.py
        sub-dir2-2
            test-2-2.py
I want all my tests to run a common setup and tear down function before and after test. I would like to do it with the least modification of all the test code.
Thanks
If I understand correctly, you can write a fixture that is session scoped (scope param) and make it used automatically (autouse param). I used a "yield fixture" as an example. Please note that pytest.yield_fixture has been deprecated since pytest 3.0, and pytest.fixture allows the use of yield.
import pytest


@pytest.fixture(scope="session", autouse=True)
def callattr_ahead_of_alltests(request):
    print 'run_pre_start'
    yield
    print 'run_after_finish'
It will run before the first test (printing "run_pre_start"), and the part after the yield will run after all the tests (printing "run_after_finish"). To apply it to every test in the sub-directories without modifying them, put it in a conftest.py at the top of the tests tree, as sketched below.
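A minimal sketch of that conftest.py, assuming the directory layout from the question (the file name and its location at the root of the tests tree are the important parts):
# tests/conftest.py -- picked up automatically for every test below this directory
import pytest


@pytest.fixture(scope="session", autouse=True)
def callattr_ahead_of_alltests():
    print 'run_pre_start'     # common setup, runs once before the first test
    yield
    print 'run_after_finish'  # common teardown, runs once after the last test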

python unittest with coverage report on (sub)processes

I'm using nose to run my "unittest" tests and have nose-cov to include coverage reports. These all work fine, but some of my tests require running code as a multiprocessing.Process. The nose-cov docs state that it can do multiprocessing, but I'm not sure how to get that to work.
I'm running the tests by invoking nosetests with the following .coveragerc:
[run]
branch = True
parallel = True

[report]
# Regexes for lines to exclude from consideration
exclude_lines =
    # Have to re-enable the standard pragma
    pragma: no cover

    # Don't complain about missing debug-only code:
    def __repr__
    if self\.debug

    # Don't complain if tests don't hit defensive assertion code:
    raise AssertionError
    raise NotImplementedError

    # Don't complain if non-runnable code isn't run:
    if 0:
    if __name__ == .__main__.:
    def __main__\(\):

omit =
    mainserver/tests/*
EDIT:
I fixed the parallel switch in my ".coveragerc" file. I've also tried adding a sitecustomize.py like so in my site-packages directory:
import os
import coverage
os.environ['COVERAGE_PROCESS_START']='/sites/metrics_dev/.coveragerc'
coverage.process_startup()
I'm pretty sure it's still not working properly, though, because the "missing" report still shows lines that I know are running (they output to the console). I've also tried adding the environment variable in my test case file and also in the shell before running the test cases. I also tried explicitly calling the same things in the function that's called by multiprocessing.Process to start the new process.
First, the configuration setting you need is parallel, not parallel-mode. Second, you probably need to follow the directions in the Measuring Subprocesses section of the coverage.py docs.
Another thing to check is whether more than one coverage data file is produced while running the tests; it may only be a matter of combining them afterwards with coverage combine.
tl;dr — to use coverage + nosetests + nose’s --processes option, set coverage’s --concurrency option to multiprocessing, preferably in either .coveragerc or setup.cfg rather than on the command-line (see also: command line usage and configuration files).
Long version…
I also fought this for a while, having followed the documentation on Configuring Python for sub-process coverage to the letter. Finally, upon re-examining the output of coverage run --help a bit more closely, I stumbled across the --concurrency=multiprocessing option, which I’d never used before, and which seems to be the missing link. (In hindsight, this makes sense: nose’s --processing option uses the multiprocessing library under the hood.)
Here is a minimal configuration that works as expected:
unit.py:
def is_even(x):
    if x % 2 == 0:
        return True
    else:
        return False
test.py:
import time
from unittest import TestCase

import unit


class TestIsEvenTrue(TestCase):
    def test_is_even_true(self):
        time.sleep(1)  # verify multiprocessing is being used
        self.assertTrue(unit.is_even(2))


# use a separate class to encourage nose to use a separate process for this
class TestIsEvenFalse(TestCase):
    def test_is_even_false(self):
        time.sleep(1)
        self.assertFalse(unit.is_even(1))
setup.cfg:
[nosetests]
processes = 2
verbosity = 2
[coverage:run]
branch = True
concurrency = multiprocessing
parallel = True
source = unit
sitecustomize.py (note: located in site-packages)
import os

try:
    import coverage
    os.environ['COVERAGE_PROCESS_START'] = 'setup.cfg'
    coverage.process_startup()
except ImportError:
    pass
$ coverage run $(command -v nosetests)
test_is_even_false (test.TestIsEvenFalse) ... ok
test_is_even_true (test.TestIsEvenTrue) ... ok
----------------------------------------------------------------------
Ran 2 tests in 1.085s
OK
$ coverage combine && coverage report
Name      Stmts   Miss Branch BrPart  Cover
-------------------------------------------
unit.py       4      0      2      0   100%

unittest.py doesn't play well with trace.py - why?

Wow. I found out tonight that Python unit tests written using the unittest module don't play well with coverage analysis under the trace module. Here's the simplest possible unit test, in foobar.py:
import unittest

class Tester(unittest.TestCase):
    def test_true(self):
        self.assertTrue(True)

if __name__ == "__main__":
    unittest.main()
If I run this with python foobar.py, I get this output:
.
----------------------------------------------------------------------
Ran 1 test in 0.000s
OK
Great. Now I want to perform coverage testing as well, so I run it again with python -m trace --count -C . foobar.py, but now I get this:
----------------------------------------------------------------------
Ran 0 tests in 0.000s
OK
No, Python, it's not OK - you didn't run my test! It seems as though running in the context of trace somehow gums up unittest's test detection mechanism. Here's the (insane) solution I came up with:
import unittest

class Tester(unittest.TestCase):
    def test_true(self):
        self.assertTrue(True)

class Insane(object):
    pass

if __name__ == "__main__":
    module = Insane()
    for k, v in locals().items():
        setattr(module, k, v)
    unittest.main(module)
This is basically a workaround that reifies the abstract, unnameable name of the top-level module by faking up a copy of it. I can then pass that name to unittest.main() so as to sidestep whatever effect trace has on it. No need to show you the output; it looks just like the successful example above.
So, I have two questions:
What is going on here? Why does trace screw things up for unittest?
Is there an easier and/or less insane way to get around this problem?
A simpler workaround is to pass the name of the module explicitly to unittest.main:
import unittest

class Tester(unittest.TestCase):
    def test_true(self):
        self.assertTrue(True)

if __name__ == "__main__":
    unittest.main(module='foobar')
trace messes up test discovery in unittest because of how trace loads the module it is running. trace reads the module source code, compiles it, and executes it in a context with a __name__ global set to '__main__'. This is enough to make most modules behave as if they were called as the main module, but doesn't actually change the module which is registered as __main__ in the Python interpreter. When unittest asks for the __main__ module to scan for test cases, it actually gets the trace module called from the command line, which of course doesn't contain the unit tests.
coverage.py takes a different approach of actually replacing which module is called __main__ in sys.modules.
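A quick way to see this mismatch for yourself (an illustrative sketch, not part of the original answer): add these lines to foobar.py and run it both directly and under trace.
import sys

print(__name__)                 # '__main__' either way: trace sets this global
print(sys.modules['__main__'])  # under python -m trace this is the trace runner, not foobar.py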
I don't know why trace doesn't work properly, but coverage.py does:
$ coverage run foobar.py
.
----------------------------------------------------------------------
Ran 1 test in 0.001s
OK
$ coverage report
Name     Stmts   Miss  Cover
----------------------------
foobar       6      0   100%
I like Theran's answer but there were some catches with it, on Python 3.6 at least:
Running foobar.py on its own went fine, but running foobar.py Sometestclass to execute only Sometestclass did not work: trace did not pick that up and ran all tests anyway.
My workaround was to specify defaultTest, when appropriate:
Remember that unittest is usually run as
python foobar.py <-flags and options> <TestClass.testmethod>
so the targeted test is always the last argument, unless the last argument is a unittest option (in which case it starts with -) or the foobar.py file itself.
import os
import sys
import unittest

lastarg = sys.argv[-1]
# not a flag, not foobar.py either...
if not lastarg.startswith("-") and not lastarg.endswith(".py"):
    defaultTest = lastarg
else:
    defaultTest = None

unittest.main(module=os.path.splitext(os.path.basename(__file__))[0], defaultTest=defaultTest)
Anyway, now trace only executes the desired tests, or all of them if I don't specify otherwise.
