Python unittest: cancel all tests if a specific test fails

I am using unittest to test my Flask application, and nose to actually run the tests.
My first set of tests is to ensure the testing environment is clean and prevent running the tests on the Flask app's configured database. I'm confident that I've set up the test environment cleanly, but I'd like some assurance of that without running all the tests.
import unittest

class MyTestCase(unittest.TestCase):
    def setUp(self):
        # set some stuff up
        pass

    def tearDown(self):
        # do the teardown
        pass

class TestEnvironmentTest(MyTestCase):
    def test_environment_is_clean(self):
        # A failing test
        assert 0 == 1

class SomeOtherTest(MyTestCase):
    def test_foo(self):
        # A passing test
        assert 1 == 1
I'd like the TestEnvironmentTest to cause unittest or nose to bail if it fails, preventing SomeOtherTest and any further tests from running. Is there a built-in way to do this in either unittest (preferred) or nose?

To get one check to execute first and halt execution of the remaining tests when it fails, you'll need to put a call to the check in setUp() (because Python does not guarantee test order) and then fail or skip the rest on failure.
I like skipTest() because it genuinely prevents the other tests from running, whereas raising an exception still seems to attempt to run them.
import unittest

class MyTestCase(unittest.TestCase):
    def setUp(self):
        # set some stuff up
        self.environment_is_clean()

    def environment_is_clean(self):
        try:
            # A failing test
            assert 0 == 1
        except AssertionError:
            self.skipTest("Test environment is not clean!")

For your use case there's the setUpModule() function:

"If an exception is raised in a setUpModule then none of the tests in the module will be run and the tearDownModule will not be run. If the exception is a SkipTest exception then the module will be reported as having been skipped instead of as an error."
Test your environment inside this function.
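A minimal sketch of this approach; environment_is_clean() is a placeholder for whatever real check you need:

import unittest

def environment_is_clean():
    # placeholder for your real environment check
    return False

def setUpModule():
    if not environment_is_clean():
        # reported as a skip for the whole module, not an error
        raise unittest.SkipTest("Test environment is not clean!")

class SomeOtherTest(unittest.TestCase):
    def test_foo(self):
        assert 1 == 1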

You can skip entire test cases by calling skipTest() in setUp(). This is a new feature in Python 2.7. Instead of failing the tests, it will simply skip them all.
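If the check belongs to a whole class rather than to each test, a related option (per the unittest docs) is to raise unittest.SkipTest from setUpClass(), which reports every test in the class as skipped. A minimal sketch:

import unittest

class SomeOtherTest(unittest.TestCase):
    @classmethod
    def setUpClass(cls):
        # raising SkipTest here skips every test in this class
        raise unittest.SkipTest("Test environment is not clean!")

    def test_foo(self):
        assert 1 == 1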

I'm not quite sure whether it fits your needs, but you can make the execution of a second suite of unittests conditional on the result of a first suite of unittests:
envsuite = unittest.TestSuite()
moretests = unittest.TestSuite()
# fill suites with test cases ...
envresult = unittest.TextTestRunner().run(envsuite)
if envresult.wasSuccessful():
    unittest.TextTestRunner().run(moretests)
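One way to fill the suites, assuming the TestEnvironmentTest and SomeOtherTest classes from the question:

import unittest

loader = unittest.TestLoader()
envsuite = unittest.TestSuite(loader.loadTestsFromTestCase(TestEnvironmentTest))
moretests = unittest.TestSuite(loader.loadTestsFromTestCase(SomeOtherTest))

envresult = unittest.TextTestRunner().run(envsuite)
if envresult.wasSuccessful():
    unittest.TextTestRunner().run(moretests)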

Related

Getting pytest to run setup and teardown for every test (coming from nose)

How do I make this test suite (see below) run setup and teardown with every test?
import system

def setup():
    system.bootstrap() # create vanilla installation.

def teardown():
    system.reset() # reset installation.

def test01():
    # <-- expects setup()
    system.create_user('bob')
    assert 'bob' in system.directory
    # <-- expects teardown() even when the assertion fails.

def test02():
    # <-- expects setup()
    system.create_user('jane')
    assert 'jane' in system.directory
    # <-- expects teardown() even when the assertion fails.
I'm sure this answer is close; I just can't get it to work in VS Code on Windows 11.
I've looked at how to implement xunit-style set-up in the docs, but no function is passed there either; the tests are plain functions.
What I see is that test01 runs with setup and teardown, but test02 does not.
What is missing in my mental model?
While your solution works, I would recommend using a pytest fixture:

import pytest

import system

@pytest.fixture(autouse=True)
def handle_system():
    system.bootstrap()
    yield
    system.reset()
Whilst writing the question, I found out that pytest runs setup and teardown if I just rename them to: setup_function and teardown_function.
No parameters required.
import system

def setup_function():
    system.bootstrap() # create vanilla installation.

def teardown_function():
    system.reset() # reset installation.

def test01():
    # <-- expects setup()
    system.create_user('bob')
    assert 'bob' in system.directory
    # <-- expects teardown() even when the assertion fails.

def test02():
    # <-- expects setup()
    system.create_user('jane')
    assert 'jane' in system.directory
    # <-- expects teardown() even when the assertion fails.
pytest 7.0.1, Python 3.9.10, on Windows 11.

Perform sanity check before running tests with pytest

I would like to perform some sanity check when I run tests using pytest. Typically, I want to check that some executables are accessible to the tests, and that the options provided by the user on the command-line are valid.
The closest thing I found was to use a fixture such as:
#pytest.fixture(scope="session", autouse=True)
def sanity_check(request):
if not good:
sys.exit(0)
But this still runs all the tests. I'd like for the script to fail before attempting to run the tests.
You shouldn't need to validate the command-line options explicitly; this is done by the arg parser, which aborts the execution early if necessary. As for condition checking, you are not far from the solution. Use:
pytest.exit to do an immediate abort
pytest.skip to skip all tests
pytest.xfail to fail all tests (this is an expected failure though, so it won't mark the whole execution as failed)
Example fixture:
import shutil

import pytest

@pytest.fixture(scope='session', autouse=True)
def precondition():
    if not shutil.which('spam'):
        # immediate shutdown
        pytest.exit('Install spam before running this test suite.')
        # or skip each test
        # pytest.skip('Install spam before running this test suite.')
        # or make it an expected failure
        # pytest.xfail('Install spam before running this test suite.')
xdist compatibility
Invoking pytest.exit() during a test run with xdist will only crash the current worker and will not abort the main process. You have to move the check to a hook that is invoked before the runtestloop starts (so anything before the pytest_runtestloop hook), for example:
# conftest.py
import shutil

import pytest

def pytest_sessionstart(session):
    if not shutil.which('spam'):
        # immediate shutdown
        pytest.exit('Install spam before running this test suite.')
If you want to run a sanity check before the whole test scenario, you can use a conftest.py file - https://docs.pytest.org/en/2.7.3/plugins.html?highlight=re
Just add your function with the same scope and autouse option to conftest.py:
#pytest.fixture(scope="session", autouse=True)
def sanity_check(request):
if not good:
pytest.exit("Error message here")

What is the correct way to report an error in a Python unittest in the setUp method?

I've read some conflicting advice on the use of assert in the setUp method of a Python unit test. I can't see the harm in failing a test if a precondition that test relies on fails.
For example:
import unittest

class MyProcessor():
    """
    This is the class under test
    """
    def __init__(self):
        pass

    def ProcessData(self, content):
        return ['some','processed','data','from','content'] # Imagine this could actually pass

class Test_test2(unittest.TestCase):
    def LoadContentFromTestFile(self):
        return None # Imagine this is actually doing something that could pass.

    def setUp(self):
        self.content = self.LoadContentFromTestFile()
        self.assertIsNotNone(self.content, "Failed to load test data")
        self.processor = MyProcessor()

    def test_ProcessData(self):
        results = self.processor.ProcessData(self.content)
        self.assertGreater(results, 0, "No results returned")

if __name__ == '__main__':
    unittest.main()
This seems like a reasonable thing to do to me, i.e. making sure the test is able to run. When this fails because of the setup condition, we get:
F
======================================================================
FAIL: test_ProcessData (__main__.Test_test2)
----------------------------------------------------------------------
Traceback (most recent call last):
File "C:\Projects\Experiments\test2.py", line 21, in setUp
self.assertIsNotNone(self.content, "Failed to load test data")
AssertionError: unexpectedly None : Failed to load test data
----------------------------------------------------------------------
Ran 1 test in 0.000s
FAILED (failures=1)
The purpose of setUp is to reduce the boilerplate code that would otherwise be duplicated between the tests in the test class during the Arrange phase.
In the Arrange phase you set up everything needed for running the tested code: any initialization of dependencies, mocks, and data needed for the test to run.
Based on the above, you should not assert anything in your setUp method.
As mentioned earlier: if you can't create the test precondition, then your test is broken. To avoid situations like this, Roy Osherove wrote a great book called The Art of Unit Testing. (For full disclosure: Lior Friedman, who was Roy's boss, is a friend of mine, and I worked closely with them for more than two years, so I am a little biased...)
Basically, there are only a few reasons to interact with external resources during the Arrange phase (or with things that may raise an exception), and most of them (if not all) are related to integration tests.
Back to your example: there is a pattern for structuring tests where you need to load an external resource (for all/most of them). A side note first: before you apply this pattern, make sure you can't keep this content as a static resource in your test class; if other test classes need this resource, extract it into a module.
The following pattern decreases the possibility of failure, since you make fewer calls to the external resource:
class TestClass(unittest.TestCase):
    @classmethod
    def setUpClass(cls):
        # since external resources such as other servers can provide bad content,
        # you can verify here that the content is valid
        # and prevent the tests from running if it isn't;
        # however, in most cases you shouldn't.
        cls.externalResourceContent = loadContentFromExternalResource()

    def setUp(self):
        self.content = self.copyContentForTest()
Pros:
less chance of failure
prevents inconsistent behavior (1. someone has edited the external resource; 2. you failed to load the external resource in some of your tests)
faster execution
Cons:
the code is more complex
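A minimal runnable sketch of the whole pattern; loadContentFromExternalResource() is a placeholder, and copy.deepcopy gives each test its own copy so no test can corrupt the shared content:

import copy
import unittest

def loadContentFromExternalResource():
    # placeholder for the real (slow/fragile) external call
    return {"users": ["bob", "jane"]}

class TestClass(unittest.TestCase):
    @classmethod
    def setUpClass(cls):
        # one external call for the whole class
        cls.externalResourceContent = loadContentFromExternalResource()

    def setUp(self):
        # each test gets an independent copy of the shared content
        self.content = copy.deepcopy(self.externalResourceContent)

    def test_has_users(self):
        self.assertIn("bob", self.content["users"])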
setUp is not for asserting preconditions but creating them. If your test is unable to create the necessary fixture, it is broken, not failing.
From the Python Standard Library Documentation:
"If the setUp() method raises an exception while the test is running,
the framework will consider the test to have suffered an error, and
the runTest() method will not be executed. If setUp() succeeded, the
tearDown() method will be run whether runTest() succeeded or not. Such
a working environment for the testing code is called a fixture."
An assertion exception in the setUp() method would be considered an error by the unittest framework. The test will not be executed.
There isn't a right or wrong answer here; it depends on what you are testing and how expensive setting up your tests is. Some tests are too dangerous to allow attempted runs if the data isn't as expected; some need to work with that data.
You can use assertions in setUp if you need to check between tests for particular conditions; this can help reduce repeated code in your tests.
However, it also makes moving test methods between classes or files a bit trickier, as they will rely on having the equivalent setUp. It can also push the limits of complexity for less code-savvy testers.
It's a bit cleaner to have a test that checks these startup conditions individually and to run it first; they might not be needed between every test. If you define it as test_01_check_preconditions it will run before any of the other test methods, even if the rest are in random order.
You can also then use unittest2.skip decorators for certain conditions.
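A small sketch of conditional skipping with the stock decorators (the condition and path here are made up):

import os
import unittest

class PreconditionedTest(unittest.TestCase):
    # hypothetical precondition: a fixture file must exist
    @unittest.skipUnless(os.path.exists('/tmp/fixture.db'), "fixture database missing")
    def test_needs_fixture(self):
        self.assertTrue(True)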
A better approach is to use addCleanup to ensure that state is reset; the advantage here is that the cleanup still runs even if the test fails, and you can make the cleanup aware of the specific situation, since you define it in the context of your test method.
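A minimal sketch of addCleanup in a test method; create_user_record and delete_user_record are hypothetical helpers:

import unittest

class UserRecordTest(unittest.TestCase):
    def test_create_user(self):
        record = create_user_record('bob')           # hypothetical helper
        # the cleanup runs even if the assertion below fails
        self.addCleanup(delete_user_record, record)  # hypothetical helper
        self.assertEqual(record.name, 'bob')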
There is also nothing to stop you defining methods for common checks in the unittest class and calling them in setUp or in test methods; this can help keep complexity enclosed in defined and managed areas.
Also, don't be tempted to subclass unittest2 beyond a simple test definition; I've seen people try to do that to make tests simpler and actually introduce totally unexpected behaviour.
I guess the real take-home is: if you do it, know why you want to use it and document your reasons; then it's probably OK. If you are unsure, go for the simplest, easiest-to-understand option, because tests are useless if they are not easy to understand.
There is one reason why you want to avoid assertions in setUp():
if setUp fails, your tearDown will not be executed.
If you set up a set of database records, for instance, and your tearDown deletes these records, then those records will not be deleted.
With this snippet:
import unittest

class Test_test2(unittest.TestCase):
    def setUp(self):
        print 'setup'
        assert False

    def test_ProcessData(self):
        print 'testing'

    def tearDown(self):
        print 'teardown'

if __name__ == '__main__':
    unittest.main()
Only the setUp() runs:
$ python t.py
setup
E
======================================================================
ERROR: test_ProcessData (__main__.Test_test2)
----------------------------------------------------------------------
Traceback (most recent call last):
File "t.py", line 7, in setUp
assert False
AssertionError
----------------------------------------------------------------------
Ran 1 test in 0.000s
FAILED (errors=1)
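If you do need the teardown work to happen even when setUp fails partway through, the unittest docs note that cleanups registered with addCleanup before the failure point are still called even when setUp itself raises. A sketch with hypothetical record helpers:

import unittest

class RecordTest(unittest.TestCase):
    def setUp(self):
        self.record = create_record()                # hypothetical helper
        # registered before the risky step, so it runs even if setUp fails below
        self.addCleanup(delete_record, self.record)  # hypothetical helper
        assert self.record.is_valid

    def test_record(self):
        self.assertTrue(self.record.is_valid)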

Skip a unit test from a Nose2 Plugin

I'm having trouble actually skipping a unit test from a Nose2 plugin. I am able to mark the test skipped and see the reason in the final result, but the test still runs. This example code should basically skip any test, as long as the plugin is active.
from nose2.events import Plugin

class SkipAllTests(Plugin):
    def startTest(self, event):
        event.result.addSkip(event.test, 'skip it')
        event.handled = True
If I call event.test.skipTest('reason') it actually raises the SkipTest exception like it should; it's just that the exception isn't caught by the test runner, it simply raises inside my startTest hook method. Any ideas?
I don't think you can actually stop a test from running with the startTest hook. The nose2 docs suggest using either matchPath or getTestCaseNames to do this. Here's a working example using matchPath:
from nose2.events import Plugin

class SkipAllTests(Plugin):
    configSection = "skipper"
    commandLineSwitch = (None, 'skipper', "Skip all tests")

    def matchPath(self, event):
        event.handled = True
        return False
The matchPath docs actually explicitly explain how it can be used to stop tests from running:
"Plugins can use this hook to prevent python modules from being loaded by the test loader or force them to be loaded by the test loader. Set event.handled to True and return False to cause the loader to skip the module."
Using this method will prevent the test case from ever being loaded. If you want the test to actually show up in the list as skipped, rather than not have it show up in the list of tests at all, you can do a little bit of hackery with the StartTestEvent:
def dummy(*args, **kwargs):
    pass

class SkipAllTests(Plugin):
    configSection = "skipper"
    commandLineSwitch = (None, 'skipper', "Skip all tests")

    def startTest(self, event):
        event.test._testFunc = dummy
        event.result.addSkip(event.test, 'skip it')
        event.handled = True
Here, we replace the actual function the test is going to run with a dummy function that does nothing. That way, when the test executes, it no-ops, and then reports that it was skipped.

Use `unittest` to verify that the `exit` function was called

I defined a very simple configuration manager (parses config files and verifies that certain keys exist in them) and now I'm writing some tests for it.
In many cases where the config file is invalid, I want the configuration handler to call exit(). Is there some way I can write tests to ensure that exit was called and still continue testing? Can I perhaps, for a testing environment only, "mock" the exit function?
I am using Python 2.7 and unittest.
sys.exit() raises the SystemExit exception, which should be caught by the unittest framework. So you can check that SystemExit is raised for those test cases.
Example, exit_test.py:
import sys
import unittest

def func():
    sys.exit()

class MyTest(unittest.TestCase):
    def test_func(self):
        self.assertRaises(SystemExit, func)

if __name__ == "__main__":
    unittest.main()
Run:
$ python exit_test.py
.
----------------------------------------------------------------------
Ran 1 test in 0.000s
OK
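If you also want to assert on the exit status, assertRaises works as a context manager (since Python 2.7) and exposes the caught exception; parse_config below is a hypothetical stand-in for the configuration handler:

import sys
import unittest

def parse_config(path):
    # hypothetical stand-in for the real handler
    sys.exit(2)

class MyExitCodeTest(unittest.TestCase):
    def test_exit_code(self):
        with self.assertRaises(SystemExit) as cm:
            parse_config('bad.cfg')
        self.assertEqual(cm.exception.code, 2)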
