I am using Python assert statements to match actual and expected behaviour. I do not have control over these: if an assertion fails, the test case aborts. I want to take control of the assertion error and decide whether a particular test case should abort on a failing assert or not.
I also want to add something like this: if there is an assertion error, the test case should be paused and the user can resume it at any moment.
I do not have any idea how to do this.
Code example (we are using pytest here):
import pytest

def test_abc():
    a = 10
    assert a == 10, "some error message"
Below is my expectation:
When assert raises an AssertionError, I should have the option of pausing the test case, debugging it, and resuming later. For pause and resume I will use the tkinter module. I will make an assert function as below:
import tkinter
import tkinter.messagebox

top = tkinter.Tk()

def _assertCustom(assert_statement, pause_on_fail=0):
    # assert_statement will be something like: assert a == 10, "Some error"
    # pause_on_fail will be derived from a global file where I can change it at runtime
    if pause_on_fail == 1:
        try:
            eval(assert_statement)
        except AssertionError as e:
            tkinter.messagebox.showinfo(e)
            eval(assert_statement)
            # Above is to raise the assertion error again to fail the testcase
    else:
        eval(assert_statement)
Going forward I would have to change every assert statement to use this function, like this:
import pytest

def test_abc():
    a = 10
    # Suppose some code and below is the assert statement
    _assertCustom("assert a == 10, 'error message'")
This is too much effort for me, as I would have to make changes at thousands of places where I have used assert. Is there any easy way to do that in pytest?
Summary: I need something that lets me pause the test case on failure and then resume after debugging. I know about tkinter and that is the reason I have used it. Any other ideas are welcome.
Note: the above code is not tested yet. There may be small syntax errors too.
Edit: Thanks for the answers. Extending this question a little further now: what if I want to change the behaviour of assert? Currently, when there is an assertion error, the test case exits. What if I want to choose whether the test case should exit on a particular assert failure or not? I don't want to write a custom assert function as mentioned above, because that way I would have to make changes in a number of places.
You are using pytest, which gives you ample options to interact with failing tests. It gives you command-line options and several hooks to make this possible. I'll explain how to use each, and where you could make customisations to fit your specific debugging needs.
I'll also go into more exotic options that would allow you to skip specific assertions entirely, if you really feel you must.
Handle exceptions, not assert
Note that a failing test doesn't normally stop pytest; it only exits early if you explicitly tell it to do so after a certain number of failures (the -x / --maxfail command-line options). Also, tests fail because an exception is raised; assert raises AssertionError, but that's not the only exception that'll cause a test to fail! You want to control how exceptions are handled, not alter assert.
However, a failing assert will end the individual test. That's because once an exception is raised outside of a try...except block, Python unwinds the current function frame, and there is no going back on that.
I don't think that that's what you want, judging by your description of your _assertCustom() attempts to re-run the assertion, but I'll discuss your options further down nonetheless.
Post-mortem debugging in pytest with pdb
For the various options to handle failures in a debugger, I'll start with the --pdb command-line switch, which opens the standard debugging prompt when a test fails (output elided for brevity):
$ mkdir demo
$ touch demo/__init__.py
$ cat << EOF > demo/test_foo.py
> def test_ham():
>     assert 42 == 17
> def test_spam():
>     int("Vikings")
> EOF
$ pytest demo/test_foo.py --pdb
[ ... ]
test_foo.py:2: AssertionError
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> entering PDB >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> /.../demo/test_foo.py(2)test_ham()
-> assert 42 == 17
(Pdb) q
Exit: Quitting debugger
[ ... ]
With this switch, when a test fails pytest starts a post-mortem debugging session. This is essentially exactly what you wanted; to stop the code at the point of a failed test and open the debugger to take a look at the state of your test. You can interact with the local variables of the test, the globals, and the locals and globals of every frame in the stack.
Here pytest gives you full control over whether or not to exit after this point: if you use the q quit command then pytest exits the run too; using c for continue returns control to pytest and the next test is executed.
Using an alternative debugger
You are not bound to the pdb debugger for this; you can set a different debugger with the --pdbcls switch. Any pdb.Pdb() compatible implementation would work, including the IPython debugger implementation, or most other Python debuggers (the pudb debugger requires that the -s switch is used, or a special plugin). The switch takes a module and class, e.g. to use pudb you could use:
$ pytest -s --pdb --pdbcls=pudb.debugger:Debugger
You could use this feature to write your own wrapper class around Pdb that simply returns immediately if the specific failure is not something you are interested in. pytest uses Pdb() exactly like pdb.post_mortem() does:
p = Pdb()
p.reset()
p.interaction(None, t)
Here, t is a traceback object. When p.interaction(None, t) returns, pytest continues with the next test, unless p.quitting is set to True (at which point pytest then exits).
Here is an example implementation that prints out that we are declining to debug and returns immediately, unless the test raised ValueError, saved as demo/custom_pdb.py:
import pdb, sys

class CustomPdb(pdb.Pdb):
    def interaction(self, frame, traceback):
        if sys.last_type is not None and not issubclass(sys.last_type, ValueError):
            print("Sorry, not interested in this failure")
            return
        return super().interaction(frame, traceback)
When I use this with the above demo, this is output (again, elided for brevity):
$ pytest test_foo.py -s --pdb --pdbcls=demo.custom_pdb:CustomPdb
[ ... ]
def test_ham():
> assert 42 == 17
E assert 42 == 17
test_foo.py:2: AssertionError
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> entering PDB >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
Sorry, not interested in this failure
F
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> traceback >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
def test_spam():
> int("Vikings")
E ValueError: invalid literal for int() with base 10: 'Vikings'
test_foo.py:4: ValueError
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> entering PDB >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> /.../test_foo.py(4)test_spam()
-> int("Vikings")
(Pdb)
The above introspects sys.last_type to determine if the failure is 'interesting'.
However, I can't really recommend this option unless you want to write your own debugger using tkinter or something similar. Note that that is a big undertaking.
Filtering failures; pick and choose when to open the debugger
The next level up is the pytest debugging and interaction hooks; these are hook points for behaviour customisations, to replace or enhance how pytest normally handles things like handling an exception or entering the debugger via pdb.set_trace() or breakpoint() (Python 3.7 or newer).
The internal implementation of this hook is responsible for printing the >>> entering PDB >>> banner above as well, so using this hook to prevent the debugger from running means you won't see this output at all. You can have your own hook delegate to the original hook when a test failure is 'interesting', and so filter test failures independently of the debugger you are using! You can access the internal implementation by name; the internal hook plugin for this is named pdbinvoke. To prevent it from running you need to unregister it, but save a reference so we can call it directly as needed.
Here is a sample implementation of such a hook; you can put this in any of the locations plugins are loaded from; I put it in demo/conftest.py:
import pytest

@pytest.hookimpl(trylast=True)
def pytest_configure(config):
    # unregister returns the unregistered plugin
    pdbinvoke = config.pluginmanager.unregister(name="pdbinvoke")
    if pdbinvoke is None:
        # no --pdb switch used, no debugging requested
        return
    # get the terminalreporter too, to write to the console
    tr = config.pluginmanager.getplugin("terminalreporter")
    # create our own plugin
    plugin = ExceptionFilter(pdbinvoke, tr)
    # register our plugin, pytest will then start calling our plugin hooks
    config.pluginmanager.register(plugin, "exception_filter")

class ExceptionFilter:
    def __init__(self, pdbinvoke, terminalreporter):
        # provide the same functionality as pdbinvoke
        self.pytest_internalerror = pdbinvoke.pytest_internalerror
        self.orig_exception_interact = pdbinvoke.pytest_exception_interact
        self.tr = terminalreporter

    def pytest_exception_interact(self, node, call, report):
        if not call.excinfo.errisinstance(ValueError):
            self.tr.write_line("Sorry, not interested!")
            return
        return self.orig_exception_interact(node, call, report)
The above plugin uses the internal TerminalReporter plugin to write out lines to the terminal; this makes the output cleaner when using the default compact test status format, and lets you write things to the terminal even with output capturing enabled.
The example registers the plugin object (with its pytest_exception_interact hook) via another hook, pytest_configure(), but makes sure it runs late enough (using @pytest.hookimpl(trylast=True)) to be able to un-register the internal pdbinvoke plugin. When the hook is called, the example tests against the call.excinfo object; you can also check the node or the report too.
With the above sample code in place in demo/conftest.py, the test_ham test failure is ignored, only the test_spam test failure, which raises ValueError, results in the debug prompt opening:
$ pytest demo/test_foo.py --pdb
[ ... ]
demo/test_foo.py F
Sorry, not interested!
demo/test_foo.py F
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> traceback >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
def test_spam():
> int("Vikings")
E ValueError: invalid literal for int() with base 10: 'Vikings'
demo/test_foo.py:4: ValueError
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> entering PDB >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> /.../demo/test_foo.py(4)test_spam()
-> int("Vikings")
(Pdb)
To re-iterate, the above approach has the added advantage that you can combine this with any debugger that works with pytest, including pudb, or the IPython debugger:
$ pytest demo/test_foo.py --pdb --pdbcls=IPython.core.debugger:Pdb
[ ... ]
demo/test_foo.py F
Sorry, not interested!
demo/test_foo.py F
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> traceback >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
def test_spam():
> int("Vikings")
E ValueError: invalid literal for int() with base 10: 'Vikings'
demo/test_foo.py:4: ValueError
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> entering PDB >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> /.../demo/test_foo.py(4)test_spam()
1 def test_ham():
2 assert 42 == 17
3 def test_spam():
----> 4 int("Vikings")
ipdb>
It also has much more context about what test was being run (via the node argument) and direct access to the exception raised (via the call.excinfo ExceptionInfo instance).
Note that specific pytest debugger plugins (such as pytest-pudb or pytest-pycharm) register their own pytest_exception_interact hooks. A more complete implementation would have to loop over all plugins in the plugin-manager to override arbitrary plugins, automatically, using config.pluginmanager.list_name_plugin and hasattr() to test each plugin.
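A rough, untested sketch of that loop (assuming an ExceptionFilter variant that accepts a list of wrapped plugins and delegates to each of them when a failure is deemed interesting) could look like this in conftest.py:

import pytest

@pytest.hookimpl(trylast=True)
def pytest_configure(config):
    # Collect every registered plugin that implements pytest_exception_interact,
    # unregister it, and hand the whole list to a single filtering wrapper.
    wrapped = []
    for name, plugin in config.pluginmanager.list_name_plugin():
        if plugin is not None and hasattr(plugin, "pytest_exception_interact"):
            config.pluginmanager.unregister(plugin, name)
            wrapped.append(plugin)
    if wrapped:
        # ExceptionFilter here is assumed to delegate to each plugin in `wrapped`
        # only for failures you consider interesting.
        config.pluginmanager.register(ExceptionFilter(wrapped), "exception_filter")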
Making failures go away altogether
While this gives you full control over failed test debugging, this still leaves the test as failed even if you opted not to open the debugger for a given test. If you want to make failures go away altogether, you can make use of a different hook: pytest_runtest_call().
When pytest runs tests, it'll run the test via the above hook, which is expected to return None or raise an exception. From this a report is created, optionally a log entry is created, and if the test failed, the aforementioned pytest_exception_interact() hook is called. So all you need to do is change the result that this hook produces; instead of raising an exception it should just not return anything at all.
The best way to do that is to use a hook wrapper. Hook wrappers don't have to do the actual work, but instead are given a chance to alter what happens to the result of a hook. All you have to do is add the line:
outcome = yield
in your hook wrapper implementation and you get access to the hook result, including the test exception via outcome.excinfo. This attribute is set to a tuple of (type, instance, traceback) if an exception was raised in the test. Alternatively, you could call outcome.get_result() and use standard try...except handling.
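As a minimal illustration of that hook wrapper shape (a sketch for a conftest.py, separate from the skip/xfail example further below), inspecting outcome.excinfo could look like:

import pytest

@pytest.hookimpl(hookwrapper=True)
def pytest_runtest_call(item):
    # everything before the yield runs before the test; the yield runs it
    outcome = yield
    # outcome.excinfo is None for a passing test, otherwise (type, value, traceback)
    if outcome.excinfo is not None and issubclass(outcome.excinfo[0], AssertionError):
        print(f"Assertion failed in {item.nodeid}: {outcome.excinfo[1]}")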
So how do you make a failed test pass? You have 3 basic options:
You could mark the test as an expected failure, by calling pytest.xfail() in the wrapper.
You could mark the item as skipped, which pretends that the test was never run in the first place, by calling pytest.skip().
You could remove the exception, by using the outcome.force_result() method; set the result to an empty list here (meaning: the registered hook produced nothing but None), and the exception is cleared entirely.
What you use is up to you. Do make sure to check the result for skipped and expected-failure tests first as you don't need to handle those cases as if the test failed. You can access the special exceptions these options raise via pytest.skip.Exception and pytest.xfail.Exception.
Here's an example implementation which marks failed tests that don't raise ValueError as skipped:
import pytest

@pytest.hookimpl(hookwrapper=True)
def pytest_runtest_call(item):
    outcome = yield
    try:
        outcome.get_result()
    except (pytest.xfail.Exception, pytest.skip.Exception, pytest.exit.Exception):
        raise  # already xfailed, skipped or explicit exit
    except ValueError:
        raise  # not ignoring
    except (pytest.fail.Exception, Exception):
        # turn everything else into a skip
        pytest.skip("[NOTRUN] ignoring everything but ValueError")
When put in conftest.py the output becomes:
$ pytest -r a demo/test_foo.py
============================= test session starts =============================
platform darwin -- Python 3.8.0, pytest-3.10.0, py-1.7.0, pluggy-0.8.0
rootdir: ..., inifile:
collected 2 items
demo/test_foo.py sF [100%]
=================================== FAILURES ===================================
__________________________________ test_spam ___________________________________
def test_spam():
> int("Vikings")
E ValueError: invalid literal for int() with base 10: 'Vikings'
demo/test_foo.py:4: ValueError
=========================== short test summary info ============================
FAIL demo/test_foo.py::test_spam
SKIP [1] .../demo/conftest.py:12: [NOTRUN] ignoring everything but ValueError
===================== 1 failed, 1 skipped in 0.07 seconds ======================
I used the -r a flag to make it clearer that test_ham was skipped now.
If you replace the pytest.skip() call with pytest.xfail("[XFAIL] ignoring everything but ValueError"), the test is marked as an expected failure:
[ ... ]
XFAIL demo/test_foo.py::test_ham
reason: [XFAIL] ignoring everything but ValueError
[ ... ]
and using outcome.force_result([]) marks it as passed:
$ pytest -v demo/test_foo.py # verbose to see individual PASSED entries
[ ... ]
demo/test_foo.py::test_ham PASSED [ 50%]
It's up to you which one you feel fits your use case best. For skip() and xfail() I mimicked the standard message format (prefixed with [NOTRUN] or [XFAIL]) but you are free to use any other message format you want.
In all three cases pytest will not open the debugger for tests whose outcome you altered using this method.
Altering individual assert statements
If you want to alter assert tests within a test, then you are setting yourself up for a whole lot more work. Yes, this is technically possible, but only by rewriting the very code that Python is going to execute at compile time.
When you use pytest, this is actually already being done. Pytest rewrites assert statements to give you more context when your asserts fail; see this blog post for a good overview of exactly what is being done, as well as the _pytest/assertion/rewrite.py source code. Note that that module is over 1k lines long, and requires that you understand how Python's abstract syntax trees work. If you do, you could monkeypatch that module to add your own modifications there, including surrounding the assert with a try...except AssertionError: handler.
However, you can't just disable or ignore asserts selectively, because subsequent statements could easily depend on state (specific object arrangements, variables set, etc.) that a skipped assert was meant to guard against. If an assert tests that foo is not None, and a later statement relies on foo.bar existing, then you will simply run into an AttributeError there, etc. Do stick to re-raising the exception if you need to go this route.
I'm not going to go into further detail on rewriting asserts here, as I don't think this is worth pursuing, not given the amount of work involved, and with post-mortem debugging giving you access to the state of the test at the point of assertion failure anyway.
Note that if you do want to do this, you don't need to use eval() (which wouldn't work anyway; assert is a statement, so you'd need to use exec() instead), nor would you have to run the assertion twice (which can lead to issues if the expression used in the assertion altered state). You would instead embed the ast.Assert node inside an ast.Try node, and attach an except handler that uses an empty ast.Raise node to re-raise the exception that was caught.
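To make that concrete, here is a minimal sketch of such a transformation using the standard ast module (independent of pytest's own rewriter); what you do inside the handler before the bare raise is up to you:

import ast

class WrapAsserts(ast.NodeTransformer):
    """Wrap every assert statement in a try/except AssertionError block (a sketch)."""
    def visit_Assert(self, node):
        handler = ast.ExceptHandler(
            type=ast.Name(id="AssertionError", ctx=ast.Load()),
            name=None,
            # an empty ast.Raise is a bare `raise`, re-raising the caught error
            body=[ast.Raise(exc=None, cause=None)],
        )
        wrapped = ast.Try(body=[node], handlers=[handler], orelse=[], finalbody=[])
        return ast.copy_location(wrapped, node)

# usage sketch: parse, transform, fill in locations, compile and run
tree = WrapAsserts().visit(ast.parse("assert 1 + 1 == 2, 'math is broken'"))
ast.fix_missing_locations(tree)
exec(compile(tree, "<example>", "exec"))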
Using the debugger to skip assertion statements
The Python debugger actually lets you skip statements, using the j / jump command. If you know up front that a specific assertion will fail, you can use this to bypass it. You could run your tests with --trace, which opens the debugger at the start of every test, then issue a j <line after assert> to skip it when the debugger is paused just before the assert.
You can even automate this. Using the above techniques you can build a custom debugger plugin that
uses the pytest_runtest_call() hook to catch the AssertionError exception
extracts the 'offending' line number from the traceback, and perhaps with some source code analysis determines the line numbers before and after the assertion required to execute a successful jump
runs the test again, but this time using a Pdb subclass that sets a breakpoint on the line before the assert, and automatically executes a jump to the second line number when the breakpoint is hit, followed by a c continue.
Or, instead of waiting for an assertion to fail, you could automate setting breakpoints for each assert found in a test (again using source code analysis; you can trivially extract line numbers for ast.Assert nodes in an AST of the test), execute the asserted test using scripted debugger commands, and use the jump command to skip the assertion itself. You'd have to make a tradeoff: run all tests under a debugger (which is slow, as the interpreter has to call a trace function for every statement), or only apply this to failing tests and pay the price of re-running those tests from scratch.
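For what it's worth, the source-analysis part, collecting the line numbers of all assert statements in a test module, is only a few lines (a sketch; the breakpoint and jump automation around it is the hard part):

import ast

def assert_line_numbers(path):
    # return the line number of every assert statement in the given source file
    with open(path) as source:
        tree = ast.parse(source.read(), filename=path)
    return [node.lineno for node in ast.walk(tree) if isinstance(node, ast.Assert)]

print(assert_line_numbers("demo/test_foo.py"))  # [2] for the demo module above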
Such a plugin would be a lot of work to create, I'm not going to write an example here, partly because it wouldn't fit in an answer anyway, and partly because I don't think it is worth the time. I'd just open up the debugger and make the jump manually. A failing assert indicates a bug in either the test itself or the code-under-test, so you may as well just focus on debugging the problem.
You can achieve exactly what you want without any code modification at all, using pytest --pdb.
With your example:
import pytest

def test_abc():
    a = 9
    assert a == 10, "some error message"
Run with --pdb:
py.test --pdb
collected 1 item
test_abc.py F
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> traceback >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
def test_abc():
a = 9
> assert a == 10, "some error message"
E AssertionError: some error message
E assert 9 == 10
test_abc.py:4: AssertionError
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> entering PDB >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> /private/tmp/a/test_abc.py(4)test_abc()
-> assert a == 10, "some error message"
(Pdb) p a
9
(Pdb)
As soon as a test fails, you can debug it with the builtin python debugger. If you're done debugging, you can continue with the rest of the tests.
If you're using PyCharm then you can add an Exception Breakpoint to pause execution whenever an assert fails. Select View Breakpoints (CTRL-SHIFT-F8) and add an on-raise exception handler for AssertionError. Note that this may slow down the execution of the tests.
Otherwise, if you don't mind pausing at the end of each failing test (just before it errors) rather than at the point the assertion fails, then you have a few options. Note however that by this point various cleanup code, such as closing files that were opened in the test, might have already been run. Possible options are:
You can tell pytest to drop you into the debugger on errors using the --pdb option.
You can define the following decorator and decorate each relevant test function with it. (Apart from logging a message, you could also start a pdb.post_mortem at this point, or even an interactive code.interact with the locals of the frame where the exception originated, as described in this answer.)
import tkinter.messagebox
from functools import wraps

def pause_on_assert(test_func):
    @wraps(test_func)
    def test_wrapper(*args, **kwargs):
        try:
            test_func(*args, **kwargs)
        except AssertionError as e:
            tkinter.messagebox.showinfo(e)
            # re-raise exception to make the test fail
            raise
    return test_wrapper

@pause_on_assert
def test_abc():
    a = 10
    assert a == 2, "some error message"
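For example, a variant of that decorator that starts a post-mortem pdb session instead of showing a tkinter dialog could look like this (a sketch):

import pdb
import sys
from functools import wraps

def pause_on_assert(test_func):
    @wraps(test_func)
    def test_wrapper(*args, **kwargs):
        try:
            test_func(*args, **kwargs)
        except AssertionError:
            # drop into the debugger at the frame where the assertion failed
            pdb.post_mortem(sys.exc_info()[2])
            # re-raise so the test still fails afterwards
            raise
    return test_wrapper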
If you don't want to manually decorate every test function, you can instead define an autouse fixture that inspects sys.last_value:
import sys
import pytest
import tkinter.messagebox

@pytest.fixture(scope="function", autouse=True)
def pause_on_assert():
    yield
    if hasattr(sys, 'last_value') and isinstance(sys.last_value, AssertionError):
        tkinter.messagebox.showinfo(sys.last_value)
One simple solution, if you're willing to use Visual Studio Code, could be to use conditional breakpoints.
This would allow you to set up your assertions, for example:
import pytest

def test_abc():
    a = 10
    assert a == 10, "some error message"
Then add a conditional breakpoint on the assert line which will only break when your assertion fails.
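For the example above, the breakpoint condition would simply be the negation of the asserted expression, so the debugger only stops when the assert is about to fail (adapt it to whatever your own assert checks):

not (a == 10)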
I set up a CI pipeline (Gitlab CI if that matters) for my latest Python project and added several test cases for things I still want to implement. In each test case I raise a NotImplementedError since, well, it has not been implemented yet.
import unittest

class GenericTest(unittest.TestCase):
    def test_stuff(self):
        """I'll fill this in when I come around to it."""
        raise NotImplementedError
Generally, I want these tests to fail, since they do not yet work properly. However, when I push to my repository and the tests are run on the CI system, I would like to skip these tests. I already know they will 'fail', and they mask actual failing tests.
Is there a way to suppress these exceptions, or a specific type of exception (like IKnowThisWillFailError), so that the affected tests are not counted as 'failed'?
What about:

import unittest

class GenericTest(unittest.TestCase):
    def test_stuff(self):
        """I'll fill this in when I come around to it."""
        raise unittest.SkipTest("IKnowThisWillFail")

Your CI system can probably differentiate between skipped and failed tests.
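If you would rather have these tests still run but not be counted as failures, unittest's expectedFailure decorator is another option (a sketch using only standard-library decorators, not part of the answer above):

import unittest

class GenericTest(unittest.TestCase):
    @unittest.expectedFailure
    def test_stuff(self):
        """Runs, but is reported as an expected failure instead of a failure."""
        raise NotImplementedError

    @unittest.skip("not implemented yet")
    def test_other_stuff(self):
        """Never runs; reported as skipped."""
        raise NotImplementedError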
try:
    # code
    ...
except IKnowThisWillFailError:
    pass
except:
    # catch
    ...
I've read some conflicting advice on the use of assert in the setUp method of a Python unit test. I can't see the harm in failing a test if a precondition that test relies on fails.
For example:
import unittest

class MyProcessor():
    """
    This is the class under test
    """
    def __init__(self):
        pass

    def ProcessData(self, content):
        return ['some','processed','data','from','content'] # Imagine this could actually pass

class Test_test2(unittest.TestCase):
    def LoadContentFromTestFile(self):
        return None # Imagine this is actually doing something that could pass.

    def setUp(self):
        self.content = self.LoadContentFromTestFile()
        self.assertIsNotNone(self.content, "Failed to load test data")
        self.processor = MyProcessor()

    def test_ProcessData(self):
        results = self.processor.ProcessData(self.content)
        self.assertGreater(results, 0, "No results returned")

if __name__ == '__main__':
    unittest.main()
This seems like a reasonable thing to do to me, i.e. make sure the test is able to run. When this fails because of the setUp condition, we get:
F
======================================================================
FAIL: test_ProcessData (__main__.Test_test2)
----------------------------------------------------------------------
Traceback (most recent call last):
File "C:\Projects\Experiments\test2.py", line 21, in setUp
self.assertIsNotNone(self.content, "Failed to load test data")
AssertionError: unexpectedly None : Failed to load test data
----------------------------------------------------------------------
Ran 1 test in 0.000s
FAILED (failures=1)
The purpose of setUp is to reduce the boilerplate code that is shared between the tests in the test class during the Arrange phase.
In the Arrange phase you set up everything needed to run the tested code. This includes any initialisation of dependencies, mocks and data needed for the test to run.
Based on the above paragraphs, you should not assert anything in your setUp method.
So, as mentioned earlier: if you can't create the test precondition, then your test is broken. To avoid situations like this, Roy Osherove wrote a great book called The Art Of Unit Testing. (For full disclosure: Lior Friedman, who was Roy's boss, is a friend of mine and I worked closely with them for more than 2 years, so I am a little bit biased...)
Basically, there are only a few reasons to interact with external resources during the Arrange phase (or with things which may cause an exception); most of them (if not all of them) are related to integration tests.
Back to your example: there is a pattern to structure the tests where you need to load an external resource (for all/most of them). Just a side note: before you decide to apply this pattern, make sure that you can't keep this content as a static resource in your unit test class; if other test classes need to use this resource, extract it into a module.
The following pattern decreases the possibility of failure, since you make fewer calls to the external resource:
class TestClass(unittest.TestCase):
    @classmethod
    def setUpClass(cls):
        # since external resources such as other servers can provide bad content,
        # you can verify that the content is valid here
        # and prevent the tests from running if it isn't;
        # however, in most cases you shouldn't.
        cls.externalResourceContent = loadContentFromExternalResource()

    def setUp(self):
        # give each test its own copy of the class-level content
        self.content = self.copyContentForTest()
Pros:
fewer chances of failure
prevents inconsistent behaviour (1. something or someone has edited the external resource; 2. you failed to load the external resource in some of your tests)
faster execution
Cons:
the code is more complex
setUp is not for asserting preconditions but creating them. If your test is unable to create the necessary fixture, it is broken, not failing.
From the Python Standard Library Documentation:
"If the setUp() method raises an exception while the test is running,
the framework will consider the test to have suffered an error, and
the runTest() method will not be executed. If setUp() succeeded, the
tearDown() method will be run whether runTest() succeeded or not. Such
a working environment for the testing code is called a fixture."
An assertion exception in the setUp() method would be considered as an error by the unittest framework. The test will not be executed.
There isn't a right or wrong answer here; it depends on what you are testing and how expensive setting up your tests is. Some tests are too dangerous to allow attempted runs if the data isn't as expected; some need to work with that data.
You can use assertions in setUp if you need to check for particular conditions between tests; this can help reduce repeated code in your tests.
However, it also makes moving test methods between classes or files a bit trickier, as they will be reliant on having the equivalent setUp. It can also push the limits of complexity for less code-savvy testers.
It's a bit cleaner to have a test that checks these startup conditions individually and run it first; they might not be needed between every test. If you define it as test_01_check_preconditions it will run before any of the other test methods, even if the rest are in random order.
You can also then use unittest2.skip decorators for certain conditions.
A better approach is to use addCleanup to ensure that state is reset; the advantage here is that the cleanup still runs even if the test fails, and you can make the cleanup more aware of the specific situation, as you define it in the context of your test method.
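A minimal sketch of addCleanup with a temporary file (an illustration, not tied to the question's code):

import os
import tempfile
import unittest

class TempFileTest(unittest.TestCase):
    def setUp(self):
        # create a temporary file for this test...
        handle, self.path = tempfile.mkstemp()
        os.close(handle)
        # ...and register its removal; registered cleanups run even if the test fails
        self.addCleanup(os.remove, self.path)

    def test_file_exists(self):
        self.assertTrue(os.path.exists(self.path))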
There is also nothing to stop you defining methods to do common checks in the unittest class and calling them in setUp or in test methods; this can help keep complexity enclosed in defined and managed areas.
Also, don't be tempted to subclass unittest2 beyond a simple test definition; I've seen people try to do that to make tests simpler and actually introduce totally unexpected behaviour.
I guess the real take-home is: if you do it, know why you want to use it and document your reasons, and it's probably OK; if you are unsure, then go for the simplest, easiest-to-understand option, because tests are useless if they are not easy to understand.
There is one reason why you want to avoid assertions in a setUp().
If setUp fails, your tearDown will not be executed.
If you set up a set of database records, for instance, and your tearDown deletes these records, then these records will not be deleted.
With this snippet:
import unittest

class Test_test2(unittest.TestCase):
    def setUp(self):
        print('setup')
        assert False

    def test_ProcessData(self):
        print('testing')

    def tearDown(self):
        print('teardown')

if __name__ == '__main__':
    unittest.main()
When you run this, only the setUp() executes:
$ python t.py
setup
E
======================================================================
ERROR: test_ProcessData (__main__.Test_test2)
----------------------------------------------------------------------
Traceback (most recent call last):
File "t.py", line 7, in setUp
assert False
AssertionError
----------------------------------------------------------------------
Ran 1 test in 0.000s
FAILED (errors=1)
I am using unittest to test my Flask application, and nose to actually run the tests.
My first set of tests is to ensure the testing environment is clean and prevent running the tests on the Flask app's configured database. I'm confident that I've set up the test environment cleanly, but I'd like some assurance of that without running all the tests.
import unittest

class MyTestCase(unittest.TestCase):
    def setUp(self):
        # set some stuff up
        pass

    def tearDown(self):
        # do the teardown
        pass

class TestEnvironmentTest(MyTestCase):
    def test_environment_is_clean(self):
        # A failing test
        assert 0 == 1

class SomeOtherTest(MyTestCase):
    def test_foo(self):
        # A passing test
        assert 1 == 1
I'd like the TestEnvironmentTest to cause unittest or nose to bail if it fails, and prevent SomeOtherTest and any further tests from running. Is there some built-in method in either unittest (preferred) or nose that allows for that?
In order to get one test to execute first and only halt execution of the other tests in case of an error with that test, you'll need to put a call to the test in setUp() (because Python does not guarantee test order) and then fail or skip the rest on failure.
I like skipTest() because it actually doesn't run the other tests whereas raising an exception seems to still attempt to run the tests.
def setUp(self):
    # set some stuff up
    self.environment_is_clean()

def environment_is_clean(self):
    try:
        # A failing test
        assert 0 == 1
    except AssertionError:
        self.skipTest("Test environment is not clean!")
For your use case there's the setUpModule() function:
If an exception is raised in a setUpModule then none of the tests in
the module will be run and the tearDownModule will not be run. If
the exception is a SkipTest exception then the module will be
reported as having been skipped instead of as an error.
Test your environment inside this function.
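A minimal sketch of that (environment_is_clean() is a placeholder for whatever check you actually need):

import unittest

def environment_is_clean():
    # placeholder: inspect the database, config, etc. here
    return True

def setUpModule():
    # runs once, before any test in this module; if it raises, no tests run
    if not environment_is_clean():
        raise unittest.SkipTest("Test environment is not clean!")

class SomeOtherTest(unittest.TestCase):
    def test_foo(self):
        assert 1 == 1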
You can skip entire test cases by calling skipTest() in setUp(). This is a new feature in Python 2.7. Instead of failing the tests, it will simply skip them all.
I'm not quite sure whether it fits your needs, but you can make the execution of a second suite of unittests conditional on the result of a first suite of unittests:
import unittest

envsuite = unittest.TestSuite()
moretests = unittest.TestSuite()
# fill suites with test cases ...
envresult = unittest.TextTestRunner().run(envsuite)
if envresult.wasSuccessful():
    unittest.TextTestRunner().run(moretests)
Let's say I have the following function:
def f():
    if TESTING:
        # Run expensive sanity check code
        ...
What is the correct way to run the TESTING code block only if we are running a unittest?
[edit: Is there some "global" variable I can access to find out if unittests are on?]
Generally, I'd suggest not doing this. Your production code really shouldn't know that the unit tests exist. One reason for this is that you could have code in your if TESTING block that (accidentally) makes the tests pass, and since production runs of your code won't execute these bits, this could leave you exposed to failure in production even when your tests pass.
However, if you insist of doing this, there are two potential ways (that I can think of) this can be done.
First off, you could use a module-level TESTING var that you set to True in your test case. For example:
Production Code:
TESTING = False # This is false until overridden in tests

def foo():
    if TESTING:
        print("expensive stuff...")

Unit-Test Code:

import production

def test_foo():
    production.TESTING = True
    production.foo() # Prints "expensive stuff..."
The second way is to use Python's built-in assert statement. When Python is run with -O, the interpreter will strip (or ignore) all assert statements in your code, allowing you to sprinkle these expensive gems throughout and know they will not be run if it is executed in optimized mode. Just be sure to run your tests without the -O flag.
Example (Production Code):
def expensive_checks():
    print("expensive stuff...")
    return True

def foo():
    print("normal, speedy stuff.")
    assert expensive_checks()

foo()
Output (run with python mycode.py)
normal, speedy stuff.
expensive stuff...
Output (run with python -O mycode.py)
normal, speedy stuff.
One word of caution about the assert statements... if the assert statement does not evaluate to a true value, an AssertionError will be raised.