Retrieve result of nose test case and use in teardown - python

In nose, the teardown runs regardless of whether setup completed successfully and regardless of the status of the test run.
I want to perform a task in teardown that is only executed if the test that just ran failed. Is there an easy way to retrieve the result of each individual test case and pass it to the teardown method to be interpreted?
class TestMyProgram:
    def setup(self):
        # setup code here
        pass

    def teardown(self):
        # teardown code here
        # run this code only if the test failed
        if test_result == 'FAIL':
            # do something
            pass

    def test_one(self):
        # example test placeholder
        pass

    def test_two(self):
        # example test placeholder
        pass

You have to capture the state of the test and pass it on to your teardown method. The state of the test lives inside nose's own code: you cannot access it without writing a nose plugin, and even with a plugin you would have to write a custom rig to pass the state on to the teardown method. But if you are willing to break the structure of the code a little to accommodate your request, you might be able to do something like this:
def special_teardown(self, state):
    # only state-specific logic goes here
    print(state)

def test_one_with_passing_state(self):
    try:
        self.test_one()
    except AssertionError:
        test_result = "FAIL"
        self.special_teardown(test_result)
        raise
It's not perfect, but it makes the flow of events obvious to other people reading your tests. You can also wrap it up as a decorator or context manager for more syntactic sugar.
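For example, a decorator version might look like the following sketch (report_failure_to_teardown is a made-up name; special_teardown is the method from above):

import functools

def report_failure_to_teardown(test_func):
    # Call special_teardown("FAIL") when the wrapped test raises an
    # AssertionError, then re-raise so nose still records the failure.
    @functools.wraps(test_func)
    def wrapper(self, *args, **kwargs):
        try:
            return test_func(self, *args, **kwargs)
        except AssertionError:
            self.special_teardown("FAIL")
            raise
    return wrapper

class TestMyProgram:
    def special_teardown(self, state):
        print(state)

    @report_failure_to_teardown
    def test_one(self):
        assert True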

Related

Pytest: accessing a fixture from an outside function

I would like to know if there is a way to access a pytest fixture from functions located in other modules. I wonder if I could use custom pytest hooks for this.
# test/test_file.py
import pytest

@pytest.fixture
def my_fixture():
    return my_object

def test_function():
    print('Starting test')
    my_function('arg1')


# lib/functions.py
def my_function(arg1):
    # I would like to have access to my_fixture here
    pass
Since it's possible to access fixtures in pytest hooks during the test session, I wonder if there is a way to access them from outside of the test functions as well.
So far this solution works for me:
@pytest.hookimpl(hookwrapper=True)
def pytest_runtest_makereport(item, call):
    outcome = yield
    rep = outcome.get_result()
    if rep.when == 'call' and hasattr(item, 'callspec'):
        test_case = item.callspec.params.get('test_case')
Here I can access my_fixture, but only inside pre-defined hooks that are part of pytest's standard flow. I wonder if I could create my own hooks and then make my functions "aware" of them to get access to anything defined inside.
How do I define and use custom hooks?
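One way to do this (a sketch, not the only approach: hooks.py, pytest_report_fixture_value and the wiring below are made-up names for illustration, and hooks.py is assumed to sit next to conftest.py) is to declare the new hook in a hookspec module, register it from conftest.py with pytest_addhooks, and then call it through config.hook. Implementations of the hook can then live in any conftest.py or plugin:

# hooks.py - hook specifications: only the signatures matter here
def pytest_report_fixture_value(item, value):
    """Custom hook, called from the wrapper below with a fixture value (illustrative)."""


# conftest.py
import pytest
import hooks

def pytest_addhooks(pluginmanager):
    # Make the new hook name known to pytest so implementations are accepted.
    pluginmanager.add_hookspecs(hooks)

@pytest.hookimpl(hookwrapper=True)
def pytest_runtest_makereport(item, call):
    outcome = yield
    rep = outcome.get_result()
    if rep.when == 'call':
        # item.funcargs holds the fixture values the test actually requested.
        value = item.funcargs.get('my_fixture')
        # Hooks are always called with keyword arguments.
        item.config.hook.pytest_report_fixture_value(item=item, value=value)

# An implementation of the custom hook, e.g. also in conftest.py:
def pytest_report_fixture_value(item, value):
    print(item.nodeid, value)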

pytest unittest custom decorator change unittest execution order

I am using pytest to test firmware functionality on a microcontroller. The interface to it is irrelevant for this conversation.
As such, I have a number of checks which I include at the end of every unittest to ensure that the microcontroller is still in a valid state.
Now, these checks are common to every unittest, to be executed at the end of execution of each unittest. I thought it would make sense to use a decorator for this and decorate each unittest instead of duplicating code.
This works, except that unittest execution is no longer in sequence: tests now seem to be executed in a random order instead of the order in which they were defined, which was the behavior before I introduced the decorator.
Could someone please tell me why this is happening?
Example pseudocode:
def check_no_err(func):
    def checker(api, *args, **kwargs):
        func(api, *args, **kwargs)
        assert 1  # <some checker code here, run after the unittest>
    return checker

@check_no_err
def test_eval_status(api):
    assert 1  # unittest body here
edit: I am using a fixture with function scope to achieve the same behavior, which gives a predictable unit test execution order. Like this:
@pytest.fixture(scope='function')
def test_end_checker(api):
    assert 1  # the check to run after every test which includes this fixture
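A yield-style variant makes it explicit that the check runs after the test body; a minimal sketch (end_of_test_checker is a made-up name, api is the fixture from the example, and the asserts are placeholders):

import pytest

@pytest.fixture
def end_of_test_checker(api):
    # Anything before the yield runs before the test.
    yield
    # Anything after the yield runs once the test body has finished,
    # so the microcontroller state checks can go here.
    assert 1  # <post-test checks using `api` go here>

def test_eval_status(api, end_of_test_checker):
    assert 1  # unittest body here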
So the only remaining question is why execution order is being randomized due to the additional decorator.

Disable autouse fixtures on specific pytest marks

Is it possible to prevent the execution of "function scoped" fixtures with autouse=True on specific marks only?
I have the following fixture set to autouse so that all outgoing requests are automatically mocked out:
@pytest.fixture(autouse=True)
def no_requests(monkeypatch):
    monkeypatch.setattr("requests.sessions.Session.request", MagicMock())
But I have a mark called endtoend that I use to define a series of tests that are allowed to make external requests for more robust end-to-end testing. I would like to inject no_requests into all tests (the vast majority), but not into tests like the following:
@pytest.mark.endtoend
def test_api_returns_ok():
    assert make_request().status_code == 200
Is this possible?
You can also use the request object in your fixture to check the markers used on the test, and do nothing if a specific marker is set:
import pytest

@pytest.fixture(autouse=True)
def autofixt(request):
    if 'noautofixt' in request.keywords:
        return
    print("patching stuff")

def test1():
    pass

@pytest.mark.noautofixt
def test2():
    pass
Output with -vs:
x.py::test1 patching stuff
PASSED
x.py::test2 PASSED
In case you have your endtoend tests in specific modules or classes, you could also just override the no_requests fixture locally, for example assuming you group all your end-to-end tests in a file called test_end_to_end.py:
# test_end_to_end.py
@pytest.fixture(autouse=True)
def no_requests():
    return

def test_api_returns_ok():
    # Should make a real request.
    assert make_request().status_code == 200
I wasn't able to find a way to disable fixtures with autouse=True, but I did find a way to revert the changes made in my no_requests fixture. monkeypatch has a method undo that reverts all patches made on the stack, so I was able to call it in my endtoend tests like so:
@pytest.mark.endtoend
def test_api_returns_ok(monkeypatch):
    monkeypatch.undo()
    assert make_request().status_code == 200
It would be difficult and probably not possible to cancel or change the autouse
You can't cancel an autouse fixture precisely because it is autouse. Maybe you could do something to change the autouse fixture based on a mark's condition, but this would be hackish and difficult.
possibly with:
import pytest
from _pytest.mark import MarkInfo
I couldn't find a way to do this, but maybe the @pytest.fixture(autouse=True) could get the MarkInfo, and if it came back 'endtoend' the fixture wouldn't set the attribute. But you would also have to set a condition in the fixture parameters, i.e. @pytest.fixture(True=MarkInfo, autouse=True). Something like that. But I couldn't find a way.
It's recommended that you organize tests to prevent this
You could just separate the no_requests fixture from the endtoend tests by either:
limiting the scope of your autouse fixture
putting no_requests into a class
not making it autouse at all and instead passing it into the parameters of each test that needs it (see the sketch after the class example below)
Like so:
class NoRequests:
    @pytest.fixture(scope='module', autouse=True)
    def no_requests(self, monkeypatch):
        monkeypatch.setattr("requests.sessions.Session.request", MagicMock())

    def test_no_request1(self):
        # do stuff here
        pass
    # and so on
This is good practice. Maybe a different organization could help
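A sketch of the last option from the list above, dropping autouse=True and requesting no_requests explicitly in the tests that need it (test_internal_logic is a made-up name; make_request comes from the question):

import pytest
from unittest.mock import MagicMock

@pytest.fixture
def no_requests(monkeypatch):
    # Same patch as before, but only applied when a test asks for it.
    monkeypatch.setattr("requests.sessions.Session.request", MagicMock())

def test_internal_logic(no_requests):
    # Outgoing requests are mocked here because the fixture was requested.
    pass

@pytest.mark.endtoend
def test_api_returns_ok():
    # The fixture is not requested, so this test talks to the real service.
    assert make_request().status_code == 200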
But in your case, it's probably easiest to use monkeypatch.undo(), as shown above.

Does setUp method from unittest.TestCase know the current test case?

Does the setUp method from unittest.TestCase know which test case will be executed next? For example:
import unittest

class Tests(unittest.TestCase):
    def setUp(self):
        print("The next test will be: " + self.next_test_name())

    def test_01(self):
        pass

    def test_02(self):
        pass

if __name__ == '__main__':
    unittest.main()
Such code should produce upon execution:
The next test will be: test_01
The next test will be: test_02
No, unittest makes no guarantee about the order of test execution.
Moreover, you should structure your unit tests not to rely on any particular order. If they require some state setup from another test method then you no longer have, by definition, a unit test.
The name of the test method that is about to be executed lives in self._testMethodName, but use it at your own risk: accessing _private attributes is subject to breakage without warning. Do not use this to customise setUp based on the particular test method; prefer to split tests requiring a different setup off into separate test classes.
class Tests(unittest.TestCase):
    def setUp(self):
        print("The next test will be: " + self.id())

    def test_01(self):
        pass

    def test_02(self):
        pass

if __name__ == '__main__':
    unittest.main()
Which will produce:
The next test will be: __main__.Tests.test_01
The next test will be: __main__.Tests.test_02
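If you want just the bare method name rather than the full test id, the private attribute mentioned above works the same way; a sketch, with the usual caveat that _testMethodName is an implementation detail:

import unittest

class Tests(unittest.TestCase):
    def setUp(self):
        # _testMethodName is set by unittest before setUp runs.
        print("The next test will be: " + self._testMethodName)

    def test_01(self):
        pass

if __name__ == '__main__':
    unittest.main()

which prints "The next test will be: test_01".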

Skip unittest test without decorator syntax

I have a suite of tests that I have loaded using TestLoader's (from the unittest module) loadTestsFromModule() method, i.e.,
suite = loader.loadTestsFromModule(module)
This gives me a perfectly ample list of tests that works fine. My problem is that the test harness I'm working with sometimes needs to skip certain tests based on various criteria. What I want to do is something like this:
for test in suite:
mark the test as 'to-skip' if it meets certain criteria
Note that I can't just remove the test from the list of tests because I want the unittest test runner to actually skip the tests, add them to the skipped count, and all of that jazz.
The unittest documentation suggests using decorators on the test methods or classes. Since I'm loading these tests from a module and deciding whether to skip based on criteria not contained within the tests themselves, I can't really use decorators. Is there a way I can iterate over each individual test and somehow mark it as a "to-skip" test without having to directly access the test class or the methods within the class?
Using unittest.TestCase.skipTest:
import unittest

class TestFoo(unittest.TestCase):
    def setUp(self): print('setup')
    def tearDown(self): print('teardown')
    def test_spam(self): pass
    def test_egg(self): pass
    def test_ham(self): pass

if __name__ == '__main__':
    import sys
    loader = unittest.loader.defaultTestLoader
    runner = unittest.TextTestRunner(verbosity=2)
    suite = loader.loadTestsFromModule(sys.modules['__main__'])
    for ts in suite:
        for t in ts:
            if t.id().endswith('am'):  # To skip `test_spam` and `test_ham`
                setattr(t, 'setUp', lambda: t.skipTest('criteria'))
    runner.run(suite)
prints
test_egg (__main__.TestFoo) ... setup
teardown
ok
test_ham (__main__.TestFoo) ... skipped 'criteria'
test_spam (__main__.TestFoo) ... skipped 'criteria'
----------------------------------------------------------------------
Ran 3 tests in 0.001s
OK (skipped=2)
UPDATE
Updated the code to patch setUp instead of the test method; otherwise, the setUp/tearDown methods would still be executed for tests that are meant to be skipped.
NOTE
unittest.TestCase.skipTest (test skipping) was introduced in Python 2.7 and 3.1, so this approach only works in Python 2.7+ and 3.1+.
This is a bit of a hack, but because you only need to raise unittest.SkipTest you can walk through your suite and modify each test to raise it for you instead of running the actual test code:
import unittest
from unittest import SkipTest

class MyTestCase(unittest.TestCase):
    def test_this_should_skip(self):
        pass

    def test_this_should_get_skipped_too(self):
        pass

def _skip_test(reason):
    raise SkipTest(reason)

if __name__ == '__main__':
    suite = unittest.TestLoader().loadTestsFromTestCase(MyTestCase)
    for test in suite:
        skipped_test_method = lambda: _skip_test("reason")
        setattr(test, test._testMethodName, skipped_test_method)
    unittest.TextTestRunner(verbosity=2).run(suite)
When I run this, this is the output I get:
test_this_should_get_skipped_too (__main__.MyTestCase) ... skipped 'reason'
test_this_should_skip (__main__.MyTestCase) ... skipped 'reason'
----------------------------------------------------------------------
Ran 2 tests in 0.000s
OK (skipped=2)
Google brought me here.
I found the easiest way to do this is by raising a SkipTest exception when your skip criteria are met.
from unittest.case import SkipTest

def test_this_foo(self):
    if <skip condition>:
        raise SkipTest
And that test will be marked as skipped.
Some observations:
A test is a callable object with a __call__(result) method
TestCase provides a higher-level interface, allowing test methods to throw a SkipTest exception to skip themselves
The skip decorators do exactly this
Skipped tests are recorded by calling the TestResult.addSkip(test, reason) method.
So you just need to replace the to-be-skipped tests with a custom test that calls addSkip:
class Skipper(object):
    def __init__(self, test, reason):
        self.test = test
        self.reason = reason

    def __call__(self, result):
        result.addSkip(self.test, self.reason)
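A hypothetical usage sketch, rebuilding the suite and swapping the tests to be skipped for Skipper objects (should_skip is a placeholder for whatever criteria the harness applies). Note that this minimal Skipper never calls result.startTest/stopTest, so the runner's "Ran N tests" line will not count the skipped entries, although they do appear in the skipped summary:

import unittest

loader = unittest.defaultTestLoader
filtered = unittest.TestSuite()
for test in loader.loadTestsFromTestCase(MyTestCase):
    if should_skip(test):  # placeholder for your criteria
        filtered.addTest(Skipper(test, "excluded by harness criteria"))
    else:
        filtered.addTest(test)

unittest.TextTestRunner(verbosity=2).run(filtered)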
