In pytest, how can I abort the fixture teardown?

Our pytest environment has a lot of fixtures (mostly scope='function' and scope='module') that are doing something of the form:
@pytest.yield_fixture(scope='function')
def some_fixture():
    ... some object initialization ...
    yield some_object
    ... teardown ...
We use the teardown phase of the fixture (after the yield) to delete some resources created specifically for the test.
However, if a test fails, I don't want the teardown to execute, so that the resources still exist for further debugging.
For example, here is a common scenario that repeats in all of our testing framework:
@pytest.yield_fixture(scope='function')
def obj_fixture():
    obj = SomeObj.create()
    yield obj
    obj.delete()

def test_obj_some_field(obj_fixture):
    assert obj_fixture.some_field is True
In this case, if the assert condition is True I want the obj.delete() to execute.
However, if the test fails, I want pytest to skip the obj.delete() and anything else after the yield.
Thank you.
EDIT
I want this to happen without altering the fixture and test code; I would prefer an automatic mechanism over refactoring our whole testing codebase.

There's an example in the pytest docs about how to do this. The basic idea is that you need to capture this information in a hook function and add it to the test item, which is available on the test request, which is available to fixtures/tests via the request fixture.
For you, it would look something like this:
# conftest.py
import pytest

@pytest.hookimpl(tryfirst=True, hookwrapper=True)
def pytest_runtest_makereport(item, call):
    # execute all other hooks to obtain the report object
    outcome = yield
    rep = outcome.get_result()
    # set a report attribute for each phase of a call, which can
    # be "setup", "call", "teardown"
    setattr(item, "rep_" + rep.when, rep)
# test_obj.py
import pytest

@pytest.fixture()
def obj(request):
    obj = 'obj'
    yield obj
    # setup succeeded, but the test itself ("call") failed
    if request.node.rep_setup.passed and request.node.rep_call.failed:
        print(' dont kill obj here')
    else:
        print(' kill obj here')

def test_obj(obj):
    assert obj == 'obj'
    assert False  # force the test to fail
If you run this with pytest -s (to not let pytest capture output from fixtures), you'll see output like
foobar.py::test_obj FAILED dont kill obj here
which indicates that we're hitting the right branch of the conditional.
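The EDIT asks to avoid refactoring every fixture and test. I am not aware of a fully automatic switch for this, but with the hook above already in conftest.py, the per-fixture change can at least shrink to a single line via a small helper fixture (a sketch; the helper name test_failed is made up here):
# conftest.py, next to the pytest_runtest_makereport hook above
@pytest.fixture
def test_failed(request):
    def _check():
        # rep_call is set by the hook above once the test body has run
        rep = getattr(request.node, 'rep_call', None)
        return rep is not None and rep.failed
    return _check
A fixture's teardown then becomes:
@pytest.fixture
def obj_fixture(test_failed):
    obj = SomeObj.create()
    yield obj
    if not test_failed():
        obj.delete()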

Teardown is intended to be executed regardless of whether a test passed or failed.
So I suggest either writing your teardown code so that it is robust enough to run whether the test passed or failed, or adding the cleanup to the end of your test, so that it is only reached if no preceding assert failed and no exception occurred before it, as in the sketch below.
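A minimal sketch of the second option, reusing SomeObj from the question (a sketch under the question's assumptions, not a drop-in recipe): the delete() call is simply the last line of the test, so it only runs when every assert before it passed.
@pytest.fixture(scope='function')
def obj_fixture():
    # no teardown here; the cleanup moves into the test body
    yield SomeObj.create()

def test_obj_some_field(obj_fixture):
    assert obj_fixture.some_field is True
    obj_fixture.delete()  # only reached if the assert above passed
On failure, the object is left behind for debugging, exactly as the question asks; the downside is that every test has to remember its own cleanup line.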

Set a shared flag as the last line of the test to indicate pass/fail and check it in your teardown. This is not tested, but should give you the idea:
passed = {'current': False}

@pytest.yield_fixture(scope='function')
def obj_fixture():
    passed['current'] = False
    obj = SomeObj.create()
    yield obj
    if passed['current']:
        obj.delete()

def test_obj_some_field(obj_fixture):
    assert obj_fixture.some_field is True
    passed['current'] = True  # only reached if the assert above passed
This relies on the flag assignment being the very last statement of every test.

I use a Makefile to execute pytest, so I had an additional tool at my disposal. I too needed the cleanup of fixtures to happen only on success, so I added the cleanup as a second command to the test target in my Makefile.
clean:
	find . | grep -E "(__pycache__|\.pyc|\.pyo)" | xargs rm -rf
	-rm database.db  # the minus here allows this to fail quietly

database:
	python -m create_database

lint:
	black .
	flake8 .

test: clean lint database
	pytest -x -p no:warnings
	rm -rf tests/mock/fixture_dir

Related

How to pass test status to its teardown, preferably through a fixture

I have a BaseTest class which has a tear_down method, and I want a variable inside tear_down representing whether or not the test has failed.
I have looked at a lot of older posts, but I couldn't get their solutions to work; they were hooks, or a mixture of hooks and fixtures, and something always failed on my end.
What is the best practice for doing that?
The last thing I tried was:
@pytest.hookimpl(tryfirst=True, hookwrapper=True)
def pytest_runtest_makereport(item):
    outcome = yield
    rep = outcome.get_result()
    # set a report attribute for each phase of a call, which can
    # be "setup", "call", "teardown"
    setattr(item, "rep_" + rep.when, rep)
Then I passed the request fixture to the teardown and inside it used:
has_failed = request.node.rep_call.failed
But request had no attributes at all; it was a method.
I also tried:
@pytest.fixture
def has_failed(request):
    yield
    return True if request.node.rep_call.failed else False
and passed it in like this:
def teardown_method(self, has_failed):
And again, no attributes.
Isn't there a simple fixture to just do something like request.test_status?
It's important that the teardown receives that bool saying whether or not the test failed, rather than doing this work outside the teardown.
Thanks!
There doesn't appear to be any super simple built-in way of offering the test report as a fixture. And I see what you mean: most examples of recording the test report are geared toward non-unittest use cases (including the official docs). However, we can adjust these examples to work with unittest TestCases.
There appears to be a private _testcase attribute on the item arg passed to pytest_runtest_makereport, which contains the instance of the TestCase. We can set an attribute on it, which can then be accessed within teardown_method.
# conftest.py
import pytest

@pytest.hookimpl(tryfirst=True, hookwrapper=True)
def pytest_runtest_makereport(item, call):
    outcome = yield
    report = outcome.get_result()
    if report.when == 'call' and hasattr(item, '_testcase'):
        item._testcase.did_pass = report.passed
And here's a dinky little example TestCase:
import unittest

class DescribeIt(unittest.TestCase):
    def setup_method(self, method):
        self.did_pass = None

    def teardown_method(self, method):
        print('\nself.did_pass =', self.did_pass)

    def test_it_works(self):
        assert True

    def test_it_doesnt_work(self):
        assert False
When we run it, we find it prints the proper test failure/success bool
$ py.test --no-header --no-summary -qs
============================= test session starts =============================
collected 2 items
tests/tests.py::DescribeIt::test_it_doesnt_work FAILED
self.did_pass = False
tests/tests.py::DescribeIt::test_it_works PASSED
self.did_pass = True
========================= 1 failed, 1 passed in 0.02s =========================
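From there, conditional cleanup in the teardown is a one-liner. A sketch, where self.resource and its delete() are hypothetical stand-ins for whatever the test created:
def teardown_method(self, method):
    if self.did_pass:
        self.resource.delete()  # hypothetical cleanup; failed tests keep the resource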

How can I test if a pytest fixture raises an exception?

Use case: In a pytest test suite I have a fixture which raises exceptions if command line options for its configuration are missing. I've written a test for this fixture using xfail:
import pytest
from <module> import <exception>

@pytest.mark.xfail(raises=<exception>)
def test_fixture_with_missing_options_raises_exception(rc_visard):
    pass
However, the output after running the tests does not report the test as passed but as "xfailed" instead:
============================== 1 xfailed in 0.15 seconds ========================
In addition to that, I am not able to test if the fixture raises the exception for specific missing command line options.
Is there a better approach to do this? Can I mock the pytest command line options somehow, so that I do not need to invoke specific tests via pytest --<commandline-option-a> <test-file-name>::<test-name>?
initial setup
Suppose you have a simplified project with conftest.py containing the following code:
import pytest

def pytest_addoption(parser):
    parser.addoption('--foo', action='store', dest='foo', default='bar',
                     help='--foo should be always bar!')

@pytest.fixture
def foo(request):
    fooval = request.config.getoption('foo')
    if fooval != 'bar':
        raise ValueError('expected foo to be "bar"; "{}" provided'.format(fooval))
    return fooval
It adds a new command line arg --foo and a fixture foo returning the passed arg, or bar if not specified. If anything besides bar is passed via --foo, the fixture raises a ValueError.
You use the fixture as usual, for example:
def test_something(foo):
    assert foo == 'bar'
Now let's test that fixture.
preparations
In this example, we need to do some simple refactoring first. Move the fixture and related code to a file named something other than conftest.py, for example my_plugin.py:
# my_plugin.py
import pytest

def pytest_addoption(parser):
    parser.addoption('--foo', action='store', dest='foo', default='bar',
                     help='--foo should be always bar!')

@pytest.fixture
def foo(request):
    fooval = request.config.getoption('foo')
    if fooval != 'bar':
        raise ValueError('expected foo to be "bar"; "{}" provided'.format(fooval))
    return fooval
In conftest.py, ensure the new plugin is loaded:
# conftest.py
pytest_plugins = ['my_plugin']
Run the existing test suite to ensure we didn't break anything; all tests should still pass.
activate pytester
pytest provides an extra plugin for writing plugin tests, called pytester. It is not activated by default, so you should do that manually. In conftest.py, extend the plugins list with pytester:
# conftest.py
pytest_plugins = ['my_plugin', 'pytester']
writing the tests
Once pytester is active, you get a new fixture available called testdir. It can generate and run pytest test suites from code. Here's what our first test will look like:
# test_foo_fixture.py
def test_all_ok(testdir):
    testdata = '''
    def test_sample(foo):
        assert True
    '''
    testconftest = '''
    pytest_plugins = ['my_plugin']
    '''
    testdir.makeconftest(testconftest)
    testdir.makepyfile(testdata)
    result = testdir.runpytest()
    result.assert_outcomes(passed=1)
It should be pretty obvious what happens here: we provide the test code as a string, and testdir generates a pytest project from it in a temporary directory. To ensure our foo fixture is available in the generated test project, we pass it in the generated conftest the same way as we do in the real one. testdir.runpytest() starts the test run, producing a result that we can inspect.
Let's add another test that checks whether foo will raise a ValueError:
def test_foo_valueerror_raised(testdir):
    testdata = '''
    def test_sample(foo):
        assert True
    '''
    testconftest = '''
    pytest_plugins = ['my_plugin']
    '''
    testdir.makeconftest(testconftest)
    testdir.makepyfile(testdata)
    result = testdir.runpytest('--foo', 'baz')
    result.assert_outcomes(error=1)
    result.stdout.fnmatch_lines([
        '*ValueError: expected foo to be "bar"; "baz" provided'
    ])
Here we execute the generated tests with --foo baz and verify afterwards that one test ended with an error and that the error output contains the expected error message.

Chaining tests and passing an object from one test to another

I'm trying to pass the result of one test to another in pytest - or more specifically, reuse an object created by the first test in the second test.
This is how I currently do it.
#pytest.fixture(scope="module")
def result_holder:
return []
def test_creation(result_holder):
object = create_object()
assert object.status == 'created' # test that creation works as expected
result_holder.append(object.id) # I need this value for the next test
# ideally this test should only run if the previous test was successful
def test_deletion(result_holder):
previous_id = result_holder.pop()
object = get_object(previous_id) # here I retrieve the object created in the first test
object.delete()
assert object.status == 'deleted' # test for deletion
(before we go further, I'm aware of py.test passing results of one test to another - but the single answer on that question is off-topic, and the question itself is 2 years old)
Using fixtures like this doesn't feel super clean... And the behavior is not clear if the first test fails (although that can be remedied by testing for the content of the fixture, or using something like the incremental fixture in the pytest doc and the comments below). Is there a better/more canonical way to do this?
For sharing data between tests, you could use the pytest namespace or cache.
Namespace
Example of sharing data via the namespace. Declare the shared variable via a hook in conftest.py (note that the pytest_namespace hook has since been deprecated and removed in newer pytest versions, so the cache approach below is the more portable one):
# conftest.py
import pytest

def pytest_namespace():
    return {'shared': None}
Now access and redefine it in tests:
import pytest

def test_creation():
    pytest.shared = 'spam'
    assert True

def test_deletion():
    assert pytest.shared == 'spam'
Cache
The cache is a neat feature because it is persisted on disk between test runs, so it usually comes in handy for reusing results of long-running tasks to save time on repeated test runs, but you can also use it for sharing data between tests. The cache object is available via config. You can access it e.g. via the request fixture:
def test_creation(request):
    request.config.cache.set('shared', 'spam')
    assert True

def test_deletion(request):
    assert request.config.cache.get('shared', None) == 'spam'
ideally this test should only run if the previous test was successful
There is a plugin for that: pytest-dependency. Example:
import pytest

@pytest.mark.dependency()
def test_creation():
    assert False

@pytest.mark.dependency(depends=['test_creation'])
def test_deletion():
    assert True
will yield:
$ pytest -v
============================= test session starts =============================
...
collected 2 items
test_spam.py::test_creation FAILED [ 50%]
test_spam.py::test_deletion SKIPPED [100%]
================================== FAILURES ===================================
________________________________ test_creation ________________________________
def test_creation():
> assert False
E assert False
test_spam.py:5: AssertionError
===================== 1 failed, 1 skipped in 0.09 seconds =====================
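Note that pytest-dependency is a separate plugin and has to be installed first:
$ pip install pytest-dependency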
Use return and then call it later, so it'll look like:
def test_creation():
    object = create_object()
    assert object.status == 'created'
    return object.id  # this doesn't show on stdout, but it hands the id to whatever calls it

def test_update():
    id = test_creation()  # re-runs the first test and returns the fresh object's id
    object = get_object(id)
    object.update()
    assert object.status == 'updated'  # some more tests
If this is what you're thinking of, there you go. Keep in mind that calling test_creation() inside test_update re-executes the first test rather than reusing its earlier result.

Set a variable based on a py.test (testinfra) check output

I am trying to make a testinfra test file more portable; I'd like to use a single file to handle tests for either a prod, dev, or test env.
For this I need to get a value from the remote tested machine, which I do with:
def test_ACD_GRAIN(host):
    grain = host.salt("grains.item", "client_NAME")
    assert grain['client_NAME'] == "test"
I need to use this grain['client_NAME'] value in different parts of the test file, therefore I'd like to store it in a variable.
Is there any way to do this?
There are a lot of ways to share state between tests. To name a few:
Using a session-scoped fixture
Define a fixture with a session scope in which the value is calculated. It will be executed before the first test that uses it runs and will then be cached for the whole test run:
# conftest.py
import pytest

@pytest.fixture(scope='session')
def grain():
    host = ...
    return host.salt("grains.item", "client_NAME")
Just use the fixture as the input argument in tests to access the value:
def test_ACD_GRAIN(grain):
    assert grain['client_NAME'] == "test"
Using pytest namespace
Define an autouse fixture with a session scope, so it is applied automatically once per session and stores the value in the pytest namespace.
# conftest.py
import pytest

def pytest_namespace():
    return {'grain': None}

@pytest.fixture(scope='session', autouse=True)
def grain():
    host = ...
    pytest.grain = host.salt("grains.item", "client_NAME")
It will be executed before the first test runs. In tests, just access pytest.grain to get the value:
import pytest

def test_ACD_GRAIN():
    grain = pytest.grain
    assert grain['client_NAME'] == "test"
pytest cache: reuse values between test runs
If the value does not change between test runs, you can even persist it on disk:
@pytest.fixture
def grain(request):
    grain = request.config.cache.get('grain', None)
    if not grain:
        host = ...
        grain = host.salt("grains.item", "client_NAME")
        request.config.cache.set('grain', grain)
    return grain
Now the tests won't need to recalculate the value on different test runs unless you clear the cache on disk:
$ pytest
...
$ pytest --cache-show
...
grain contains:
  'spam'
Rerun the tests with the --cache-clear flag to delete the cache and force the value to be recalculated.

How to dynamically add new fixtures to a test based on the fixture signature of a test

What I would like to achieve is mocking functions in various modules automatically with pytest. So I defined this in my conftest.py:
import sys
import __builtin__
from itertools import chain

import pytest
from mock import Mock

# Fixture factory magic START
NORMAL_MOCKS = [
    "logger", "error", "logging", "base_error", "partial"]
BUILTIN_MOCKS = ["exit"]

def _mock_factory(name, builtin):
    def _mock(monkeypatch, request):
        module = __builtin__ if builtin else request.node.module.MODULE
        ret = Mock()
        monkeypatch.setattr(module, name, ret)
        return ret
    return _mock

iterable = chain(
    ((el, False) for el in NORMAL_MOCKS),
    ((el, True) for el in BUILTIN_MOCKS))

for name, builtin in iterable:
    fname = "mock_{name}".format(name=name)
    _tmp_fn = pytest.fixture(name=fname)(_mock_factory(name, builtin))
    _tmp_fn.__name__ = fname
    setattr(
        sys.modules[__name__],
        "mock_{name}".format(name=name), _tmp_fn)
# Fixture factory magic END
This works and all, but I would like to omit the usage of the NORMAL_MOCKS and BUILTIN_MOCKS lists. So basically in a pytest hook I should be able to see that, say, there is a mock_foo fixture which is not registered yet, create a mock for it with the factory, and register it. I just couldn't figure out how to do this. I was looking into the pytest_runtest_setup function, but could not figure out how to do the actual fixture registration. So I would like to know with which hook/call I can register new fixture functions programmatically from this hook.
One of the ways is to parameterize the tests at the collection/generation stage, i.e. before the test execution begins: https://docs.pytest.org/en/latest/example/parametrize.html
# conftest.py
import pytest

def mock_factory(name):
    return name

def pytest_generate_tests(metafunc):
    for name in metafunc.fixturenames:
        if name.startswith('mock_'):
            metafunc.parametrize(name, [mock_factory(name[5:])])

# test_me.py
def test_me(request, mock_it):
    print(mock_it)
A very simple solution. But the downside is that the test is reported as parametrized when it actually is not:
$ pytest -s -v -ra
====== test session starts ======
test_me.py::test_me[it] PASSED
====== 1 passed in 0.01 seconds ======
To fully simulate the function args without the parametrization, you can use a less obvious trick:
# conftest.py
import pytest

def mock_factory(name):
    return name

@pytest.hookimpl(hookwrapper=True)
def pytest_runtest_protocol(item, nextitem):
    for name in item.fixturenames:
        if name.startswith('mock_') and name not in item.funcargs:
            item.funcargs[name] = mock_factory(name[5:])
    yield
The pytest_runtest_setup hook is also a good place for this, as far as I can tell from a quick try.
Note that you do not register the fixture in that case. It is too late for fixture registration, as all the fixtures are gathered and prepared much earlier at the collection/parametrization stages. At this stage, you can only execute the tests and provide the values. It is your responsibility to calculate the fixture values and to destroy them afterward.
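For instance, a sketch of taking on that responsibility inside the same hookwrapper, tracking what was injected and dropping it once the test is done (real mock objects would additionally need their own release/close calls here):
# conftest.py
import pytest

def mock_factory(name):
    return name

@pytest.hookimpl(hookwrapper=True)
def pytest_runtest_protocol(item, nextitem):
    created = []
    for name in item.fixturenames:
        if name.startswith('mock_') and name not in item.funcargs:
            item.funcargs[name] = mock_factory(name[5:])
            created.append(name)
    yield  # the test runs here
    # we provided these values ourselves, so we clean them up ourselves
    for name in created:
        item.funcargs.pop(name, None)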
The snippet below is a pragmatic solution to "how to dynamically add fixtures".
Disclaimer: I don't have expertise on pytest. I'm not saying this is what pytest was designed for, I just looked at the source code and came up with this and it seems to work. The fact that I use "private" attributes means it might not work with all versions (currently I'm on pytest 7.1.3)
from _pytest.fixtures import FixtureDef
from _pytest.fixtures import SubRequest
import pytest

# autouse is relevant, as the fixture registration then happens in time; it is too late if
# the fixture is required without autouse, e.g. via @pytest.mark.usefixtures("add_fixture_dynamically")
@pytest.fixture(autouse=True)
def add_fixture_dynamically(request: SubRequest):
    """
    Conditionally and dynamically adds another fixture. It's conditional on the presence of:
    @pytest.mark.my_mark()
    """
    marker = request.node.get_closest_marker("my_mark")
    # don't register the fixture if the marker is not present:
    if marker is None:
        return

    def your_fixture():  # the name of the fixture must match the parameter name, like other fixtures
        return "hello"

    # register the fixture just-in-time
    request._fixturemanager._arg2fixturedefs[your_fixture.__name__] = [
        FixtureDef(
            argname=your_fixture.__name__,
            func=your_fixture,
            scope="function",
            fixturemanager=request._fixturemanager,
            baseid=None,
            params=None,
        ),
    ]

    yield  # runs the test. Could be wrapped in try/except/finally
# suppress warning (works if this and `add_fixture_dynamically` are in `conftest.py`)
def pytest_configure(config):
    """Prevents printing of the warning 'PytestUnknownMarkWarning: Unknown pytest.mark.<fixture_name>'"""
    config.addinivalue_line("markers", "my_mark")

@pytest.mark.my_mark()
def test_adding_fixture_dynamically(your_fixture):
    assert your_fixture == "hello"
