I'm trying to pass the result of one test to another in pytest - or more specifically, reuse an object created by the first test in the second test.
This is how I currently do it.
@pytest.fixture(scope="module")
def result_holder():
    return []
def test_creation(result_holder):
    object = create_object()
    assert object.status == 'created'  # test that creation works as expected
    result_holder.append(object.id)  # I need this value for the next test

# ideally this test should only run if the previous test was successful
def test_deletion(result_holder):
    previous_id = result_holder.pop()
    object = get_object(previous_id)  # here I retrieve the object created in the first test
    object.delete()
    assert object.status == 'deleted'  # test for deletion
(before we go further, I'm aware of py.test passing results of one test to another - but the single answer on that question is off-topic, and the question itself is 2 years old)
Using fixtures like this doesn't feel super clean, and the behavior is not clear if the first test fails (although that can be remedied by checking the content of the fixture, or by using something like the incremental example in the pytest docs, as mentioned in the comments below). Is there a better/more canonical way to do this?
For sharing data between tests, you could use the pytest namespace or cache.
Namespace
Example of sharing data via the namespace. Declare the shared variable via a hook in conftest.py (note that the pytest_namespace hook was deprecated in pytest 3.7 and removed in pytest 4.0, so on modern pytest prefer the cache below or a session-scoped fixture):
# conftest.py
import pytest
def pytest_namespace():
    return {'shared': None}
Now access and redefine it in tests:
import pytest
def test_creation():
    pytest.shared = 'spam'
    assert True

def test_deletion():
    assert pytest.shared == 'spam'
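On pytest 4.0 and later, where the namespace hook is gone, a rough equivalent is a plain shared module; this is a minimal sketch (shared_state is a made-up module name, not a pytest API):

# shared_state.py -- a plain module used as the shared namespace
shared = None

# test_spam.py
import shared_state

def test_creation():
    shared_state.shared = 'spam'

def test_deletion():
    assert shared_state.shared == 'spam'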
Cache
The cache is a neat feature because it is persisted on disk between test runs, so it usually comes in handy for reusing results of long-running tasks to save time on repeated test runs, but you can also use it for sharing data between tests. The cache object is available via config; you can access it e.g. via the request fixture:
def test_creation(request):
    request.config.cache.set('shared', 'spam')
    assert True

def test_deletion(request):
    assert request.config.cache.get('shared', None) == 'spam'
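Applied to the create/delete scenario from the question, a sketch (create_object and get_object are the question's hypothetical helpers; note that cache values must be JSON-serializable, hence storing the id rather than the object):

def test_creation(request):
    obj = create_object()
    assert obj.status == 'created'
    # store only the id, since cache values must be JSON-serializable
    request.config.cache.set('created_id', obj.id)

def test_deletion(request):
    previous_id = request.config.cache.get('created_id', None)
    assert previous_id is not None  # fails fast if test_creation didn't store an id
    obj = get_object(previous_id)
    obj.delete()
    assert obj.status == 'deleted'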
ideally this test should only run if the previous test was successful
There is a plugin for that: pytest-dependency. Example:
import pytest
@pytest.mark.dependency()
def test_creation():
    assert False

@pytest.mark.dependency(depends=['test_creation'])
def test_deletion():
    assert True
will yield:
$ pytest -v
============================= test session starts =============================
...
collected 2 items
test_spam.py::test_creation FAILED [ 50%]
test_spam.py::test_deletion SKIPPED [100%]
================================== FAILURES ===================================
________________________________ test_creation ________________________________
def test_creation():
> assert False
E assert False
test_spam.py:5: AssertionError
===================== 1 failed, 1 skipped in 0.09 seconds =====================
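Putting the pieces together for the original create/delete scenario, an untested sketch combining pytest-dependency with the module-scoped fixture from the question (create_object and get_object again being the question's hypothetical helpers):

import pytest

@pytest.fixture(scope="module")
def result_holder():
    return []

@pytest.mark.dependency()
def test_creation(result_holder):
    obj = create_object()
    assert obj.status == 'created'
    result_holder.append(obj.id)

@pytest.mark.dependency(depends=['test_creation'])
def test_deletion(result_holder):
    obj = get_object(result_holder.pop())
    obj.delete()
    assert obj.status == 'deleted'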
Use return and then call it later, so it'll look like:
def test_creation():
    object = create_object()
    assert object.status == 'created'
    return object.id  # this doesn't show on stdout, but it will hand the id to whatever calls it

def test_update():
    object_id = test_creation()  # call the first test to get the id
    object = get_object(object_id)
    object.update()
    assert object.status == 'updated'  # some more tests
If this is what you're thinking of, there you go. (Note that recent pytest versions warn when a test function returns a non-None value, so this pattern is discouraged.)
Related
I have a BaseTest class which has tear_down, and I want a variable inside tear_down representing whether or not the test has failed.
I tried looking at A LOT of older posts, but I couldn't implement them, as they were hooks or a mixture of hooks and fixtures, and something did not work on my end.
What is the best practice for doing that?
Last thing I've tried was -
@pytest.hookimpl(tryfirst=True, hookwrapper=True)
def pytest_runtest_makereport(item):
    outcome = yield
    rep = outcome.get_result()
    # set a report attribute for each phase of a call, which can
    # be "setup", "call", "teardown"
    setattr(item, "rep_" + rep.when, rep)
Then I passed the request fixture to the teardown and inside used:
has_failed = request.node.rep_call.failed
But request had no such attributes at all; it was a method.
Also tried -
@pytest.fixture
def has_failed(request):
    yield
    return True if request.node.rep_call.failed else False
and passed it like that:
def teardown_method(self, has_failed):
And again, no attributes.
Isn't there a simple fixture to just do something like request.test_status?
It's important that the teardown receives that bool parameter indicating whether or not the test failed, rather than doing this work outside the teardown.
Thanks!
There doesn't appear to be any super simple built-in way of getting the test report as a fixture. And I see what you mean: most examples of recording the test report are geared toward non-unittest use cases (including the official docs). However, we can adjust these examples to work with unittest TestCases.
There appears to be a private _testcase attribute on the item arg passed to pytest_runtest_makereport, which contains the instance of the TestCase. We can set an attribute on it, which can then be accessed within teardown_method.
# conftest.py
import pytest

@pytest.hookimpl(tryfirst=True, hookwrapper=True)
def pytest_runtest_makereport(item, call):
    outcome = yield
    report = outcome.get_result()
    if report.when == 'call' and hasattr(item, '_testcase'):
        item._testcase.did_pass = report.passed
And here's a dinky little example TestCase
import unittest

class DescribeIt(unittest.TestCase):
    def setup_method(self, method):
        self.did_pass = None

    def teardown_method(self, method):
        print('\nself.did_pass =', self.did_pass)

    def test_it_works(self):
        assert True

    def test_it_doesnt_work(self):
        assert False
When we run it, we find it prints the proper test failure/success bool
$ py.test --no-header --no-summary -qs
============================= test session starts =============================
collected 2 items
tests/tests.py::DescribeIt::test_it_doesnt_work FAILED
self.did_pass = False
tests/tests.py::DescribeIt::test_it_works PASSED
self.did_pass = True
========================= 1 failed, 1 passed in 0.02s =========================
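For plain pytest test functions (not unittest.TestCase classes), the same report can be read in an autouse fixture's teardown phase. A sketch, assuming the setattr(item, "rep_" + rep.when, rep) makereport hook from earlier in this thread is present in conftest.py:

import pytest

@pytest.fixture(autouse=True)
def report_status(request):
    yield
    # rep_call is attached by the makereport hook once the test body finished
    if hasattr(request.node, 'rep_call'):
        print('\ndid_pass =', request.node.rep_call.passed)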
I've created a pytest fixture which gets a token. When a test which uses this fixture fails, the token is printed in the logs. On the one hand that is not helpful; on the other hand it is a security issue.
How can I prevent the fixture's content from being printed?
MVCE
import pytest
@pytest.fixture
def token():
    yield "secret"

def test_foo(token):
    assert False
this shows the "secret" in the failure report.
The easiest solution is to change the traceback format, e.g. pytest --tb=short will omit printing test function args. This can also be persisted in pytest.ini, effectively modifying the default pytest invocation:
[pytest]
addopts = --tb=short
However, you can also customize the output by extending pytest.
Technically, everything pytest prints to terminal is contained in TestReport, so you can modify the report object after the test finishes, but before the failure summary is printed. Example code, to be put in a conftest.py in the project or tests root dir:
def pytest_runtest_logreport(report):
    if report.longrepr is None:
        return
    for tb_repr, *_ in report.longrepr.chain:
        for entry in tb_repr.reprentries:
            if entry.reprfuncargs is not None:
                args = entry.reprfuncargs.args
                for idx, (name, value) in enumerate(args):
                    if name == "token":
                        args[idx] = (name, "********")
            if entry.reprlocals is not None:
                lines = entry.reprlocals.lines
                for idx, line in enumerate(lines):
                    if line.startswith("token"):
                        lines[idx] = "token = '*********'"
Although clumsy and untested, this demonstrates the approach: get the traceback info stored in the report; if an entry has reprfuncargs available (this contains the values of all test function arguments, including fixtures), modify the token value if present. Do the same for reprlocals (those are the f_locals of the recorded frame and are printed when you invoke e.g. pytest --showlocals).
When running the test now, you should get the modified error output like
===== FAILURES =====
_____ test_foo _____
token = ********
def test_foo(token):
> assert False
E assert False
The pytest_runtest_logreport hook is used to postprocess the report object created in pytest_runtest_makereport, before the actual reporting starts.
An approach which works well enough, adapted from here:
import pytest
class Secret:
    def __init__(self, value):
        self.value = value

    def __repr__(self):
        return "Secret(********)"

    def __str__(self):
        return "*******"

def get_from_vault(key):
    return "something looked up in vault"

@pytest.fixture(scope='session')
def password():
    return Secret(get_from_vault("key_in_value"))

def login(username, password):
    pass

def test_using_password(password):
    # reference the value directly in a function
    login("username", password.value)
    # If you use the value directly in an assert, it'll still log if it fails
    assert "something looked up in vault" == password.value
    # but won't be printed here
    assert False
This isn't perfect, but it is simpler. This is the output:
==================================================================================== FAILURES ====================================================================================
______________________________________________________________________________ test_using_password _______________________________________________________________________________
password = Secret(********)
def test_using_password(password):
# reference the value directly in a function
login("username", password.value)
# If you use the value directly in an assert, it'll still log if it fails
assert "something looked up in vault" == password.value
# but won't be printed here
> assert False
E assert False
test_stuff.py:31: AssertionError
============================================================================ short test summary info =============================================================================
FAILED test_stuff.py::test_using_password - assert False
You can also follow an approach as suggested here: https://github.com/pytest-dev/pytest/issues/8613#issuecomment-830011874
Wrap the value in an object that won't allow it to escape unintentionally (__str__ and __repr__ return obfuscated values)
Use that wrapper in place of a raw string, unpacking it only where needed.
Our pytest environment has a lot of fixtures (mostly scope='function' and scope='module') that are doing something of the form:
@pytest.yield_fixture(scope='function')
def some_fixture():
    ... some object initialization ...
    yield some_object
    ... teardown ...
We use the teardown phase of the fixture (after the yield) to delete some resources created specifically for the test.
However, if a test fails, I don't want the teardown to execute, so that the resources still exist for further debugging.
For example, here is a common scenario that repeats in all of our testing framework:
@pytest.yield_fixture(scope='function')
def obj_fixture():
    obj = SomeObj.create()
    yield obj
    obj.delete()

def test_obj_some_field(obj_fixture):
    assert obj_fixture.some_field is True
In this case, if the condition in the assert is True I want the obj.delete() to execute.
However, if the test is failing, I want pytest to skip the obj.delete() and anything else after the yield.
Thank you.
EDIT
I want this done without altering the fixture and test code; I prefer an automatic process instead of doing this refactor across our whole testing codebase.
There's an example in the pytest docs about how to do this. The basic idea is that you need to capture this information in a hook function and add it to the test item, which is available on the test request, which is available to fixtures/tests via the request fixture.
For you, it would look something like this:
# conftest.py
import pytest
@pytest.hookimpl(tryfirst=True, hookwrapper=True)
def pytest_runtest_makereport(item, call):
    # execute all other hooks to obtain the report object
    outcome = yield
    rep = outcome.get_result()
    # set a report attribute for each phase of a call, which can
    # be "setup", "call", "teardown"
    setattr(item, "rep_" + rep.when, rep)
# test_obj.py
import pytest
@pytest.fixture()
def obj(request):
    obj = 'obj'
    yield obj
    # setup succeeded, but the test itself ("call") failed
    if request.node.rep_setup.passed and request.node.rep_call.failed:
        print(' dont kill obj here')
    else:
        print(' kill obj here')

def test_obj(obj):
    assert obj == 'obj'
    assert False  # force the test to fail
If you run this with pytest -s (to not let pytest capture output from fixtures), you'll see output like
foobar.py::test_obj FAILED dont kill obj here
which indicates that we're hitting the right branch of the conditional.
Teardown is intended to be executed independently of whether a test passed or failed.
So I suggest either writing your teardown code such that it is robust enough to run whether the test passed or failed, or adding the cleanup to the end of your test, so that it will only be called if no preceding assert failed and no exception occurred before.
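A minimal sketch of the second suggestion, reusing SomeObj from the question; the delete only runs if every assert before it passed:

def test_obj_some_field():
    obj = SomeObj.create()
    assert obj.some_field is True
    obj.delete()  # never reached when the assert above fails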
Set a module-level flag to indicate pass/fail and check that in your teardown. This is not tested, but should give you the idea:
passed = False

@pytest.yield_fixture(scope='function')
def obj_fixture():
    global passed
    passed = False
    obj = SomeObj.create()
    yield obj
    if passed:
        obj.delete()

def test_obj_some_field(obj_fixture):
    global passed
    assert obj_fixture.some_field is True
    passed = True
I use a Makefile to execute pytest, so I had an additional tool at my disposal. I too needed the cleanup of fixtures to happen only on success, so I added the cleanup as a second command to the test target in my Makefile.
clean:
	find . | grep -E "(__pycache__|\.pyc|\.pyo)" | xargs rm -rf
	-rm database.db # the minus here allows this to fail quietly

database:
	python -m create_database

lint:
	black .
	flake8 .

test: clean lint database
	pytest -x -p no:warnings
	rm -rf tests/mock/fixture_dir
I am trying to make a testinfra test file more portable. I'd like to use a single file to handle tests for a prod, dev, or test env.
For this I need to get a value from the remote tested machine, which I get by:
def test_ACD_GRAIN(host):
    grain = host.salt("grains.item", "client_NAME")
    assert grain['client_NAME'] == "test"
I'd need to use this grain['client_NAME'] value in different parts of the test file, so I'd like to store it in a variable.
Any way to do this?
There are a lot of ways to share state between tests. To name a few:
Using a session-scoped fixture
Define a fixture with session scope where the value is calculated. It will be executed before the first test that uses it runs, and the result will then be cached for the whole test run:
# conftest.py
import pytest

@pytest.fixture(scope='session')
def grain():
    host = ...
    return host.salt("grains.item", "client_NAME")
Just use the fixture as the input argument in tests to access the value:
def test_ACD_GRAIN(grain):
    assert grain['client_NAME'] == "test"
Using pytest namespace
Define an autouse fixture with session scope, so it is auto-applied once per session and stores the value in the pytest namespace (again, this only works on pytest versions before 4.0, where the pytest_namespace hook still exists).
# conftest.py
import pytest

def pytest_namespace():
    return {'grain': None}

@pytest.fixture(scope='session', autouse=True)
def grain():
    host = ...
    pytest.grain = host.salt("grains.item", "client_NAME")
It will be executed before the first test runs. In tests, just access pytest.grain to get the value:
import pytest

def test_ACD_GRAIN():
    grain = pytest.grain
    assert grain['client_NAME'] == "test"
pytest cache: reuse values between test runs
If the value does not change between test runs, you can even persist it on disk:
@pytest.fixture
def grain(request):
    grain = request.config.cache.get('grain', None)
    if not grain:
        host = ...
        grain = host.salt("grains.item", "client_NAME")
        request.config.cache.set('grain', grain)
    return grain
Now the tests won't need to recalculate the value on different test runs unless you clear the cache on disk:
$ pytest
...
$ pytest --cache-show
...
grain contains:
'spam'
Rerun the tests with the --cache-clear flag to delete the cache and force the value to be recalculated.
I am using py.test and wonder if/how it is possible to retrieve the name of the currently executed test within the setup method that is invoked before running each test. Consider this code:
class TestSomething(object):
    def setup(self):
        test_name = ...

    def teardown(self):
        pass

    def test_the_power(self):
        assert "foo" != "bar"

    def test_something_else(self):
        assert True
Right before TestSomething.test_the_power becomes executed, I would like to have access to this name in setup as outlined in the code via test_name = ... so that test_name == "TestSomething.test_the_power".
Actually, in setup, I allocate some resource for each test. In the end, looking at the resources that have been created by various unit tests, I would like to be able to see which one was created by which test. Best thing would be to just use the test name upon creation of the resource.
You can also do this using the request fixture, like this:
def test_name1(request):
    testname = request.node.name
    assert testname == 'test_name1'
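To get back to the original goal of tagging resources with the test name, this can be folded into the fixture that allocates the resource. A sketch, where allocate_resource and release are hypothetical helpers standing in for your own resource management:

import pytest

@pytest.fixture(autouse=True)
def resource(request):
    res = allocate_resource(name=request.node.name)  # hypothetical helper
    yield res
    res.release()  # hypothetical cleanup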
You can also use the PYTEST_CURRENT_TEST environment variable set by pytest for each test case.
PYTEST_CURRENT_TEST environment variable
To get just the test name:
os.environ.get('PYTEST_CURRENT_TEST').split(':')[-1].split(' ')[0]
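For example, inside a setup method; note that during setup the variable ends in "(setup)" rather than "(call)", which the second split strips off. A sketch:

import os

class TestSomething:
    def setup_method(self, method):
        # e.g. 'tests/test_spam.py::TestSomething::test_the_power (setup)'
        current = os.environ.get('PYTEST_CURRENT_TEST', '')
        test_name = current.split(':')[-1].split(' ')[0]
        print(test_name)  # -> test_the_power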
The setup and teardown methods are legacy methods for supporting tests written for other frameworks, e.g. nose. The native pytest methods are called setup_method and teardown_method, and they receive the currently executed test method as an argument. Hence, what I want to achieve can be written like so:
class TestSomething(object):
    def setup_method(self, method):
        print "\n%s:%s" % (type(self).__name__, method.__name__)

    def teardown_method(self, method):
        pass

    def test_the_power(self):
        assert "foo" != "bar"

    def test_something_else(self):
        assert True
The output of py.test -s then is:
============================= test session starts ==============================
platform linux2 -- Python 2.7.3 -- pytest-2.3.3
plugins: cov
collected 2 items
test_pytest.py
TestSomething:test_the_power
.
TestSomething:test_something_else
.
=========================== 2 passed in 0.03 seconds ===========================
Short answer:
Use the fixture called request.
This fixture has the following interesting attributes:
request.node.originalname = the name of the function/method
request.node.name = name of the function/method and ids of the parameters
request.node.nodeid = relative path to the test file, name of the test class (if in a class), name of the function/method and ids of the parameters
Long answer:
I inspected the content of request.node. Here are the most interesting attributes I found:
class TestClass:
    @pytest.mark.parametrize("arg", ["a"])
    def test_stuff(self, request, arg):
        print("originalname:", request.node.originalname)
        print("name:", request.node.name)
        print("nodeid:", request.node.nodeid)
Prints the following:
originalname: test_stuff
name: test_stuff[a]
nodeid: relative/path/to/test_things.py::TestClass::test_stuff[a]
The nodeid is the most promising if you want to completely identify the test (including the parameters). Note that if the test is a plain function (instead of a method in a class), the class part (::TestClass) is simply missing.
You can parse nodeid as you wish, for example:
components = request.node.nodeid.split("::")
filename = components[0]
test_class = components[1] if len(components) == 3 else None
test_func_with_params = components[-1]
test_func = test_func_with_params.split('[')[0]
test_params = test_func_with_params.split('[')[1][:-1].split('-')
In my example this results in:
filename = 'relative/path/to/test_things.py'
test_class = 'TestClass'
test_func = 'test_stuff'
test_params = ['a']
# content of conftest.py
import logging
import pytest

log = logging.getLogger(__name__)

@pytest.fixture(scope='function', autouse=True)
def test_log(request):
    # Here logging is used; you can use whatever you want for logs
    log.info("STARTED Test '{}'".format(request.node.name))

    def fin():
        log.info("COMPLETED Test '{}' \n".format(request.node.name))

    request.addfinalizer(fin)
Try my little wrapper function, which returns the full name of the test, the file, and the test name; you can use whichever you like later.
I used it within conftest.py, where fixtures do not work as far as I know.
import os

def get_current_test():
    full_name = os.environ.get('PYTEST_CURRENT_TEST').split(' ')[0]
    test_file = full_name.split("::")[0].split('/')[-1].split('.py')[0]
    test_name = full_name.split("::")[1]
    return full_name, test_file, test_name
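Usage then looks like this:

full_name, test_file, test_name = get_current_test()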
You might have multiple tests, in which case...
test_names = [n for n in dir(self) if n.startswith('test_')]
...will give you all the functions and instance variables that begin with "test_" in self. As long as you don't have any variables named "test_something" this will work.
You can also define a method setup_method(self, method) instead of setup(self) and that will be called before each test method invocation. Using this, you're simply given each method as a parameter. See: http://pytest.org/latest/xunit_setup.html
You could give the inspect module a try.
import inspect

def foo():
    print("My name is:", inspect.stack()[0][3])

foo()
Output: My name is: foo
Try type(self).__name__ perhaps?