I've created a pytest fixture which fetches a token. When a test that uses this fixture fails, the token is printed in the logs. On the one hand that is not helpful, on the other hand it is a security issue.
How can I prevent the fixture's content from being printed?
MVCE
import pytest

@pytest.fixture
def token():
    yield "secret"

def test_foo(token):
    assert False
Running the test shows the "secret" in the failure traceback.
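The failure output looks roughly like this (reconstructed from the example; details vary by pytest version):

===== FAILURES =====
_____ test_foo _____

token = 'secret'

    def test_foo(token):
>       assert False
E       assert False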
The easiest solution is to change the traceback format, e.g. pytest --tb=short will omit printing test function args. This can also be persisted in pytest.ini, effectively modifying the default pytest invocation:
[pytest]
addopts = --tb=short
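If short tracebacks are still too verbose, pytest --tb=line prints a single line per failure, and pytest --tb=no suppresses tracebacks entirely.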
However, you can also customize the output by extending pytest.
Technically, everything pytest prints to the terminal is contained in the TestReport objects, so you can modify the report object after the test finishes, but before the failure summary is printed. Example code, to be put in a conftest.py in the project or tests root dir:
def pytest_runtest_logreport(report):
    if report.longrepr is None:
        return
    for tb_repr, *_ in report.longrepr.chain:
        for entry in tb_repr.reprentries:
            if entry.reprfuncargs is not None:
                args = entry.reprfuncargs.args
                for idx, (name, value) in enumerate(args):
                    if name == "token":
                        args[idx] = (name, "********")
            if entry.reprlocals is not None:
                lines = entry.reprlocals.lines
                for idx, line in enumerate(lines):
                    if line.startswith("token"):
                        lines[idx] = "token = '*********'"
Although clumsy and untested, this demonstrates the approach: walk the traceback info stored in the report and, for any entry that has reprfuncargs available (it contains the values of all test function arguments, including fixtures), overwrite the token value if present. Do the same for reprlocals (those are the f_locals of the recorded frame, printed when you invoke e.g. pytest --showlocals).
When running the test now, you should get the modified error output like
===== FAILURES =====
_____ test_foo _____

token = ********

    def test_foo(token):
>       assert False
E       assert False
The pytest_runtest_logreport hook is used to postprocess the report object created in pytest_runtest_makereport, before the actual reporting starts.
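If several fixtures carry secrets, the same hook can be generalized to redact every name in a set instead of hardcoding "token". A rough, untested sketch along the same lines (the SECRET_FIXTURES set and its contents are made up for illustration; the extra guards account for reports whose longrepr is not a traceback chain, e.g. skips):

# conftest.py -- hypothetical generalization of the hook above
SECRET_FIXTURES = {"token", "password", "api_key"}  # fixture names to redact

def pytest_runtest_logreport(report):
    if report.longrepr is None or not hasattr(report.longrepr, "chain"):
        return
    for tb_repr, *_ in report.longrepr.chain:
        for entry in tb_repr.reprentries:
            reprfuncargs = getattr(entry, "reprfuncargs", None)
            if reprfuncargs is not None:
                # redact matching test function arguments (fixtures included)
                args = reprfuncargs.args
                for idx, (name, value) in enumerate(args):
                    if name in SECRET_FIXTURES:
                        args[idx] = (name, "********")
            reprlocals = getattr(entry, "reprlocals", None)
            if reprlocals is not None:
                # redact matching entries in the recorded locals
                lines = reprlocals.lines
                for idx, line in enumerate(lines):
                    name = line.split(" ")[0]
                    if name in SECRET_FIXTURES:
                        lines[idx] = name + " = '********'"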
An approach which works well enough, adapted from here:
import pytest

class Secret:
    def __init__(self, value):
        self.value = value

    def __repr__(self):
        return "Secret(********)"

    def __str__(self):
        return "********"

def get_from_vault(key):
    return "something looked up in vault"

@pytest.fixture(scope='session')
def password():
    return Secret(get_from_vault("key_in_value"))

def login(username, password):
    pass

def test_using_password(password):
    # reference the value directly in a function
    login("username", password.value)
    # If you use the value directly in an assert, it'll still log if it fails
    assert "something looked up in vault" == password.value
    # but won't be printed here
    assert False
This isn't perfect, but it is simpler: pytest prints the fixture's repr() in the traceback, so only the obfuscated form appears. This is the output:
==================================================================================== FAILURES ====================================================================================
______________________________________________________________________________ test_using_password _______________________________________________________________________________

password = Secret(********)

    def test_using_password(password):
        # reference the value directly in a function
        login("username", password.value)
        # If you use the value directly in an assert, it'll still log if it fails
        assert "something looked up in vault" == password.value
        # but won't be printed here
>       assert False
E       assert False

test_stuff.py:31: AssertionError
============================================================================ short test summary info =============================================================================
FAILED test_stuff.py::test_using_password - assert False
You can also follow an approach as suggested here: https://github.com/pytest-dev/pytest/issues/8613#issuecomment-830011874
- Wrap the value in an object that won't allow it to escape unintentionally (__str__ and __repr__ return obfuscated values).
- Use that wrapper in place of a raw string, unpacking it only where needed.
I'm using ExecutionResult and ResultVisitor from robot.api to fetch the test case and keyword which cause the failure. But I'm not able to:
Print the reason for the failure. E.g., suppose I have a failure; I should be able to print the reason, such as Call not Found on Agent Desktop.
Fetch the details from a keyword in the failed test case. In my case, I should fetch the arguments provided to a keyword and the output returned from that keyword, which is part of the teardown.
*** Keywords ***
Teardown Keyword
    [Arguments]    ${arg1}    ${arg2}
    ${output}=    <Some execution here>
    [Return]    ${output}
Here I should fetch the values of ${arg1}, ${arg2} and ${output}
This is my sample code for reference:
from robot.api import ExecutionResult, ResultVisitor

class Visitor(ResultVisitor):
    def __init__(self):
        self.failed = []

    def end_test(self, test):
        if test.status == "FAIL":
            self.failed.append(test)

result = ExecutionResult('output.xml')
visitor = Visitor()
result.visit(visitor)
Understand how the visitor pattern works and how to stop visiting unwanted branches. As the API documentation explains, you can explicitly return False at any node level if you do not want to visit further down the result tree.
Use the other visitor methods like start_keyword and end_keyword to visit each keyword and see which keyword failed.
Eg:
# methods to add to the ResultVisitor subclass above
def start_test(self, test):
    if test.passed:
        # Not interested in passed test cases,
        # so we can stop further visiting by returning False.
        return False
    # start_keyword for keywords in this test case
    # will be called only if this test case has failed.

def start_keyword(self, kw):
    # We are in a keyword of a failed test case.
    if kw.passed:
        # Even though the test case might have failed,
        # the keyword might have still passed.
        # Since we are not interested in passed keywords,
        # return False to stop visiting further.
        return False

# end_keyword for this keyword will only be called if this keyword has failed.
def end_keyword(self, kw):
    # We are in a failed keyword.
    # Add your required logic here.
    # (Keyword.TEARDOWN_TYPE requires: import robot.model)
    if kw.type == robot.model.keyword.Keyword.TEARDOWN_TYPE and kw.name == 'your keyword':
        print("keyword args:", kw.args)
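Putting the pieces together for the question above, a rough sketch along these lines would print the failure reason (test.message carries the status message, e.g. Call not Found on Agent Desktop) and the arguments of a failed teardown keyword. The comparison against 'TEARDOWN' is written loosely on purpose, because the keyword type constant differs between Robot Framework versions:

from robot.api import ExecutionResult, ResultVisitor

class FailureVisitor(ResultVisitor):
    def start_test(self, test):
        if test.passed:
            return False  # skip the keywords of passing tests
        # test.message holds the failure reason
        print("{} failed: {}".format(test.name, test.message))

    def start_keyword(self, kw):
        if kw.passed:
            return False  # only descend into failed keywords

    def end_keyword(self, kw):
        # called only for failed keywords; report teardown arguments
        if str(kw.type).upper() == 'TEARDOWN':
            print("teardown keyword:", kw.name)
            print("arguments:", list(kw.args))

result = ExecutionResult('output.xml')
result.visit(FailureVisitor())

Note that the value assigned to ${output} is not stored in output.xml as a return value; at best it shows up in the keyword's log messages (kw.messages), so fetching it reliably may not be possible.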
I have a BaseTest class which has tear_down, and I want a variable inside tear_down representing whether or not the test has failed.
I tried looking at A LOT of older posts, but I couldn't implement them; they were hooks or a mixture of hooks and fixtures, and something did not work on my end.
What is the best practice for doing that?
The last thing I tried was -
@pytest.hookimpl(tryfirst=True, hookwrapper=True)
def pytest_runtest_makereport(item):
    outcome = yield
    rep = outcome.get_result()
    # set a report attribute for each phase of a call, which can
    # be "setup", "call", "teardown"
    setattr(item, "rep_" + rep.when, rep)
Then pass the request fixture to the teardown and inside it use
has_failed = request.node.rep_call.failed
But request had no attributes at all; it was a method.
Also tried -
@pytest.fixture
def has_failed(request):
    yield
    return True if request.node.rep_call.failed else False

and pass it like that:
def teardown_method(self, has_failed):
And again, no attributes.
Isn't there a simple fixture to just do something like request.test_status? It's important that the teardown receives that bool parameter indicating whether or not the test failed, rather than doing stuff outside the teardown.
Thanks!
There doesn't appear to be any super simple fixture offering the test report as a fixture. And I see what you mean: most examples of recording the test report are geared toward non-unittest use cases (including the official docs). However, we can adjust these examples to work with unittest TestCases.
There appears to be a private _testcase attribute on the item arg passed to pytest_runtest_makereport, which contains the instance of the TestCase. We can set an attribute on it, which can then be accessed within teardown_method.
# conftest.py
import pytest

@pytest.hookimpl(tryfirst=True, hookwrapper=True)
def pytest_runtest_makereport(item, call):
    outcome = yield
    report = outcome.get_result()
    if report.when == 'call' and hasattr(item, '_testcase'):
        item._testcase.did_pass = report.passed
And here's a dinky little example TestCase:
import unittest

class DescribeIt(unittest.TestCase):
    def setup_method(self, method):
        self.did_pass = None

    def teardown_method(self, method):
        print('\nself.did_pass =', self.did_pass)

    def test_it_works(self):
        assert True

    def test_it_doesnt_work(self):
        assert False
When we run it, we find it prints the proper test failure/success bool
$ py.test --no-header --no-summary -qs
============================= test session starts =============================
collected 2 items
tests/tests.py::DescribeIt::test_it_doesnt_work FAILED
self.did_pass = False
tests/tests.py::DescribeIt::test_it_works PASSED
self.did_pass = True
========================= 1 failed, 1 passed in 0.02s =========================
I'm trying to pass the result of one test to another in pytest - or more specifically, reuse an object created by the first test in the second test.
This is how I currently do it.
@pytest.fixture(scope="module")
def result_holder():
    return []

def test_creation(result_holder):
    object = create_object()
    assert object.status == 'created'  # test that creation works as expected
    result_holder.append(object.id)  # I need this value for the next test

# ideally this test should only run if the previous test was successful
def test_deletion(result_holder):
    previous_id = result_holder.pop()
    object = get_object(previous_id)  # here I retrieve the object created in the first test
    object.delete()
    assert object.status == 'deleted'  # test for deletion
(before we go further, I'm aware of py.test passing results of one test to another - but the single answer on that question is off-topic, and the question itself is 2 years old)
Using fixtures like this doesn't feel super clean... and the behavior is not clear if the first test fails (although that can be remedied by testing the content of the fixture, or by using something like the incremental fixture in the pytest docs and the comments below). Is there a better/more canonical way to do this?
For sharing data between tests, you could use the pytest namespace or cache.
Namespace
Example of sharing data via the namespace. (Note: the pytest_namespace hook was removed in pytest 4.0, so this only works on old pytest versions.) Declare the shared variable via a hook in conftest.py:
# conftest.py
import pytest

def pytest_namespace():
    return {'shared': None}
Now access and redefine it in tests:
import pytest

def test_creation():
    pytest.shared = 'spam'
    assert True

def test_deletion():
    assert pytest.shared == 'spam'
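On modern pytest versions, where that hook is gone, a module- or session-scoped fixture holding a mutable object gives the same effect. A minimal sketch:

import pytest

@pytest.fixture(scope='module')
def shared():
    # one dict shared by all tests in the module
    return {}

def test_creation(shared):
    shared['value'] = 'spam'
    assert True

def test_deletion(shared):
    assert shared.get('value') == 'spam'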
Cache
The cache is a neat feature because it is persisted on disk between test runs, so it usually comes in handy for reusing results of long-running tasks to save time on repeated runs, but you can also use it for sharing data between tests. The cache object is available via config. You can access it e.g. via the request fixture:
def test_creation(request):
    request.config.cache.set('shared', 'spam')
    assert True

def test_deletion(request):
    assert request.config.cache.get('shared', None) == 'spam'
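By default the cache is stored in the .pytest_cache directory; pytest --cache-show lists its contents, and pytest --cache-clear clears it before the run.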
ideally this test should only run if the previous test was successful
There is a plugin for that: pytest-dependency. Example:
import pytest

@pytest.mark.dependency()
def test_creation():
    assert False

@pytest.mark.dependency(depends=['test_creation'])
def test_deletion():
    assert True
will yield:
$ pytest -v
============================= test session starts =============================
...
collected 2 items
test_spam.py::test_creation FAILED [ 50%]
test_spam.py::test_deletion SKIPPED [100%]
================================== FAILURES ===================================
________________________________ test_creation ________________________________
def test_creation():
> assert False
E assert False
test_spam.py:5: AssertionError
===================== 1 failed, 1 skipped in 0.09 seconds =====================
# Use return and then call it later, so it'll look like:
def test_creation():
    object = create_object()
    assert object.status == 'created'
    return object.id  # this doesn't show on stdout, but it hands the id to whatever calls it

def test_update():
    id = test_creation()  # call the first test to reuse its result
    object = get_object(id)
    object.update()
    assert object.status == 'updated'  # some more tests
# If this is what you're thinking of, there ya go
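Bear in mind that pytest itself ignores what a test function returns (recent versions even warn when a test returns something other than None), so this pattern only works by calling the first test function directly, which also re-runs its assertions.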
I have following code in my tests module
def teardown_module():
    clean_database()

def test1(): pass

def test2(): assert 0
and I want teardown_module() (some cleanup code) to be called only if some test failed. If all tests passed, this code shouldn't be called.
Can I do such a trick with pytest?
You can. But it is a little bit of a hack.
As written here: http://pytest.org/latest/example/simple.html#making-test-result-information-available-in-fixtures
you do the following to set up an attribute saving the status of each phase of the test call:
# content of conftest.py
import pytest

@pytest.mark.tryfirst
def pytest_runtest_makereport(item, call, __multicall__):
    rep = __multicall__.execute()
    setattr(item, "rep_" + rep.when, rep)
    return rep
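Note that __multicall__ has long been removed from pytest. On current versions, the same attribute-setting hook is written as a hookwrapper, as in the current pytest docs:

# content of conftest.py, for current pytest versions
import pytest

@pytest.hookimpl(tryfirst=True, hookwrapper=True)
def pytest_runtest_makereport(item, call):
    outcome = yield
    rep = outcome.get_result()
    setattr(item, "rep_" + rep.when, rep)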
and in the fixture you just examine the condition on those attributes like this:
import pytest

@pytest.yield_fixture(scope="module", autouse=True)
def myfixture(request):
    print("SETUP")
    yield
    # probably should not use "_collected" to iterate over test functions
    if any(call.rep_call.outcome != "passed" for call in request.node._collected):
        print("TEARDOWN")
This way, if any of the tests associated with that module fixture is not "passed" (so "failed" or "skipped"), the condition holds.
The answer posted here and the linked documentation were helpful but not sufficient for my needs. I needed a module teardown function that executes for each module (.py file) independently if any test in that module failed.
A complete sample project is available on GitHub
To start with, we need a hook to attach the test function result to
the test node. This is taken directly from the pytest docs:
# in conftest.py
import pytest

@pytest.hookimpl(tryfirst=True, hookwrapper=True)
def pytest_runtest_makereport(item, call):
    # execute all other hooks to obtain the report object
    outcome = yield
    rep = outcome.get_result()
    # set a report attribute for each phase of a call, which can
    # be "setup", "call", "teardown"
    var_name = "rep_" + rep.when
    setattr(item, var_name, rep)
After that, we need another hook for the test case to find the module and
store itself there, so the module can easily find its test cases. Perhaps
there's a better way, but I was unable to find one.
# also in conftest.py
@pytest.fixture(scope="function", autouse=True)
def _testcase_exit(request):
    yield
    parent = request.node.parent
    while not isinstance(parent, pytest.Module):
        parent = parent.parent
    try:
        parent.test_nodes.append(request.node)
    except AttributeError:
        parent.test_nodes = [request.node]
Once we do that, it's nice to have a decorator function so that, on completion, the module looks through its test nodes, finds whether there were any failures, and if so calls the function associated with the decorator:
# also also in conftest.py
from functools import wraps

def module_error_teardown(f):
    @wraps(f)
    @pytest.fixture(scope="module", autouse=True)
    def wrapped(request, *args, **kwargs):
        yield
        try:
            test_nodes = request.node.test_nodes
        except AttributeError:
            test_nodes = []
        something_failed = False
        for x in test_nodes:
            try:
                something_failed |= x.rep_setup.failed
                something_failed |= x.rep_call.failed
                something_failed |= x.rep_teardown.failed
            except AttributeError:
                pass
        if something_failed:
            f(*args, **kwargs)
    return wrapped
Now we have all the necessary framework in place, and a test file with a failing test case is easy to write:
from conftest import module_error_teardown

def test_something_that_fails():
    assert False, "Yes, it failed."

def test_something_else_that_fails():
    assert False, "It failed again."

@module_error_teardown
def _this_gets_called_at_the_end_if_any_test_in_this_file_fails():
    print('')
    print("Here's where we would do module-level cleanup!")
I am using py.test and wonder if/how it is possible to retrieve the name of the currently executed test within the setup method that is invoked before running each test. Consider this code:
class TestSomething(object):
    def setup(self):
        test_name = ...

    def teardown(self):
        pass

    def test_the_power(self):
        assert "foo" != "bar"

    def test_something_else(self):
        assert True
Right before TestSomething.test_the_power becomes executed, I would like to have access to this name in setup as outlined in the code via test_name = ... so that test_name == "TestSomething.test_the_power".
Actually, in setup, I allocate some resource for each test. In the end, looking at the resources that have been created by various unit tests, I would like to be able to see which one was created by which test. Best thing would be to just use the test name upon creation of the resource.
You can also do this using the request fixture, like this:
def test_name1(request):
    testname = request.node.name
    assert testname == 'test_name1'
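If you want the name available in a setup-like place instead of in the test body, an autouse fixture can capture it before each test method runs. A small sketch (the attribute name test_name is arbitrary):

import pytest

class TestSomething:
    @pytest.fixture(autouse=True)
    def _capture_test_name(self, request):
        # runs before every test method in this class
        self.test_name = request.node.name
        yield

    def test_the_power(self):
        assert self.test_name == "test_the_power"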
You can also use the PYTEST_CURRENT_TEST environment variable, which pytest sets for each test case.
To get just the test name:
import os

test_name = os.environ.get('PYTEST_CURRENT_TEST').split(':')[-1].split(' ')[0]
The setup and teardown methods seem to be legacy methods for supporting tests written for other frameworks, e.g. nose. The native pytest methods are called setup_method and teardown_method, and they receive the currently executed test method as an argument. Hence, what I want to achieve can be written like so:
class TestSomething(object):
    def setup_method(self, method):
        print "\n%s:%s" % (type(self).__name__, method.__name__)

    def teardown_method(self, method):
        pass

    def test_the_power(self):
        assert "foo" != "bar"

    def test_something_else(self):
        assert True
The output of py.test -s then is:
============================= test session starts ==============================
platform linux2 -- Python 2.7.3 -- pytest-2.3.3
plugins: cov
collected 2 items
test_pytest.py
TestSomething:test_the_power
.
TestSomething:test_something_else
.
=========================== 2 passed in 0.03 seconds ===========================
Short answer: use the fixture called request.
This fixture has the following interesting attributes:
request.node.originalname = the name of the function/method
request.node.name = the name of the function/method plus the ids of the parameters
request.node.nodeid = the relative path to the test file, the name of the test class (if in a class), the name of the function/method, and the ids of the parameters
Long answer:
I inspected the content of request.node. Here are the most interesting attributes I found:
import pytest

class TestClass:
    @pytest.mark.parametrize("arg", ["a"])
    def test_stuff(self, request, arg):
        print("originalname:", request.node.originalname)
        print("name:", request.node.name)
        print("nodeid:", request.node.nodeid)
Prints the following:
originalname: test_stuff
name: test_stuff[a]
nodeid: relative/path/to/test_things.py::TestClass::test_stuff[a]
nodeid is the most promising if you want to completely identify the test (including the parameters). Note that if the test is a plain function (instead of a method in a class), the class part (::TestClass) is simply missing.
You can parse nodeid as you wish, for example:
components = request.node.nodeid.split("::")
filename = components[0]
test_class = components[1] if len(components) == 3 else None
test_func_with_params = components[-1]
test_func = test_func_with_params.split('[')[0]
# note: this assumes a parametrized test whose parameter ids contain no '-'
test_params = test_func_with_params.split('[')[1][:-1].split('-')
In my example this results in:
filename = 'relative/path/to/test_things.py'
test_class = 'TestClass'
test_func = 'test_stuff'
test_params = ['a']
# content of conftest.py
import logging
import pytest

log = logging.getLogger(__name__)  # here logging is used; use whatever you prefer for logs

@pytest.fixture(scope='function', autouse=True)
def test_log(request):
    log.info("STARTED Test '{}'".format(request.node.name))

    def fin():
        log.info("COMPLETED Test '{}' \n".format(request.node.name))

    request.addfinalizer(fin)
Try my little wrapper function, which returns the full name of the test, the file, and the test name; use whichever you need. I used it within conftest.py, where fixtures do not work as far as I know.
import os

def get_current_test():
    full_name = os.environ.get('PYTEST_CURRENT_TEST').split(' ')[0]
    test_file = full_name.split("::")[0].split('/')[-1].split('.py')[0]
    test_name = full_name.split("::")[1]
    return full_name, test_file, test_name
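For example, it could be called from an autouse fixture; PYTEST_CURRENT_TEST is set during the setup, call, and teardown phases, so it is available there (hypothetical usage):

import pytest

@pytest.fixture(autouse=True)
def report_test_name():
    full_name, test_file, test_name = get_current_test()
    print("running {} from {}".format(test_name, test_file))
    yield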
You might have multiple tests, in which case...
test_names = [n for n in dir(self) if n.startswith('test_')]
...will give you all the functions and instance variables that begin with "test_" in self. As long as you don't have any variables named "test_something" this will work.
You can also define a method setup_method(self, method) instead of setup(self) and that will be called before each test method invocation. Using this, you're simply given each method as a parameter. See: http://pytest.org/latest/xunit_setup.html
You could give the inspect module a try.
import inspect

def foo():
    print("My name is:", inspect.stack()[0][3])

foo()
Output: My name is: foo
Try type(self).__name__ perhaps? Note that this yields the class name (e.g. TestSomething), not the name of the current test method.