I want to share fixtures between different instantiations of the same parametrized tests, where the fixtures themselves are also parametrized:
#!/usr/bin/py.test -sv
import pytest
numbers_for_fixture = [0]
def pytest_generate_tests(metafunc):
    if "config_field" in metafunc.fixturenames:
        metafunc.parametrize("config_field", [1], scope='session')

@pytest.fixture(scope='session')
def fixture_1(config_field):
    numbers_for_fixture[0] += 1
    return '\tfixture_1(%s)' % numbers_for_fixture[0]

@pytest.fixture(scope='session')
def fixture_2():
    numbers_for_fixture[0] += 1
    return '\tfixture_2(%s)' % numbers_for_fixture[0]

def test_a(fixture_1):
    print('\ttest_a:', fixture_1)

def test_b(fixture_1):
    print('\ttest_b:', fixture_1)

@pytest.mark.parametrize('i', range(3))
def test_c(fixture_1, i):
    print('\ttest_c[%s]:' % i, fixture_1)

@pytest.mark.parametrize('i', range(3))
def test_d(fixture_2, i):
    print('\ttest_d[%s]:' % i, fixture_2)
I get this output:
platform linux -- Python 3.4.1 -- py-1.4.26 -- pytest-2.6.4 -- /usr/bin/python
collecting ... collected 8 items
test.py::test_a[1] test_a: fixture_1(1)
PASSED
test.py::test_b[1] test_b: fixture_1(1)
PASSED
test.py::test_c[1-0] test_c[0]: fixture_1(1)
PASSED
test.py::test_c[1-1] test_c[1]: fixture_1(2)
PASSED
test.py::test_c[1-2] test_c[2]: fixture_1(3)
PASSED
test.py::test_d[0] test_d[0]: fixture_2(4)
PASSED
test.py::test_d[1] test_d[1]: fixture_2(4)
PASSED
test.py::test_d[2] test_d[2]: fixture_2(4)
PASSED
test_a, test_b and test_c[0] all share fixture_1(1). All the test_ds share fixture_2(4). The problem is that the test_cs use different versions of fixture_1.
This also happens when scopes are set to "module" and "class", and it only happens when both the test and the fixture are parametrized.
From the way pytest prints the test parameters, it seems it doesn't distinguish between the different types of parameters used for each item, so it creates a fixture for each set of parameters rather than for each unique subset of the parameters list that the fixture uses.
Is this a bug in pytest, or did I neglect to set some configuration or something? Is there a workaround?
In a nutshell, there's a bug where the scoping is effectively ignored in this case. Run pytest with the --setup-show switch to get a better view of when the fixtures are set up and torn down.
Workaround
Comment out the pytest_generate_tests function and add the following:
@pytest.fixture(scope='session', params=[1])
def config_field(request):
    return request.param
Now you should see the correct set up and tear down nesting.
Workaround for dynamic parameters
I needed to set my parameters from a CLI argument and had to use pytest_generate_tests.
Set indirect to True and the parameters to the CLI argument:
def pytest_generate_tests(metafunc):
    if "config_field" in metafunc.fixturenames:
        metafunc.parametrize(
            "config_field",
            metafunc.config.getoption('some_option'),
            indirect=True,
            scope='session'
        )
Add a placeholder fixture:
@pytest.fixture(scope='session')
def config_field(request):
    return request.param
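For this to work the CLI option itself must be registered somewhere; a minimal sketch, assuming the option is called --some_option (the name passed to getoption above):
# conftest.py (sketch)
def pytest_addoption(parser):
    parser.addoption(
        "--some_option", action="append", default=[],
        help="values used to parametrize the config_field fixture")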
You can also yield from your fixture; this can return the same value:
@pytest.fixture(scope='session')
def fixture_1(config_field):
    ...  # build the fixture value
    yield fixture_value  # placeholder for whatever the fixture provides
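As a slightly fuller sketch (the resource dictionary and its clean-up are purely illustrative, not part of the original answer):
import pytest

@pytest.fixture(scope='session')
def fixture_1(config_field):
    # hypothetical one-time set-up per session/parameter combination
    resource = {"config": config_field}
    yield resource    # every test requesting fixture_1 receives this same object
    resource.clear()  # tear-down runs once, after the last test that used it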
Related
I have a BaseTest class which has tear_down, and I want to have inside tear_down a variable representing whether or not the test has failed.
I looked at a lot of older posts but couldn't implement them, as they were hooks or a mixture of hooks and fixtures and something did not work on my end.
What is the best practice for doing that?
The last thing I tried was:
@pytest.hookimpl(tryfirst=True, hookwrapper=True)
def pytest_runtest_makereport(item):
    outcome = yield
    rep = outcome.get_result()
    # set a report attribute for each phase of a call, which can
    # be "setup", "call", "teardown"
    setattr(item, "rep_" + rep.when, rep)
Then I pass the request fixture to the teardown and inside it use:
has_failed = request.node.rep_call.failed
But request had no attributes at all; it was a method.
I also tried:
@pytest.fixture
def has_failed(request):
    yield
    return True if request.node.rep_call.failed else False
and passed it like this:
def teardown_method(self, has_failed):
And again, no attributes.
Isn't there a simple fixture to just do like request.test_status or something like that?
It's important that the teardown has that bool parameter indicating whether or not the test failed, and that I don't have to do stuff outside the teardown.
Thanks!
There doesn't appear to be any super simple fixture offering the test report as a fixture. And I see what you mean: most examples of recording the test report are geared toward non-unittest use cases (including the official docs). However, we can adjust these examples to work with unittest TestCases.
There appears to be a private _testcase attribute on the item arg passed to pytest_runtest_makereport, which contains the instance of the TestCase. We can set an attribute on it, which can then be accessed within teardown_method.
# conftest.py
import pytest
@pytest.hookimpl(tryfirst=True, hookwrapper=True)
def pytest_runtest_makereport(item, call):
    outcome = yield
    report = outcome.get_result()
    if report.when == 'call' and hasattr(item, '_testcase'):
        item._testcase.did_pass = report.passed
And here's a dinky little example TestCase
import unittest
class DescribeIt(unittest.TestCase):
    def setup_method(self, method):
        self.did_pass = None

    def teardown_method(self, method):
        print('\nself.did_pass =', self.did_pass)

    def test_it_works(self):
        assert True

    def test_it_doesnt_work(self):
        assert False
When we run it, we find it prints the proper test failure/success bool
$ py.test --no-header --no-summary -qs
============================= test session starts =============================
collected 2 items
tests/tests.py::DescribeIt::test_it_doesnt_work FAILED
self.did_pass = False
tests/tests.py::DescribeIt::test_it_works PASSED
self.did_pass = True
========================= 1 failed, 1 passed in 0.02s =========================
What I would like to achieve is to mock functions in various modules automatically with pytest, so I defined this in my conftest.py:
import sys
import __builtin__
from itertools import chain

import pytest
from mock import Mock

# Fixture factory magic START
NORMAL_MOCKS = [
    "logger", "error", "logging", "base_error", "partial"]
BUILTIN_MOCKS = ["exit"]

def _mock_factory(name, builtin):
    def _mock(monkeypatch, request):
        module = __builtin__ if builtin else request.node.module.MODULE
        ret = Mock()
        monkeypatch.setattr(module, name, ret)
        return ret
    return _mock

iterable = chain(
    ((el, False) for el in NORMAL_MOCKS),
    ((el, True) for el in BUILTIN_MOCKS))

for name, builtin in iterable:
    fname = "mock_{name}".format(name=name)
    _tmp_fn = pytest.fixture(name=fname)(_mock_factory(name, builtin))
    _tmp_fn.__name__ = fname
    setattr(
        sys.modules[__name__],
        "mock_{name}".format(name=name), _tmp_fn)
# Fixture normal factory magic END
This works and all, but I would like to omit the use of the NORMAL_MOCKS and BUILTIN_MOCKS lists. Basically, in a pytest hook I should be able to see that there is, say, a mock_foo fixture that is not registered yet, create a mock for it with the factory, and register it. I just couldn't figure out how to do this. I was looking into the pytest_runtest_setup function, but could not figure out how to do the actual fixture registration. So basically I would like to know with which hook/call I can register new fixture functions programmatically from such a hook.
One of the ways is to parameterize the tests at the collection/generation stage, i.e. before the test execution begins: https://docs.pytest.org/en/latest/example/parametrize.html
# conftest.py
import pytest
def mock_factory(name):
    return name

def pytest_generate_tests(metafunc):
    for name in metafunc.fixturenames:
        if name.startswith('mock_'):
            metafunc.parametrize(name, [mock_factory(name[5:])])
# test_me.py
def test_me(request, mock_it):
    print(mock_it)
A very simple solution. But the downside is that the test is reported as parametrized when it actually is not:
$ pytest -s -v -ra
====== test session starts ======
test_me.py::test_me[it] PASSED
====== 1 passed in 0.01 seconds ======
To fully simulate the function args without the parametrization, you can use a less obvious trick:
# conftest.py
import pytest
def mock_factory(name):
    return name

@pytest.hookimpl(hookwrapper=True)
def pytest_runtest_protocol(item, nextitem):
    for name in item.fixturenames:
        if name.startswith('mock_') and name not in item.funcargs:
            item.funcargs[name] = mock_factory(name[5:])
    yield
The pytest_runtest_setup hook is also a good place for this, as far as I've just tried.
Note that you do not register the fixture in that case. It is too late for the fixture registration, as all the fixtures are gathered and prepared much earlier at the collection/parametrization stages. In this stage, you can only execute the tests and provide the values. It is your responsibility to calculate the fixture values and to destroy them afterward.
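As a hedged sketch of that responsibility, extending the pytest_runtest_protocol wrapper above (the use of unittest.mock.Mock and reset_mock() for clean-up is an assumption for illustration, not part of the original answer):
# conftest.py (sketch)
from unittest import mock

import pytest

def mock_factory(name):
    return mock.Mock(name=name)

@pytest.hookimpl(hookwrapper=True)
def pytest_runtest_protocol(item, nextitem):
    injected = []
    for name in item.fixturenames:
        if name.startswith('mock_') and name not in item.funcargs:
            value = mock_factory(name[5:])
            item.funcargs[name] = value
            injected.append(value)
    yield  # the test (and its real fixtures) run here
    # we provided these values ourselves, so we also clean them up ourselves
    for value in injected:
        value.reset_mock()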
The snippet below is a pragmatic solution to "how to dynamically add fixtures".
Disclaimer: I don't have expertise on pytest. I'm not saying this is what pytest was designed for, I just looked at the source code and came up with this and it seems to work. The fact that I use "private" attributes means it might not work with all versions (currently I'm on pytest 7.1.3)
from _pytest.fixtures import FixtureDef
from _pytest.fixtures import SubRequest
import pytest
# autouse is relevant, as then the fixture registration happens in time. It's too late if the
# fixture is required without autouse, e.g. via `@pytest.mark.usefixtures("add_fixture_dynamically")`
@pytest.fixture(autouse=True)
def add_fixture_dynamically(request: SubRequest):
    """
    Conditionally and dynamically adds another fixture. It's conditional on the presence of:
    @pytest.mark.my_mark()
    """
    marker = request.node.get_closest_marker("my_mark")

    # don't register the fixture if the marker is not present:
    if marker is None:
        return

    def your_fixture():  # the name of the fixture must match the parameter name, like other fixtures
        return "hello"

    # register the fixture just-in-time
    request._fixturemanager._arg2fixturedefs[your_fixture.__name__] = [
        FixtureDef(
            argname=your_fixture.__name__,
            func=your_fixture,
            scope="function",
            fixturemanager=request._fixturemanager,
            baseid=None,
            params=None,
        ),
    ]

    yield  # runs the test. Could be wrapped in try/except/finally


# suppress warning (works if this and `add_fixture_dynamically` are in `conftest.py`)
def pytest_configure(config):
    """Prevents printing of the warning 'PytestUnknownMarkWarning: Unknown pytest.mark.<fixture_name>'"""
    config.addinivalue_line("markers", "my_mark")


@pytest.mark.my_mark()
def test_adding_fixture_dynamically(your_fixture):
    assert your_fixture == "hello"
Is there a way to save the value of a parameter provided by a pytest fixture?
Here is an example of conftest.py
# content of conftest.py
import pytest
def pytest_addoption(parser):
    parser.addoption("--parameter", action="store", default="default",
                     help="configuration file path")

@pytest.fixture
def param(request):
    parameter = request.config.getoption("--parameter")
    return parameter
Here is an example of pytest module:
# content of my_test.py
def test_parameters(param):
    assert param == "yes"
OK - everything works fine, but is there a way to get the value of param outside the test - for example with some built-in pytest function like pytest.get_fixture_value["parameter"]?
EDITED - DETAILED EXPLANATION OF WHAT I WANT TO ACHIEVE
I am writing a module that deploys and after that provides parameters to tests written in pytest. My idea is: if someone's test looks like this:
class TestApproachI:
    @load_params_as_kwargs(parameters_A)
    def setup_class(cls, param_1, param_2, ... , param_n):
        # code of setup_class
    def teardown_class(cls):
        # some code
    def test_01(self):
        # test code
This someone gives me a configuration file that explains with what parameters to run his code. I will analyze those parameters (in some other script) and run his tests with the command pytest --parameters=path_to_serialized_python_tuple test_to_run, where this tuple contains the provided values for his parameters in the right order. I will tell that guy (with the tests) to add this decorator to all the tests he wants me to provide parameters for. This decorator would look like this:
class TestApproachI:
    # this path_to_serialized_tuple should be provided by 'pytest --parameters=path_to_serialized_python_tuple test_to_run'
    @load_params(path_to_serialized_tuple)
    def setup_class(cls, param_1, param_2, ... , param_n):
        # code of setup_class
    def teardown_class(cls):
        # some code
    def test_01(self):
        # test code
The decorator function should look like that:
from functools import wraps

def load_params(parameters):
    def decorator(func_to_decorate):
        @wraps(func_to_decorate)
        def wrapper(self):
            # deserialize the tuple and replace the values of the test parameters
            return func_to_decorate(self, *parameters)
        return wrapper
    return decorator
Set that parameter as an OS environment variable, and then use it anywhere in your test through os.getenv('parameter').
So, you can use it like this:
import os
import pytest

@pytest.fixture
def param(request):
    parameter = request.config.getoption("--parameter")
    os.environ["parameter"] = parameter
    return parameter

@pytest.mark.usefixtures('param')
def test_parameters(param):
    assert os.getenv('parameter') == "yes"
I am using pytest-lazy-fixture to get the value of any fixture:
first install it using pip install pytest-lazy-fixture or pipenv install pytest-lazy-fixture
then, simply assign the fixture to a variable like this if you want:
fixture_value = pytest.lazy_fixture('fixture')
the fixture name has to be wrapped in quotation marks
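For reference, the plugin is most often used inside parametrize; a minimal sketch, assuming pytest-lazy-fixture is installed and the fixture is simply named fixture:
import pytest

@pytest.fixture
def fixture():
    return 42

# pytest.lazy_fixture defers resolution, so the fixture's value becomes the parametrized argument
@pytest.mark.parametrize('value', [pytest.lazy_fixture('fixture')])
def test_lazy(value):
    assert value == 42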
You can use the pytest function config.cache, like this
def function_1(request):
    request.config.cache.set("user_data", "name")
    ...

def function_2(request):
    request.config.cache.get("user_data", None)
    ...
Here is more info about it
https://docs.pytest.org/en/latest/reference/reference.html#std-fixture-cache
https://docs.pytest.org/en/6.2.x/cache.html
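For instance, a minimal sketch of sharing a value between a fixture and a test through the cache (the key user_data is just an example; the cache is persisted on disk, so it also survives between runs):
import pytest

@pytest.fixture
def remember_user(request):
    request.config.cache.set("user_data", "name")

def test_read_back(request, remember_user):
    assert request.config.cache.get("user_data", None) == "name"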
Admittedly it is not the best way to do it to start with, and more importantly the fixture parameters are resolved, i.e. Options.get_option() is called, before everything else.
Recommendations and suggestions would be appreciated.
From config.py
class Options(object):
    option = None

    @classmethod
    def get_option(cls):
        return cls.option
From conftest.py
@pytest.yield_fixture(scope='session', autouse=True)
def session_setup():
    Options.option = pytest.config.getoption('--remote')

def pytest_addoption(parser):
    parser.addoption("--remote", action="store_true", default=False,
                     help="Runs tests on a remote service.")

@pytest.yield_fixture(scope='function', params=Options.get_option())
def setup(request):
    if request.param is None:
        raise Exception("option is none")
Don't use a custom Options class; ask for the option directly from the config instead.
pytest_generate_tests may be used to parametrize a fixture-like argument for tests.
conftest.py
import pytest

def pytest_addoption(parser):
    parser.addoption("--pg_tag", action="append", default=[],
                     help=("Postgres server versions. "
                           "May be used several times. "
                           "Available values: 9.3, 9.4, 9.5, all"))

def pytest_generate_tests(metafunc):
    if 'pg_tag' in metafunc.fixturenames:
        tags = set(metafunc.config.option.pg_tag)
        if not tags:
            tags = ['9.5']
        elif 'all' in tags:
            tags = ['9.3', '9.4', '9.5']
        else:
            tags = list(tags)
        metafunc.parametrize("pg_tag", tags, scope='session')

@pytest.yield_fixture(scope='session')
def pg_server(pg_tag):
    # pg_tag is the parametrized parameter;
    # the fixture is called 1-3 times depending on the --pg_tag cmdline
    ...
Edit: Replaced old example with metafunc.parametrize usage.
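A hedged sketch of how the session-scoped fixture and a test could consume the parameter (FakePgServer is a purely illustrative stand-in for whatever actually boots the Postgres server):
import pytest

class FakePgServer:
    """Illustrative stand-in for the code that actually starts a Postgres server."""
    def __init__(self, tag):
        self.tag = tag
    def stop(self):
        pass

@pytest.fixture(scope='session')
def pg_server(pg_tag):
    server = FakePgServer(pg_tag)  # in reality: start a Postgres of version pg_tag
    yield server                   # shared by every test collected for this pg_tag
    server.stop()                  # clean-up runs once, after the session ends

def test_server_has_tag(pg_server):
    assert pg_server.tag in ('9.3', '9.4', '9.5')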
There is an example in the latest docs on how to do this. It's a little buried and honestly I glazed over it the first time reading through the documentation: https://docs.pytest.org/en/latest/parametrize.html#basic-pytest-generate-tests-example.
Basic pytest_generate_tests example
Sometimes you may want to implement your own parametrization scheme or
implement some dynamism for determining the parameters or scope of a
fixture. For this, you can use the pytest_generate_tests hook which is
called when collecting a test function. Through the passed in metafunc
object you can inspect the requesting test context and, most
importantly, you can call metafunc.parametrize() to cause
parametrization.
For example, let’s say we want to run a test taking string inputs
which we want to set via a new pytest command line option. Let’s first
write a simple test accepting a stringinput fixture function argument:
# content of test_strings.py
def test_valid_string(stringinput):
    assert stringinput.isalpha()
Now we add a conftest.py file containing the addition of a command
line option and the parametrization of our test function:
# content of conftest.py
def pytest_addoption(parser):
    parser.addoption("--stringinput", action="append", default=[],
                     help="list of stringinputs to pass to test functions")

def pytest_generate_tests(metafunc):
    if 'stringinput' in metafunc.fixturenames:
        metafunc.parametrize("stringinput",
                             metafunc.config.getoption('stringinput'))
If we now pass two stringinput values, our test will run twice:
$ pytest -q --stringinput="hello" --stringinput="world" test_strings.py
..
2 passed in 0.12 seconds
I am using py.test and wonder if/how it is possible to retrieve the name of the currently executed test within the setup method that is invoked before running each test. Consider this code:
class TestSomething(object):
    def setup(self):
        test_name = ...

    def teardown(self):
        pass

    def test_the_power(self):
        assert "foo" != "bar"

    def test_something_else(self):
        assert True
Right before TestSomething.test_the_power becomes executed, I would like to have access to this name in setup as outlined in the code via test_name = ... so that test_name == "TestSomething.test_the_power".
Actually, in setup, I allocate some resource for each test. In the end, looking at the resources that have been created by various unit tests, I would like to be able to see which one was created by which test. Best thing would be to just use the test name upon creation of the resource.
You can also do this using the Request Fixture like this:
def test_name1(request):
    testname = request.node.name
    assert testname == 'test_name1'
You can also use the PYTEST_CURRENT_TEST environment variable set by pytest for each test case.
PYTEST_CURRENT_TEST environment variable
To get just the test name:
os.environ.get('PYTEST_CURRENT_TEST').split(':')[-1].split(' ')[0]
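A minimal sketch of using this inside a setup method (the variable also carries the phase, e.g. "(setup)" or "(call)", which is why the trailing split is needed):
import os

class TestSomething:
    def setup_method(self, method):
        # e.g. "tests/test_pytest.py::TestSomething::test_the_power (setup)"
        current = os.environ.get('PYTEST_CURRENT_TEST', '')
        test_name = current.split(':')[-1].split(' ')[0]
        print(test_name)  # -> "test_the_power"

    def test_the_power(self):
        assert "foo" != "bar"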
The setup and teardown methods seem to be legacy methods for supporting tests written for other frameworks, e.g. nose. The native pytest methods are called setup_method and teardown_method, and they receive the currently executed test method as an argument. Hence, what I want to achieve can be written like so:
class TestSomething(object):
    def setup_method(self, method):
        print "\n%s:%s" % (type(self).__name__, method.__name__)

    def teardown_method(self, method):
        pass

    def test_the_power(self):
        assert "foo" != "bar"

    def test_something_else(self):
        assert True
The output of py.test -s then is:
============================= test session starts ==============================
platform linux2 -- Python 2.7.3 -- pytest-2.3.3
plugins: cov
collected 2 items
test_pytest.py
TestSomething:test_the_power
.
TestSomething:test_something_else
.
=========================== 2 passed in 0.03 seconds ===========================
Short answer:
Use the fixture called request
This fixture has the following interesting attributes:
request.node.originalname = the name of the function/method
request.node.name = name of the function/method and ids of the parameters
request.node.nodeid = relative path to the test file, name of the test class (if in a class), name of the function/method and ids of the parameters
Long answer:
I inspected the content of request.node. Here are the most interesting attributes I found:
import pytest

class TestClass:
    @pytest.mark.parametrize("arg", ["a"])
    def test_stuff(self, request, arg):
        print("originalname:", request.node.originalname)
        print("name:", request.node.name)
        print("nodeid:", request.node.nodeid)
Prints the following:
originalname: test_stuff
name: test_stuff[a]
nodeid: relative/path/to/test_things.py::TestClass::test_stuff[a]
The node ID is the most promising if you want to completely identify the test (including the parameters). Note that if the test is a function (instead of in a class), the class name (::TestClass) is simply missing.
You can parse nodeid as you wish, for example:
components = request.node.nodeid.split("::")
filename = components[0]
test_class = components[1] if len(components) == 3 else None
test_func_with_params = components[-1]
test_func = test_func_with_params.split('[')[0]
test_params = test_func_with_params.split('[')[1][:-1].split('-')
In my example this results in:
filename = 'relative/path/to/test_things.py'
test_class = 'TestClass'
test_func = 'test_stuff'
test_params = ['a']
# content of conftest.py
import pytest

@pytest.fixture(scope='function', autouse=True)
def test_log(request):
    # Here logging is used; you can use whatever you want for logs
    log.info("STARTED Test '{}'".format(request.node.name))

    def fin():
        log.info("COMPLETED Test '{}' \n".format(request.node.name))

    request.addfinalizer(fin)
Try my little wrapper function which returns the full name of the test, the file and the test name. You can use whichever you like later.
I used it within conftest.py where fixtures do not work as far as I know.
import os

def get_current_test():
    full_name = os.environ.get('PYTEST_CURRENT_TEST').split(' ')[0]
    test_file = full_name.split("::")[0].split('/')[-1].split('.py')[0]
    test_name = full_name.split("::")[1]
    return full_name, test_file, test_name
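For example, a hedged usage sketch calling it from a hook in the same conftest.py (pytest_runtest_call is just one possible place; PYTEST_CURRENT_TEST is already set by then):
def pytest_runtest_call(item):
    full_name, test_file, test_name = get_current_test()
    print("about to run", test_name, "from", test_file)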
You might have multiple tests, in which case...
test_names = [n for n in dir(self) if n.startswith('test_')]
...will give you all the functions and instance variables that begin with "test_" in self. As long as you don't have any variables named "test_something" this will work.
You can also define a method setup_method(self, method) instead of setup(self) and that will be called before each test method invocation. Using this, you're simply given each method as a parameter. See: http://pytest.org/latest/xunit_setup.html
You could give the inspect module a try.
import inspect

def foo():
    print "My name is: ", inspect.stack()[0][3]

foo()
Output: My name is: foo
Try type(self).__name__ perhaps?