Possible to skip a test depending on fixture value? - python

I have a lot of tests broken into many different files. In my conftest.py I have something like this:
@pytest.fixture(scope="session", params=["foo", "bar", "baz"])
def special_param(request):
    return request.param
While the majority of the tests work with all values, some only work with foo and baz. This means I have this in several of my tests:
def test_example(special_param):
    if special_param == "bar":
        pytest.skip("Test doesn't work with bar")
I find this a little ugly and was hoping for a better way to do it. Is there any way to use the skip decorator to achieve this? If not, is it possible to write my own decorator that can do this?

You can, as one of the comments from @abarnert suggests, write a custom decorator using functools.wraps for exactly this purpose. In my example below, I skip a test if the configs fixture (some configuration dictionary) has a report_type of "enhanced" rather than "standard" (but it could be whatever condition you want to check).
Here's the fixture we'll use to determine whether or not to skip a test:
from typing import Dict

import pytest


@pytest.fixture
def configs() -> Dict:
    return {"report_type": "enhanced", "some_other_fixture_params": 123}
Now we write a decorator that skips the test by inspecting the configs fixture for its report_type value:
from functools import wraps
from typing import Callable


def skip_if_report_enhanced(test_function: Callable) -> Callable:
    @wraps(test_function)
    def wrapper(*args, **kwargs):
        configs = kwargs.get("configs")  # configs is the fixture passed into the test function
        report_type = configs.get("report_type", "standard")
        if report_type == "enhanced":
            return pytest.skip(f"Skipping {test_function.__name__}")  # skip!
        return test_function(*args, **kwargs)  # otherwise, run the test
    return wrapper  # return the decorated test function
Note here that I am using kwargs.get("configs") to pull the fixture out here.
Below the test itself, the logic of which is irrelevant, just that the test runs or not:
@skip_if_report_enhanced
def test_that_it_ran(configs):
    print("The test ran!")  # shouldn't get here if the report type is set to enhanced
The output from running this test:
SKIPPED                                                                  [100%]
Skipped: Skipping test_that_it_ran
============================== 1 skipped in 0.55s ==============================

Process finished with exit code 0

One solution is to override the fixture with @pytest.mark.parametrize. For example:
@pytest.mark.parametrize("special_param", ["foo"])
def test_example(special_param):
    ...  # do test
Another possibility is to not use the special_param fixture at all and explicitly use the value "foo" where needed. The downside is that this only works if there are no other fixtures that also rely on special_param.
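For instance, a rough sketch of that option (do_something is a placeholder for whatever the test exercises, not a name from the question):
def test_example_foo_only():
    # no fixture parameter at all; the value is hard-coded
    result = do_something("foo")
    assert result is not None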


Pytest: How to know which fixture is used in a test

Maybe I'm not "getting" the philosophy of py.test... I'm trying to re-write a bunch of tests for aws lambda code that receives events (webhooks with json payloads) and processes them. I have stored a bunch of these events in .json files and have used them as fixtures. Now, in some tests, I would like to test that the code I'm running returns the correct value for different specific fixtures. Currently I have it structured like so
OD_SHIPMENT_EVENT_FILE = 'od_shipment_event.json'

def event_file_path(file_name):
    return os.path.join(
        os.path.dirname(__file__),
        'events',
        file_name
    )
@pytest.fixture()
def event(event_file=EVENT_FILE):
    '''Trigger event'''
    with open(event_file) as f:
        return json.load(f)

def load_event(event_file_path):
    with open(event_file_path) as f:
        return json.load(f)

@pytest.fixture(params=[event_file_path(OD_SHIPMENT_EVENT_FILE),
                        event_file_path(OD_SHIPMENT_EVENT_FILE_EU),
                        event_file_path(OD_SHIPMENT_EVENT_FILE_MULTIPLE),
                        event_file_path(OD_BADART_SHIPMENT_EVENT_FILE),
                        ])
def od_event(request):
    return load_event(request.param)
...
def test__get_order_item_ids_from_od_shipment(od_event):
    items = get_order_item_ids_from_od_shipment_event(od_event)
    assert items
That last test will be run once with each of the fixture's parameters. But depending on which one it is, I would like to check that items has a specific value.
The closest thing I found was Parametrizing fixtures and test functions but I'm not sure this is the correct way to go or if I'm missing something in the philosophy of Pytest. Would love any pointers or feedback.
Also, that event file loading code is probably bloated and could be cleaned up. Suggestions are welcome.
Update
Based on the answer by Christian Karcher below, this helps a bit
@pytest.fixture
def parametrized_od_event(request):
    yield load_event(request.param)
@pytest.mark.parametrize("parametrized_od_event",
                         [event_file_path(OD_BADART_ORDER_UPDATE)],
                         indirect=True)
def test__get_badart_items_from_order_metadata(parametrized_od_event):
    bad_art_items = get_badart_item_ids_from_order_metadata(parametrized_od_event)
    assert 3 == len(bad_art_items)
But I would like to do something a bit cleaner like this:
@pytest.mark.parametrize("parametrized_od_event,expected",
                         [(event_file_path(OD_BADART_ORDER_UPDATE), 3),
                          (event_file_path(OD_NOBADART_ORDER_UPDATE), 0)],
                         indirect=True)
def test__get_badart_items_from_order_metadata_multi(parametrized_od_event, expected):
    bad_art_items = get_badart_item_ids_from_order_metadata(parametrized_od_event)
    assert expected == len(bad_art_items)
In the second example, if I use indirect=True it can't find the expected fixture, and if I don't use indirect=True it doesn't actually call the parametrized_od_event fixture and simply passes the path to the file, without loading it.
Your way of parametrizing the fixture looks okay to me.
An alternative way would be indirect parametrization of the fixture during the test. This way, each test can have its own subset of individual parameters:
import pytest

@pytest.fixture
def od_event(request):
    yield request.param * 5

@pytest.mark.parametrize("od_event", [1, 2, 3], indirect=True)
def test_get_order_item_ids_from_od_shipment(od_event):
    assert od_event < 10
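For the question's second example, where expected should stay a plain parameter, note that indirect can also be a list naming only the arguments that should be routed through the fixture; the rest are passed to the test as-is. A minimal sketch, reusing the fixture and helpers from the question's update:
import pytest

@pytest.mark.parametrize(
    "parametrized_od_event,expected",
    [(event_file_path(OD_BADART_ORDER_UPDATE), 3),
     (event_file_path(OD_NOBADART_ORDER_UPDATE), 0)],
    indirect=["parametrized_od_event"],  # only this argument goes through the fixture
)
def test__get_badart_items_multi(parametrized_od_event, expected):
    bad_art_items = get_badart_item_ids_from_order_metadata(parametrized_od_event)
    assert expected == len(bad_art_items)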
Some further pointers:
Make your fixtures yield their value instead of returning it; this way you can optionally include teardown code afterwards.
Suggestion for the file loading code: pathlib.Path with slash as a path join operator is always a nice option:
from pathlib import Path

def event_file_path(file_name):
    return Path(__file__).parent / 'events' / file_name

How can I rewrite this fixture call so it won't be called directly?

I defined the following fixture in a test file:
import os

import pytest
from dotenv import load_dotenv, find_dotenv
from packaging import version  # for comparing version numbers

load_dotenv(find_dotenv())

VERSION = os.environ.get("VERSION")
API_URL = os.environ.get("API_URL")

@pytest.fixture()
def skip_before_version():
    """
    Creates a fixture that takes parameters and
    skips a test if it depends on certain features implemented in a certain version
    :parameter target_version:
    :parameter type: string
    """
    def _skip_before(target_version):
        less_than = version.parse(current_version) < version.parse(VERSION)
        return pytest.mark.skipif(less_than)
    return _skip_before

skip_before = skip_before_version()("0.0.1")
I want to use skip_before as a fixture in certain tests. I call it like this:
# @skip_before_version("0.0.1")  # tried this before and got the same error, so tried reworking it...
@when(parsers.cfparse("{categories} are added as categories"))
def add_categories(skip_before, create_tree, categories):  # now putting the fixture alongside parameters
    pass
When I run this, I get the following error:
Fixture "skip_before_version" called directly. Fixtures are not meant to be called directly,
but are created automatically when test functions request them as parameters.
See https://docs.pytest.org/en/stable/fixture.html for more information about fixtures, and
https://docs.pytest.org/en/stable/deprecations.html#calling-fixtures-directly about how to update your code.
How is this still being called directly? How can I fix this?
If I understand your goal correctly, you want to be able to skip tests based on a version restriction specifier. There are many ways to do that; I can suggest an autouse fixture that will skip the test based on a custom marker condition. Example:
import os

import pytest
from packaging.specifiers import SpecifierSet

VERSION = "1.2.3"  # read from environment etc.

@pytest.fixture(autouse=True)
def skip_based_on_version_compat(request):
    # get the version_compat marker
    version_compat = request.node.get_closest_marker("version_compat")
    if version_compat is None:  # test is not marked
        return
    if not version_compat.args:  # no specifier passed to marker
        return
    spec_arg = version_compat.args[0]
    spec = SpecifierSet(spec_arg)
    if VERSION not in spec:
        pytest.skip(f"Current version {VERSION} doesn't match test specifiers {spec_arg!r}.")
The fixture skip_based_on_version_compat will be invoked on each test, but only do something if the test is marked with @pytest.mark.version_compat. Example tests:
@pytest.mark.version_compat(">=1.0.0")
def test_current_gen():
    assert True

@pytest.mark.version_compat(">=2.0.0")
def test_next_gen():
    raise NotImplementedError()
With VERSION = "1.2.3", the first test will be executed, the second one will be skipped. Notice the invocation of pytest.skip to immediately skip the test. Returning pytest.mark.skip in the fixture will bring you nothing since the markers are already evaluated long before that.
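If you adopt this approach, it is also worth registering the custom marker so pytest doesn't warn about an unknown mark; a small sketch for conftest.py (the marker name matches the answer, the description text is just an example):
# conftest.py
def pytest_configure(config):
    # register the custom marker so --strict-markers and warnings stay quiet
    config.addinivalue_line(
        "markers",
        "version_compat(spec): skip the test unless the current version matches the specifier",
    )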
Also, I noticed you are writing gherkin tests (using pytest-bdd, presumably). With the above approach, skipping whole scenarios should also be possible:
@pytest.mark.version_compat(">=1.0.0")
@scenario("my.feature", "my scenario")
def test_scenario():
    pass
Alternatively, you can mark the scenarios in feature files:
Feature: Foo
    Lorem ipsum dolor sit amet.

    @version_compat(">=1.0.0")
    Scenario: doing future stuff
        Given foo is implemented
        When I see foo
        Then I do bar
and use pytest-bdd's own hooks:
import re

def pytest_bdd_apply_tag(tag, function):
    matcher = re.match(r'^version_compat\("(?P<spec_arg>.*)"\)$', tag)
    if matcher is None:
        return None  # not our tag; let pytest-bdd handle it the default way
    spec_arg = matcher.groupdict()["spec_arg"]
    spec = SpecifierSet(spec_arg)
    if VERSION not in spec:
        marker = pytest.mark.skip(
            reason=f"Current version {VERSION} doesn't match restriction {spec_arg!r}."
        )
        marker(function)
    return True
Unfortunately, neither custom fixtures nor markers will work for skipping individual steps (and you will still be skipping the whole scenario, since it is an atomic test unit in gherkin). I didn't find a reliable way to befriend pytest-bdd steps with pytest stuff; they look like they are simply ignored. Nevertheless, you can easily write a custom decorator serving the same purpose:
import functools

def version_compat(spec_arg):
    def deco(func):
        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            spec = SpecifierSet(spec_arg)
            if VERSION not in spec:
                pytest.skip(f"Current version {VERSION} doesn't match test specifiers {spec_arg!r}.")
            return func(*args, **kwargs)
        return wrapper
    return deco
Using version_compat deco in a step:
@when('I am foo')
@version_compat(">=2.0.0")
def i_am_foo():
    ...
Pay attention to the ordering - placing decorators outside of pytest-bdd's own stuff will not trigger them (I guess worth opening an issue, but meh).

Is it possible to use a fixture inside pytest_generate_tests()?

I have a handful of fixtures in conftest.py that work well inside actual test functions. However, I would like to parameterize some tests using pytest_generate_tests() based on the data in some of these fixtures.
What I'd like to do (simplified):
-- conftest.py --

# my fixture returns a list of device names.
@pytest.fixture(scope="module")
def device_list(something):
    return ['dev1', 'dev2', 'dev3', 'test']

-- test001.py --

# generate tests using the device_list fixture I defined above.
def pytest_generate_tests(metafunc):
    metafunc.parametrize('devices', itertools.chain(device_list), ids=repr)

# A test that is parametrized by the above function.
def test_do_stuff(devices):
    assert "dev" in devices
# Output should/would be:
dev1: pass
dev2: pass
dev3: pass
test: FAIL
Of course, the problem I'm hitting is that in pytest_generate_tests(), it complains that device_list is undefined. If I try to pass it in, pytest_generate_tests(metafunc, device_list), I get an error.
E pluggy.callers.HookCallError: hook call must provide argument 'device_list'
The reason I want to do this is that I use that device_list list inside a bunch of different tests in different files, so I want to use pytest_generate_tests() to parametrize tests using the same list.
Is this just not possible? What is the point of using pytest_generate_tests() if I have to duplicate my fixtures inside that function?
From what I've gathered over the years, fixtures are pretty tightly coupled to pytest's post-collection stage. I've tried a number of times to do something similar, and it's never really quite worked out.
Instead, you could make a function that does the things your fixture would do, and call that inside the generate_tests hook. Then if you need it still as a fixture, call it again (or save the result or whatever).
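A minimal sketch of that idea, reusing the names from the question (get_device_list is a made-up helper name):
# conftest.py
import pytest

def get_device_list():
    # plain function: safe to call from hooks and fixtures alike
    return ['dev1', 'dev2', 'dev3', 'test']

@pytest.fixture(scope="module")
def device_list():
    # keep the fixture for tests that request it directly
    return get_device_list()

def pytest_generate_tests(metafunc):
    if 'devices' in metafunc.fixturenames:
        metafunc.parametrize('devices', get_device_list(), ids=repr)
Because get_device_list is a plain function, both the hook at collection time and the fixture at test time can call it without pytest getting involved.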
@pytest.fixture(scope="module", autouse=True)
def device_list(something):
    device_list = ['dev1', 'dev2', 'dev3', 'test']
    return device_list
By using autouse=True in the pytest fixture decorator you can ensure that pytest_generate_tests has access to device_list.
This article somehow provides a workaround.
Just have a look at section Hooks at the rescue, and you're gonna get this:
import importlib

def load_tests(name):
    # Load module which contains test data
    tests_module = importlib.import_module(name)
    # Tests are to be found in the variable `tests` of the module
    for test in tests_module.tests.items():
        yield test

def pytest_generate_tests(metafunc):
    """This allows us to load tests from external files by
    parametrizing tests with each test case found in a data_X
    file
    """
    for fixture in metafunc.fixturenames:
        if fixture.startswith('data_'):
            # Load associated test data
            tests = load_tests(fixture)
            metafunc.parametrize(fixture, tests)
See, here it loads the data by importing a module named after each fixture prefixed with data_, and parametrizes that fixture with the test cases found there.
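To make the workaround concrete, a hypothetical data module and test could look like this (data_payments and test_payment_amount are made-up names, just to show how the pieces connect):
# data_payments.py -- the module imported by load_tests("data_payments")
tests = {
    "zero_amount": 0,
    "small_amount": 5,
}

# test_payments.py
def test_payment_amount(data_payments):
    # pytest_generate_tests parametrizes data_payments with each (name, value) pair
    name, amount = data_payments
    assert amount >= 0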

Chicken or the egg with pytest fixtures

I want to use the record_xml_property fixture.
No problem. It works perfectly when it is presently available.
However, I want my tests to run smoothly whether this fixture is installed or not. When I create a 'wrapper' fixture, something like this:
# this one works nicely when record_xml_property is there
@pytest.fixture()
def real_property_handler(record_xml_property, mykey, myval):
    record_xml_property(mykey, myval)

# this does a harmless print instead
@pytest.fixture()
def fallback_property_handler(mykey, myval):
    print('{0}={1}'.format(mykey, myval))

def MyXMLWrapper(mykey, myval):
    try:  # I want to use the REAL one if I can
        real_property_handler(record_xml_property, mykey, myval)
    except:  # but still do something nice if it's not
        fallback_property_handler(mykey, myval)
My test should not have to be cognizant of any fixture(s) that may or may not underlie my wrapper function:
def test_simple():
    MyXMLWrapper('mykeyname', 'mykeyvalue')
    assert True
I'm stuck because in order for my tests to ever work properly it appears that I have to pass the record_xml_property fixture as a parameter which I can never do in environments that don't have this fixture installed.
I've tried several things.
If I make MyXMLWrapper a fixture itself then I have to pass it the record_xml_property fixture, but if I define MyXMLWrapper as a function (above) then I have no way to reference record_xml_property in case it DOES exist.
What am I not understanding here about how fixtures work?
Thanks.
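One possible direction, sketched here as an assumption rather than anything taken from the question: request.getfixturevalue can look an optional fixture up by name at runtime, so a wrapper fixture can fall back when the lookup fails (xml_property and _fallback below are made-up names):
import pytest

@pytest.fixture()
def xml_property(request):
    """Return the real record_xml_property if it exists, else a harmless fallback."""
    try:
        # look the fixture up by name at runtime, so tests never have to
        # request record_xml_property directly
        return request.getfixturevalue("record_xml_property")
    except Exception:  # fixture not available in this environment
        def _fallback(key, value):
            print('{0}={1}'.format(key, value))
        return _fallback

def test_simple(xml_property):
    xml_property('mykeyname', 'mykeyvalue')
    assert True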

How to use xunit-style setup_function

The docs say:
If you would rather define test functions directly at module level you can also use the following functions to implement fixtures:
def setup_function(function):
    """ setup any state tied to the execution of the given function.
    Invoked for every test function in the module.
    """

def teardown_function(function):
    """ teardown any state that was previously setup with a setup_function
    call.
    """
but actually it's unclear how you're supposed to use them.
I tried putting them in my test file exactly as shown above, but they don't get called. Looking closer at them, with that function arg they look like decorators. So I tried to find a way to do:
@pytest.setup_function
def my_setup():
    ...  # not called
I couldn't find anywhere to import one from as a decorator though, so that can't be right.
I found this thread on grokbase where someone explains you can do:
@pytest.fixture(autouse=True)
def setup_function(request):
    ...  # this one gets called
It works, but it doesn't seem related to the docs any more... there's no reason to call it setup_function here.
You just implement the body of setup_function(); it is called before each function whose name starts with test_, and the test function is handed in as a parameter:
def setup_function(fun):
    print("in setup function: " + fun.__name__)

def test_test():
    assert False
This will give as output, when run with py.test:
============================= test session starts ==============================
platform linux2 -- Python 2.7.6 -- pytest-2.3.5
collected 1 items

test/test_test.py F

=================================== FAILURES ===================================
__________________________________ test_test ___________________________________

    def test_test():
>       assert False
E       assert False

test/test_test.py:7: AssertionError
------------------------------- Captured stdout --------------------------------
in setup function: test_test
=========================== 1 failed in 0.01 seconds ===========================
The line before the last one shows the output from the actual call to setup_function()
A slightly more useful example, actually doing something that influences the test function:
def setup_function(function):
    function.answer = 17

def teardown_function(function):
    del function.answer

def test_modlevel():
    assert modlevel[0] == 42  # modlevel is set up at module level, see below
    assert test_modlevel.answer == 17
This was taken from py.test's own tests, always a good (and hopefully complete) set of examples of all the features of py.test.
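For the snippet to run on its own, modlevel needs the module-level setup that accompanies it in that test suite; roughly:
modlevel = []

def setup_module(module):
    # runs once before any test in the module
    modlevel.append(42)

def teardown_module(module):
    # runs once after all tests in the module
    modlevel.pop()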
