Pytest: How to know which fixture is used in a test - python

Maybe I'm not "getting" the philosophy of py.test... I'm trying to rewrite a bunch of tests for AWS Lambda code that receives events (webhooks with JSON payloads) and processes them. I have stored a bunch of these events in .json files and use them as fixtures. Now, in some tests, I would like to check that the code returns the correct value for different specific fixtures. Currently I have it structured like so:
OD_SHIPMENT_EVENT_FILE = 'od_shipment_event.json'

def event_file_path(file_name):
    return os.path.join(
        os.path.dirname(__file__),
        'events',
        file_name
    )

@pytest.fixture()
def event(event_file=EVENT_FILE):
    '''Trigger event'''
    with open(event_file) as f:
        return json.load(f)

def load_event(event_file_path):
    with open(event_file_path) as f:
        return json.load(f)

@pytest.fixture(params=[event_file_path(OD_SHIPMENT_EVENT_FILE),
                        event_file_path(OD_SHIPMENT_EVENT_FILE_EU),
                        event_file_path(OD_SHIPMENT_EVENT_FILE_MULTIPLE),
                        event_file_path(OD_BADART_SHIPMENT_EVENT_FILE),
                        ])
def od_event(request):
    return load_event(request.param)
...
def test__get_order_item_ids_from_od_shipment(od_event):
    items = get_order_item_ids_from_od_shipment_event(od_event)
    assert items
That last test will be run with each of the fixtures passed in as parameters. But depending on which one it is, I would like to check that 'items' is some value.
The closest thing I found was Parametrizing fixtures and test functions but I'm not sure this is the correct way to go or if I'm missing something in the philosophy of Pytest. Would love any pointers or feedback.
Also, that event file loading code is probably bloated and could be cleaned up. Suggestions are welcome.
Update
Based on the answer by Christian Karcher below, this helps a bit
@pytest.fixture
def parametrized_od_event(request):
    yield load_event(request.param)

@pytest.mark.parametrize("parametrized_od_event",
                         [event_file_path(OD_BADART_ORDER_UPDATE), ],
                         indirect=True)
def test__get_badart_items_from_order_metadata(parametrized_od_event):
    bad_art_items = get_badart_item_ids_from_order_metadata(parametrized_od_event)
    assert 3 == len(bad_art_items)
But I would like to do something a bit cleaner like this:
@pytest.mark.parametrize("parametrized_od_event,expected",
                         [(event_file_path(OD_BADART_ORDER_UPDATE), 3),
                          (event_file_path(OD_NOBADART_ORDER_UPDATE), 0)],
                         indirect=True)
def test__get_badart_items_from_order_metadata_multi(parametrized_od_event, expected):
    bad_art_items = get_badart_item_ids_from_order_metadata(parametrized_od_event)
    assert expected == len(bad_art_items)
In the second example, if I use indirect=True it can't find the expected fixture, and if I don't use indirect=True it doesn't actually call the parametrized_od_event fixture and simply passes the path to the file, without loading it.

Your way of parametrizing the fixture looks okay to me.
An alternative way would be indirect parametrization of the fixture during the test. This way, each test can have its own subset of individual parameters:
import pytest

@pytest.fixture
def od_event(request):
    yield request.param * 5

@pytest.mark.parametrize("od_event", [1, 2, 3], indirect=True)
def test_get_order_item_ids_from_od_shipment(od_event):
    assert od_event < 10
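If, as in the update above, you also want a plain expected value next to the indirectly parametrized fixture, note that indirect also accepts a list of argument names; only the listed arguments are routed through the fixture, while the rest are passed to the test directly. A rough sketch reusing the names from the question (the helpers and constants are assumed to exist as defined there):

@pytest.fixture
def parametrized_od_event(request):
    yield load_event(request.param)

@pytest.mark.parametrize(
    "parametrized_od_event,expected",
    [(event_file_path(OD_BADART_ORDER_UPDATE), 3),
     (event_file_path(OD_NOBADART_ORDER_UPDATE), 0)],
    indirect=["parametrized_od_event"])  # only this argument goes through the fixture
def test__get_badart_items_from_order_metadata_multi(parametrized_od_event, expected):
    bad_art_items = get_badart_item_ids_from_order_metadata(parametrized_od_event)
    assert expected == len(bad_art_items)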
Some further pointers:
Make your fixtures yield their value instead of returning it; that way you can optionally include teardown code afterwards (see the small sketch after the pathlib example below).
Suggestion for the file loading code: pathlib.Path with slash as a path join operator is always a nice option:
from pathlib import Path

def event_file_path(file_name):
    return Path(__file__).parent / 'events' / file_name
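And a small sketch of the yield style with teardown, using the file handle itself as the fixture value (the fixture name here is just an illustration):

@pytest.fixture
def event_stream():
    f = open(event_file_path(OD_SHIPMENT_EVENT_FILE))  # setup
    yield f                                            # value handed to the test
    f.close()                                          # teardown runs after the test finishes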

Related

Mocking a path which is given in a method - Python

How can I mock the path ".test/locations.yml"? It does not exist in the project where I run my tests; it only exists in the CI environment.
When I test my function get_matches_mr, it fails because the location file is not found.
Do you have any idea?
Code
def read_location_file(self):
    locations_file_path = os.path.join(".test/location.yml")
    if not os.path.isfile(locations_file_path):
        raise RuntimeError("Location file not found: " + locations_file_path)
    with open(locations_file_path, "r") as infile:
        location_file = yaml.safe_load(infile.read())
    test_locations = location_file["paths"]
    return test_locations

def get_matches_mr(self):
    merge_request = MergeRequest()
    locations = self.read_location_file()
    data_locations = merge_request.get_matches(locations)
    return data_locations
As suggested in the comment, I would also say the best way to test such a scenario is to mock read_location_file. Mocking file system methods like os.path.join would mean that you limit the test to a certain implementation, which is bad practice. The unit test suite should not know about the implementation details, only about the interfaces to be tested. Usually, in test-driven development you write the test before the logic is implemented; that way you would not even know os.path.join is used.
The following code shows how to mock the read_location_file method. Assuming the class containing your two methods is called ClassToBeTested (replace with your actual class name).
import os.path

from class_to_test import ClassToBeTested

def test_function_to_test(tmpdir, monkeypatch):
    def mockreturn(self):  # `self` is needed because this replaces an instance method
        return [
            os.path.join(tmpdir, "sample/path/a"),
            os.path.join(tmpdir, "sample/path/b"),
            os.path.join(tmpdir, "sample/path/c"),
        ]

    monkeypatch.setattr(ClassToBeTested, 'read_location_file', mockreturn)

    c = ClassToBeTested()
    assert c.get_matches_mr()
Note: I use the fixtures tmpdir and monkeypatch, which are both built-ins of pytest:
See this answer to find some info about tmpdir (in the linked answer I explained tmp_path, but it provides the same concept as tmpdir; the difference is tmp_path returns a pathlib.Path object, and tmpdir returns a py.path.local object).
monkeypatch is a pytest fixture that provides methods for mocking/patching of objects.
Split your function into two parts:
Finding and opening the correct file.
Reading and parsing the opened file.
Your function then only does the second part; the caller is responsible for the first.
def read_location_file(infile):
    location_file = yaml.safe_load(infile.read())
    test_locations = location_file["paths"]
    return test_locations
Your test code can then use something like io.StringIO to verify that your function can parse it correctly.
def test_read_location():
    assert read_location_file(io.StringIO("...")) == ...
Your production code will handle opening the file:
with open(location_file_path) as f:
    locations = read_location_file(f)
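For illustration, a filled-in version of that test could look like the following (the YAML content and expected paths here are made up):

import io
import yaml

def read_location_file(infile):
    location_file = yaml.safe_load(infile.read())
    return location_file["paths"]

def test_read_location_file():
    fake_file = io.StringIO("paths:\n  - /data/a\n  - /data/b\n")
    assert read_location_file(fake_file) == ["/data/a", "/data/b"]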

Is it possible to use a fixture inside pytest_generate_tests()?

I have a handful of fixtures in conftest.py that work well inside actual test functions. However, I would like to parameterize some tests using pytest_generate_tests() based on the data in some of these fixtures.
What I'd like to do (simplified):
-- conftest.py --
# my fixture returns a list of device names.
@pytest.fixture(scope="module")
def device_list(something):
    return ['dev1', 'dev2', 'dev3', 'test']
-- test001.py --
# generate tests using the device_list fixture I defined above.
def pytest_generate_tests(metafunc):
    metafunc.parametrize('devices', itertools.chain(device_list), ids=repr)

# A test that is parametrized by the above function.
def test_do_stuff(devices):
    assert "dev" in devices

# Output should/would be:
# dev1: pass
# dev2: pass
# dev3: pass
# test: FAIL
Of course, the problem I'm hitting is that in pytest_generate_tests() it complains that device_list is undefined. If I try to pass it in, as pytest_generate_tests(metafunc, device_list), I get an error:
E pluggy.callers.HookCallError: hook call must provide argument 'device_list'
The reason I want to do this is that I use that device_list list inside a bunch of different tests in different files, so I want to use pytest_generate_tests() to parametrize tests using the same list.
Is this just not possible? What is the point of using pytest_generate_tests() if I have to duplicate my fixtures inside that function?
From what I've gathered over the years, fixtures are pretty tightly coupled to pytest's post-collection stage. I've tried a number of times to do something similar, and it's never really quite worked out.
Instead, you could make a function that does the things your fixture would do, and call that inside the generate_tests hook. Then if you need it still as a fixture, call it again (or save the result or whatever).
@pytest.fixture(scope="module", autouse=True)
def device_list(something):
    device_list = ['dev1', 'dev2', 'dev3', 'test']
    return device_list
By using autouse=True in the pytest fixture decorator you can ensure that pytest_generate_tests has access to device_list.
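A sketch of the plain-function approach described above (names are illustrative):

# conftest.py
import pytest

def get_device_list():
    # Plain helper: callable from hooks as well as from fixtures.
    return ['dev1', 'dev2', 'dev3', 'test']

@pytest.fixture(scope="module")
def device_list():
    return get_device_list()

def pytest_generate_tests(metafunc):
    if 'device' in metafunc.fixturenames:
        metafunc.parametrize('device', get_device_list(), ids=repr)

# test001.py
def test_do_stuff(device):
    assert "dev" in device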
This article provides a workaround.
Just have a look at the section "Hooks at the rescue", and you'll get this:
import importlib

def load_tests(name):
    # Load module which contains test data
    tests_module = importlib.import_module(name)
    # Tests are to be found in the variable `tests` of the module
    for test in tests_module.tests.items():
        yield test

def pytest_generate_tests(metafunc):
    """This allows us to load tests from external files by
    parametrizing tests with each test case found in a data_X
    file
    """
    for fixture in metafunc.fixturenames:
        if fixture.startswith('data_'):
            # Load associated test data
            tests = load_tests(fixture)
            metafunc.parametrize(fixture, tests)
As you can see, it loads the data for any fixture whose name is prefixed with data_.
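For example, with a hypothetical data module named data_orders (module, contents and test are made up, and the module has to be importable by that name), the hook above would parametrize any test that asks for a data_orders argument:

# data_orders.py -- the hook expects the data in a module-level `tests` mapping
tests = {
    "doubles_one": {"input": 1, "expected": 2},
    "doubles_two": {"input": 2, "expected": 4},
}

# test_orders.py -- each parametrized value is a (name, case) item from the mapping
def test_doubling(data_orders):
    name, case = data_orders
    assert case["input"] * 2 == case["expected"]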

Possible to skip a test depending on fixture value?

I have a lot of tests broken into many different files. In my conftest.py I have something like this:
@pytest.fixture(scope="session",
                params=["foo", "bar", "baz"])
def special_param(request):
    return request.param
While the majority of the tests work with all values, some only work with foo and baz. This means I have this in several of my tests:
def test_example(special_param):
    if special_param == "bar":
        pytest.skip("Test doesn't work with bar")
I find this a little ugly and was hoping for a better way to do it. Is there any way to use the skip decorator to achieve this? If not, is it possible to write my own decorator that can do this?
You can, as suggested in one of the comments by @abarnert, write a custom decorator using functools.wraps for exactly this purpose. In my example below, I am skipping a test if a fixture configs (some configuration dictionary) has report type enhanced rather than standard (but it could be whatever condition you want to check).
Here's the fixture we'll use to determine whether to skip a test or not:
from typing import Dict

import pytest

@pytest.fixture
def configs() -> Dict:
    return {"report_type": "enhanced", "some_other_fixture_params": 123}
Now we write a decorator that will skip the test by inspecting the fixture configs contents for its report_type key value:
from functools import wraps
from typing import Callable

import pytest

def skip_if_report_enhanced(test_function: Callable) -> Callable:
    @wraps(test_function)
    def wrapper(*args, **kwargs):
        configs = kwargs.get("configs")  # configs is a fixture passed into the test function
        report_type = configs.get("report_type", "standard")
        if report_type == "enhanced":
            return pytest.skip(f"Skipping {test_function.__name__}")  # skip!
        return test_function(*args, **kwargs)  # otherwise, run the test
    return wrapper  # return the decorated test function
Note that I am using kwargs.get("configs") to pull the fixture out here.
Below is the test itself; its logic is irrelevant, what matters is whether it runs or not:
@skip_if_report_enhanced
def test_that_it_ran(configs):
    print("The test ran!")  # shouldn't get here if the report type is set to enhanced
The output from running this test:
============================== 1 skipped in 0.55s ==============================
SKIPPED [100%] Skipped: Skipping test_that_it_ran
Process finished with exit code 0
One solution is to override the fixture with @pytest.mark.parametrize. For example:
@pytest.mark.parametrize("special_param", ["foo"])
def test_example(special_param):
    ...  # do test
Another possibility is to not use the special_param fixture at all and explicitly use the value "foo" where needed. The downside is that this only works if there are no other fixtures that also rely on special_param.
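Yet another pattern, if you prefer to keep the fixture, is a custom marker plus a small autouse fixture in conftest.py that skips whenever the current parameter is listed on the test. A rough sketch (the marker name skip_params is made up and should be registered in pytest.ini to avoid warnings):

import pytest

@pytest.fixture(autouse=True)
def skip_unsupported_params(request):
    marker = request.node.get_closest_marker("skip_params")
    if marker and "special_param" in request.fixturenames:
        value = request.getfixturevalue("special_param")
        if value in marker.args:
            pytest.skip("Test doesn't work with %s" % value)

Tests then declare the values they cannot handle:

@pytest.mark.skip_params("bar")
def test_example(special_param):
    ...  # runs only for "foo" and "baz"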

Python: how to create a positive test for procedures?

I have a class with some @staticmethods that are procedures, i.e. they do not return anything / their return type is None.
If they fail during their execution, they throw an Exception.
I want to unittest this class, but I am struggling with designing positive tests.
For negative tests this task is easy:
assertRaises(ValueError, my_static_method, *args)
assertRaises(MyCustomException, my_static_method, *args)
...but how do I create positive tests? Should I redesign my procedures to always return True after execution, so that I can use assertTrue on them?
Without seeing the actual code it is hard to guess, however I will make some assumptions:
The logic in the static methods is deterministic.
After doing some calculation on the input value there is a result, and some operation is done with this result.
Python 3.4 (mock has evolved and moved over the last few versions).
In order to test code one has to check that, at least in the end, it produces the expected results. If there is no return value, then the result is usually stored or sent somewhere. In this case we can check that the method that stores or sends the result is called with the expected arguments.
This can be done with the tools available in the mock package that has become part of the unittest package.
e.g. the following static method in my_package/my_module.py:
import uuid

class MyClass:
    @staticmethod
    def my_procedure(value):
        if isinstance(value, str):
            prefix = 'string'
        else:
            prefix = 'other'
        with open('/tmp/%s_%s' % (prefix, uuid.uuid4()), 'w') as f:
            f.write(value)
In the unit test I will check the following:
open has been called.
The expected file name has been calculated.
open has been called in write mode.
The write() method of the file handle has been called with the expected argument.
Unittest:
import unittest
from unittest.mock import patch

from my_package.my_module import MyClass

class MyClassTest(unittest.TestCase):
    @patch('my_package.my_module.open', create=True)
    def test_my_procedure(self, open_mock):
        # the file handle is what __enter__() returns when open() is used as a context manager
        write_mock = open_mock.return_value.__enter__.return_value.write
        MyClass.my_procedure('test')
        self.assertEqual(open_mock.call_count, 1)
        file_name, mode = open_mock.call_args[0]
        self.assertTrue(file_name.startswith('/tmp/string_'))
        self.assertEqual(mode, 'w')
        write_mock.assert_called_once_with('test')
If your methods do something, then there should be some logic there. Let's consider this dummy example:
cool = None

def my_static_method(something):
    global cool
    try:
        cool = int(something)
    except ValueError:
        pass  # logs here
For the negative test we have:
assertRaises(ValueError, my_static_method, *args)
and for the positive test we can check cool:
assertIsNotNone(cool)
So you're checking whether invoking my_static_method affects cool.

pytest fixture of fixtures

I am currently writing tests for a medium sized library (~300 files).
Many classes in this library share the same testing scheme, coded using pytest:
File test_for_class_a.py:
import pytest

@pytest.fixture()
def setup_resource_1():
    ...

@pytest.fixture()
def setup_resource_2():
    ...

@pytest.fixture()
def setup_class_a(setup_resource_1, setup_resource_2):
    ...

def test_1_for_class_a(setup_class_a):
    ...

def test_2_for_class_a(setup_class_a):
    ...
Similar files exist for class_b, class_c, etc. The only difference is the content of setup_resource_1 and setup_resource_2.
Now I would like to re-use the fixtures setup_class_a, setup_class_b, setup_class_c defined in test_for_class_a.py, test_for_class_b.py and test_for_class_c.py to run tests on them.
In a file test_all_class.py, this works but it is limited to one fixture per test:
from test_for_class_a import *

@pytest.mark.usefixtures('setup_class_a')  # Fixture was defined in test_for_class_a.py
def test_some_things_on_class_a(request):
    ...
But I am looking for a way to perform something more general:
from test_for_class_a import *
from test_for_class_b import *  # I can make sure I have no collision here
from test_for_class_c import *  # I can make sure I have no collision here

@generate_test_for_fixture('setup_class_a', 'setup_class_b', 'setup_class_c')
def test_some_things_on_all_classes(request):
    ...
Is there any way to do something close to that?
I have been looking at factories of factories and abstract pytest factories, but I am struggling with the way pytest defines fixtures.
Is there any way to solve this problem?
We had the same problem at work, and I was hoping to write each fixture just once for every case. So I wrote the plugin pytest-data, which does that. Example:
@pytest.fixture
def resource(request):
    resource_data = get_data(request, 'resource_data', {'some': 'data', 'foo': 'foo'})
    return Resource(resource_data)

@use_data(resource_data={'foo': 'bar'})
def test_1_for_class_a(resource):
    ...

@use_data(resource_data={'foo': 'baz'})
def test_2_for_class_a(resource):
    ...
What's great about it is that you write the fixture just once, with some defaults. When you just need that fixture/resource and don't care about the specific setup, you just use it. When a test needs some specific attribute, say to check whether the resource can also handle a 100-character-long value, you can pass it via the use_data decorator instead of writing another fixture.
With that you don't have to care about conflicts, because everything is there just once. And then you can use conftest.py for all of your fixtures without importing them in test modules. For example, we made a separate module with all fixtures and included it in the top-level conftest.py.
Documentation of plugin pytest-data: http://horejsek.github.io/python-pytest-data/
One solution I found is to abuse the test cases as follows:
from test_for_class_a import *
from test_for_class_b import *
from test_for_class_c import *

list_of_all_fixtures = []

# This will force pytest to generate all sub-fixtures for class a
@pytest.mark.usefixtures('setup_class_a')
def test_register_class_a_fixtures(setup_class_a):
    list_of_all_fixtures.append(setup_class_a)

# This will force pytest to generate all sub-fixtures for class b
@pytest.mark.usefixtures('setup_class_b')
def test_register_class_b_fixtures(setup_class_b):
    list_of_all_fixtures.append(setup_class_b)

# This will force pytest to generate all sub-fixtures for class c
@pytest.mark.usefixtures('setup_class_c')
def test_register_class_c_fixtures(setup_class_c):
    list_of_all_fixtures.append(setup_class_c)

# This is the real test to apply on all fixtures
def test_all_fixtures():
    for my_fixture in list_of_all_fixtures:
        ...  # do something with my_fixture
This implicitly relies on the fact that test_all_fixtures is executed after all the test_register_class_* tests. It is obviously quite dirty, but it works...
I think only pytest_generate_tests() (example) could give you such power of customization:
def pytest_generate_tests(metafunc):
    if 'db' in metafunc.funcargnames:
        metafunc.addcall(param="d1")
        metafunc.addcall(param="d2")
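Note that metafunc.addcall was deprecated and later removed in newer pytest releases; on current pytest the equivalent would be along these lines (sketch):

def pytest_generate_tests(metafunc):
    if 'db' in metafunc.fixturenames:
        metafunc.parametrize('db', ["d1", "d2"], indirect=True)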
EDIT: Oops, answered a question that is older than the Python experience I have o.O
