pytest fixture of fixtures - python

I am currently writing tests for a medium-sized library (~300 files).
Many classes in this library share the same testing scheme, which was coded using pytest:
File test_for_class_a.py:
import pytest

@pytest.fixture()
def setup_resource_1():
    ...

@pytest.fixture()
def setup_resource_2():
    ...

@pytest.fixture()
def setup_class_a(setup_resource_1, setup_resource_2):
    ...

def test_1_for_class_a(setup_class_a):
    ...

def test_2_for_class_a(setup_class_a):
    ...
Similar files exist for class_b, class_c, etc. The only difference is the content of setup_resource_1 and setup_resource_2.
Now I would like to re-use the fixtures setup_class_a, setup_class_b, setup_class_c defined in test_for_class_a.py, test_for_class_b.py and test_for_class_c.py to run tests on them.
In a file test_all_class.py, this works but it is limited to one fixture per test:
import pytest
from test_for_class_a import *

@pytest.mark.usefixtures('setup_class_a')  # Fixture was defined in test_for_class_a.py
def test_some_things_on_class_a(request):
    ...
But I am looking for a way to perform something more general:
from test_for_class_a import *
from test_for_class_b import * # I can make sure I have no collision here
from test_for_class_c import * # I can make sure I have no collision here
==> @generate_test_for_fixture('setup_class_a', 'setup_class_b', 'setup_class_c')
def test_some_things_on_all_classes(request):
    ...
Is there any way to do something close to that?
I have been looking at factories of factories and abstract pytest factories, but I am struggling with the way pytest defines fixtures.
Is there any way to solve this problem?

We had the same problem at work, and I was hoping to write each fixture just once for every case. So I wrote the plugin pytest-data, which does that. Example:
@pytest.fixture
def resource(request):
    resource_data = get_data(request, 'resource_data', {'some': 'data', 'foo': 'foo'})
    return Resource(resource_data)

@use_data(resource_data={'foo': 'bar'})
def test_1_for_class_a(resource):
    ...

@use_data(resource_data={'foo': 'baz'})
def test_2_for_class_a(resource):
    ...
What's great about it is that you write the fixture just once, with some defaults. When you just need that fixture/resource and don't care about the specific setup, you simply use it. When a test needs some specific attribute, say to check whether that resource can also handle a 100-character-long value, you can pass it via the use_data decorator instead of writing another fixture.
With that you don't have to worry about conflicts, because everything is defined just once. You can also keep all of your fixtures in conftest.py, without importing them in the test modules. For example, we put all fixtures in a separate, deeper module and included them all in the top-level conftest.py.
Documentation of the pytest-data plugin: http://horejsek.github.io/python-pytest-data/

One solution I found is to abuse the test cases as follows:
import pytest

from test_for_class_a import *
from test_for_class_b import *
from test_for_class_c import *

list_of_all_fixtures = []

# This will force pytest to generate all sub-fixtures for class a
@pytest.mark.usefixtures('setup_class_a')
def test_register_class_a_fixtures(setup_class_a):
    list_of_all_fixtures.append(setup_class_a)

# This will force pytest to generate all sub-fixtures for class b
@pytest.mark.usefixtures('setup_class_b')
def test_register_class_b_fixtures(setup_class_b):
    list_of_all_fixtures.append(setup_class_b)

# This will force pytest to generate all sub-fixtures for class c
@pytest.mark.usefixtures('setup_class_c')
def test_register_class_c_fixtures(setup_class_c):
    list_of_all_fixtures.append(setup_class_c)

# This is the real test to apply on all fixtures
def test_all_fixtures():
    for my_fixture in list_of_all_fixtures:
        # do something with my_fixture
        ...
This implicitly relies on the fact that test_all_fixtures is executed after all the test_register_class_* tests. It is obviously quite dirty, but it works...
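A somewhat cleaner variant (a sketch of my own, not part of the workaround above) is to parametrize a single test over the fixture names and resolve each one at runtime with request.getfixturevalue. It assumes the setup_class_* fixtures are visible to the test module, e.g. via the star imports above or a shared conftest.py:
import pytest

# Sketch: run the same test body against each class fixture, looked up by name.
@pytest.mark.parametrize('fixture_name',
                         ['setup_class_a', 'setup_class_b', 'setup_class_c'])
def test_some_things_on_all_classes(request, fixture_name):
    my_fixture = request.getfixturevalue(fixture_name)
    # do something with my_fixture
    assert my_fixture is not None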

I think only pytest_generate_tests() (example) could give you such power of customization:
def pytest_generate_tests(metafunc):
    if 'db' in metafunc.funcargnames:
        metafunc.addcall(param="d1")
        metafunc.addcall(param="d2")
EDIT: Oops, I answered a question that is older than the Python experience I have o.O
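Note that metafunc.addcall has since been removed from pytest; a rough modern equivalent (a sketch, assuming a db fixture that reads request.param) uses metafunc.parametrize with indirect=True:
def pytest_generate_tests(metafunc):
    if 'db' in metafunc.fixturenames:
        # Each value becomes request.param inside the 'db' fixture.
        metafunc.parametrize('db', ['d1', 'd2'], indirect=True)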

Related

Using conftest.py vs. importing fixtures from dedicate modules

I have been familiarizing myself with pytest lately, and with how you can use conftest.py to define fixtures that are automatically discovered and imported within my tests. It is pretty clear to me how conftest.py works and how it can be used, but I'm not sure why this is considered a best practice in some basic scenarios.
Let's say my tests are structured in this way:
tests/
--test_a.py
--test_b.py
The best practice, as suggested by the documentation and various articles about pytest around the web, would be to define a conftest.py file with some fixtures to be used in both test_a.py and test_b.py. In order to better organize my fixtures, I might need to split them into separate files in a semantically meaningful way, e.g. db_session_fixtures.py and dataframe_fixtures.py, and then import them as plugins in conftest.py.
tests/
--test_a.py
--test_b.py
--conftest.py
--db_session_fixtures.py
--dataframe_fixtures.py
In conftest.py I would have:
import pytest
pytest_plugins = ["db_session_fixtures", "dataframe_fixtures"]
and I would be able to use db_session_fixtures and dataframe_fixtures seamlessly in my test cases without any additional code.
While this is handy, I feel it might hurt readability. For example, if I did not use conftest.py as described above, I might write in test_a.py:
from .dataframe_fixtures import my_dataframe_fixture

def test_case_a(my_dataframe_fixture):
    # some tests
    ...
and use the fixtures as usual.
The downside is that it requires me to import the fixture, but the explicit import improves the readability of my test case, letting me know at a glance where the fixture comes from, just as with any other Python module.
Are there downsides I am overlooking on about this solution or other advantages that conftest.py brings to the table, making it the best practice when setting up pytest test suites?
There's not a huge amount of difference; it's mainly just down to preference. I mainly use conftest.py to pull in fixtures that are required but not directly used by your test. So you may have a fixture that does something useful with a database, but needs a database connection to do so. So you make the db_connection fixture available in conftest.py, and then your test only has to do something like:
conftest.py
from tests.database_fixtures import db_connection

__all__ = ['db_connection']

tests/database_fixtures.py
import pytest

@pytest.fixture
def db_connection():
    ...

@pytest.fixture
def new_user(db_connection):
    ...

tests/test_user.py
def test_user(new_user):
    assert new_user.id > 0  # or whatever the test needs to do
If you didn't make db_connection available in conftest.py or directly import it then pytest would fail to find the db_connection fixture when trying to use the new_user fixture. If you directly import db_connection into your test file, then linters will complain that it is an unused import. Worse, some may remove it, and cause your tests to fail. So making the db_connection available in conftest.py, to me, is the simplest solution.
Overriding Fixtures
The one significant difference is that it is easier to override fixtures using conftest.py. Say you have a directory layout of:
./
├─ conftest.py
└─ tests/
   ├─ test_foo.py
   └─ bar/
      ├─ conftest.py
      └─ test_foobar.py
In conftest.py you could have:
import pytest

@pytest.fixture
def some_value():
    return 'foo'
And then in tests/bar/conftest.py you could have:
import pytest

@pytest.fixture
def some_value(some_value):
    return some_value + 'bar'
Having multiple conftests allows you to override a fixture whilst still maintaining access to the original fixture. So the following tests would both pass.
tests/test_foo.py
def test_foo(some_value):
    assert some_value == 'foo'

tests/bar/test_foobar.py
def test_foobar(some_value):
    assert some_value == 'foobar'
You can still do this without conftest.py, but it's a bit more complicated. You'd need to do something like:
import pytest

# in this scenario we would have something like:
# mv conftest.py tests/custom_fixtures.py
from tests.custom_fixtures import some_value as original_some_value

@pytest.fixture
def some_value(original_some_value):
    return original_some_value + 'bar'

def test_foobar(some_value):
    assert some_value == 'foobar'
For me there is no fundamental difference; from the execution point of view, the result will be the same whatever the code organization.
pytest --setup-show
# test_a.py
# SETUP F f_a
# test_a.py::test_a (fixtures used: f_a).
# TEARDOWN F f_a
So it is just a matter of code organisation, and it should fit the way you organise your code.
For a small code base it is perfectly OK to define all the code in a single Python file, and so to use the same approach for the tests by using a single conftest.py file.
For a bigger code base it becomes cumbersome if you do not define several modules. In my opinion the same goes for the tests, and it seems perfectly fine in this case to define fixtures per module if it makes sense.
A variant that avoids explicitly importing fixtures either in the test modules or in conftest.py is to stick to a convention (here, fixture modules are assumed to start with fixture_, but it could be anything else) and to import them dynamically in conftest.py:
from glob import glob

pytest_plugins = [
    fixture.replace("/", ".").replace(".py", "")
    for fixture in glob("**/fixture_*.py", recursive=True)
]

Pytest: How to know which fixture is used in a test

Maybe I'm not "getting" the philosophy of py.test... I'm trying to rewrite a bunch of tests for AWS Lambda code that receives events (webhooks with JSON payloads) and processes them. I have stored a bunch of these events in .json files and have used them as fixtures. Now, in some tests, I would like to test that the code I'm running returns the correct value for different specific fixtures. Currently I have it structured like so:
OD_SHIPMENT_EVENT_FILE = 'od_shipment_event.json'

def event_file_path(file_name):
    return os.path.join(
        os.path.dirname(__file__),
        'events',
        file_name
    )

@pytest.fixture()
def event(event_file=EVENT_FILE):
    '''Trigger event'''
    with open(event_file) as f:
        return json.load(f)

def load_event(event_file_path):
    with open(event_file_path) as f:
        return json.load(f)

@pytest.fixture(params=[event_file_path(OD_SHIPMENT_EVENT_FILE),
                        event_file_path(OD_SHIPMENT_EVENT_FILE_EU),
                        event_file_path(OD_SHIPMENT_EVENT_FILE_MULTIPLE),
                        event_file_path(OD_BADART_SHIPMENT_EVENT_FILE),
                        ])
def od_event(request):
    return load_event(request.param)
...
def test__get_order_item_ids_from_od_shipment(od_event):
    items = get_order_item_ids_from_od_shipment_event(od_event)
    assert items
That last test will be run once for each of the fixture parameters. But depending on which one it is, I would like to check that items has some specific value.
The closest thing I found was Parametrizing fixtures and test functions but I'm not sure this is the correct way to go or if I'm missing something in the philosophy of Pytest. Would love any pointers or feedback.
Also, that event file loading code is probably bloated and could be cleaned up. Suggestions are welcome.
Update
Based on the answer by Christian Karcher below, this helps a bit
@pytest.fixture
def parametrized_od_event(request):
    yield load_event(request.param)

@pytest.mark.parametrize("parametrized_od_event",
                         [event_file_path(OD_BADART_ORDER_UPDATE), ],
                         indirect=True)
def test__get_badart_items_from_order_metadata(parametrized_od_event):
    bad_art_items = get_badart_item_ids_from_order_metadata(parametrized_od_event)
    assert 3 == len(bad_art_items)
But I would like to do something a bit cleaner like this:
@pytest.mark.parametrize("parametrized_od_event,expected",
                         [(event_file_path(OD_BADART_ORDER_UPDATE), 3),
                          (event_file_path(OD_NOBADART_ORDER_UPDATE), 0)],
                         indirect=True)
def test__get_badart_items_from_order_metadata_multi(parametrized_od_event, expected):
    bad_art_items = get_badart_item_ids_from_order_metadata(parametrized_od_event)
    assert expected == len(bad_art_items)
In the second example, if I use indirect=True it can't find the expected fixture, and if I don't use indirect=True it doesn't actually call the parametrized_od_event fixture and simply passes the path to the file, without loading it.
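One way to get exactly that behaviour (a sketch of my own, not from the answer below) is to pass a list of argument names to indirect, so that only parametrized_od_event is routed through the fixture while expected stays a plain parameter:
@pytest.mark.parametrize(
    "parametrized_od_event,expected",
    [(event_file_path(OD_BADART_ORDER_UPDATE), 3),
     (event_file_path(OD_NOBADART_ORDER_UPDATE), 0)],
    indirect=["parametrized_od_event"],  # only this argument goes through the fixture
)
def test__get_badart_items_from_order_metadata_multi(parametrized_od_event, expected):
    bad_art_items = get_badart_item_ids_from_order_metadata(parametrized_od_event)
    assert expected == len(bad_art_items)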
Your way of parametrizing the fixture looks okay to me.
An alternative way would be indirect parametrization of the fixture during the test. This way, each test can have its own subset of individual parameters:
import pytest

@pytest.fixture
def od_event(request):
    yield request.param * 5

@pytest.mark.parametrize("od_event", [1, 2, 3], indirect=True)
def test_get_order_item_ids_from_od_shipment(od_event):
    assert od_event < 10
Some further pointers:
Make your fixtures yield their value instead of returning it; this way you can optionally include teardown code afterwards.
Suggestion for the file loading code: pathlib.Path with slash as a path join operator is always a nice option:
from pathlib import Path

def event_file_path(file_name):
    return Path(__file__).parent / 'events' / file_name

Is it possible to use a fixture inside pytest_generate_tests()?

I have a handful of fixtures in conftest.py that work well inside actual test functions. However, I would like to parameterize some tests using pytest_generate_tests() based on the data in some of these fixtures.
What I'd like to do (simplified):
-- conftest.py --
# my fixture returns a list of device names.
@pytest.fixture(scope="module")
def device_list(something):
    return ['dev1', 'dev2', 'dev3', 'test']
-- test001.py --
# generate tests using the device_list fixture I defined above.
def pytest_generate_tests(metafunc):
    metafunc.parametrize('devices', itertools.chain(device_list), ids=repr)

# A test that is parametrized by the above function.
def test_do_stuff(devices):
    assert "dev" in devices

# Output should/would be:
#   dev1: pass
#   dev2: pass
#   dev3: pass
#   test: FAIL
Of course, the problem I'm hitting is that in pytest_generate_tests(), it complains that device_list is undefined. If I try to pass it in, pytest_generate_tests(metafunc, device_list), I get an error.
E pluggy.callers.HookCallError: hook call must provide argument 'device_list'
The reason I want to do this is that I use that device_list list inside a bunch of different tests in different files, so I want to use pytest_generate_tests() to parametrize tests using the same list.
Is this just not possible? What is the point of using pytest_generate_tests() if I have to duplicate my fixtures inside that function?
From what I've gathered over the years, fixtures are pretty tightly coupled to pytest's post-collection stage. I've tried a number of times to do something similar, and it's never really quite worked out.
Instead, you could make a plain function that does what your fixture would do, and call that inside the generate_tests hook. Then, if you still need it as a fixture, call it again from the fixture (or save the result, or whatever); see the sketch below.
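A minimal sketch of that idea, reusing the device_list example from the question (the names are illustrative):
# conftest.py
import pytest

def get_device_list():
    # Plain function: callable from hooks and fixtures alike.
    return ['dev1', 'dev2', 'dev3', 'test']

@pytest.fixture(scope="module")
def device_list():
    return get_device_list()

def pytest_generate_tests(metafunc):
    if 'devices' in metafunc.fixturenames:
        metafunc.parametrize('devices', get_device_list(), ids=repr)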
@pytest.fixture(scope="module", autouse=True)
def device_list(something):
    device_list = ['dev1', 'dev2', 'dev3', 'test']
    return device_list
By using autouse=True in the pytest fixture decorator you can ensure that pytest_generate_tests has access to device_list.
This article provides a workaround.
Just have a look at the section Hooks at the rescue, and you will get this:
import importlib

def load_tests(name):
    # Load module which contains test data
    tests_module = importlib.import_module(name)
    # Tests are to be found in the variable `tests` of the module
    for test in tests_module.tests.items():
        yield test

def pytest_generate_tests(metafunc):
    """This allows us to load tests from external files by
    parametrizing tests with each test case found in a data_X
    file
    """
    for fixture in metafunc.fixturenames:
        if fixture.startswith('data_'):
            # Load associated test data
            tests = load_tests(fixture)
            metafunc.parametrize(fixture, tests)
See, here it loads the test data for any fixture whose name is prefixed with data_, by importing a module of the same name.
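To make the pattern concrete, a consuming test might look like this (a sketch of my own; the module name, the tests variable, and the assertion are assumptions based on the snippet above):
# data_accounts.py -- hypothetical data module picked up by the hook
tests = {
    'empty_account': {'balance': 0},
    'funded_account': {'balance': 100},
}

# test_accounts.py -- each (name, data) pair becomes one parametrized test
def test_account_data(data_accounts):
    name, data = data_accounts
    assert data['balance'] >= 0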

Mock an entire module in python

I have an application that imports a module from PyPI.
I want to write unittests for that application's source code, but I do not want to use the module from PyPI in those tests.
I want to mock it entirely (the testing machine will not contain that PyPI module, so any import will fail).
Currently, each time I try to load the class I want to test in the unit tests, I immediately get an import error, so I thought about maybe using
try:
    ...
except ImportError:
    ...
to catch that import error and then use command_module.run().
This seems pretty risky/ugly and I was wondering if there's another way.
Another idea was writing an adapter to wrap that PyPI module, but I'm still working on that.
If you know of any way I can mock an entire Python package, I would appreciate it very much.
Thanks.
If you want to dig into the Python import system, I highly recommend David Beazley's talk.
As for your specific question, here is an example that tests a module when its dependency is missing.
bar.py - the module you want to test when my_bogus_module is missing
from my_bogus_module import foo

def bar(x):
    return foo(x) + 1
mock_bogus.py - a file alongside your tests that will load a mock module
from mock import Mock
import sys
import types
module_name = 'my_bogus_module'
bogus_module = types.ModuleType(module_name)
sys.modules[module_name] = bogus_module
bogus_module.foo = Mock(name=module_name+'.foo')
test_bar.py - tests bar.py when my_bogus_module is not available
import unittest

from mock_bogus import bogus_module  # must import before bar module
from bar import bar

class TestBar(unittest.TestCase):
    def test_bar(self):
        bogus_module.foo.return_value = 99
        x = bar(42)
        self.assertEqual(100, x)
You should probably make that a little safer by checking that my_bogus_module isn't actually available when you run your test. You could also look at the pydoc.locate() method that will try to import something, and return None if it fails. It seems to be a public method, but it isn't really documented.
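For instance, a small sketch of such a safety check (my addition), placed at the top of mock_bogus.py before the mock is installed, so it cannot be fooled by the fake entry in sys.modules:
# mock_bogus.py (top of file) -- refuse to silently mask a real installation
import pydoc

if pydoc.locate('my_bogus_module') is not None:
    raise RuntimeError(
        'my_bogus_module is actually installed; '
        'these tests are meant to run without it')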
While Don Kirkby's answer is correct, you might want to look at the bigger picture. I borrowed the example from the accepted answer:
import pypilib

def bar(x):
    return pypilib.foo(x) + 1
Since pypilib is only available in production, it is not surprising that you have some trouble when you try to unit test bar. The function requires the external library to run; therefore it has to be tested with this library. What you need is an integration test.
That said, you might want to force unit testing, and that's generally a good idea because it will improve the confidence you (and others) have in the quality of your code. To widen the unit test area, you have to inject dependencies. Nothing prevents you (in Python!) from passing a module as a parameter (the type is types.ModuleType):
try:
    import pypilib  # production
except ImportError:
    pypilib = object()  # testing

def bar(x, external_lib=pypilib):
    return external_lib.foo(x) + 1
Now, you can unit test the function:
import unittest
from unittest.mock import Mock

class Test(unittest.TestCase):
    def test_bar(self):
        external_lib = Mock(foo=lambda x: 3 * x)
        self.assertEqual(10, bar(3, external_lib))

if __name__ == "__main__":
    unittest.main()
You might disapprove of the design. The try/except part is a bit cumbersome, especially if you use the pypilib module in several modules of your application, and you have to add a parameter to each function that relies on the external library.
However, the idea of injecting a dependency on the external library is useful, because you can control the input and test the output of your class methods even if the external library is not within your control. Especially if the imported module is stateful, the state might be difficult to reproduce in a unit test; in that case, passing the module as a parameter may be a solution.
But the usual way to deal with this situation is the dependency inversion principle (the D of SOLID): you should define the (abstract) boundaries of your application, i.e. what you need from the outside world. Here, this is bar and other functions, preferably grouped in one or more classes:
import pypilib
import other_pypilib

class MyUtil:
    """
    All I need from outside world
    """

    @staticmethod
    def bar(x):
        return pypilib.foo(x) + 1

    @staticmethod
    def baz(x, y):
        return other_pypilib.foo(x, y) * 10.0

    ...
    # not every method has to be static
Each time you need one of these functions, just inject an instance of the class in your code:
class Application:
    def __init__(self, util: MyUtil):
        self._util = util

    def something(self, x, y):
        return self._util.baz(self._util.bar(x), y)
The MyUtil class must be as slim as possible, but must remain abstract from the underlying library. It is a tradeoff. Obviously, Application can be unit tested (just inject a Mock instead of an instance of MyUtil), while under some circumstances (a PyPI library that is not available during tests, a module that only runs inside a framework, etc.) MyUtil can only be tested within an integration test. If you need to unit test the boundaries of your application, you can use Don Kirkby's method.
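To illustrate the first point, here is a minimal sketch (my own) of unit testing Application by injecting a Mock in place of a MyUtil instance, assuming the two classes above are importable:
import unittest
from unittest.mock import Mock

class TestApplication(unittest.TestCase):
    def test_something(self):
        # Inject a mock that stands in for MyUtil.
        util = Mock(spec=MyUtil)
        util.bar.return_value = 4
        util.baz.return_value = 40.0

        app = Application(util)

        self.assertEqual(40.0, app.something(3, 2))
        util.bar.assert_called_once_with(3)
        util.baz.assert_called_once_with(4, 2)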
Note that the second benefit, after unit testing, is that if you change the libraries you are using (deprecation, license issue, cost, ...), you just have to rewrite the MyUtil class, using some other libraries or coding it from scratch. Your application is protected from the wild outside world.
Clean Code by Robert C. Martin has a full chapter on the boundaries.
Summary: Before using Don Kirkby's method or any other method, be sure to define the boundaries of your application, irrespective of the specific libraries you are using. This, of course, does not apply to the Python standard library...
For a more explicit and granular approach:
import unittest
from unittest.mock import MagicMock, patch

try:
    import bogus_module
except ModuleNotFoundError:
    bogus_module = MagicMock()

@patch.dict('sys.modules', bogus_module=bogus_module)
class PlatformTests(unittest.TestCase):
    ...
Using the patch.dict decorator gives you granular control: it only applies to the class / method it is applied to.

py.test: programmatically ignore unittest tests

I'm trying to get our team to migrate from unittest to py.test, in the hope that less boilerplate and faster runs will reduce the excuses as to why they don't write as many unit tests as they should.
One problem we have is that almost all of our old Django unittest.TestCase tests fail due to errors. Also, most of them are really slow.
We decided that the new test system will ignore the old tests and will be used for new tests only. I tried to get py.test to ignore the old tests by creating the following in conftest.py:
import unittest

import pytest

def pytest_collection_modifyitems(session, config, items):
    print("Filtering unittest.TestCase tests")
    selected = []
    for test in items:
        parent = test.getparent(pytest.Class)
        if not parent or not issubclass(parent.obj, unittest.TestCase) or hasattr(parent.obj, 'use_pytest'):
            selected.append(test)
    print("Filtered {} tests out of {}".format(len(items) - len(selected), len(items)))
    items[:] = selected
The problem is, it filters out all tests, including this one:
import pytest

class SanityCheckTest(object):
    def test_something(self):
        assert 1
Using some different naming pattern for the new tests would be a rather poor solution.
My test class did not conform to the naming convention. I changed it to:
import pytest

class TestSanity(object):
    def test_something(self):
        assert 1
I also fixed a bug in my pytest_collection_modifyitems, and it works.
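A variation of the same idea (my own sketch, not from the original answer) is to mark the legacy unittest.TestCase tests as skipped instead of deselecting them, so they still show up in reports:
# conftest.py -- sketch: skip legacy unittest.TestCase tests instead of dropping them
import unittest

import pytest

def pytest_collection_modifyitems(session, config, items):
    skip_legacy = pytest.mark.skip(reason="legacy unittest.TestCase test")
    for item in items:
        parent = item.getparent(pytest.Class)
        if parent and issubclass(parent.obj, unittest.TestCase) \
                and not hasattr(parent.obj, 'use_pytest'):
            item.add_marker(skip_legacy)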
