Using conftest.py vs. importing fixtures from dedicated modules - python

I have been familiarizing myself with pytest lately, in particular with how you can use conftest.py to define fixtures that are automatically discovered and injected into my tests. It is pretty clear to me how conftest.py works and how it can be used, but I'm not sure why this is considered a best practice in some basic scenarios.
Let's say my tests are structured in this way:
tests/
--test_a.py
--test_b.py
The best practice, as suggested by the documentation and various articles about pytest around the web, would be to define a conftest.py file with some fixtures to be used in both test_a.py and test_b.py. In order to better organize my fixtures, I might need to split them into separate files in a semantically meaningful way, e.g. db_session_fixtures.py and dataframe_fixtures.py, and then load them as plugins in conftest.py.
tests/
--test_a.py
--test_b.py
--conftest.py
--db_session_fixtures.py
--dataframe_fixtures.py
In conftest.py I would have:
import pytest
pytest_plugins = ["db_session_fixtures", "dataframe_fixtures"]
and I would be able to use the fixtures from db_session_fixtures and dataframe_fixtures seamlessly in my test cases without any additional code.
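For instance, with that conftest.py in place, test_a.py could use a fixture defined in dataframe_fixtures.py without importing anything (a minimal sketch; my_dataframe_fixture is just an illustrative name):
test_a.py
def test_case_a(my_dataframe_fixture):
    ...  # the fixture is resolved by pytest through the plugin registration, not by an import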
While this is handy, I feel it might hurt readability. For example, if I did not use conftest.py as described above, I might write this in test_a.py:
from .dataframe_fixtures import my_dataframe_fixture

def test_case_a(my_dataframe_fixture):
    ...  # some tests
and use the fixtures as usual.
The downside is that it requires me to import the fixture, but the explicit import improves the readability of my test case, letting me know at a glance where the fixture comes from, just as with any other Python module.
Are there downsides I am overlooking with this solution, or other advantages that conftest.py brings to the table that make it the best practice when setting up pytest test suites?

There's not a huge amount of difference; it's mainly just down to preference. I mainly use conftest.py to pull in fixtures that are required, but not directly used by your test. For example, you may have a fixture that does something useful with a database, but needs a database connection to do so. So you make the db_connection fixture available in conftest.py, and then your test only has to do something like:
conftest.py
from tests.database_fixtures import db_connection
__all__ = ['db_connection']
tests/database_fixtures.py
import pytest

@pytest.fixture
def db_connection():
    ...

@pytest.fixture
def new_user(db_connection):
    ...
tests/test_user.py
from tests.database_fixtures import new_user

def test_user(new_user):
    assert new_user.id > 0  # or whatever the test needs to do
If you didn't make db_connection available in conftest.py or directly import it, then pytest would fail to find the db_connection fixture when trying to use the new_user fixture. If you directly import db_connection into your test file, then linters will complain that it is an unused import. Worse, some may remove it and cause your tests to fail. So making db_connection available in conftest.py is, to me, the simplest solution.
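For comparison, here is a minimal sketch of the direct-import variant described above; the noqa comment is one common way to stop linters from flagging (or stripping) the apparently unused db_connection import:
tests/test_user.py
from tests.database_fixtures import db_connection  # noqa: F401  (needed by new_user)
from tests.database_fixtures import new_user

def test_user(new_user):
    assert new_user.id > 0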
Overriding Fixtures
The one significant difference is that it is easier to override fixtures using conftest.py. Say you have a directory layout of:
./
├─ conftest.py
└─ tests/
   ├─ test_foo.py
   └─ bar/
      ├─ conftest.py
      └─ test_foobar.py
In conftest.py you could have:
import pytest

@pytest.fixture
def some_value():
    return 'foo'
And then in tests/bar/conftest.py you could have:
import pytest

@pytest.fixture
def some_value(some_value):
    return some_value + 'bar'
Having multiple conftests allows you to override a fixture whilst still maintaining access to the original fixture. So the following tests would all pass.
tests/test_foo.py
def test_foo(some_value):
    assert some_value == 'foo'
tests/bar/test_foobar.py
def test_foobar(some_value):
    assert some_value == 'foobar'
You can still do this without conftest.py, but it's a bit more complicated. You'd need to do something like:
import pytest

# in this scenario we would have something like:
# mv conftest.py tests/custom_fixtures.py
from tests.custom_fixtures import some_value as original_some_value

@pytest.fixture
def some_value(original_some_value):
    return original_some_value + 'bar'

def test_foobar(some_value):
    assert some_value == 'foobar'

For me there is no fundamental difference: from the execution point of view the result will be the same whatever the code organization.
pytest --setup-show
# test_a.py
#   SETUP    F f_a
#   test_a.py::test_a (fixtures used: f_a).
#   TEARDOWN F f_a
So it is just a matter of code organisation and it should fit the way you organise your code.
For a small code base it is perfectly OK to define all the code in a single Python file, and likewise to use a single conftest.py file for the tests.
For a bigger code base it becomes cumbersome if you do not define several modules. In my opinion the same goes for the tests, and in this case it seems perfectly fine to define fixtures per module if that makes sense.
A variant that avoids explicitly importing fixtures either in the test modules or in conftest.py is to stick to a naming convention (here assuming fixture modules start with fixture_, but it could be anything else) and to load them dynamically in conftest.py:
from glob import glob

pytest_plugins = [
    fixture.replace("/", ".").replace(".py", "")
    for fixture in glob("**/fixture_*.py", recursive=True)
]

Related

Is it possible to use a fixture inside pytest_generate_tests()?

I have a handful of fixtures in conftest.py that work well inside actual test functions. However, I would like to parameterize some tests using pytest_generate_tests() based on the data in some of these fixtures.
What I'd like to do (simplified):
-- conftest.py --
import pytest

# my fixture returns a list of device names.
@pytest.fixture(scope="module")
def device_list(something):
    return ['dev1', 'dev2', 'dev3', 'test']

-- test001.py --
import itertools

# generate tests using the device_list fixture I defined above.
def pytest_generate_tests(metafunc):
    metafunc.parametrize('devices', itertools.chain(device_list), ids=repr)

# A test that is parametrized by the above function.
def test_do_stuff(devices):
    assert "dev" in devices

# Output should/would be:
# dev1: pass
# dev2: pass
# dev3: pass
# test: FAIL
Of course, the problem I'm hitting is that in pytest_generate_tests(), it complains that device_list is undefined. If I try to pass it in, pytest_generate_tests(metafunc, device_list), I get an error.
E pluggy.callers.HookCallError: hook call must provide argument 'device_list'
The reason I want to do this is that I use that device_list list inside a bunch of different tests in different files, so I want to use pytest_generate_tests() to parametrize tests using the same list.
Is this just not possible? What is the point of using pytest_generate_tests() if I have to duplicate my fixtures inside that function?
From what I've gathered over the years, fixtures are pretty tightly coupled to pytest's post-collection stage. I've tried a number of times to do something similar, and it's never really quite worked out.
Instead, you could make a function that does the things your fixture would do, and call that inside the generate_tests hook. Then if you need it still as a fixture, call it again (or save the result or whatever).
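A minimal sketch of that idea, reusing the names from the question (the plain helper _device_list is an illustrative name, not part of pytest):
conftest.py
import pytest

def _device_list():
    # ordinary function: callable both at collection time and from the fixture
    return ['dev1', 'dev2', 'dev3', 'test']

@pytest.fixture(scope="module")
def device_list():
    return _device_list()

def pytest_generate_tests(metafunc):
    if 'devices' in metafunc.fixturenames:
        metafunc.parametrize('devices', _device_list(), ids=repr)
Tests that take a devices argument are then parametrized from the same list that the device_list fixture returns.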
import pytest

@pytest.fixture(scope="module", autouse=True)
def device_list(something):
    device_list = ['dev1', 'dev2', 'dev3', 'test']
    return device_list
By using autouse=True in the pytest fixture decorator you can ensure that pytest_generate_tests has access to device_list.
This article somehow provides a workaround.
Just have a look at section Hooks at the rescue, and you're gonna get this:
import importlib

def load_tests(name):
    # Load module which contains test data
    tests_module = importlib.import_module(name)
    # Tests are to be found in the variable `tests` of the module
    for test in tests_module.tests.items():
        yield test

def pytest_generate_tests(metafunc):
    """This allows us to load tests from external files by
    parametrizing tests with each test case found in a data_X
    file
    """
    for fixture in metafunc.fixturenames:
        if fixture.startswith('data_'):
            # Load associated test data
            tests = load_tests(fixture)
            metafunc.parametrize(fixture, tests)
See, here it loads the data for any test argument whose name is prefixed with data_, by importing the module of the same name.
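To make the convention concrete, here is a hedged usage sketch (the module name data_users and its contents are made up for illustration): a module data_users.py importable from the test run, holding a tests dictionary, and a test that simply declares the matching argument name:
data_users.py
tests = {
    'short_name': 'bob',
    'long_name': 'x' * 100,
}

test_users.py
def test_user_name(data_users):
    # each parametrized call receives one (key, value) pair from the dict
    name_id, name = data_users
    assert isinstance(name, str)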

pytest fixture of fixtures

I am currently writing tests for a medium sized library (~300 files).
Many classes in this library share the same testing scheme, which was coded using pytest:
File test_for_class_a.py:
import pytest

@pytest.fixture()
def setup_resource_1():
    ...

@pytest.fixture()
def setup_resource_2():
    ...

@pytest.fixture()
def setup_class_a(setup_resource_1, setup_resource_2):
    ...

def test_1_for_class_a(setup_class_a):
    ...

def test_2_for_class_a(setup_class_a):
    ...
Similar files exist for class_b, class_c, etc., the only difference being the content of setup_resource_1 and setup_resource_2.
Now I would like to re-use the fixtures setup_class_a, setup_class_b, setup_class_c defined in test_for_class_a.py, test_for_class_b.py and test_for_class_c.py to run tests on them.
In a file test_all_class.py, this works but it is limited to one fixture per test:
import pytest
from test_for_class_a import *

@pytest.mark.usefixtures('setup_class_a')  # Fixture was defined in test_for_class_a.py
def test_some_things_on_class_a(request):
    ...
But I am looking for a way to perform something more general:
from test_for_class_a import *
from test_for_class_b import *  # I can make sure I have no collision here
from test_for_class_c import *  # I can make sure I have no collision here

# ==> wanted: something like
@generate_test_for_fixture('setup_class_a', 'setup_class_b', 'setup_class_c')
def test_some_things_on_all_classes(request):
    ...
Is there any way to do something close to that?
I have been looking at factories of factories and abstract pytest factories, but I am struggling with the way pytest defines fixtures.
Is there any way to solve this problem?
We had the same problem at work, and I was hoping to write each fixture just once for every case. So I wrote the plugin pytest-data, which does that. Example:
import pytest

@pytest.fixture
def resource(request):
    resource_data = get_data(request, 'resource_data', {'some': 'data', 'foo': 'foo'})
    return Resource(resource_data)

@use_data(resource_data={'foo': 'bar'})
def test_1_for_class_a(resource):
    ...

@use_data(resource_data={'foo': 'baz'})
def test_2_for_class_a(resource):
    ...
What's great about it is that you write the fixture just once, with some defaults. When you just need that fixture/resource and don't care about the specific setup, you just use it. When a test needs some specific attribute, say to check whether the resource can also handle a 100-character-long value, you can pass it via the use_data decorator instead of writing another fixture.
With that you don't have to care about conflicts, because everything is defined just once. And then you can use conftest.py for all of your fixtures without importing them in the test modules. For example, we put all fixtures in a separate module tree and included them all in the top-level conftest.py.
Documentation of the pytest-data plugin: http://horejsek.github.io/python-pytest-data/
One solution I found is to abuse the test cases as follows:
import pytest

from test_for_class_a import *
from test_for_class_b import *
from test_for_class_c import *

list_of_all_fixtures = []

# This will force pytest to generate all sub-fixtures for class a
@pytest.mark.usefixtures('setup_class_a')
def test_register_class_a_fixtures(setup_class_a):
    list_of_all_fixtures.append(setup_class_a)

# This will force pytest to generate all sub-fixtures for class b
@pytest.mark.usefixtures('setup_class_b')
def test_register_class_b_fixtures(setup_class_b):
    list_of_all_fixtures.append(setup_class_b)

# This will force pytest to generate all sub-fixtures for class c
@pytest.mark.usefixtures('setup_class_c')
def test_register_class_c_fixtures(setup_class_c):
    list_of_all_fixtures.append(setup_class_c)

# This is the real test to apply on all fixtures
def test_all_fixtures():
    for my_fixture in list_of_all_fixtures:
        ...  # do something with my_fixture
This implicitly relies on the fact that test_all_fixtures is executed after all the test_register_class_* tests. It is obviously quite dirty, but it works...
I think only pytest_generate_tests() (example) could give you such power of customization:
def pytest_generate_tests(metafunc):
    if 'db' in metafunc.funcargnames:
        metafunc.addcall(param="d1")
        metafunc.addcall(param="d2")
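Note that metafunc.addcall comes from older pytest releases and has since been removed; on current pytest the closest equivalent is usually indirect parametrization. A hedged sketch of that replacement, keeping the db name from the example above:
def pytest_generate_tests(metafunc):
    if 'db' in metafunc.fixturenames:
        # each value ends up in request.param of the 'db' fixture
        metafunc.parametrize('db', ["d1", "d2"], indirect=True)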
EDIT: Oops, I answered a question that is older than the Python experience I have o.O

py.test: programmatically ignore unittest tests

I'm trying to get our team to migrate from unittest to py.test, in the hope that less boilerplate and faster runs will reduce the excuses for not writing as many unit tests as they should.
One problem we have is that almost all of our old django.unittest.TestCase tests fail due to errors. Also, most of them are really slow.
We decided that the new test system will ignore the old tests and be used for new tests only. I tried to get py.test to ignore the old tests by creating the following in conftest.py:
import unittest

import pytest

def pytest_collection_modifyitems(session, config, items):
    print("Filtering unittest.TestCase tests")
    selected = []
    for test in items:
        parent = test.getparent(pytest.Class)
        if not parent or not issubclass(parent.obj, unittest.TestCase) or hasattr(parent.obj, 'use_pytest'):
            selected.append(test)
    print("Filtered {} tests out of {}".format(len(items) - len(selected), len(items)))
    items[:] = selected
The problem is that it filters out all tests, including this one:
import pytest

class SanityCheckTest(object):
    def test_something(self):
        assert 1
Using some different naming pattern for the new tests would be a rather poor solution.
My test class did not conform to the naming convention. I changed it to:
import pytest

class TestSanity(object):
    def test_something(self):
        assert 1
I also fixed a bug in my pytest_collection_modifyitems, and now it works.

Running pytest tests in another Python package

Right now, I have a Python package (let's call it mypackage) with a bunch of tests that I run with pytest. One particular feature can have many possible implementations, so I have used the funcarg mechanism to run these tests with a reference implementation.
# In mypackage/tests/conftest.py
def pytest_funcarg__Feature(request):
    return mypackage.ReferenceImplementation

# In mypackage/tests/test_stuff.py
def test_something(Feature):
    assert Feature(1).works
Now, I am creating a separate Python package with a fancier implementation (fancypackage). Is it possible to run all of the tests in mypackage that contain the Feature funcarg, only with different implementations?
I would like to avoid having to change fancypackage if I add new tests in mypackage, so explicit imports aren't ideal. I know that I can run all of the tests with pytest.main(), but since I have several implementations of my feature, I don't want to call pytest.main() multiple times. Ideally, it would look something like this:
# In fancypackage/tests/test_impl1.py
def pytest_funcarg__Feature(request):
    return fancypackage.Implementation1

## XXX: Do pytest collection on mypackage.tests, but don't run them

# In fancypackage/tests/test_impl2.py
def pytest_funcarg__Feature(request):
    return fancypackage.Implementation2

## XXX: Do pytest collection on mypackage.tests, but don't run them
Then, when I run pytest in fancypackage, it would collect each of the mypackage.tests tests twice, once for each feature implementation. I have tried doing this with explicit imports, and it seems to work fine, but I don't want to explicitly import everything.
Bonus
An additional nice bonus would be to only collect those tests that contain the Feature funcarg. Is that possible?
Example with unittest
Before switching to py.test, I did this with the standard library's unittest. The function for that is the following:
import unittest

def mypackage_test_suite(Feature):
    loader = unittest.TestLoader()
    suite = unittest.TestSuite()
    mypackage_tests = loader.discover('mypackage.tests')
    for test in all_testcases(mypackage_tests):
        if hasattr(test, 'Feature'):
            test.Feature = Feature
            suite.addTest(test)
    return suite

def all_testcases(test_suite_or_case):
    try:
        suite = iter(test_suite_or_case)
    except TypeError:
        yield test_suite_or_case
    else:
        for test in suite:
            for subtest in all_testcases(test):
                yield subtest
Obviously things are different now because we're dealing with test functions and classes instead of just classes, but it seems like there should be some equivalent in py.test that builds the test suite and allows you to iterate through it.
You could parameterise your Feature fixture:
import pytest

@pytest.fixture(params=['ref', 'fancy'])
def Feature(request):
    if request.param == 'ref':
        return mypackage.ReferenceImplementation
    else:
        return fancypackage.Implementation1
Now if you run py.test it will test both.
Selecting tests based on the fixture they use is not possible AFAIK; you could probably cobble something together using request.applymarker() and -m, however.

Avoid unneeded imports with unit tests

The PyCharm IDE encourages me to write unit tests in the same module as my classes. I like the idea of every module being tested automatically as I develop, but what bothers me is having additional imports that are only used for these unit tests. I can live with import unittest, but consider:
from lxml import etree

class Foobar(object):
    def __init__(self):
        schema_root = etree.parse("schema/myschema.xsd")
        schema = etree.XMLSchema(schema_root)
        self.parser = etree.XMLParser(schema=schema)

    def valid(self, filename):
        try:
            etree.parse(filename, self.parser)
            return True
        except etree.XMLSyntaxError:
            return False

import unittest
from io import StringIO

class _FoobarTest(unittest.TestCase):
    def test_empty_object_is_valid(self):
        foobar = Foobar()
        self.assertTrue(foobar.valid(StringIO("<object />")))
I thought about instead doing it this way:
class _FoobarTest(unittest.TestCase):
    from io import StringIO as StringIO_

    def test_empty_object_is_valid(self):
        foobar = Foobar()
        self.assertTrue(foobar.valid(self.StringIO_("<object />")))
but that does not feel very natural to me. Since Python is a language that cares a lot about best practices, is there a somewhat official statement on this? I wasn't able to find anything on this in the PEP documents, which made me wonder whether it is a good idea to unit test in the same module at all.
Neither PyCharm nor the Python community encourages having unit tests in the same file. See the PyCharm tutorial on creating unit tests. As demonstrated in those instructions, the better way to write unit tests is to keep them in separate files with a "test_" prefix, which can be discovered and run automatically by the built-in unittest module or by other libraries.
If you look at the documentation for unittest you will find there is already a built-in system for test discovery that works great.
python -m unittest discover
will find all tests that match the pattern "test*.py" in the current directory. This is the built-in default, and best practice until you need something else.
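As a hedged illustration of that layout (file names are just examples), moving the test class from the question into its own test_foobar.py lets discovery pick it up without any test-only imports leaking into the production module:
test_foobar.py
import unittest
from io import StringIO

from foobar import Foobar  # assumes the class now lives in foobar.py

class FoobarTest(unittest.TestCase):
    def test_empty_object_is_valid(self):
        self.assertTrue(Foobar().valid(StringIO("<object />")))

if __name__ == "__main__":
    unittest.main()
Running python -m unittest discover from the project root then finds it, because the file name matches the default "test*.py" pattern.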
