Is it possible to use a fixture inside pytest_generate_tests()? - python

I have a handful of fixtures in conftest.py that work well inside actual test functions. However, I would like to parameterize some tests using pytest_generate_tests() based on the data in some of these fixtures.
What I'd like to do (simplified):
-- conftest.py --
# my fixture returns a list of device names.
@pytest.fixture(scope="module")
def device_list(something):
    return ['dev1', 'dev2', 'dev3', 'test']
-- test001.py --
# generate tests using the device_list fixture I defined above.
def pytest_generate_tests(metafunc):
    metafunc.parametrize('devices', itertools.chain(device_list), ids=repr)

# A test that is parametrized by the above function.
def test_do_stuff(devices):
    assert "dev" in devices
# Output should/would be:
dev1: pass
dev2: pass
dev3: pass
test: FAIL
Of course, the problem I'm hitting is that in pytest_generate_tests(), it complains that device_list is undefined. If I try to pass it in, pytest_generate_tests(metafunc, device_list), I get an error.
E pluggy.callers.HookCallError: hook call must provide argument 'device_list'
The reason I want to do this is that I use that device_list list inside a bunch of different tests in different files, so I want to use pytest_generate_tests() to parametrize tests using the same list.
Is this just not possible? What is the point of using pytest_generate_tests() if I have to duplicate my fixtures inside that function?

From what I've gathered over the years, fixtures are tightly coupled to pytest's test-setup phase, which happens after collection, while pytest_generate_tests() runs during collection, so fixtures simply aren't available there yet. I've tried a number of times to do something similar, and it's never really quite worked out.
Instead, you could write a plain function that does what your fixture would do, and call that inside the pytest_generate_tests hook. Then, if you still need it as a fixture, call the same function from the fixture (or save the result, or whatever).
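A minimal sketch of that approach (the helper name _get_device_list is mine, not from the original post):
-- conftest.py --
import pytest

# Plain helper: callable from both hooks and fixtures.
def _get_device_list():
    return ['dev1', 'dev2', 'dev3', 'test']

@pytest.fixture(scope="module")
def device_list():
    # The fixture simply reuses the helper.
    return _get_device_list()

def pytest_generate_tests(metafunc):
    if 'devices' in metafunc.fixturenames:
        # The hook reuses the helper instead of the fixture.
        metafunc.parametrize('devices', _get_device_list(), ids=repr)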

#pytest.fixture(scope="module", autouse=True)
def device_list(something):
device_list = ['dev1', 'dev2', 'dev3', 'test']
return device_list
By using autouse=True in the pytest fixture decorator you can ensure that pytest_generate_tests has access to device_list.

This article provides a workaround.
Have a look at the section Hooks at the rescue, and you'll find this:
import importlib

def load_tests(name):
    # Load module which contains test data
    tests_module = importlib.import_module(name)
    # Tests are to be found in the variable `tests` of the module
    for test in tests_module.tests.items():
        yield test

def pytest_generate_tests(metafunc):
    """This allows us to load tests from external files by
    parametrizing tests with each test case found in a data_X
    file
    """
    for fixture in metafunc.fixturenames:
        if fixture.startswith('data_'):
            # Load associated test data
            tests = load_tests(fixture)
            metafunc.parametrize(fixture, tests)
See, here it loads the data for any fixture whose name starts with data_ by importing a module of the same name, rather than invoking an actual pytest fixture.
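For example (a sketch based on the snippet above; the module and test names are mine), a data_smoke.py module importable from the rootdir and a test using it could look like this:
-- data_smoke.py --
# The hook above expects a module-level `tests` mapping.
tests = {
    'case_one': 1,
    'case_two': 2,
}

-- test_smoke.py --
def test_data_driven(data_smoke):
    # Each parametrized value is one (name, value) item from the mapping.
    name, value = data_smoke
    assert value > 0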

Related

Possible to skip a test depending on fixture value?

I have a lot of tests broken into many different files. In my conftest.py I have something like this:
#pytest.fixture(scope="session",
params=["foo", "bar", "baz"])
def special_param(request):
return request.param
While the majority of the tests work with all values, some only work with foo and baz. This means I have this in several of my tests:
def test_example(special_param):
    if special_param == "bar":
        pytest.skip("Test doesn't work with bar")
I find this a little ugly and was hoping for a better way to do it. Is there any way to use the skip decorator to achieve this? If not, is it possible to write my own decorator that can do this?
You can, as @abarnert suggested in a comment, write a custom decorator using functools.wraps for exactly this purpose. In my example below, I am skipping a test if a configs fixture (some configuration dictionary) has its report type set to enhanced rather than standard (but it could be whatever condition you want to check).
Here's an example of a fixture we'll use to determine whether to skip a test or not:
from typing import Dict

import pytest

@pytest.fixture
def configs() -> Dict:
    return {"report_type": "enhanced", "some_other_fixture_params": 123}
Now we write a decorator that will skip the test by inspecting the fixture configs contents for its report_type key value:
from functools import wraps
from typing import Callable

def skip_if_report_enhanced(test_function: Callable) -> Callable:
    @wraps(test_function)
    def wrapper(*args, **kwargs):
        configs = kwargs.get("configs")  # the configs fixture passed into the test function
        report_type = configs.get("report_type", "standard")
        if report_type == "enhanced":
            return pytest.skip(f"Skipping {test_function.__name__}")  # skip!
        return test_function(*args, **kwargs)  # otherwise, run the test
    return wrapper  # return the decorated test function
Note here that I am using kwargs.get("configs") to pull the fixture out here.
Below is the test itself; its logic is irrelevant, what matters is whether it runs or is skipped:
@skip_if_report_enhanced
def test_that_it_ran(configs):
    print("The test ran!")  # shouldn't get here if the report type is set to enhanced
The output from running this test:
SKIPPED [100%]
Skipped: Skipping test_that_it_ran
============================== 1 skipped in 0.55s ==============================
Process finished with exit code 0
One solution is to override the fixture with @pytest.mark.parametrize. For example:
@pytest.mark.parametrize("special_param", ["foo"])
def test_example(special_param):
    # do test
    ...
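If a test works with more than one of the values, the same override can list them explicitly (a small sketch, assuming the special_param fixture from the question):
@pytest.mark.parametrize("special_param", ["foo", "baz"])
def test_example_two_values(special_param):
    assert special_param in ("foo", "baz")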
Another possibility is to not use the special_param fixture at all and explicitly use the value "foo" where needed. The downside is that this only works if there are no other fixtures that also rely on special_param.

Chicken or the egg with pytest fixtures

I want to use the record_xml_property fixture.
No problem. It works perfectly when it is presently available.
However, I want my tests to run smoothly whether this fixture is installed or not. When I create a 'wrapper' fixture,
(something like this)
# this one works nicely when record_xml_property is there
@pytest.fixture()
def real_property_handler(record_xml_property, mykey, myval):
    record_xml_property(mykey, myval)

# this does a harmless print instead
@pytest.fixture()
def fallback_property_handler(mykey, myval):
    print('{0}={1}'.format(mykey, myval))

def MyXMLWrapper(mykey, myval):
    try:    # I want to use the REAL one if I can
        real_property_handler(record_xml_property, mykey, myval)
    except: # but still do something nice if it's not
        fallback_property_handler(mykey, myval)
My test should not have to be cognizant of any fixture(s) that may or may not underlie my wrapper function:
def test_simple():
    MyXMLWrapper('mykeyname', 'mykeyvalue')
    assert True
I'm stuck because in order for my tests to ever work properly it appears that I have to pass the record_xml_property fixture as a parameter which I can never do in environments that don't have this fixture installed.
I've tried several things.
If I make MyXMLWrapper a fixture itself then I have to pass it the record_xml_property fixture, but if I define MyXMLWrapper as a plain function (above) then I have no way to reference record_xml_property in case it DOES exist.
What am I not understanding here about how fixtures work?
Thanks.
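One possible way to probe for the fixture at runtime (a sketch of mine, not from the original thread; the wrapper fixture name is illustrative) is to resolve it lazily through the request fixture and fall back if the lookup fails:
import pytest

@pytest.fixture()
def xml_property_handler(request):
    try:
        # Resolve the real fixture lazily; this raises if it is not defined.
        recorder = request.getfixturevalue("record_xml_property")
    except Exception:
        # Fallback: print the key/value pair instead of recording it.
        def recorder(key, value):
            print('{0}={1}'.format(key, value))
    return recorder

def test_simple(xml_property_handler):
    xml_property_handler('mykeyname', 'mykeyvalue')
    assert True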

Py.test: Run initialization method before pytest_generate_tests

I'm making a test suite using py.test that starts by generating randomly simulated files and the filenames are stored in an initialization object. The tests are then generated by pytest_generate_tests; file0.txt, file1.txt, etc.
Tests are generated from a YAML file which includes an input string like cat %s and a substitution string like file*.txt, which generates one test per file it matches in pytest_generate_tests. Thus, I need the files to exist before pytest_generate_tests is called, else no files will be matched.
Before I had encountered the issue, I had an initialization fixture in conftest.py:
#pytest.fixture(scope="session", autouse=True)
def initializer(request):
# ...do some stuff with the request
return InitializeTests(...)
class InitializeTests():
def __init__(self, n):
# ...generate n files
which I could then use in the file tests_0.py:
def test_a(initializer, input_string):
    # ...
and test_a's are generated by:
def pytest_generate_tests(metafunc):
    input_strings = manipulate_the_yaml_file()  # This requires the files to exist.
    if "input_string" in metafunc.fixturenames:
        metafunc.parametrize("input_string", input_strings)
I then tried using a global variable to share the initializer across files, as explained here. I put the initialization at the top of pytest_generate_tests and called conftest.initializer from within test_a, but then the initialization step gets run for every test method I add, test_b etc.
So the question is, how can I run a method before pytest_generate_tests and keep the instance of the initialization class across all tests in the session?
Just writing the problem gave me an obvious solution given the second method using globals:
if "initializer" not in globals():
initialize()
where initialize creates the global variable initializer and thus only creates it once. However, I don't really like working with globals as I thought fixtures or some other py.test technique could help me, and would gladly hear a better answer.
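A minimal sketch of that idea in conftest.py (the helper and variable names are mine, not from the original post; InitializeTests and manipulate_the_yaml_file are the ones from the question, and the 5 is just an example count):
-- conftest.py --
import pytest

_initializer = None

def initialize():
    global _initializer
    if _initializer is None:
        # Generate the simulated files exactly once per session.
        _initializer = InitializeTests(5)
    return _initializer

def pytest_generate_tests(metafunc):
    initialize()  # make sure the files exist before trying to match them
    if "input_string" in metafunc.fixturenames:
        metafunc.parametrize("input_string", manipulate_the_yaml_file())

@pytest.fixture(scope="session")
def initializer():
    return initialize()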

Django Test Case Can't run method

I am just getting started with Django, so this may be something stupid, but I am not even sure what to google at this point.
I have a method that looks like this:
def get_user(self, user):
    return Utilities.get_userprofile(user)
The utility method looks like this:
@staticmethod
def get_userprofile(user):
    return UserProfile.objects.filter(user_auth__username=user)[0]
When I include this in the view, everything is fine. When I write a test case to use any of the methods inside the Utility class, I get None back:
Two test cases:
def test_stack_overflow(self):
    a = ObjName()
    print(a.get_user('admin'))

def test_Utility(self):
    print(Utilities.get_user('admin'))
Results:
Creating test database for alias 'default'...
None
..None
.
----------------------------------------------------------------------
Can someone tell me why this is working in the view, but not working inside of the test case and does not generate any error messages?
Thanks
Verify that your unit test complies with the following:
TestClass must be written in a file named test*.py
TestClass must be subclassed from unittest.TestCase
TestClass should have a setUp function to create objects in the database (usually done this way, but object creation can happen in the test functions as well)
TestClass functions should start with test so they are identified and run by the ./manage.py test command
TestClass may have a tearDown function to properly end the unit test case.
Test Case Execution process:
When you run ./manage.py test, Django sets up a test database (test_your_database_name), creates all the objects mentioned in the setUp function (usually), and executes the test functions in the order they appear in the class. Once all the test functions have been executed, it looks for a tearDown function, runs it if present, and destroys the test database.
It may be that you have not created the required objects in the setUp function or elsewhere in the TestClass.
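For example, a minimal sketch (assuming the UserProfile model and Utilities helper from the question, and that user_auth is a foreign key to Django's User model):
from django.contrib.auth.models import User
from django.test import TestCase

# UserProfile and Utilities come from your own app; adjust the imports accordingly.

class UtilitiesTest(TestCase):
    def setUp(self):
        # The test database starts empty, so create the data the test relies on.
        auth_user = User.objects.create_user(username='admin', password='secret')
        UserProfile.objects.create(user_auth=auth_user)

    def test_get_userprofile(self):
        profile = Utilities.get_userprofile('admin')
        self.assertIsNotNone(profile)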
Can you kindly post the entire traceback and test file to help you better?

Running pytest tests in another Python package

Right now, I have a Python package (let's call it mypackage) with a bunch of tests that I run with pytest. One particular feature can have many possible implementations, so I have used the funcarg mechanism to run these tests with a reference implementation.
# In mypackage/tests/conftest.py
def pytest_funcarg__Feature(request):
    return mypackage.ReferenceImplementation

# In mypackage/tests/test_stuff.py
def test_something(Feature):
    assert Feature(1).works
Now, I am creating a separate Python package with a fancier implementation (fancypackage). Is it possible to run all of the tests in mypackage that contain the Feature funcarg, only with different implementations?
I would like to avoid having to change fancypackage if I add new tests in mypackage, so explicit imports aren't ideal. I know that I can run all of the tests with pytest.main(), but since I have several implementations of my feature, I don't want to call pytest.main() multiple times. Ideally, it would look something like this:
# In fancypackage/tests/test_impl1.py
def pytest_funcarg__Feature(request):
    return fancypackage.Implementation1

## XXX: Do pytest collection on mypackage.tests, but don't run them

# In fancypackage/tests/test_impl2.py
def pytest_funcarg__Feature(request):
    return fancypackage.Implementation2

## XXX: Do pytest collection on mypackage.tests, but don't run them
Then, when I run pytest in fancypackage, it would collect each of the mypackage.tests tests twice, once for each feature implementation. I have tried doing this with explicit imports, and it seems to work fine, but I don't want to explicitly import everything.
Bonus
An additional nice bonus would be to only collect those tests that contain the Feature funcarg. Is that possible?
Example with unittest
Before switching to py.test, I did this with the standard library's unittest. The function for that is the following:
import unittest

def mypackage_test_suite(Feature):
    loader = unittest.TestLoader()
    suite = unittest.TestSuite()
    mypackage_tests = loader.discover('mypackage.tests')
    for test in all_testcases(mypackage_tests):
        if hasattr(test, 'Feature'):
            test.Feature = Feature
            suite.addTest(test)
    return suite

def all_testcases(test_suite_or_case):
    try:
        suite = iter(test_suite_or_case)
    except TypeError:
        yield test_suite_or_case
    else:
        for test in suite:
            for subtest in all_testcases(test):
                yield subtest
Obviously things are different now because we're dealing with test functions and classes instead of just classes, but it seems like there should be some equivalent in py.test that builds the test suite and allows you to iterate through it.
You could parameterise your Feature fixture:
@pytest.fixture(params=['ref', 'fancy'])
def Feature(request):
    if request.param == 'ref':
        return mypackage.ReferenceImplementation
    else:
        return fancypackage.Implementation1
Now if you run py.test it will test both.
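If hard-coding fancypackage inside mypackage's conftest is undesirable, one variation (a sketch of mine, not from the original answer; the option name and fixture wiring are my own choices) is to pick the implementation from the command line, so fancypackage can run the shared tests against each of its implementations with pytest --pyargs mypackage.tests:
# In mypackage/tests/conftest.py (sketch)
import importlib
import pytest

def pytest_addoption(parser):
    # e.g. --feature-impl=fancypackage:Implementation1
    parser.addoption("--feature-impl", action="store", default=None,
                     help="module:attribute path of the Feature implementation to test")

@pytest.fixture
def Feature(request):
    spec = request.config.getoption("--feature-impl")
    if spec is None:
        import mypackage
        return mypackage.ReferenceImplementation
    module_name, attr = spec.split(":")
    return getattr(importlib.import_module(module_name), attr)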
Selecting tests based on the fixture they use is not possible out of the box AFAIK; you could probably cobble something together using request.applymarker() and -m, however.
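As a rough sketch of the collection side (mine, not from the original answer), you can also deselect items that do not request the Feature fixture by inspecting item.fixturenames in a conftest.py hook:
# In conftest.py (sketch)
def pytest_collection_modifyitems(config, items):
    selected, deselected = [], []
    for item in items:
        # Function items list every fixture they request in item.fixturenames.
        if "Feature" in getattr(item, "fixturenames", ()):
            selected.append(item)
        else:
            deselected.append(item)
    if deselected:
        config.hook.pytest_deselected(items=deselected)
        items[:] = selected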
