Running pytest tests in another Python package

Right now, I have a Python package (let's call it mypackage) with a bunch of tests that I run with pytest. One particular feature can have many possible implementations, so I have used the funcarg mechanism to run these tests with a reference implementation.
# In mypackage/tests/conftest.py
def pytest_funcarg__Feature(request):
    return mypackage.ReferenceImplementation

# In mypackage/tests/test_stuff.py
def test_something(Feature):
    assert Feature(1).works
Now, I am creating a separate Python package with a fancier implementation (fancypackage). Is it possible to run all of the tests in mypackage that contain the Feature funcarg, only with different implementations?
I would like to avoid having to change fancypackage if I add new tests in mypackage, so explicit imports aren't ideal. I know that I can run all of the tests with pytest.main(), but since I have several implementations of my feature, I don't want to call pytest.main() multiple times. Ideally, it would look like something like this:
# In fancypackage/tests/test_impl1.py
def pytest_funcarg__Feature(request):
    return fancypackage.Implementation1

## XXX: Do pytest collection on mypackage.tests, but don't run them

# In fancypackage/tests/test_impl2.py
def pytest_funcarg__Feature(request):
    return fancypackage.Implementation2

## XXX: Do pytest collection on mypackage.tests, but don't run them
Then, when I run pytest in fancypackage, it would collect each of the mypackage.tests tests twice, once for each feature implementation. I have tried doing this with explicit imports, and it seems to work fine, but I don't want to explicitly import everything.
Bonus
An additional nice bonus would be to only collect those tests that contain the Feature funcarg. Is that possible?
Example with unittest
Before switching to py.test, I did this with the standard library's unittest. The function for that is the following:
def mypackage_test_suite(Feature):
    loader = unittest.TestLoader()
    suite = unittest.TestSuite()
    mypackage_tests = loader.discover('mypackage.tests')
    for test in all_testcases(mypackage_tests):
        if hasattr(test, 'Feature'):
            test.Feature = Feature
            suite.addTest(test)
    return suite

def all_testcases(test_suite_or_case):
    try:
        suite = iter(test_suite_or_case)
    except TypeError:
        yield test_suite_or_case
    else:
        for test in suite:
            for subtest in all_testcases(test):
                yield subtest
Obviously things are different now because we're dealing with test functions and classes instead of just classes, but it seems like there should be some equivalent in py.test that builds the test suite and allows you to iterate through it.

You could parameterise your Feature fixture:
@pytest.fixture(params=['ref', 'fancy'])
def Feature(request):
    if request.param == 'ref':
        return mypackage.ReferenceImplementation
    else:
        return fancypackage.Implementation1
Now if you run py.test it will test both.
Selecting tests based on the fixture they use is not possible AFAIK, but you could probably cobble something together using request.applymarker() and -m, however.
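For what it's worth, a rough sketch of that sort of cobbling (using a collection hook rather than applymarker; the marker name feature is made up here) could tag every test that requests the Feature fixture, so that -m feature selects only those:

# conftest.py - sketch only: mark tests that request the Feature fixture
import pytest

def pytest_collection_modifyitems(items):
    for item in items:
        if 'Feature' in getattr(item, 'fixturenames', []):
            item.add_marker(pytest.mark.feature)

Running py.test -m feature would then collect only the marked tests; in newer pytest versions the marker would also need to be registered to avoid warnings.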

Related

Python mock patch tests work separately but not combined

I have written unit tests using unittest.mock and I am facing some issues with running tests together versus singularly. I am mocking jira.JIRA from the JIRA SDK (on PyPI). My tests look something like:
import ...

class MyTests(BaseTest):
    setUp ...
    tearDown ...

    @mock.patch('os.system')
    @mock.patch('jira.JIRA')
    def test_my_lambda(self, mock_jira, mock_os) -> None:
        # run some code
        mock_jira().transition_issue.assert_called_once()
        mock_jira().transition_issue.assert_called_with(param1, param2)
        # other mock conditions
Now, from my understanding, a MagicMock is returned, I can make sub-calls on it, and I can still call assertions on those sub-calls. This test passes in isolation; however, when running many tests together it says that the transition_issue call was not made, despite me stepping through the code and seeing that it is called.
Am I missing something?
I tried to look into resetting the mock after each test, but I assumed the patch took care of that.
I also tried to do the imports relating to jira.JIRA within the function (anything which imports my class that has the jira.JIRA() call in it), but this did not work.
The fact it works as a single test is confusing me.
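Not from the question, but one thing that often causes exactly this symptom is mock's "where to patch" rule: if the module under test does from jira import JIRA, the patch has to target that module's own reference rather than jira.JIRA. A minimal sketch, with my_lambda_module as a hypothetical module name:

import unittest
from unittest import mock

class MyTests(unittest.TestCase):
    # Hypothetical target: patch JIRA where the code under test looks it up.
    @mock.patch('my_lambda_module.JIRA')
    def test_my_lambda(self, mock_jira):
        # ... invoke the code in my_lambda_module here ...
        mock_jira.return_value.transition_issue.assert_called_once()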

Is it possible to use a fixture inside pytest_generate_tests()?

I have a handful of fixtures in conftest.py that work well inside actual test functions. However, I would like to parameterize some tests using pytest_generate_tests() based on the data in some of these fixtures.
What I'd like to do (simplified):
-- conftest.py --
# my fixture returns a list of device names.
@pytest.fixture(scope="module")
def device_list(something):
    return ['dev1', 'dev2', 'dev3', 'test']

-- test001.py --
# generate tests using the device_list fixture I defined above.
def pytest_generate_tests(metafunc):
    metafunc.parametrize('devices', itertools.chain(device_list), ids=repr)

# A test that is parametrized by the above function.
def test_do_stuff(devices):
    assert "dev" in devices

# Output should/would be:
dev1: pass
dev2: pass
dev3: pass
test: FAIL
Of course, the problem I'm hitting is that in pytest_generate_tests(), it complains that device_list is undefined. If I try to pass it in, pytest_generate_tests(metafunc, device_list), I get an error.
E pluggy.callers.HookCallError: hook call must provide argument 'device_list'
The reason I want to do this is that I use that device_list list inside a bunch of different tests in different files, so I want to use pytest_generate_tests() to parametrize tests using the same list.
Is this just not possible? What is the point of using pytest_generate_tests() if I have to duplicate my fixtures inside that function?
From what I've gathered over the years, fixtures are pretty tightly coupled to pytest's post-collection stage. I've tried a number of times to do something similar, and it's never really quite worked out.
Instead, you could make a function that does the things your fixture would do, and call that inside the generate_tests hook. Then if you need it still as a fixture, call it again (or save the result or whatever).
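A sketch of that suggestion, reusing the device_list example from the question (the helper name _device_list is made up):

# conftest.py
import pytest

def _device_list():
    # plain function: usable both at collection time and from the fixture
    return ['dev1', 'dev2', 'dev3', 'test']

def pytest_generate_tests(metafunc):
    if 'devices' in metafunc.fixturenames:
        metafunc.parametrize('devices', _device_list(), ids=repr)

@pytest.fixture(scope="module")
def device_list():
    return _device_list()

The hook and the fixture then stay in sync without the list being duplicated.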
@pytest.fixture(scope="module", autouse=True)
def device_list(something):
    device_list = ['dev1', 'dev2', 'dev3', 'test']
    return device_list
By using autouse=True in the pytest fixture decorator you can ensure that pytest_generate_tests has access to device_list.
This article provides a workaround.
Just have a look at the section Hooks at the rescue, and you'll get this:
import importlib

def load_tests(name):
    # Load module which contains test data
    tests_module = importlib.import_module(name)
    # Tests are to be found in the variable `tests` of the module
    for test in tests_module.tests.items():
        yield test

def pytest_generate_tests(metafunc):
    """This allows us to load tests from external files by
    parametrizing tests with each test case found in a data_X
    file
    """
    for fixture in metafunc.fixturenames:
        if fixture.startswith('data_'):
            # Load associated test data
            tests = load_tests(fixture)
            metafunc.parametrize(fixture, tests)
See, here it loads the test data for any fixture whose name is prefixed with data_.
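For illustration, assuming a module named data_basic that the hook above can import and that defines a tests mapping (all names here are hypothetical), a test might look like this:

# data_basic.py - hypothetical data module; the hook imports it by fixture name
tests = {
    'small': (1, 2, 3),
    'large': (10, 20, 30),
}

# test_addition.py - requesting a fixture called data_basic triggers the hook
def test_addition(data_basic):
    name, (a, b, expected) = data_basic  # each parametrized value is a (key, value) pair
    assert a + b == expected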

Chicken or the egg with pytest fixtures

I want to use the record_xml_property fixture.
No problem. It works perfectly when it is available.
However, I want my tests to run smoothly whether this fixture is installed or not. When I create a 'wrapper' fixture (something like this):
# this one works nicely when record_xml_property is there
@pytest.fixture()
def real_property_handler(record_xml_property, mykey, myval):
    record_xml_property(mykey, myval)

# this does a harmless print instead
@pytest.fixture()
def fallback_property_handler(mykey, myval):
    print('{0}={1}'.format(mykey, myval))

def MyXMLWrapper(mykey, myval):
    try:    # I want to use the REAL one if I can
        real_property_handler(record_xml_property, mykey, myval)
    except: # but still do something nice if it's not
        fallback_property_handler(mykey, myval)
My test should not have to be cognizant of any fixture(s) that may or may not underlie my wrapper function:

def test_simple():
    MyXMLWrapper('mykeyname', 'mykeyvalue')
    assert True
I'm stuck because in order for my tests to ever work properly it appears that I have to pass the record_xml_property fixture as a parameter which I can never do in environments that don't have this fixture installed.
I've tried several things.
If I make MyXMLWrapper a fixture itself, then I have to pass it the record_xml_property fixture, but if I define MyXMLWrapper as a function (above), then I have no way to reference record_xml_property in case it DOES exist.
What am I not understanding here about how fixtures work?
Thanks.
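For illustration only, a wrapper along these lines can be built with request.getfixturevalue, which raises a lookup error when the requested fixture does not exist (the fixture name xml_property is made up here):

import pytest

@pytest.fixture()
def xml_property(request):
    """Return record_xml_property if it is available, otherwise a fallback printer."""
    try:
        return request.getfixturevalue('record_xml_property')
    except Exception:  # getfixturevalue raises a FixtureLookupError if it is missing
        def fallback(key, value):
            print('{0}={1}'.format(key, value))
        return fallback

def test_simple(xml_property):
    xml_property('mykeyname', 'mykeyvalue')
    assert True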

py.test: programmatically ignore unittest tests

I'm trying to get our team to migrate to py.test from unittest, in the hope that less boilerplate and faster runs will reduce the excuses for not writing as many unit tests as they should.
One problem we have is that almost all of our old django.unittest.TestCase tests fail due to errors. Also, most of them are really slow.
We had decided that the new test system will ignore the old tests, and will be used for new tests only. I tried to get py.test to ignore old tests by creating the following in conftest.py:
def pytest_collection_modifyitems(session, config, items):
    print("Filtering unittest.TestCase tests")
    selected = []
    for test in items:
        parent = test.getparent(pytest.Class)
        if not parent or not issubclass(parent.obj, unittest.TestCase) or hasattr(parent.obj, 'use_pytest'):
            selected.append(test)
    print("Filtered {} tests out of {}".format(len(items) - len(selected), len(items)))
    items[:] = selected
Problem is, it filters out all tests, even this one:
import pytest

class SanityCheckTest(object):
    def test_something(self):
        assert 1
Using some different naming pattern for the new tests would be a rather poor solution.
My test class did not conform to the naming convention. I changed it to:
import pytest

class TestSanity(object):
    def test_something(self):
        assert 1
And also fixed a bug in my pytest_collection_modifyitems and it works.

Python unittest with expensive setup

My test file is basically:
class Test(unittest.TestCase):
    def testOk(self):
        pass

if __name__ == "__main__":
    expensiveSetup()
    try:
        unittest.main()
    finally:
        cleanUp()
However, I do wish to run my test through Netbeans testing tools, and to do that I need unittests that don't rely on an environment setup done in main. Looking at Caching result of setUp() using Python unittest - it recommends using Nose. However, I don't think Netbeans supports this. I didn't find any information indicating that it does. Additionally, I am the only one here actually writing tests, so I don't want to introduce additional dependencies for the other 2 developers unless they are needed.
How can I do the setup and cleanup once for all the tests in my TestSuite?
The expensive setup here is creating some files with dummy data, as well as setting up and tearing down a simple xml-rpc server. I also have 2 test classes, one testing locally and one testing all methods over xml-rpc.
If you use Python >= 2.7 (or unittest2 for Python >= 2.4 & <= 2.6), the best approach would be to use
def setUpClass(cls):
    # ...
setUpClass = classmethod(setUpClass)
to perform some initialization once for all tests belonging to the given class.
And to perform the cleanup, use:
@classmethod
def tearDownClass(cls):
    # ...
See also the unittest standard library documentation on setUpClass and tearDownClass classmethods.
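Applied to the question's code, a sketch might look like this (assuming expensiveSetup() and cleanUp() are importable where the test lives):

import unittest

class Test(unittest.TestCase):
    @classmethod
    def setUpClass(cls):
        expensiveSetup()   # runs once before any test in this class

    @classmethod
    def tearDownClass(cls):
        cleanUp()          # runs once after all tests in this class

    def testOk(self):
        pass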
This is what I do:
class TestSearch(unittest.TestCase):
    """General Search tests for...."""

    matcher = None
    counter = 0
    num_of_tests = None

    def setUp(self):  # pylint: disable-msg=C0103
        """Only instantiate the matcher once"""
        if self.matcher is None:
            self.__class__.matcher = Matcher()
            self.__class__.num_of_tests = len(list(filter(self.isTestMethod, dir(self))))
        self.__class__.counter = self.counter + 1

    def tearDown(self):  # pylint: disable-msg=C0103
        """And kill it when done"""
        if self.counter == self.num_of_tests:
            print('KILL KILL KILL')
            del self.__class__.matcher
Sadly (because I do want my tests to be independent and deterministic), I do this a lot, because system tests that take less than 5 minutes are also important.
First of all, what S. Lott said. However, you do not want to do that. There is a reason setUp and tearDown are wrapped around each test: they help preserve the determinism of testing.
Otherwise, if some test places the system in a bad state, your next tests may fail. Ideally, each of your tests should be independent.
Also, if you insist on doing it this way, instead of writing by hand self.runTest1(), self.runTest2(), you might want to do a bit of introspection in order to find the methods to run.
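A sketch of what that introspection could look like, assuming helper methods named runTest1, runTest2, and so on:

def runTest(self):
    # Find and call every helper whose name starts with "runTest",
    # instead of listing them by hand.
    for name in sorted(dir(self)):
        if name.startswith('runTest') and name != 'runTest':
            getattr(self, name)()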
Won't package-level initialization do it for you? From the Nose Wiki:
nose allows tests to be grouped into test packages. This allows package-level setup; for instance, if you need to create a test database or other data fixture for your tests, you may create it in package setup and remove it in package teardown once per test run, rather than having to create and tear it down once per test module or test case.

To create package-level setup and teardown methods, define setup and/or teardown functions in the __init__.py of a test package. Setup methods may be named setup, setup_package, setUp, or setUpPackage; teardown may be named teardown, teardown_package, tearDown or tearDownPackage. Execution of tests in a test package begins as soon as the first test module is loaded from the test package.
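A sketch of that, assuming the test package's __init__.py can import expensiveSetup() and cleanUp() from the question:

# tests/__init__.py - nose package-level fixtures
def setup_package():
    expensiveSetup()

def teardown_package():
    cleanUp()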
You can keep track of whether expensiveSetup() has been run or not.
_expensiveSetup_has_run = False  # single leading underscore avoids name mangling inside the class

class ExpensiveSetupMixin(unittest.TestCase):
    def setUp(self):
        global _expensiveSetup_has_run
        super(ExpensiveSetupMixin, self).setUp()
        if _expensiveSetup_has_run is False:
            expensiveSetup()
            _expensiveSetup_has_run = True
Or some variation of this - maybe pinging the xml-rpc server and creating a new one if it isn't answering.
But the unit-testing way, AFAIK, is to set up and tear down per test even if it is expensive.
I know nothing about Netbeans, but I thought I should mention zope.testrunner and its support for a nifty thing: layers. Basically, you do the test setup in separate classes, and attach those classes to the tests. These classes can inherit from each other, forming a layering of setups. The testrunner will then only call each setup once, saving its state in memory, and instead of setting up and tearing down, it will simply copy the relevant layer context as a setup.
This speeds up test setup a lot, and is used when you test Zope products and Plone, where the test setup often needs you to start a Plone CMS server, create a Plone site and add loads of content, a process that can take upwards of half a minute. Doing that for each test method is obviously impossible, but with layers it is done only once. This shortens the test setup and protects the test methods from each other, and therefore means that the testing continues to be deterministic.
So I don't know if zope.testrunner will work for you, but it's worth a try.
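As a rough sketch of the layer idea (names are made up, and the exact zope.testrunner API should be checked against its documentation):

import unittest

class ExpensiveLayer(object):
    """Layer whose setUp/tearDown run once for every test that declares it."""

    @classmethod
    def setUp(cls):
        expensiveSetup()   # from the question; assumed importable here

    @classmethod
    def tearDown(cls):
        cleanUp()

class LocalTests(unittest.TestCase):
    layer = ExpensiveLayer   # zope.testrunner groups tests by this attribute

    def testOk(self):
        pass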
It should be possible to do it by defining startTestRun and stopTestRun of the unittest.TestResult class. See this answer: https://stackoverflow.com/a/64892396/2679740
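A sketch of that idea: wrap the run-level hooks so the setup and cleanup happen exactly once, without touching the individual test classes (assuming expensiveSetup() and cleanUp() from the question):

import unittest

_original_start = unittest.TestResult.startTestRun
_original_stop = unittest.TestResult.stopTestRun

def startTestRun(self):
    expensiveSetup()
    _original_start(self)

def stopTestRun(self):
    _original_stop(self)
    cleanUp()

unittest.TestResult.startTestRun = startTestRun
unittest.TestResult.stopTestRun = stopTestRun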
You can ensure setUp and tearDown execute only once if you have just one test method, runTest. This method can do whatever else it wants. Just be sure you don't have any methods with names that start with test.
class MyExpensiveTest(unittest.TestCase):
    def setUp(self):
        self.resource = owThatHurts()

    def tearDown(self):
        self.resource.flush()
        self.resource.finish()

    def runTest(self):
        self.runTest1()
        self.runTest2()

    def runTest1(self):
        self.assertEquals(...)

    def runTest2(self):
        self.assertEquals(...)
It doesn't automagically figure out what to run. If you add a test method, you also have to update runTest.
