py.test: programmatically ignore unittest tests - python

I'm trying to get our team to migrate from unittest to py.test, in the hope that less boilerplate and faster runs will leave fewer excuses for not writing as many unit tests as they should.
One problem we have is that almost all of our old django.unittest.TestCase tests fail due to errors. Also, most of them are really slow.
We decided that the new test system will ignore the old tests and will be used for new tests only. I tried to get py.test to ignore the old tests by creating the following in conftest.py:
def pytest_collection_modifyitems(session, config, items):
    print("Filtering unittest.TestCase tests")
    selected = []
    for test in items:
        parent = test.getparent(pytest.Class)
        if not parent or not issubclass(parent.obj, unittest.TestCase) or hasattr(parent.obj, 'use_pytest'):
            selected.append(test)
    print("Filtered {} tests out of {}".format(len(items) - len(selected), len(items)))
    items[:] = selected
The problem is that it filters out all tests, including this one:
import pytest

class SanityCheckTest(object):
    def test_something(self):
        assert 1
Using some different naming pattern for the new tests would be a rather poor solution.

My test class did not conform to the naming convention. I changed it to:
import pytest

class TestSanity(object):
    def test_something(self):
        assert 1
I also fixed a bug in my pytest_collection_modifyitems, and now it works.
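The fix itself isn't shown above, so here is a minimal sketch of a complete conftest.py hook along these lines (note the explicit imports, which the original snippet omits):

import unittest

import pytest


def pytest_collection_modifyitems(session, config, items):
    print("Filtering unittest.TestCase tests")
    selected = []
    for item in items:
        parent = item.getparent(pytest.Class)
        # Keep plain pytest tests, and TestCase classes that opt in via a use_pytest attribute.
        if parent is None or not issubclass(parent.obj, unittest.TestCase) or hasattr(parent.obj, 'use_pytest'):
            selected.append(item)
    print("Filtered {} tests out of {}".format(len(items) - len(selected), len(items)))
    items[:] = selected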

Related

pytest fixture of fixtures

I am currently writing tests for a medium-sized library (~300 files).
Many classes in this library share the same testing scheme, which was coded using pytest:
File test_for_class_a.py:
import pytest

@pytest.fixture()
def setup_resource_1():
    ...

@pytest.fixture()
def setup_resource_2():
    ...

@pytest.fixture()
def setup_class_a(setup_resource_1, setup_resource_2):
    ...

def test_1_for_class_a(setup_class_a):
    ...

def test_2_for_class_a(setup_class_a):
    ...
Similar files exist for class_b, class_c, etc. The only difference is the content of setup_resource_1 and setup_resource_2.
Now I would like to re-use the fixtures setup_class_a, setup_class_b, setup_class_c defined in test_for_class_a.py, test_for_class_b.py and test_for_class_c.py to run tests on them.
In a file test_all_class.py, this works but it is limited to one fixture per test:
import pytest
from test_for_class_a import *

@pytest.mark.usefixtures('setup_class_a')  # Fixture was defined in test_for_class_a.py
def test_some_things_on_class_a(request):
    ...
But I am looking for a way to perform something more general:
from test_for_class_a import *
from test_for_class_b import *  # I can make sure I have no collision here
from test_for_class_c import *  # I can make sure I have no collision here

@generate_test_for_fixture('setup_class_a', 'setup_class_b', 'setup_class_c')  # pseudocode
def test_some_things_on_all_classes(request):
    ...
Is there any way to do something close to that?
I have been looking at factories of factories and abstract pytest factories, but I am struggling with the way pytest defines fixtures.
Is there any way to solve this problem?
We had the same problem at work, and I wanted to write each fixture just once for every case. So I wrote the plugin pytest-data, which does that. Example:
import pytest
# get_data and use_data are provided by the pytest-data plugin.

@pytest.fixture
def resource(request):
    resource_data = get_data(request, 'resource_data', {'some': 'data', 'foo': 'foo'})
    return Resource(resource_data)

@use_data(resource_data={'foo': 'bar'})
def test_1_for_class_a(resource):
    ...

@use_data(resource_data={'foo': 'baz'})
def test_2_for_class_a(resource):
    ...
What's great about it is that you write the fixture just once, with some defaults. When you just need that fixture/resource and don't care about its specific setup, you simply use it. When a test needs some specific attribute, say to check that the resource can also handle a 100-character-long value, you can pass it in with the use_data decorator instead of writing another fixture.
With that you don't have to worry about conflicts, because everything is defined just once. You can then put all of your fixtures in conftest.py so that test modules don't need to import them. For example, we put all fixtures in a separate module and included them in the top-level conftest.py.
Documentation of plugin pytest-data: http://horejsek.github.io/python-pytest-data/
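As a sketch of the conftest.py arrangement mentioned above (the module path is made up), the shared fixtures can be pulled in once so that no test module has to import them:

# conftest.py at the top of the test tree
# "myproject.testing.fixtures" is a hypothetical module defining the shared fixtures.
pytest_plugins = ["myproject.testing.fixtures"]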
One solution I found is to abuse the test cases as follows:
from test_for_class_a import *
from test_for_class_b import *
from test_for_class_c import *

list_of_all_fixtures = []

# This will force pytest to generate all sub-fixtures for class a
@pytest.mark.usefixtures('setup_class_a')
def test_register_class_a_fixtures(setup_class_a):
    list_of_all_fixtures.append(setup_class_a)

# This will force pytest to generate all sub-fixtures for class b
@pytest.mark.usefixtures('setup_class_b')
def test_register_class_b_fixtures(setup_class_b):
    list_of_all_fixtures.append(setup_class_b)

# This will force pytest to generate all sub-fixtures for class c
@pytest.mark.usefixtures('setup_class_c')
def test_register_class_c_fixtures(setup_class_c):
    list_of_all_fixtures.append(setup_class_c)

# This is the real test to apply to all fixtures
def test_all_fixtures():
    for my_fixture in list_of_all_fixtures:
        # do something with my_fixture
        ...
This implicitly relies on the fact that test_all_fixtures is executed after all the test_register_class_* tests. It is obviously quite dirty, but it works...
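A less fragile sketch of the same idea, assuming a reasonably recent pytest, is to parametrize a single test over the fixture names and resolve each one lazily with request.getfixturevalue (the fixture names below match the question):

import pytest


@pytest.mark.parametrize('fixture_name', ['setup_class_a', 'setup_class_b', 'setup_class_c'])
def test_some_things_on_all_classes(request, fixture_name):
    instance = request.getfixturevalue(fixture_name)  # builds the named fixture and its sub-fixtures
    # do something with instance
    ...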
I think only pytest_generate_tests() (example) could give you such power of customization:
def pytest_generate_tests(metafunc):
    if 'db' in metafunc.funcargnames:
        metafunc.addcall(param="d1")
        metafunc.addcall(param="d2")
EDIT: Oops, I answered a question that is older than my Python experience o.O
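Note that metafunc.addcall was removed in later pytest releases; a rough modern equivalent of the snippet above (a sketch, not part of the original answer) uses metafunc.parametrize with indirect=True so the values still arrive via request.param:

def pytest_generate_tests(metafunc):
    if 'db' in metafunc.fixturenames:
        metafunc.parametrize('db', ['d1', 'd2'], indirect=True)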

How to print the skipped test in the final result

I'm using the nosetests framework for writing test cases, and I'm using @attr to pick the right test cases.
My test.py is something like this...
from nose.plugins.attrib import attr

class Test_module1_tcs:
    @classmethod
    def setup_class(cls):
        ...

    def setup(self):
        ...

    @attr(mod=['mod1', 'mod2'])
    def test_particular_func(self):
        ...

    def teardown(self):
        ...

    @classmethod
    def teardown_class(cls):
        ...
If I execute this test case, the result is as below:
$nosetests test.py -a mod=mod3
----------------------------------------------------------------------
Ran 0 tests in 0.001s
OK
Is there a way to get info about the skipped test cases? With more than 1000 test cases, it's getting hard to know which ones were skipped.
To achieve exactly what you ask, you will have to make a choice: either run all tests and raise SkipTest when a test's attributes are introspected and found not to match the selection, or create your own plugin for nose (by cloning attrib.py and keeping track of the skipped tests).
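A minimal sketch of the first option, using a hypothetical require_mod decorator in place of @attr (the environment variable and helper names are assumptions, not part of nose):

import os
import unittest


def require_mod(*mods):
    """Run the test only if the selected module tag matches; otherwise report it as skipped."""
    wanted = os.environ.get('TEST_MOD')  # e.g. TEST_MOD=mod3 nosetests test.py
    def decorator(test_method):
        def wrapper(self, *args, **kwargs):
            if wanted and wanted not in mods:
                raise unittest.SkipTest('%s requires mod in %r' % (test_method.__name__, mods))
            return test_method(self, *args, **kwargs)
        wrapper.__name__ = test_method.__name__
        return wrapper
    return decorator


class Test_module1_tcs(unittest.TestCase):
    @require_mod('mod1', 'mod2')
    def test_particular_func(self):
        ...

Because non-matching tests are skipped rather than deselected, they show up in nose's summary instead of silently disappearing.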
The best way to guarantee that you have test coverage is to use Ned Batchelder's coverage. Nose has great support for covering tests as well as code with --cover-tests. You can also set the passing threshold with --cover-min-percentage=100. So with the right combination of cover-package selection, accumulation of test results, and a passing coverage threshold, you will get assurance that all your tests were executed.
Do not forget to have a look at the --cover-html generated files; they're eye candy!

Running pytest tests in another Python package

Right now, I have a Python package (let's call it mypackage) with a bunch of tests that I run with pytest. One particular feature can have many possible implementations, so I have used the funcarg mechanism to run these tests with a reference implementation.
# In mypackage/tests/conftest.py
def pytest_funcarg__Feature(request):
    return mypackage.ReferenceImplementation

# In mypackage/tests/test_stuff.py
def test_something(Feature):
    assert Feature(1).works
Now, I am creating a separate Python package with a fancier implementation (fancypackage). Is it possible to run all of the tests in mypackage that contain the Feature funcarg, but with different implementations?
I would like to avoid having to change fancypackage if I add new tests in mypackage, so explicit imports aren't ideal. I know that I can run all of the tests with pytest.main(), but since I have several implementations of my feature, I don't want to call pytest.main() multiple times. Ideally, it would look something like this:
# In fancypackage/tests/test_impl1.py
def pytest_funcarg__Feature(request):
    return fancypackage.Implementation1

## XXX: Do pytest collection on mypackage.tests, but don't run them

# In fancypackage/tests/test_impl2.py
def pytest_funcarg__Feature(request):
    return fancypackage.Implementation2

## XXX: Do pytest collection on mypackage.tests, but don't run them
Then, when I run pytest in fancypackage, it would collect each of the mypackage.tests tests twice, once for each feature implementation. I have tried doing this with explicit imports, and it seems to work fine, but I don't want to explicitly import everything.
Bonus
An additional nice bonus would be to only collect those tests that contain the Feature funcarg. Is that possible?
Example with unittest
Before switching to py.test, I did this with the standard library's unittest. The function for that is the following:
def mypackage_test_suite(Feature):
    loader = unittest.TestLoader()
    suite = unittest.TestSuite()
    mypackage_tests = loader.discover('mypackage.tests')
    for test in all_testcases(mypackage_tests):
        if hasattr(test, 'Feature'):
            test.Feature = Feature
            suite.addTest(test)
    return suite

def all_testcases(test_suite_or_case):
    try:
        suite = iter(test_suite_or_case)
    except TypeError:
        yield test_suite_or_case
    else:
        for test in suite:
            for subtest in all_testcases(test):
                yield subtest
Obviously things are different now because we're dealing with test functions and classes instead of just classes, but it seems like there should be some equivalent in py.test that builds the test suite and allows you to iterate through it.
You could parameterise your Feature fixture:
@pytest.fixture(params=['ref', 'fancy'])
def Feature(request):
    if request.param == 'ref':
        return mypackage.ReferenceImplementation
    else:
        return fancypackage.Implementation1
Now if you run py.test it will test both.
Selecting tests by the fixture they use is not possible AFAIK; you could probably cobble something together using request.applymarker() and -m, however.
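One way to cobble that together (a sketch; the uses_feature marker name is made up) is to tag items by the fixtures they request during collection, then select with -m uses_feature:

# conftest.py
import pytest


def pytest_collection_modifyitems(config, items):
    for item in items:
        if 'Feature' in getattr(item, 'fixturenames', ()):
            item.add_marker(pytest.mark.uses_feature)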

Giving parameters into TestCase from Suite in python

From the Python documentation (http://docs.python.org/library/unittest.html):
import unittest

class WidgetTestCase(unittest.TestCase):
    def setUp(self):
        self.widget = Widget('The widget')

    def tearDown(self):
        self.widget.dispose()
        self.widget = None

    def test_default_size(self):
        self.assertEqual(self.widget.size(), (50, 50),
                         'incorrect default size')

    def test_resize(self):
        self.widget.resize(100, 150)
        self.assertEqual(self.widget.size(), (100, 150),
                         'wrong size after resize')
Here is how to invoke those test cases:
def suite():
    suite = unittest.TestSuite()
    suite.addTest(WidgetTestCase('test_default_size'))
    suite.addTest(WidgetTestCase('test_resize'))
    return suite
Is it possible to pass a parameter custom_parameter into WidgetTestCase, like:
class WidgetTestCase(unittest.TestCase):
    def setUp(self, custom_parameter):
        self.widget = Widget('The widget')
        self.custom_parameter = custom_parameter
?
What I've done is, in the test_suite module, simply add:

WidgetTestCase.CustomParameter = "some_address"

The simplest solutions are the best :)
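For completeness, a sketch of how that class attribute might be consumed inside the test case (the default value is an assumption):

class WidgetTestCase(unittest.TestCase):
    CustomParameter = "default_address"  # overridden from the suite module as shown above

    def setUp(self):
        self.widget = Widget('The widget')
        self.custom_parameter = self.CustomParameter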
I've found a way to do this, but it's a bit of a kludge.
Basically, what I do is add, to the TestCase, an __init__ method which defines a 'default' parameter and a __str__ so that we can distinguish cases:
class WidgetTestCase(unittest.TestCase):
    def __init__(self, methodName='runTest'):
        self.parameter = default_parameter
        unittest.TestCase.__init__(self, methodName)

    def __str__(self):
        ''' Override this so that we know which instance it is '''
        return "%s(%s) (%s)" % (self._testMethodName, self.parameter, unittest._strclass(self.__class__))
Then in suite(), I iterate over my test parameters, replacing the default parameter with one specific to each test:
def suite():
    suite = unittest.TestSuite()
    for test_parameter in test_parameters:
        loadedtests = unittest.TestLoader().loadTestsFromTestCase(WidgetTestCase)
        for t in loadedtests:
            t.parameter = test_parameter
        suite.addTests(loadedtests)
    suite.addTests(unittest.TestLoader().loadTestsFromTestCase(OtherWidgetTestCases))
    return suite
where OtherWidgetTestCases are tests which don't need to be parameterised.
For instance, I have a bunch of real data sets to which a suite of tests needs to be applied, but I also have some synthetic data sets designed to test certain edge cases not normally present in the data. Only certain tests apply to those, so they get their own tests in OtherWidgetTestCases.
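For completeness, a typical way to run that suite (a usage sketch, not part of the original answer):

if __name__ == '__main__':
    unittest.TextTestRunner(verbosity=2).run(suite())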
This is something that has been on my mind recently. Yes, it is very possible to do. I called it scenario testing, but I think parameterized may be more accurate. I put a proof of concept up as a gist here. In short, it is a metaclass that allows you to define a scenario and run the tests against it repeatedly. With it, your example can be something like this:
class WidgetTestCase(unittest.TestCase):
    __metaclass__ = ScenarioMeta

    class widget_width(ScenerioTest):
        scenarios = [
            dict(widget_in=Widget("One Way"), expected_tuple=(50, 50)),
            dict(widget_in=Widget("Another Way"), expected_tuple=(100, 150))
        ]

        def __test__(self, widget_in, expected_tuple):
            self.assertEqual(widget_in.size, expected_tuple)
When run, the metaclass writes out 2 separate tests, so the output would be something like:
$ python myscerariotest.py -v
test_widget_width_0 (__main__.widget_width) ... ok
test_widget_width_1 (__main__.widget_width) ... ok
----------------------------------------------------------------------
Ran 2 tests in 0.001s
OK
As you can see the scenarios are converted to tests at runtime.
Now I am not yet sure if this is even a good idea. I use it in tests where I have a lot of text-centric cases that repeat the same assertions on slightly different data, which helps me catch the little edge cases. But the classes in that gist do work, and I believe it accomplishes what you are after.
Note that with some trickery the test cases can be given names and even be pulled from an external source like a text file or database. It's not documented yet, but some digging around in the metaclass should get you started. There is also some more info and examples in my post here.
Edit
This is an ugly hack that I do not support anymore. The implementation should have been done as a subclass of TestCase, not as a hacked metaclass. Live and learn. An even better solution would be to use nose generators.
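For reference, a nose generator version of the same scenarios might look roughly like this (a sketch; generators only work on module-level test functions and non-TestCase classes):

def check_width(widget_in, expected_tuple):
    assert widget_in.size == expected_tuple


def test_widget_width():
    scenarios = [
        (Widget("One Way"), (50, 50)),
        (Widget("Another Way"), (100, 150)),
    ]
    for widget_in, expected_tuple in scenarios:
        yield check_width, widget_in, expected_tuple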
I don't believe so; the signature for setUp needs to be what unittest expects. AFAIK, setUp is automagically called within the test case's run method as setUp(), so you're not going to be able to pass anything in unless you override run to pass in the variable you want. But I think what you want defeats the purpose of unit testing. Don't try to apply a DRY philosophy to this; each unit you're testing should be part of a class or even part of a function/method.
I don't think this is a good idea. Unit tests should be thorough enough that you test all functionality in your cases, so passing in different parameters shouldn't be required.
You mention you're passing in a www address - this is almost certainly not a good idea. What happens if you try to run the tests on a machine where the net connection is down? Your tests should be:
Automatic - they will run on all machines and platforms where your app is supported, without user intervention. They shouldn't rely on the external environment to pass. This means (amongst other things) that relying on a properly set up connection to the Internet is a bad idea. You can get around this by providing dummy data. Instead of passing in a URL to a resource, abstract away the data source and pass in a data stream or whatever. This is especially easy in Python, since you can use duck typing to present a stream-like object (Python frequently uses a "file-like" object for this very reason; see the sketch after this list).
Thorough - your unit tests should have 100% code coverage and cover all possible situations. You want to test your code with multiple sites? Instead, test your code with all the possible features that a site may include. Without knowing more about what your application does, I can't offer much advice on this point.
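A minimal sketch of that duck-typing idea (count_lines is a made-up function standing in for code that would otherwise fetch a URL):

import io
import unittest


def count_lines(stream):
    """Code under test: works on any file-like object, not just a network resource."""
    return sum(1 for _ in stream)


class CountLinesTest(unittest.TestCase):
    def test_count_lines_from_dummy_data(self):
        # An in-memory stream stands in for the real data source.
        dummy = io.StringIO(u"line 1\nline 2\nline 3\n")
        self.assertEqual(count_lines(dummy), 3)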
Now, it looks like your tests are going to be heavily data-driven. There are many tools that allow you to define data sets for unit tests and load them in the tests. Check out Python test fixtures, for example.
I realise that this isn't the answer you're looking for, but I think you'll have more joy in the long-run if you follow these principles.

Python unittest with expensive setup

My test file is basically:
class Test(unittest.TestCase):
    def testOk(self):
        pass

if __name__ == "__main__":
    expensiveSetup()
    try:
        unittest.main()
    finally:
        cleanUp()
However, I do wish to run my tests through NetBeans' testing tools, and to do that I need unit tests that don't rely on environment setup done in main. Looking at Caching result of setUp() using Python unittest - it recommends using Nose. However, I don't think NetBeans supports this; I didn't find any information indicating that it does. Additionally, I am the only one here actually writing tests, so I don't want to introduce additional dependencies for the other 2 developers unless they are needed.
How can I do the setup and cleanup once for all the tests in my TestSuite?
The expensive setup here is creating some files with dummy data, as well as setting up and tearing down a simple xml-rpc server. I also have 2 test classes, one testing locally and one testing all methods over xml-rpc.
If you use Python >= 2.7 (or unittest2 for Python >= 2.4 & <= 2.6), the best approach would be to use
def setUpClass(cls):
    # ...
setUpClass = classmethod(setUpClass)
to perform some initialization once for all tests belonging to the given class.
And to perform the cleanup, use:
@classmethod
def tearDownClass(cls):
    # ...
See also the unittest standard library documentation on setUpClass and tearDownClass classmethods.
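Applied to the question's situation, a sketch might look like this (the helper functions are made-up names standing in for the asker's own expensive setup):

import unittest


class TestOverXmlRpc(unittest.TestCase):
    @classmethod
    def setUpClass(cls):
        create_dummy_data_files()                 # hypothetical helper
        cls.server = start_dummy_xmlrpc_server()  # hypothetical helper

    @classmethod
    def tearDownClass(cls):
        cls.server.shutdown()
        remove_dummy_data_files()                 # hypothetical helper

    def testOk(self):
        pass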
This is what I do:
class TestSearch(unittest.TestCase):
    """General Search tests for...."""
    matcher = None
    counter = 0
    num_of_tests = None

    def setUp(self):  # pylint: disable-msg=C0103
        """Only instantiate the matcher once"""
        if self.matcher is None:
            self.__class__.matcher = Matcher()
            self.__class__.num_of_tests = len(filter(self.isTestMethod, dir(self)))
        self.__class__.counter = self.counter + 1

    def tearDown(self):  # pylint: disable-msg=C0103
        """And kill it when done"""
        if self.counter == self.num_of_tests:
            print 'KILL KILL KILL'
            del self.__class__.matcher
Sadly (because I do want my tests to be independent and deterministic), I do this a lot (because system test runs that take less than 5 minutes are also important).
First of all, what S. Lott said. However, you do not want to do that. There is a reason setUp and tearDown are wrapped around each test: they help preserve the determinism of testing.
Otherwise, if some test places the system in a bad state, your next tests may fail. Ideally, each of your tests should be independent.
Also, if you insist on doing it this way, instead of writing by hand self.runTest1(), self.runTest2(), you might want to do a bit of introspection in order to find the methods to run.
Won't package-level initialization do it for you? From the Nose Wiki:
nose allows tests to be grouped into test packages. This allows package-level setup; for instance, if you need to create a test database or other data fixture for your tests, you may create it in package setup and remove it in package teardown once per test run, rather than having to create and tear it down once per test module or test case.
To create package-level setup and teardown methods, define setup and/or teardown functions in the __init__.py of a test package. Setup methods may be named setup, setup_package, setUp, or setUpPackage; teardown may be named teardown, teardown_package, tearDown or tearDownPackage. Execution of tests in a test package begins as soon as the first test module is loaded from the test package.
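For the asker's case, a sketch of that package-level hook might be (expensiveSetup and cleanUp stand in for the functions already used in the question's __main__ block):

# tests/__init__.py
def setup_package():
    expensiveSetup()   # create dummy data files, start the xml-rpc server, ...


def teardown_package():
    cleanUp()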
You can save the state of whether expensiveSetup() has already run:
_expensiveSetup_has_run = False

class ExpensiveSetupMixin(unittest.TestCase):
    def setUp(self):
        # A single leading underscore avoids the name mangling that a double
        # underscore would trigger inside the class body.
        global _expensiveSetup_has_run
        super(ExpensiveSetupMixin, self).setUp()
        if _expensiveSetup_has_run is False:
            expensiveSetup()
            _expensiveSetup_has_run = True
Or some variation of this. Maybe ping the xml-rpc server and create a new one if it isn't answering.
But the unit-testing way, AFAIK, is to set up and tear down per test even if it is expensive.
I know nothing about NetBeans, but I thought I should mention zope.testrunner and its support for a nifty thing: layers. Basically, you do the test setup in separate classes and attach those classes to the tests. These classes can inherit from each other, forming a layering of setups. The testrunner will then call each setup only once, saving its state in memory, and instead of setting up and tearing down for every test it will simply copy the relevant layer context in as the setup.
This speeds up test setup a lot, and it is used when you test Zope products and Plone, where the test setup often needs you to start a Plone CMS server, create a Plone site and add loads of content, a process that can take upwards of half a minute. Doing that for each test method is obviously impossible, but with layers it is done only once. This shortens the test setup and protects the test methods from each other, and therefore means that the testing continues to be deterministic.
So I don't know if zope.testrunner will work for you, but it's worth a try.
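As a rough sketch of what a layer looks like under zope.testrunner's layer protocol (the server helpers are made up, and this is an assumption about the API rather than a tested example):

import unittest


class ServerLayer(object):
    """setUp/tearDown here run once for all tests that declare this layer."""

    @classmethod
    def setUp(cls):
        cls.server = start_dummy_xmlrpc_server()  # hypothetical expensive setup

    @classmethod
    def tearDown(cls):
        cls.server.shutdown()


class TestOverXmlRpc(unittest.TestCase):
    layer = ServerLayer  # the runner shares the layer's state across every class that uses it

    def test_ok(self):
        pass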
It should be possible to do it by defining startTestRun and stopTestRun of the unittest.TestResult class. See this answer: https://stackoverflow.com/a/64892396/2679740
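A sketch of that approach (expensiveSetup and cleanUp stand in for the question's own functions): wrap the stock hooks so the setup runs once before any test and the cleanup once after all of them.

import unittest


def _wrap_test_run_hooks():
    original_start = unittest.TestResult.startTestRun
    original_stop = unittest.TestResult.stopTestRun

    def startTestRun(self):
        expensiveSetup()
        original_start(self)

    def stopTestRun(self):
        original_stop(self)
        cleanUp()

    unittest.TestResult.startTestRun = startTestRun
    unittest.TestResult.stopTestRun = stopTestRun


_wrap_test_run_hooks()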
You can ensure setUp and tearDown execute only once if you have a single test method, runTest. This method can do whatever else it wants. Just be sure you don't have any methods with names that start with test.
class MyExpensiveTest(unittest.TestCase):
    def setUp(self):
        self.resource = owThatHurts()

    def tearDown(self):
        self.resource.flush()
        self.resource.finish()

    def runTest(self):
        self.runTest1()
        self.runTest2()

    def runTest1(self):
        self.assertEquals(...)

    def runTest2(self):
        self.assertEquals(...)
It doesn't automagically figure out what to run. If you add a test method, you also have to update runTest.
