py.test skips test class if constructor is defined

I have the following unittest code running via py.test. The mere presence of the constructor makes py.test skip the entire class when running

py.test -v -s

collected 0 items / 1 skipped

Can anyone please explain this behaviour of py.test to me?
I am interested in understanding the py.test behaviour; I know the constructor is not needed.
Thanks,
Zdenek
class TestClassName(object):
    def __init__(self):
        pass

    def setup_method(self, method):
        print("setup_method called")

    def teardown_method(self, method):
        print("teardown_method called")

    def test_a(self):
        print("test_a called")
        assert 1 == 1

    def test_b(self):
        print("test_b called")
        assert 1 == 1

The documentation for py.test says that py.test implements the following standard test discovery:
collection starts from the initial command line arguments which may be directories, filenames or test ids.
recurse into directories, unless they match norecursedirs
test_*.py or *_test.py files, imported by their package name.
Test prefixed test classes (without an __init__ method) [<-- notice this one here]
test_ prefixed test functions or methods are test items
So it's not that the constructor isn't needed, py.test just ignores classes that have a constructor. There is also a guide for changing the standard test discovery.

As already mentioned in the answer by Matti Lyra, py.test purposely skips classes which have a constructor. The reason is that classes are only used for structural reasons in py.test and do not have any inherent behaviour, whereas in actual application code it is the opposite: it is much rarer for a class not to have an .__init__() method. So in practice, skipping a class with a constructor is likely what was desired; usually it is just a class which happens to have a conflicting name.
Lastly, py.test needs to instantiate the class in order to execute the tests. If the constructor takes any arguments, py.test can't instantiate it, so again skipping is the right thing to do.
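If the class needs per-test setup, the conventional replacement for the constructor is setup_method (or a fixture). A minimal sketch, assuming the class from the question:

class TestClassName(object):
    # No __init__, so py.test collects the class again.
    def setup_method(self, method):
        # Runs before every test method; per-test state goes here instead of __init__.
        self.value = 1

    def test_a(self):
        assert self.value == 1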

All the above answers clearly explain the underlying cause; I just thought to share my experience and a workaround for the warnings.
I got my tests to work without the warnings by aliasing the imported classes:
from app.core.utils import model_from_meta
from app.core.models import Panel, TestType as _TestType
from app.core.serializers import PanelSerializer, TestType as _TestTypeSerializer

def test_model_from_meta():
    assert Panel is model_from_meta(PanelSerializer)
    assert _TestType is model_from_meta(_TestTypeSerializer)
After importing the classes using aliases, the warnings no longer get printed.
I hope this helps someone.

In my case, I just happened to have a parameters class named TestParams, which conflicted with pytest looking for classes whose names begin with Test.
Solution: rename your own class.


Pytest collection warning __init__ constructor

I really couldn't find a solution for:
PytestCollectionWarning: cannot collect test class 'TestBlaBla' because it has a __init__ constructor
Here is my testing module. It needs to take its arguments from outside this file because, at the end of the day, I'm going to call all test modules from a single file and run them all over different names, and there are a bunch of names. I have to init them, but when I run pytest it always ignores these classes. I don't know how to handle this without initializing them. If there are any suggestions, I would be glad to hear them.
tests/test_bla_bla.py
import pytest

class TestBlaBla():
    def __init__(self, **kwargs):
        self.name1 = kwargs.get("name1")
        self.name2 = kwargs.get("name2")

    @pytest.fixture(scope='session')
    def load_data_here(self):
        # load_data comes from a utils file; it just loads data and needs a name for the path
        return load_data(self.name1)

    # ... continue with test_ functions that use the output of load_data_here
tests/main.py
class TestingAll:
    def __init__(self, *args, **kwargs):
        self.name1 = kwargs.get("name1")
        self.name2 = kwargs.get("name2")
        self._process()

    def _process(self):
        TestBlaBla(name1=self.name1, name2=self.name2)
        TestBlaBla2(name1=self.name1, name2=self.name2)

if __name__ == "__main__":
    Test = TestingAll(name1="name1", name2="name2")
Pytest test classes cannot have __init__ methods, as pytest instantiates the class itself, and there is not any way (IMHO?) to extend the instantiation and add arguments.
Yes, it is a natural idea to want to make your tests flexible by passing in command-line arguments. But you can't :-P. So you need to find another way of doing this.
Note also that the if __name__ == '__main__': block can work if you call the test file with python and add some code to explicitly call a test runner. Note you do not just call your test class; a test runner needs to be instantiated itself and run the tests in a particular way.
e.g. we can have this, which will allow Python to instantiate and run the tests in a TestingAll class (as long as it doesn't have an __init__ method).
This uses the unittest.TextTestRunner.
Note there are all sorts of Python test runners, including runners like nose2 or py.test which use a different test library.
if __name__ == '__main__':
    suite = unittest.TestLoader().loadTestsFromTestCase(TestingAll)
    unittest.TextTestRunner(verbosity=3).run(suite)
You could maybe have an ArgsProcess class to process command line args, then iterate and set a global variable with each value to be used, calling the test runner each time; a sketch of that idea follows.
But it depends on how your tests will be used.
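A minimal sketch of that approach, assuming TestingAll is a unittest.TestCase without an __init__ (ArgsProcess and CURRENT_NAME are illustrative names, not from any library):

import argparse
import unittest

CURRENT_NAME = None  # global consulted by the tests


class ArgsProcess:
    """Hypothetical command-line argument processing."""
    def parse(self):
        parser = argparse.ArgumentParser()
        parser.add_argument('--names', nargs='+', default=['name1', 'name2'])
        return parser.parse_args()


if __name__ == '__main__':
    for name in ArgsProcess().parse().names:
        CURRENT_NAME = name  # each iteration runs the suite with a different value
        suite = unittest.TestLoader().loadTestsFromTestCase(TestingAll)
        unittest.TextTestRunner(verbosity=3).run(suite)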
The answers on the question already mentioned in the comments explain, and link to documentation for, this warning:
py.test skips test class if constructor is defined
The answer on this question shows how an __init__ method can be replaced by using a fixture:
Pytest collection warning due to __init__ constructor
Maybe something like this would work for you.
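For instance, a hedged sketch of that fixture-based refactor applied to the question's class (load_data and the name values come from the question; the module path and the parametrization are assumptions):

import pytest
from utils import load_data  # path assumed; the question's helper

@pytest.fixture(scope="session", params=["name1", "name2"])
def name(request):
    # Each param value produces one run of the dependent tests.
    return request.param

@pytest.fixture(scope="session")
def data(name):
    return load_data(name)

class TestBlaBla:
    # No __init__, so pytest collects the class normally.
    def test_data_loaded(self, data):
        assert data is not None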

Can a python unittest class add an assert statement for each test method in a parent class?

I have a unit test class that is a subclass of Python's unittest.TestCase:

import unittest

class MyTestClass(unittest.TestCase):
    run_parameters = {'param1': 'on'}

    def someTest(self):
        self.assertEqual(something, something_else)
Now I want to create a child class that modifies, say, run_parameters, and adds an additional assert statement on top of what was already written:

class NewWayToRunThings_TestClass(MyTestClass):
    run_parameters = {'param1': 'blue'}
    # Want someTest, and all other tests in MyTestClass, to now run
    # with an additional assert statement

Is there some way to accomplish this so that each test runs with an additional assert statement to check that my parameter change worked properly across all my tests?
Yes there is, but it may not be a good idea, because:
the assertions are hidden behind difficult-to-understand Python magic
the assertions aren't explicit
Could you update your methods to reflect the new contract and expected param instead?
Also, if a single parameter change breaks a huge number of tests, to the point where it is easier to dynamically patch the test class than to update the tests, the test suite may not be focused enough.
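If you do want every inherited test to verify the parameter change, one explicit alternative (a sketch, not from the answer above) is to put the shared assertion in setUp, which unittest runs before each test method:

import unittest

class MyTestClass(unittest.TestCase):
    run_parameters = {'param1': 'on'}

    def test_something(self):
        self.assertEqual(1 + 1, 2)

class NewWayToRunThings_TestClass(MyTestClass):
    run_parameters = {'param1': 'blue'}

    def setUp(self):
        super().setUp()
        # Runs before every test inherited from MyTestClass;
        # fails each test fast if the parameter override is wrong.
        self.assertEqual(self.run_parameters['param1'], 'blue')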

pytest: Reusable tests for different implementations of the same interface

Imagine I have implemented a utility (maybe a class) called Bar in a module foo, and have written the following tests for it.
test_foo.py:
from foo import Bar as Implementation
from pytest import mark

@mark.parametrize(<args>, <test data set 1>)
def test_one(<args>):
    <do something with Implementation and args>

@mark.parametrize(<args>, <test data set 2>)
def test_two(<args>):
    <do something else with Implementation and args>

<more such tests>
Now imagine that, in the future, I expect different implementations of the same interface to be written. I would like those implementations to be able to reuse the tests written for the above test suite: the only things that need to change are
The import of the Implementation
<test data set 1>, <test data set 2> etc.
So I am looking for a way to write the above tests in a reusable way, that would allow authors of new implementations of the interface to be able to use the tests by injecting the implementation and the test data into them, without having to modify the file containing the original specification of the tests.
What would be a good, idiomatic way of doing this in pytest?
Here is a unittest version that (isn't pretty but) works.
define_tests.py:
# Single, reusable definition of tests for the interface. Authors of
# new implementations of the interface merely have to provide the test
# data, as class attributes of a class which inherits
# unittest.TestCase AND this class.
class TheTests():
    def test_foo(self):
        # Faking pytest.mark.parametrize by looping
        for args, in_, out in self.test_foo_data:
            self.assertEqual(self.Implementation(*args).foo(in_), out)

    def test_bar(self):
        # Faking pytest.mark.parametrize by looping
        for args, in_, out in self.test_bar_data:
            self.assertEqual(self.Implementation(*args).bar(in_), out)
v1.py:
# One implementation of the interface
class Implementation:
    def __init__(self, a, b):
        self.n = a + b

    def foo(self, n):
        return self.n + n

    def bar(self, n):
        return self.n - n
v1_test.py:
# Test for one implementation of the interface
from v1 import Implementation
from define_tests import TheTests
from unittest import TestCase

# Hook into the testing framework by inheriting unittest.TestCase, and reuse
# the tests which *each and every* implementation of the interface must
# pass, by inheritance from define_tests.TheTests
class FooTests(TestCase, TheTests):
    Implementation = Implementation
    test_foo_data = (((1, 2), 3, 6),
                     ((4, 5), 6, 15))
    test_bar_data = (((1, 2), 3, 0),
                     ((4, 5), 6, 3))
Anybody (even a client of the library) writing another implementation of this interface
can reuse the set of tests defined in define_tests.py
inject their own test data into the tests
without modifying any of the original files
This is a great use case for parametrized test fixtures.
Your code could look something like this:
import pytest
from foo import Bar, Baz

@pytest.fixture(params=[Bar, Baz])
def Implementation(request):
    return request.param

def test_one(Implementation):
    assert Implementation().frobnicate()
This would have test_one run twice: once where Implementation=Bar and once where Implementation=Baz.
Note that since Implementation is just a fixture, you can change its scope, or do more setup (maybe instantiate the class, maybe configure it somehow).
If used with the pytest.mark.parametrize decorator, pytest will generate all the permutations. For example, assuming the code above, and this code here:
@pytest.mark.parametrize('thing', [1, 2])
def test_two(Implementation, thing):
    assert Implementation(thing).foo == thing
test_two will run four times, with the following configurations:
Implementation=Bar, thing=1
Implementation=Bar, thing=2
Implementation=Baz, thing=1
Implementation=Baz, thing=2
You can't do it without class inheritance, but you don't have to use unittest.TestCase. To make it more pytest-style you can use fixtures.
That allows, for example, fixture parametrization, or the use of other fixtures.
I'll try to create a simple example.
import pytest

class SomeTest:
    @pytest.fixture
    def implementation(self):
        return "A"

    def test_a(self, implementation):
        assert "A" == implementation

class OtherTest(SomeTest):
    @pytest.fixture(params=["B", "C"])
    def implementation(self, request):
        return request.param

    def test_a(self, implementation):
        """The "implementation" fixture is not accessible outside of the class."""
        assert "A" == implementation
and the second test fails:

    def test_a(self, implementation):
>       assert "A" == implementation
E       assert 'A' == 'B'
E         - A
E         + B

    def test_a(self, implementation):
>       assert "A" == implementation
E       assert 'A' == 'C'
E         - A
E         + C

while using the fixture outside of the class fails with:

def test_a(implementation):
E       fixture 'implementation' not found
Don't forget you have to define python_classes = *Test in pytest.ini (the example classes end with Test rather than starting with it).
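That is, a minimal pytest.ini along these lines:

[pytest]
python_classes = *Test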
I did something similar to what @Daniel Barto was saying, adding additional fixtures.
Let's say you have 1 interface and 2 implementations:
class Imp1(InterfaceA):
    pass  # Some implementation.

class Imp2(InterfaceA):
    pass  # Some implementation.
You can indeed encapsulate testing in subclasses:
import pytest

@pytest.fixture
def imp_1():
    yield Imp1()

@pytest.fixture
def imp_2():
    yield Imp2()

class InterfaceToBeTested:
    @pytest.fixture
    def imp(self):
        pass

    def test_x(self, imp):
        assert imp.test_x()

    def test_y(self, imp):
        assert imp.test_y()

class TestImp1(InterfaceToBeTested):
    @pytest.fixture
    def imp(self, imp_1):
        yield imp_1

    def test_1(self, imp):
        assert imp.test_1()

class TestImp2(InterfaceToBeTested):
    @pytest.fixture
    def imp(self, imp_2):
        yield imp_2
Note: Notice how, by adding an additional derived class and overriding the fixture that returns the implementation, you can run all the tests on it; and in case there are implementation-specific tests, they can be written there as well.
Conditional Plugin Based Solution
There is in fact a technique that leans on the pytest_plugins list where you can condition its value on something that transcends pytest, namely environment variables and command line arguments. Consider the following:
import os

# in the root conftest.py
if os.environ["pytest_env"] == "env_a":
    pytest_plugins = [
        "projX.plugins.env_a",
    ]
elif os.environ["pytest_env"] == "env_b":
    pytest_plugins = [
        "projX.plugins.env_b",
    ]
I authored a GitHub repository to share some pytest experiments demonstrating the above techniques with commentary along the way and test run results. The relevant section to this particular question is the conditional_plugins experiment.
https://github.com/jxramos/pytest_behavior
This would position you to use the same test module with two different implementations of an identically named fixture. However you'd need to invoke the test once per each implementation with the selection mechanism singling out the fixture implementation of interest. Therefore you'd need two pytest sessions to accomplish testing the two fixture variations.
In order to reuse the tests you have in place, you'd need to establish a root directory higher than the project you're trying to reuse and define a conftest.py file there that does the plugin selection. That still may not be enough, because of the overriding behavior of the test module and any intermediate conftest.py files if you leave the directory structure as is. But if you're free to reshuffle files while leaving their contents unchanged, you just need to get the existing conftest.py file out of the path from the test module to the root directory and rename it so it can be detected as a plugin instead.
Configuration / Command line Selection of Plugins
Pytest actually has a -p command line option where you can list multiple plugins back to back to specify the plugin files. You can learn more of that control by looking in the ini_plugin_selection experiment in the pytest_behavior repo.
Parametrization over Fixture Values
As of this writing this is a work in progress for core pytest functionality, but there is a third-party plugin, pytest-cases, which supports the notion of a fixture itself being used as a parameter to a test case. With that capability you can parametrize over multiple fixtures for the same test case, where each fixture is backed by one API implementation. This sounds like the ideal solution to your use case; however, you would still need to decorate the existing test module with new source to permit this parametrization over fixtures, which may not be permissible for you.
Take a look at the rich discussion in the open pytest issue #349, Using fixtures in pytest.mark.parametrize, specifically this comment, which links to a concrete worked example demonstrating the new fixture-parametrization syntax.
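A hedged sketch of what that can look like with pytest-cases (ImpA and ImpB are invented stand-ins for your implementations; check the plugin's documentation for the current API):

import pytest
from pytest_cases import fixture_ref, parametrize

# Hypothetical implementations of the same interface, for illustration.
class ImpA:
    def frobnicate(self):
        return True

class ImpB:
    def frobnicate(self):
        return True

@pytest.fixture
def imp_a():
    return ImpA()

@pytest.fixture
def imp_b():
    return ImpB()

# Each fixture becomes one parameter of the same test.
@parametrize("imp", [fixture_ref(imp_a), fixture_ref(imp_b)])
def test_interface(imp):
    assert imp.frobnicate()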
Commentary
I get the sense that the test fixture hierarchy one can build above a test module, all the way up to the execution's root directory, is oriented more towards fixture reuse than test module reuse. If you think about it, you can write several fixtures way up in a common subfolder from which a bunch of test modules branch out, potentially landing deep down in a number of child subdirectories. Each of those test modules has access to fixtures defined in the parent conftest.py, but without doing extra work they only get one definition per fixture across all those intermediate conftest.py files, even if the same name is reused across that hierarchy.

The fixture is chosen closest to the test module through the pytest fixture overriding mechanism, but the resolving stops at the test module and does not go past it to any folders beneath it where variation might be found. Essentially there's only one path from the test module to the root dir, which limits the fixture definitions to one. This gives us a one-fixture-to-many-test-modules relationship.

@patch decorator is not compatible with pytest fixture

I have encountered something mysterious when using the patch decorator from the mock package integrated with a pytest fixture.
I have two modules:

test folder
    func.py
    test_test.py
in func.py:
def a():
    return 1

def b():
    return a()
in test_test.py:
import pytest
from func import a, b
from mock import patch, Mock

@pytest.fixture(scope="module")
def brands():
    return 1

mock_b = Mock()

@patch('test_test.b', mock_b)
def test_compute_scores(brands):
    a()
It seems that the patch decorator is not compatible with pytest fixtures. Does anyone have any insight on that? Thanks.
When using pytest fixture with mock.patch, test parameter order is crucial.
If you place a fixture parameter before a mocked one:
from unittest import mock

@mock.patch('my.module.my.class')
def test_my_code(my_fixture, mocked_class):
then the mock object will end up in my_fixture, and mocked_class will be searched for as a fixture:
fixture 'mocked_class' not found
But, if you reverse the order, placing the fixture parameter at the end:
from unittest import mock

@mock.patch('my.module.my.class')
def test_my_code(mocked_class, my_fixture):
then all will be fine.
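For reference, a minimal runnable sketch of the working ordering (the patch target os.getcwd and the values are chosen purely for illustration):

from unittest import mock

import pytest

@pytest.fixture
def my_fixture():
    return 42

@mock.patch('os.getcwd')  # patching something real so the sketch runs
def test_my_code(mocked_getcwd, my_fixture):
    # Patched objects are injected first (leftmost parameters);
    # the remaining parameters are then resolved as fixtures by name.
    mocked_getcwd.return_value = '/tmp'
    assert my_fixture == 42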
As of Python 3.3, the mock module has been pulled into the unittest library as unittest.mock. There is also a backport (for earlier versions of Python) available as the standalone mock library.
Combining these two libraries within the same test suite yields the above-mentioned error:
E fixture 'fixture_name' not found
Within your test suite's virtual environment, run pip uninstall mock, and make sure you aren't using the backported library alongside the core unittest library. When you re-run your tests after uninstalling, you will see ImportErrors if this was the case.
Replace all instances of this import with from unittest.mock import <stuff>.
Hopefully this answer to an old question will help someone.
First off, the question doesn't include the error, so we don't really know what's up. But I'll try to provide something that helped me.
If you want a test decorated with a patched object, then in order for it to work with pytest you could just do this:
@mock.patch('mocked.module')
def test_me(*args):
    mocked_module = args[0]
Or for multiple patches:
@mock.patch('mocked.module1')
@mock.patch('mocked.module')
def test_me(*args):
    mocked_module1, mocked_module2 = args
pytest looks up the names of the parameters in the test function/method as fixtures. Providing the *args argument gives us a good workaround for the lookup phase. So, to include a fixture alongside patches, you could do this:
# from question
@pytest.fixture(scope="module")
def brands():
    return 1

@mock.patch('mocked.module1')
def test_me(brands, *args):
    mocked_module1 = args[0]
This worked for me running Python 3.6 and pytest 3.0.6.
If you have multiple patches to be applied, the order in which they are injected is important:
# from question
@pytest.fixture(scope="module")
def brands():
    return 1

# notice the order
@patch('my.module.my.class1')
@patch('my.module.my.class2')
def test_list_instance_elb_tg(mocked_class2, mocked_class1, brands):
    pass
This doesn't address your question directly, but there is the pytest-mock plugin, which allows you to write this instead (the fixture is named mocker in current versions of the plugin):

def test_compute_scores(brands, mocker):
    mock_b = mocker.patch('test_test.b')
    a()
a) For me the solution was to use a with block inside the test function instead of using a @patch decoration before the test function:
class TestFoo:
    def test_baa(self, my_fixture):
        with patch(
            'module.Class.function_to_patch',
            MagicMock(return_value='mocked_result')
        ) as mocked_function_to_patch:
            result = my_fixture.baa('mocked_input')

        assert result == 'mocked_result'
        mocked_function_to_patch.assert_has_calls([
            call('mocked_input')
        ])
This solution works inside classes (which I use to structure/group my test methods). Using the with block, you don't need to worry about the order of the arguments. I find it more explicit than the injection mechanism, but the code becomes ugly if you patch more than one variable. If you need to patch many dependencies, that might be a signal that your tested function does too many things and should be refactored, e.g. by extracting some of the functionality into extra functions.
b) If you are outside classes and do want a patched object to be injected as an extra argument in a test method... please note that @patch does not support defining the mock as the second argument of the decoration:
@patch('path.to.foo', MagicMock(return_value='foo_value'))
def test_baa(self, my_fixture, mocked_foo):
does not work.
=> Make sure to pass the path as the only argument to the decoration, then define the return value inside the test function:
@patch('path.to.foo')
def test_baa(self, my_fixture, mocked_foo):
    mocked_foo.return_value = 'foo_value'
(Unfortunately, this does not seem to work inside classes.)
First the fixture(s) are injected, then the variables of the @patch decorations (e.g. 'mocked_foo').
The name of the injected fixture ('my_fixture') needs to be correct: it must match the name of the decorated fixture function (or the explicit name used in the fixture decoration).
The name of the injected patch variable ('mocked_foo') does not follow a distinct naming pattern. You can choose it as you like, independent of the corresponding path of the @patch decoration.
If you inject several patched variables, note that the order is reversed: the mocked instance belonging to the last @patch decoration is injected first:
@patch('path.to.foo')
@patch('path.to.qux')
def test_baa(self, my_fixture, mocked_qux, mocked_foo):
    mocked_foo.return_value = 'foo_value'
I had the same problem, and the solution for me was to use the mock library in version 1.0.1 (before that I was using unittest.mock in version 2.6.0). Now it works like a charm :)

How do I write a nose2 plugin that separates different types of tests?

I'm writing a plugin that will treat my unit tests, functional tests & integration tests differently.
My tests folder will have the following structure exactly:
/tests
-- /unit
-- /functional
-- /integration
Each unit test will reside in the unit directory and each functional test will reside in the functional directory and so on.
I am familiar with the Layers plugin but I'd rather have my tests follow a convention.
Which hook exactly should I use to inject the appropriate Layer before tests are run?
Should it be the loadTestsFromModule hook? Can you show me an example?
I'd also like to separate the summary report for each type of test.
Which hook should I use?
I got this working with nose2 by using the nose2 attrib plugin for discovery, plus some code copied from the nose1 attrib plugin which allowed me to decorate my tests.
Using the nose2 attrib plugin
You will see the nose2 attrib plugin allows for custom attributes to be defined on test functions and classes.
For this to work, you have to specify the attributes of the tests after defining the test function.
class MyTestCase(unittest.TestCase):
    def test_function(self):
        self.assertEqual(1+1, 2)
    test_function.custom_attr1 = True
    test_function.custom_attr2 = ['foo', 'bar']
Then you can run a set of filtered tests by specifying -A or --attribute as a nose2 command-line argument to list the attribute(s) you want to match against your test suite. You can even use the expression command-line argument, -E or --eval-attribute, which allows more complex Python expressions for matching test attributes.
e.g. nose2 -v -A custom_attr1
will run all tests which have a custom_attr1 specified with a truthy value.
Using decorators to specify test attributes
This wasn't quite good enough for me though because I didn't like the idea of defining these attributes on tests after their definition. I wanted to use a decorator instead but nose2 didn't have a built-in decorator for doing this.
I went to the nose1 source code for its attrib plugin and copied the source for the attr function.
def attr(*args, **kwargs):
    """Decorator that adds attributes to classes or functions
    for use with the Attribute (-a) plugin.
    """
    def wrap_ob(ob):
        for name in args:
            setattr(ob, name, True)
        for name, value in kwargs.items():  # iteritems() in the original Python 2 source
            setattr(ob, name, value)
        return ob
    return wrap_ob
I put this into a test/attrib_util.py file. Now I can specify attributes using the decorator instead. My original test class code from above can be converted to the (IMO) simpler:
from test.attrib_util import attr

class MyTestCase(unittest.TestCase):
    @attr('custom_attr1', custom_attr2=['foo', 'bar'])
    def test_function(self):
        self.assertEqual(1+1, 2)
You will notice that the attributes can be specified as either args or kwargs; all args will get a default value of True.
You can also even use this attr decorator on a test class or base class and the attributes will be applied to all test functions defined within. This allows for very easy separation of unit and functional tests.
from test.attrib_util import attr

@attr('functional')
class FunctionalTestCase(unittest.TestCase):
    pass

class MyFunctionalCase(FunctionalTestCase):
    def test_function(self):
        print('this will be considered a "functional" test function')
You don't need to write a plug-in; the built-in attrib plugin is designed for this purpose. It does not depend on your file hierarchy, however. Instead, you mark individual tests as unit, functional, or integration. This would look like:
from nose.plugins import attrib

@attrib.attr("functional")
class FunctionalTestCase(unittest.TestCase):
    pass
To run only the functional tests, you would then do:
nosetests -a functional
If I were creating this test layout, I would probably have 3 unittest.TestCase subclasses, already marked with "unit", "functional", and "integration". New tests could easily inherit the proper test type.
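A hedged sketch of that layout (class and test names are illustrative):

import unittest
from nose.plugins import attrib

@attrib.attr('unit')
class UnitTestCase(unittest.TestCase):
    pass

@attrib.attr('functional')
class FunctionalTestCase(unittest.TestCase):
    pass

@attrib.attr('integration')
class IntegrationTestCase(unittest.TestCase):
    pass

# New tests just inherit the right base and become selectable
# with e.g. `nosetests -a unit`.
class TestMath(UnitTestCase):
    def test_add(self):
        self.assertEqual(1 + 1, 2)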
If you already have the tests sorted into directories (as you mentioned), you could write a plugin that uses the wantDirectory method.
import os.path
from nose.plugins import Plugin

class TestCategory(Plugin):
    """
    Run tests in a defined category (unit, functional, integration).
    Always runs uncategorized tests.
    """
    def wantDirectory(self, dirname):
        dirname = os.path.basename(dirname)
        if (dirname in ('unit', 'functional', 'integration') and
                dirname != self.category):
            return False
        return None
You will want to write options() and configure() methods for this plug-in to deal with enabling and disabling it and gleaning the user's choice of category; a sketch of those methods follows below. When running nosetests you would choose from the three categories:
nosetests --category functional
Since only one test category is run at a time, you would get a separate report for each test category. You could always, of course, run all tests by not enabling this plugin.
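A hedged sketch of those options() and configure() methods (the --category flag name and its default are assumptions, not part of nose itself):

import os
from nose.plugins import Plugin

class TestCategory(Plugin):
    # ... wantDirectory as shown above ...

    def options(self, parser, env=os.environ):
        super(TestCategory, self).options(parser, env=env)
        parser.add_option('--category', dest='category', default=None,
                          help='Only run tests under this directory category '
                               '(unit, functional, integration).')

    def configure(self, options, conf):
        super(TestCategory, self).configure(options, conf)
        self.category = options.category
        # Enable the plugin only when a category was requested.
        self.enabled = options.category is not None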
(adding as a different answer because it is a completely different approach).
