pytest: get parametrize values during setup/teardown method - python

My automation framework uses pytest setup/teardown-style testing instead of fixtures. I also have several levels of classes:
BaseClass - highest; all tests inherit from it
FeatureClass - medium; all tests related to the program's feature inherit from it
TestClass - holds the actual tests
Edit: for example's sake, I changed the DB calls to a simple print.
I want to add DB reporting to all setup/teardowns, i.e. the general BaseClass setup_method should create a DB entry for the test, and teardown_method should update that entry with the results. I have tried, but I can't seem to get the parameter values of the currently running test from inside these methods at run time. Is it even possible? And if not, how could I do it otherwise?
samples:
(in base.py)

class Base(object):
    test_number = 0

    def setup_method(self, method):
        Base.test_number += 1
        self.logger.info(color.Blue("STARTING TEST"))
        self.logger.info(color.Blue("Current Test: {}".format(method.__name__)))
        self.logger.info(color.Blue("Test Number: {}".format(self.test_number)))
        # --->here I'd like to do something with the actual test parameters<---
        self.logger.info("print parameters here")

    def teardown_method(self, method):
        self.logger.info(color.Blue("Current Test: {}".format(method.__name__)))
        self.logger.info(color.Blue("Test Number: {}".format(self.test_number)))
        self.logger.info(color.Blue("END OF TEST"))
(in my_feature.py)

class MyFeature(base.Base):
    def setup_method(self, method):
        # enable this feature in program
        return True

(in test_my_feature.py)

class TestClass(my_feature.MyFeature):
    @pytest.mark.parametrize("fragment_length", [1, 5, 10])
    def test_my_first_test(self):
        # do stuff that is changed based on fragment_length
        assert verify_stuff(fragment_length)
So how can I get the parameters in setup_method of the basic parent class of the testing framework?

The brief answer: NO, you cannot do this directly. And YES, you can work around it.
A bit longer: these unittest-style setups & teardowns exist only for compatibility with unittest-style tests. They do not support pytest's fixtures, which are what make pytest nice.
Because of this, neither pytest nor pytest's unittest plugin provides any context to these setup/teardown methods. If you had a request, a function, or some other contextual object, you could get a fixture's value dynamically via request.getfixturevalue('my_fixture_name') (named getfuncargvalue before pytest 3.0).
However, all you have is self/cls, plus method as the test method object itself (i.e. not pytest's node).
If you look inside the _pytest/unittest.py plugin, you will find this code:
class TestCaseFunction(Function):
    _excinfo = None

    def setup(self):
        self._testcase = self.parent.obj(self.name)
        self._fix_unittest_skip_decorator()
        self._obj = getattr(self._testcase, self.name)
        if hasattr(self._testcase, 'setup_method'):
            self._testcase.setup_method(self._obj)
        if hasattr(self, "_request"):
            self._request._fillfixtures()
First, note that setup_method() is called fully isolated from pytest's objects (e.g. self as the test node).
Second, note that the fixtures are prepared only after setup_method() has been called. So even if you could access them, they would not be ready yet.
So, generally, you cannot do this without some trickery.
For the trickery, you have to define a pytest hook/hookwrapper once, and remember the pytest node being executed:
conftest.py (or any other plugin):

import pytest

@pytest.hookimpl(hookwrapper=True)
def pytest_runtest_protocol(item, nextitem):
    item.cls._item = item
    yield
test_me.py:

import pytest

class Base(object):
    def setup_method(self, method):
        length = self._item.callspec.getparam('fragment_length')
        print(length)

class MyFeature(Base):
    def setup_method(self, method):
        super().setup_method(method)

class TestClass(MyFeature):
    @pytest.mark.parametrize("fragment_length", [1, 5, 10])
    def test_my_first_test(self, fragment_length):
        # do stuff that is changed based on fragment_length
        assert True  # verify_stuff(fragment_length)
Also note that MyFeature.setup_method() must call the parent's super(...).setup_method() for obvious reasons.
cls._item will be set on each callspec (i.e. on each function call with each parameter). You can also put the item or the specific parameters into some other global state, if you wish.
Also be careful not to save the field on item.instance. The instance of the class is created later, and you would have to use the setup_instance/teardown_instance methods for that. Otherwise, the saved instance field is not preserved and is not available as self._item in setup_method().
Here is the execution:
============ test session starts ============
......
collected 3 items
test_me.py::TestClass::test_my_first_test[1] 1
PASSED
test_me.py::TestClass::test_my_first_test[5] 5
PASSED
test_me.py::TestClass::test_my_first_test[10] 10
PASSED
============ 3 passed in 0.04 seconds ============
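For completeness: on modern pytest, an alternative that avoids the hook trickery is to replace the unittest-style setup/teardown with an autouse fixture, which does receive the request context. The sketch below is an assumption-laden illustration, not the original answer's approach: the attribute name current_params and the DB-call comments are hypothetical, and request.node.callspec exists only for parametrized tests.

```python
import pytest

class Base:
    @pytest.fixture(autouse=True)
    def _db_report(self, request):
        # callspec is only present on parametrized test items, hence getattr
        callspec = getattr(request.node, "callspec", None)
        self.current_params = dict(callspec.params) if callspec else {}
        # e.g. create the DB entry for request.node.name here
        yield
        # e.g. update the DB entry with the test result here

class TestClass(Base):
    @pytest.mark.parametrize("fragment_length", [1, 5, 10])
    def test_my_first_test(self, fragment_length):
        # the fixture ran first, so the parameters are already captured
        assert self.current_params["fragment_length"] == fragment_length
```

The yield splits the fixture into setup and teardown halves, so one function replaces both setup_method and teardown_method.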

Related

How to use an object's __subclasses__() method in bulk pytest execution?

I'm looking to write unit tests for some code that uses an object's __subclasses__() method. Ultimately, I was trying to use __subclasses__() to keep track of classes dynamically imported into my application through a namespace package.
In some cases, my code base has test classes created within a pytest file. My problem is that when I run pytest in bulk on my source directory I see failures or errors due to import pollution from these one-off test classes, because the pytest run keeps all of the imports alive as it works through the source tree. On their own the tests pass fine, but in a sequence they fail, sometimes depending on the order in which they ran.
In my current code branch these __subclasses__() invocations are in the application code, but I have moved them out to tests here to demonstrate with a MVE:
In my/example.py:

class MyClass(object):
    def __init__(self):
        pass

class MySubClass(MyClass):
    def __init__(self):
        super().__init__()

In my/test_subclass.py:

from my.example import MyClass

class TestSubClass(MyClass):
    def __init__(self):
        super().__init__()

def test_TestSubClass():
    assert issubclass(TestSubClass, MyClass)

In my/test_subclasses.py:

from my.example import MySubClass, MyClass

def test_find_subclasses():
    assert all([cls == MySubClass for cls in MyClass.__subclasses__()])
The result, when run on my machine, is that the test_find_subclasses() test fails due to the discovery of the TestSubClass when running after test_subclass.py:
    def test_find_subclasses():
>       assert all([cls == MySubClass for cls in MyClass.__subclasses__()])
E       assert False
E        +  where False = all([True, False])
What is the best way to maintain a "clean" state during sequenced pytest runs so that I can avoid mangling imports?
As discussed in the comments, you probably don't want to hard-code the types that may extend MyClass, since you really can't predict what your application will need in the future. If you want to check subclassing, just check that it works at all:

def test_find_subclasses():
    assert MySubClass in MyClass.__subclasses__()

Even more concisely, you could simply do

def test_find_subclasses():
    assert issubclass(MySubClass, MyClass)
That being said, you can technically filter the classes you are looking through. In your particular case, you have a distinctive naming convention, so you can do something like

def only_production_classes(iterable):
    return [cls for cls in iterable if not cls.__name__.lower().startswith('test')]

def test_find_subclasses():
    assert all([cls == MySubClass for cls in only_production_classes(MyClass.__subclasses__())])
You can define only_production_classes using other criteria, such as the module the class appears in. Note that __module__ is the full dotted path (e.g. my.test_subclass), so check its last component rather than the whole string:

def only_production_classes(iterable):
    return [cls for cls in iterable
            if not cls.__module__.rsplit('.', 1)[-1].lower().startswith('test_')]
You can't easily unlink class objects that have been loaded, so your idea of a "clean" test environment is not quite feasible. But you do have the option of filtering the data that you work with based on where it was imported from.
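To see the filtering idea end to end, here is a self-contained sketch. The module names are simulated by assigning __module__ directly, which stands in for classes actually being defined in separate files; the filter keeps only classes whose defining module does not look like a test module.

```python
class MyClass:
    pass

class MySubClass(MyClass):
    pass
MySubClass.__module__ = "my.example"          # pretend: defined in my/example.py

class TestSubClass(MyClass):
    pass
TestSubClass.__module__ = "my.test_subclass"  # pretend: defined in a test file

def only_production_classes(iterable):
    # keep classes whose defining module's last component is not test-like
    return [cls for cls in iterable
            if not cls.__module__.rsplit(".", 1)[-1].startswith("test")]

# unfiltered, the invariant fails; filtered, it holds
assert not all(cls is MySubClass for cls in MyClass.__subclasses__())
assert all(cls is MySubClass
           for cls in only_production_classes(MyClass.__subclasses__()))
```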

How to share object(s) among methods of a TestClass without mentioning them in each method (using pytest)?

Suppose I have a class something like this:
class Class:
    def __init__(self, data):
        self.tab1 = data.A
        self.tab2 = data.B

    # Other methods which try to change the state of the object
    def add1(self, num):
        self.tab1 += num

    def mul2(self, num):
        self.tab2 *= num
Using pytest, I'm trying to test the state of an object of Class (say obj) after several operations are done. For each operation I want a different test function, but all of them should manipulate the same object. Since pytest doesn't allow using __init__ in test classes, the only options I have are to create obj as a class variable or to return it from a class-scoped fixture, as mentioned in this SO thread. But I can't make obj a class variable because it needs the class_data fixture for instantiation, so I return it from a class-scoped fixture, like this:
@pytest.fixture(scope='class')
def class_data():
    return pd.DataFrame({'A': [1, 2, 3], 'B': [4, 5, 6]})

class TestClass:
    @pytest.fixture(scope='class')
    def obj(self, class_data):
        return Class(class_data)

    def test_state1(self, obj):
        obj.add1(5)
        print(obj.tab1, obj.tab2)
        assert obj.tab1.equals(pd.Series([1, 2, 3]) + 5)

    def test_state2(self, obj):
        obj.mul2(2)
        print(obj.tab1, obj.tab2)
        assert obj.tab2.equals(pd.Series([4, 5, 6]) * 2)
So far so good, but what if I have very many methods? I don't want to pass obj individually to each of them. Thus, following this SO answer, I tried to create an autouse fixture (with class scope), like this:
class TestClass:
    @pytest.fixture(autouse=True, scope='class')
    def setup_the_class(self, class_data):
        self.obj = Class(class_data)

    def test_state1(self):
        self.obj.add1(5)
        print(self.obj.tab1, self.obj.tab2)
        assert self.obj.tab1.equals(pd.Series([1, 2, 3]) + 5)

    def test_state2(self):
        self.obj.mul2(2)
        print(self.obj.tab1, self.obj.tab2)
        assert self.obj.tab2.equals(pd.Series([4, 5, 6]) * 2)
But it gives AttributeError: 'TestClass' object has no attribute 'obj'.
I don't understand why this happens. Even the pytest docs mention:
The class-level transact fixture is marked with autouse=True which implies that all test methods in the class will use this fixture without a need to state it in the test function signature or with a class-level usefixtures decorator.
Can anyone explain to me what is going wrong here and how autouse fixtures actually work? Is there a better way to do what I'm trying to achieve?
Besides, this autouse-fixture approach also seems useful to me because it would (in theory) let me define several other objects that need a fixture for their creation (like obj does) as members of TestClass (as self.<some_variable>), so that I would not need to define each of them in a separate class-scoped fixture that returns it. This theory comes from my observation (assumption, perhaps) that fixtures with autouse (or usefixtures) do not return/yield anything; they just change the state of an entity that the test methods then use indirectly, i.e. without it being passed as an argument. So my second question is whether I'm thinking in the right direction or not.
If you are using a class-scoped fixture, the self parameter is not the same instance as the one used in the tests. The fixture instance is created before any test instances, and the fixture code up to the yield runs on it; then, for each test, a new test instance is created, used, and deleted; finally, the code after the yield (the teardown code, if there is any) runs and the fixture instance is deleted.
So, while you can't use self to store your object, you can store it in the class instead using self.__class__:
@pytest.fixture(autouse=True, scope='class')
def setup_the_class(self, class_data):
    self.__class__.obj = Class(class_data)
The data can be accessed from the tests as before as self.obj, because the lookup will find it in the class if it does not exist in the instance.
That being said, your test code does not look like you want a class-scoped fixture. You change the state of the fixture's object in each test, and it stays changed, so the test outcome depends on the order in which the tests are executed. This is true for both versions of the test. If this is also true in your real tests, you probably want a function-scoped fixture instead, which recreates the object for each test (and where you can save it in an instance variable).
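A function-scoped autouse fixture can safely set self.obj, because at function scope the fixture's self is the same instance the test method runs on. Below is a sketch of that variant; to keep it self-contained, a plain-Python stand-in replaces the pandas-backed class, and the fixture and method names are illustrative.

```python
import pytest

class Class:
    # plain-Python stand-in for the pandas-backed class under test
    def __init__(self, data):
        self.tab1 = list(data['A'])
        self.tab2 = list(data['B'])

    def add1(self, num):
        self.tab1 = [x + num for x in self.tab1]

    def mul2(self, num):
        self.tab2 = [x * num for x in self.tab2]

class TestClass:
    @pytest.fixture(autouse=True)  # function scope: a fresh object per test
    def setup_obj(self):
        self.obj = Class({'A': [1, 2, 3], 'B': [4, 5, 6]})

    def test_state1(self):
        self.obj.add1(5)
        assert self.obj.tab1 == [6, 7, 8]

    def test_state2(self):
        # runs against a fresh object, regardless of test order
        self.obj.mul2(2)
        assert self.obj.tab2 == [8, 10, 12]
```

Each test now asserts against a known starting state, so the suite no longer depends on execution order.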

Python - Make unit tests independent for classes with class variables

I have a class with a dictionary that is used to cache response from server for a particular input. Since this is used for caching purpose, this is kept as a class variable.
class MyClass:
    cache_dict = {}

    def get_info_server(self, arg):
        if arg not in self.cache_dict:
            self.cache_dict[arg] = Client.get_from_server(arg)
        return self.cache_dict[arg]

    def do_something(self, arg):
        # Do something based on get_info_server(arg)
And when writing unit tests, since the dictionary is a class variable, the values are cached across test cases.
Test Cases
# Assume that Client is mocked.
def test_caching():
    m = MyClass()
    m.get_info_server('foo')
    m.get_info_server('foo')
    mock_client.get_from_server.assert_called_once_with('foo')

def test_do_something():
    m = MyClass()
    mock_client.get_from_server.return_value = 'bar'
    m.do_something('foo')  # This internally calls get_info_server('foo')
If test_caching executes first, the cached value will be some mock object. If test_do_something executes first, then the assertion that the test case is called exactly once will fail.
How do I make the tests independent of each other, besides manipulating the dictionary directly? That feels like requiring intimate knowledge of the inner workings of the code; what if those change later? All I want to verify is the API itself, not the implementation details.
You can't really avoid resetting your cache here. If you are unit-testing this class, then your tests will need some intimate knowledge of its inner workings, so just reset the cache. You can rarely change how a class works without adjusting its unit tests anyway.
If you feel that that still will create a maintenance burden, then make cache handling explicit by adding a class method:
class MyClass:
    cache_dict = {}

    @classmethod
    def _clear_cache(cls):
        # for testing only, hook to clear the class-level cache.
        cls.cache_dict.clear()
Note that I still gave it a name with a leading underscore; this is not a method that a 3rd party should call, it is only there for tests. But now you have centralised clearing the cache, giving you control over how it is implemented.
If you are using the unittest framework to run your tests, clear the cache before each test in a TestCase.setUp() method. If you are using a different testing framework, that framework will have a similar hook. Clearing the cache before each test ensures that you always have a clean state.
Do take into account that your cache is not thread safe, if you are running tests in parallel with threading you'll have issues here. Since this also applies to the cache implementation itself, this is probably not something you are worried about right now.
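Putting this answer together (the _clear_cache() hook plus a per-test reset in setUp()), here is a runnable sketch; the Client stub, the mock patching, and the test body are illustrative additions, not part of the original code.

```python
import unittest
from unittest import mock

class Client:
    # illustrative stand-in for the real server client
    @staticmethod
    def get_from_server(arg):
        raise RuntimeError("should be mocked in tests")

class MyClass:
    cache_dict = {}

    @classmethod
    def _clear_cache(cls):
        # for testing only: clear the class-level cache
        cls.cache_dict.clear()

    def get_info_server(self, arg):
        if arg not in self.cache_dict:
            self.cache_dict[arg] = Client.get_from_server(arg)
        return self.cache_dict[arg]

class MyClassTests(unittest.TestCase):
    def setUp(self):
        # each test starts with an empty cache, so ordering cannot matter
        MyClass._clear_cache()

    def test_caching(self):
        with mock.patch.object(Client, 'get_from_server', return_value='bar') as m:
            obj = MyClass()
            self.assertEqual(obj.get_info_server('foo'), 'bar')
            obj.get_info_server('foo')   # second call is served from the cache
            m.assert_called_once_with('foo')
```

Because setUp() runs before every test method, no test can observe cache entries left behind by another.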
You didn't put it in the question explicitly, but I'm assuming your test methods are in a subclass of unittest.TestCase called MyClassTests.
Explicitly set MyClass.cache_dict in the method under test. If it's just a dictionary, without any getters/setters for it, you don't need a Mock.
If you want to guarantee that every test method is independent, set MyClass.cache_dict = {} in MyClassTests.setUp().
You need to make use of Python's built-in unittest TestCase and implement setUp and tearDown methods.
If you define setUp() and tearDown() in your tests, these will execute each time one of the single test methods gets called (before and after, respectively).
Example:

# set up any global, consistent state here
# subclass unittest.TestCase here

def setUp(self):
    # prepare your state if needed for each test; if this is not considered
    # "fiddling", use this method to set your cache to a fresh state each time
    your_cache_dict_variable = {}

# Your test methods here

def tearDown(self):
    # this will handle resetting the state, as needed
    pass

Check out the docs for more info: https://docs.python.org/2/library/unittest.html
One thing I can suggest is to use setUp() and tearDown() methods in your test class.

from unittest import TestCase

class MyTest(TestCase):
    def setUp(self):
        self.m = MyClass()
        # anything else you need to load before testing

    def tearDown(self):
        self.m = None

    def test_caching(self):
        self.m.get_info_server('foo')
        self.m.get_info_server('foo')
        mock_client.get_from_server.assert_called_once_with('foo')

How To Send Arguments to a UnitTest Without Global Variables

Problem
Yes, I know you shouldn't send arguments to a unit test, but in this case I have to, so that the test can access a piece of hardware within the current framework. For reasons whose details are irrelevant, I can't use a simple board = myBoard.getBoard() method within the tests; otherwise I wouldn't be here asking you all.
Horrible Attempt 1
class Test(object):
    @staticmethod
    def __call__(board):
        global _board
        _board = board
        logging.info('Entering a Unit Test')
        suite = unittest.TestLoader()
        suite = suite.loadTestsFromTestCase(TestCase)
        unittest.TextTestRunner().run(suite)
This method will be called upon to perform a unit test at some point. This is the way things are set out in the current framework; I'm just following the rules.
You can see here that I assign a global variable (horrible). It can then be accessed easily by the test methods:
class TestCase(unittest.TestCase):
    def runTest(self):
        logging.critical('running')

    def setUp(self):
        logging.critical('setup')

    def test_Me(self):
        logging.critical('me')
The result is that it won't enter runTest, but it will run setUp before each test_ method. Pretty much what I'd like; I can deal with it not entering runTest.
Horrible Attempt 2
I'd like to do the above but without global variables so I tried this method:
class Test(object):
    @staticmethod
    def __call__(board):
        logging.info('Entering a basic Unit Test')
        suite = unittest.TestSuite()  # creates a suite for testing
        suite.addTest(BasicCase(board))
        unittest.TextTestRunner().run(suite)

class BasicCase(unittest.TestCase):
    def __init__(self, board):
        self.board = board
        logging.critical('init')
        super(BasicCase, self).__init__()

    def runTest(self):
        logging.critical('runtest')

    def setUp(self):
        logging.critical('reset')
        infra.Reset.__call__(self.board)  # allows me to pass the board in a much nicer way

    def test_Connect(self):
        logging.critical('connection')
The problem with the above is that it will obviously only do init, setUp and runTest, but it'll miss any test_ methods.
If I add:
suite.addTest(BasicCase(board).test_Connect)
Then it unnecessarily calls init twice, and I have to add an extra member variable to test_Connect to accept the test object. I can deal with this too, as I can return a list of the test_ methods, but it just seems like a pain. Also, I don't think it runs setUp every time it enters a new test_ method. Not ideal.
So ...
So is there some other method I can use that allows me to quickly run everything like a unittest but allows me to pass the board object?
If I should be using a different python module, then please say but I'm using unittest due to the lovely outputs and assert methods I get.

How to run the same test-case for different classes?

I have several classes that share some invariants and have a common interface, and I would like to run automatically the same test for each of them.
As an example, suppose I have several classes that implement different approaches for partitioning a data-set. The common invariant here would be that, for all of these classes, the union over all partitions should equal the original data-set.
What I currently have looks something like this:
class PartitionerInvariantsTests(unittest.TestCase):
    def setUp(self):
        self.testDataSet = range(100)  # create test-data-set

    def impl(self, partitioner):
        self.assertEqual(self.testDataSet,
                         chain.from_iterable(partitioner(self.testDataSet)))
Then I add a different function that calls impl for each of the classes I want to test with an instance of that class. The problem with this becomes apparent when doing this for more than one test-function. Suppose I have 5 test-functions and 5 classes I want to test. This would make 25 functions that look almost identical for invoking all the tests.
Another approach I was thinking about was to implement the template as a super-class, and then create a sub-class for each of the classes I want to test. The sub-classes could provide a function for instantiating the class. The problem with that is that the default test-loader would consider the (unusable) base-class a valid test-case and try to run it, which would fail.
So, what are your suggestions?
P.S.: I am using Python 2.6
You could use multiple inheritance:

class PartitionerInvariantsFixture(object):
    def setUp(self):
        self.testDataSet = range(100)  # create test-data-set
        super(PartitionerInvariantsFixture, self).setUp()

    def test_partitioner(self):
        self.assertEqual(self.testDataSet,
                         chain.from_iterable(self.partitioner(self.testDataSet)))

class MyClassTests(TestCase, PartitionerInvariantsFixture):
    partitioner = Partitioner
Subclass PartitionerInvariantsTests:

class PartitionerInvariantsTests(unittest.TestCase):
    def test_impl(self):
        self.assertEqual(self.testDataSet,
                         chain.from_iterable(self.partitioner(self.testDataSet)))

class PartitionerATests(PartitionerInvariantsTests):
    partitioner = PartitionerA  # one subclass per Partitioner under test

for each Partitioner class you wish to test. Then test_impl would be run for each Partitioner class, by virtue of inheritance.
Following up on Nathon's comment, you can prevent the base class from being tested by having it inherit only from object:
import unittest

class Test(object):
    def test_impl(self):
        print('Hi')

class TestA(Test, unittest.TestCase):
    pass

class TestB(Test, unittest.TestCase):
    pass

if __name__ == '__main__':
    unittest.sys.argv.insert(1, '--verbose')
    unittest.main(argv=unittest.sys.argv)
Running test.py yields
test_impl (__main__.TestA) ... Hi
ok
test_impl (__main__.TestB) ... Hi
ok
----------------------------------------------------------------------
Ran 2 tests in 0.000s
OK
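If you are not tied to Python 2.6 and plain unittest, the same goal can be expressed in pytest with a parametrized fixture, which sidesteps the base-class-collection problem entirely. A minimal sketch; the two partitioner implementations are hypothetical examples, not code from the question:

```python
import pytest
from itertools import chain

def pair_partitioner(data):
    # hypothetical partitioner: chunks of two
    return [data[i:i + 2] for i in range(0, len(data), 2)]

def triple_partitioner(data):
    # hypothetical partitioner: chunks of three
    return [data[i:i + 3] for i in range(0, len(data), 3)]

@pytest.fixture(params=[pair_partitioner, triple_partitioner])
def partitioner(request):
    # the invariant test below runs once per partitioner
    return request.param

def test_union_invariant(partitioner):
    data = list(range(100))
    assert list(chain.from_iterable(partitioner(data))) == data
```

Adding a new partitioner then means appending one entry to the params list rather than writing a new subclass.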
