Python - Make unit tests independent for classes with class variables

I have a class with a dictionary that is used to cache responses from the server for particular inputs. Since it is used for caching purposes, the dictionary is kept as a class variable.
class MyClass:
    cache_dict = {}

    def get_info_server(self, arg):
        if arg not in self.cache_dict:
            self.cache_dict[arg] = Client.get_from_server(arg)
        return self.cache_dict[arg]

    def do_something(self, arg):
        # Do something based on get_info_server(arg)
        ...
And when writing unit tests, since the dictionary is a class variable, the values are cached across test cases.
Test Cases
# Assume that Client is mocked.
def test_caching():
    m = MyClass()
    m.get_info_server('foo')
    m.get_info_server('foo')
    mock_client.get_from_server.assert_called_once_with('foo')

def test_do_something():
    m = MyClass()
    mock_client.get_from_server.return_value = 'bar'
    m.do_something('foo')  # This internally calls get_info_server('foo')
If test_caching executes first, the value cached for 'foo' will be some mock object. If test_do_something executes first, 'foo' is already cached, so the assertion in test_caching that the client was called exactly once will fail.
How do I make the tests independent of each other, besides manipulating the dictionary directly? Doing that feels like requiring intimate knowledge of the inner workings of the code; what if the inner workings were to change later? All I need to verify is the API itself, without relying on the internals.

You can't really avoid resetting your cache here. If you are unit testing this class, then your tests will need some intimate knowledge of its inner workings, so just reset the cache. You can rarely change how your class works without adjusting your unit tests anyway.
If you feel that that still will create a maintenance burden, then make cache handling explicit by adding a class method:
class MyClass:
    cache_dict = {}

    @classmethod
    def _clear_cache(cls):
        # For testing only: a hook to clear the class-level cache.
        cls.cache_dict.clear()
Note that I still gave it a name with a leading underscore; this is not a method that a 3rd party should call, it is only there for tests. But now you have centralised clearing the cache, giving you control over how it is implemented.
If you are using the unittest framework to run your tests, clear the cache before each test in a TestCase.setUp() method. If you are using a different testing framework, that framework will have a similar hook. Clearing the cache before each test ensures that you always have a clean state.
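With unittest, that looks like the following; a minimal sketch, assuming the `_clear_cache()` helper shown above:

```python
import unittest

class MyClass:
    cache_dict = {}

    @classmethod
    def _clear_cache(cls):
        # For testing only: reset the class-level cache.
        cls.cache_dict.clear()

class MyClassTests(unittest.TestCase):
    def setUp(self):
        # Runs before every test method, so each test sees an
        # empty cache regardless of execution order.
        MyClass._clear_cache()

    def test_cache_starts_empty(self):
        self.assertEqual(MyClass.cache_dict, {})
```

Because setUp() runs before each individual test method, neither test_caching nor test_do_something can observe entries left behind by the other.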
Do take into account that your cache is not thread safe, if you are running tests in parallel with threading you'll have issues here. Since this also applies to the cache implementation itself, this is probably not something you are worried about right now.

You didn't put it in the question explicitly, but I'm assuming your test methods are in a subclass of unittest.TestCase called MyClassTests.
Explicitly set MyClass.cache_dict in each test method. If it's just a dictionary, without any getters/setters for it, you don't need a Mock.
If you want to guarantee that every test method is independent, set MyClass.cache_dict = {} in MyClassTests.setUp().

You need to make use of Python's built-in unittest.TestCase and implement the setUp and tearDown methods.
If you define setUp() and tearDown() in your tests, they will run before and after each individual test method, respectively.
Example:
import unittest

class MyClassTests(unittest.TestCase):
    def setUp(self):
        # Prepare any state needed for each test; if this is not considered
        # "fiddling", use this method to reset your cache to a fresh state each time
        MyClass.cache_dict = {}

    # Your test methods here

    def tearDown(self):
        # This will handle resetting any state, as needed
        pass
Check out the docs for more info: https://docs.python.org/2/library/unittest.html

One thing I can suggest is to use setUp() and tearDown() methods in your test class.
from unittest import TestCase

class MyTest(TestCase):
    def setUp(self):
        self.m = MyClass()
        # anything else you need to load before testing

    def tearDown(self):
        self.m = None

    def test_caching(self):
        self.m.get_info_server('foo')
        self.m.get_info_server('foo')
        mock_client.get_from_server.assert_called_once_with('foo')


pytest: get parametrize values during setup/teardown method

My automation framework uses pytest setup/teardown-style testing instead of fixtures. I also have several levels of classes:
BaseClass - highest; all tests inherit from it
FeatureClass - medium; all tests related to the program's feature inherit from it
TestClass - holds the actual tests
Edit: for example's sake, I changed the DB calls to simple prints.
I want to add DB reporting to all setups/teardowns, i.e. I want the general BaseClass setup_method to create a DB entry for the test, and teardown_method to update the entry with the results. I have tried, but I can't seem to get the values of the currently running test out of the method at run time. Is that even possible? And if not, how could I do it otherwise?
samples:
(in base.py)
class Base(object):
    test_number = 0

    def setup_method(self, method):
        Base.test_number += 1
        self.logger.info(color.Blue("STARTING TEST"))
        self.logger.info(color.Blue("Current Test: {}".format(method.__name__)))
        self.logger.info(color.Blue("Test Number: {}".format(self.test_number)))
        # --->here i'd like to do something with the actual test parameters<---
        self.logger.info("print parameters here")

    def teardown_method(self, method):
        self.logger.info(color.Blue("Current Test: {}".format(method.__name__)))
        self.logger.info(color.Blue("Test Number: {}".format(self.test_number)))
        self.logger.info(color.Blue("END OF TEST"))
(in my_feature.py)
class MyFeature(base.Base):
    def setup_method(self, method):
        # enable this feature in the program
        return True

(in test_my_feature.py)
class TestClass(my_feature.MyFeature):
    @pytest.mark.parametrize("fragment_length", [1, 5, 10])
    def test_my_first_test(self):
        # do stuff that is changed based on fragment_length
        assert verify_stuff(fragment_length)
So how can I get the parameters in setup_method of the base parent class of the testing framework?
The brief answer: no, you cannot do this directly. And yes, you can work around it.
A bit longer: these unittest-style setups & teardowns exist only for compatibility with unittest-style tests. They do not support pytest's fixtures, which are what make pytest nice.
Because of this, neither pytest nor pytest's unittest plugin provides any context to these setup/teardown methods. If you had a request object, or some other contextual object, you could fetch a fixture's value dynamically via request.getfuncargvalue('my_fixture_name') (renamed to request.getfixturevalue() in newer pytest versions).
However, all you have is self/cls, and method as the test method object itself (i.e. not pytest's node).
If you look inside of the _pytest/unittest.py plugin, you will find this code:
class TestCaseFunction(Function):
    _excinfo = None

    def setup(self):
        self._testcase = self.parent.obj(self.name)
        self._fix_unittest_skip_decorator()
        self._obj = getattr(self._testcase, self.name)
        if hasattr(self._testcase, 'setup_method'):
            self._testcase.setup_method(self._obj)
        if hasattr(self, "_request"):
            self._request._fillfixtures()
First, note that setup_method() is called in complete isolation from the pytest object (e.g. self as the test node).
Second, note that the fixtures are prepared only after setup_method() is called. So even if you could access them there, they would not be ready yet.
So, generally, you cannot do this without some trickery.
For the trickery, you have to define a pytest hook/hookwrapper once, and remember the pytest node being executed:
conftest.py or any other plugin:
import pytest

@pytest.hookimpl(hookwrapper=True)
def pytest_runtest_protocol(item, nextitem):
    item.cls._item = item
    yield
test_me.py:
import pytest

class Base(object):
    def setup_method(self, method):
        length = self._item.callspec.getparam('fragment_length')
        print(length)

class MyFeature(Base):
    def setup_method(self, method):
        super().setup_method(method)

class TestClass(MyFeature):
    @pytest.mark.parametrize("fragment_length", [1, 5, 10])
    def test_my_first_test(self, fragment_length):
        # do stuff that is changed based on fragment_length
        assert True  # verify_stuff(fragment_length)
Also note that MyFeature.setup_method() must call the parent's super(...).setup_method() for obvious reasons.
The cls._item will be set on each callspec (i.e. each function call with each parameter). You can also put the item or the specific parameters into some other global state, if you wish.
Also be careful not to save the field on item.instance. The instance of the class will be created later, and you would have to use the setup_instance/teardown_instance methods for that; otherwise the saved instance field is not preserved and will not be available as self._item in setup_method().
Here is the execution:
============ test session starts ============
......
collected 3 items
test_me.py::TestClass::test_my_first_test[1] 1
PASSED
test_me.py::TestClass::test_my_first_test[5] 5
PASSED
test_me.py::TestClass::test_my_first_test[10] 10
PASSED
============ 3 passed in 0.04 seconds ============

Pythonic approach to testing properties?

I've run into an issue writing unit tests for class attributes built with the @property decorator. I am using the excellent py.test package for testing and would definitely prefer sticking with it because it's easy to set up fixtures. The code I'm writing unit tests for looks something like this:
class Foo(object):
    def __init__(self, fizz, buzz, ...):  # and many more...
        self.fizz = fizz
        self.buzz = buzz
        self.is_fizzing = ...  # some function of fizz
        self.is_fizz_and_buzz = fizz and buzz
        ...

    @property
    def is_bar(self):
        return self.is_fizz_and_buzz and self.buzz > 5

    @property
    def foo_bar(self):
        # Note that this property uses the property above
        return self.is_bar + self.fizz

    # Long list of properties that call each other
The problem occurs when writing a unit test for a property which uses multiple other properties sometimes in long chains of up to four properties. For each unit test, I need to determine which inputs I need to set up and, even worse, it may be the case that those inputs are irrelevant for testing the functionality of that particular method. As a result, I end up testing many more cases than necessary for some of these "properties."
My intuition tells me that if these were actually "properties" they shouldn't have such long, involved calculations that need to be tested. Perhaps it would be better to separate out the actual methods (making them class methods) and write new properties which call these class methods.
Another issue with the current code (correct me if I'm wrong) is that every time a property is accessed (and most of these properties are accessed a lot) the attribute is recalculated. This seems terribly inefficient, and could be fixed by caching the computed value in a new set of properties.
Does it even make sense to test properties? In other words, should an attribute be calculated in a property, or should it only be set? Rewriting the code as I described above seems so unpythonic. Why even have properties in the first place? If properties are supposed to be simple enough not to require testing, why not just define the attribute in __init__?
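(For the recalculation concern specifically: one possible fix, on Python 3.8+, is functools.cached_property, which computes the value once per instance and then stores it. A minimal sketch, with invented attribute values and Foo cut down to one derived property:)

```python
from functools import cached_property

class Foo:
    def __init__(self, fizz, buzz):
        self.fizz = fizz
        self.buzz = buzz
        self.is_fizz_and_buzz = fizz and buzz

    @cached_property
    def is_bar(self):
        # Computed on first access, then cached in the instance __dict__.
        return bool(self.is_fizz_and_buzz and self.buzz > 5)

f = Foo(2, 7)
assert f.is_bar is True        # first access computes the value
assert "is_bar" in f.__dict__  # later accesses reuse the cached result
```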
Sorry if these are silly questions. I'm still fairly new to python.
Edit/update:
Would it be possible (easy/pythonic) to mock the object and then perform the test on the property/attribute?
The easiest thing to do imo is to just test the property, e.g. foo_bar, as a normal method via its getter function, e.g. Foo.foo_bar.fget, which @property automatically provides.
For me, using pytest & unittest.mock, this means instead of doing the below when foo_bar is a normal non-property method:
class TestFoo:
    def test_foo_bar(self):
        foo_mock = mock.create_autospec(Foo)
        # set up `foo_mock` however is necessary for testing
        assert Foo.foo_bar(foo_mock) == some_value
I would change the last line to do this:
assert Foo.foo_bar.fget(foo_mock) == some_value
If you need to stub/mock other properties in the process (like Foo.is_bar), take a look at unittest.mock.PropertyMock.
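Both techniques can be sketched as below; Foo is cut down to the two properties from the question, and the stubbed values are arbitrary:

```python
from unittest import mock

class Foo:
    @property
    def is_bar(self):
        return self.is_fizz_and_buzz and self.buzz > 5

    @property
    def foo_bar(self):
        return self.is_bar + self.fizz

# 1. Call the getter directly via .fget, passing a stub as `self`.
stub = mock.Mock(is_bar=True, fizz=3)
assert Foo.foo_bar.fget(stub) == 4  # True + 3

# 2. Patch is_bar with a PropertyMock on a real instance.
with mock.patch.object(Foo, "is_bar", new_callable=mock.PropertyMock,
                       return_value=True):
    f = Foo()
    f.fizz = 3
    assert f.foo_bar == 4
```

The .fget route avoids constructing a real Foo at all, while the PropertyMock route lets you test the real property protocol end to end.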

Python - What's the proper way to unittest methods in a different class?

I've written a module called Consumer.py, containing a class (Consumer). This class is initialized using a configuration file that contains different parameters it uses for computation, and the name of a log queue used for logging.
I want to write unit tests for this class, so I've made a script called test_Consumer.py with a class called TestConsumerMethods(unittest.TestCase).
Now, what I've done is create a new object of the Consumer class called cons, and then use that to call the class methods for testing. For example, Consumer has a simple method that checks if a file exists in a given directory. The test I've made looks like this:
import unittest
import Consumer
from Consumer import Consumer

cons = Consumer('mockconfig.config', 'logque1')

class TestConsumerMethods(unittest.TestCase):
    def test_fileExists(self):
        self.assertEqual(cons.file_exists('./dir/', 'thisDoesntExist.config'), False)
        self.assertEqual(cons.file_exists('./dir/', 'thisDoesExist.config'), True)
Is this the correct way to test my class? I mean, ideally I'd like to just use the class methods without having to instantiate the class, so as to "isolate" the code, right?
Don't make a global object to test against, as it opens up the possibility that some state will get set on it by one test, and affect another.
Each test should run in isolation and be completely independent from others.
Instead, either create the object in your test, or have it automatically created for each test by putting it in the setUp method:
import unittest
import Consumer
from Consumer import Consumer

class TestConsumerMethods(unittest.TestCase):
    def setUp(self):
        self.cons = Consumer('mockconfig.config', 'logque1')

    def test_fileExists(self):
        self.assertEqual(self.cons.file_exists('./dir/', 'thisDoesntExist.config'), False)
        self.assertEqual(self.cons.file_exists('./dir/', 'thisDoesExist.config'), True)
As far as whether you actually have to instantiate your class at all, that depends on the implementation of the class. I think generally you'd expect to instantiate a class to test its methods.
I'm not sure if that's what you're looking for, but you could add your tests at the end of your file like this:
#!/usr/bin/python
...
class TestConsumerMethods(...):
...
if __name__ == "__main__":
# add your tests here.
By executing the file containing the class definition, you execute the tests you put inside the if statement. The tests will only run when you execute the file directly, not when you import the class from it.
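A self-contained sketch of that layout; the Consumer here is a stand-in stub (the real one takes the config file and log queue arguments), and unittest.main() inside the guard is what actually discovers and runs the tests:

```python
import os
import unittest

class Consumer:
    # Stand-in for the real Consumer class, so the sketch runs on its own.
    def file_exists(self, directory, name):
        return os.path.isfile(os.path.join(directory, name))

class TestConsumerMethods(unittest.TestCase):
    def setUp(self):
        self.cons = Consumer()

    def test_file_exists_is_false_for_missing_file(self):
        self.assertFalse(self.cons.file_exists('./dir/', 'thisDoesntExist.config'))

if __name__ == "__main__":
    # Runs only when the file is executed directly, not when imported.
    unittest.main(exit=False)
```

Plain unittest.main() also calls sys.exit() with the result; exit=False suppresses that, which is convenient when running inside an interpreter session.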

Python unittest: having an external function for repetitive code in different modules

Let's say I have a web service ServA written in python and I want to write some unit tests.
ServA does several things (with different views) but all the views produce a similar log of the requests.
These tests should check the logs of ServA in the different circumstances, so there is a lot of repeated code for these unit tests (the structure of the logs is always the same).
My idea is to write a generic function to avoid repeating code, and I have found this other question that solves the problem by creating a generic method inside the unittest class.
But now what if I have another web service ServB and another set of tests and I need to do the same?
Is there a way to reuse the generic function?
Should I simply create a test class with the method to check the logs like this:
class MyMetaTestClass(unittest.TestCase):
    def check_logs(self, log_list, **kwargs):
        # several self.assertEqual calls
        ...
and then the tests for ServA and ServB inherit this class like this:
class TestServA(MyMetaTestClass):
    def test_log1(self):
        logs = extract_the_logs()
        self.check_logs(logs, log_1=1, log2='foo')
Is there another (better) way?
You can inherit from a common base class like you did, but the base class doesn't have to be a TestCase subclass itself - you can just make it a mixin class:
# testutils.py
class LogCheckerMixin(object):
    """This class adds log-checking abilities to a TestCase."""
    def check_logs(self, logs, **kw):
        self.assertWhatever(something)

# myserver/tests.py
import unittest
from testutils import LogCheckerMixin

class MyServerTest(unittest.TestCase, LogCheckerMixin):
    def test_log1(self):
        logs = extract_the_logs()
        self.check_logs(logs, log_1=1, log2='foo')
Or you can just make it a plain function and call it from your test:
# testutils.py
def check_logs(testcase, logs, **kw):
    testcase.assertWhatever(something)

# myserver/tests.py
import unittest
from testutils import check_logs

class MyServerTest(unittest.TestCase):
    def test_log1(self):
        logs = extract_the_logs()
        check_logs(self, logs, log_1=1, log2='foo')
It's rather opinion-based, but what I've seen most often is a set of separate helper classes which can be used by any test suite.
I usually create an infrastructure folder/namespace/module/package (in the test project) so as not to confuse tests, which should concern only domain aspects, with technical issues. In this case you could have something like this:
logHelper.py:
def assert_something_about_logs(logs, something):
    # utility code
    # do an assertion
    ...

def assert_something_else(logs):
    # utility code
    # do an assertion
    ...
etc.
This way your unit tests will read well, for example assert_that_logs_contain(errorMessage="Something went wrong"), and all the technicalities are hidden so they don't obscure the test.

unittest housekeeping in python

I've got a unittest test file containing four test classes, each of which is responsible for running tests on one specific class. Each test class makes use of exactly the same setUp and tearDown methods. The setUp method is relatively large, initializing about twenty different variables, while the tearDown method simply resets these twenty variables to their initial state.
Up to now I have been putting the twenty variables in each of the four setUp methods. This works, but is not very easily maintained; if I decide to change one variable, I must change it in all four setUp methods. My search for a more elegant solution has failed, however. Ideally I'd just like to enter my twenty variables once, call them up in each of my four setUp methods, then tear them down after each of my test methods. With this end in mind I tried putting the variables in a separate module and importing it in each setUp, but of course the variables are then only available in the setUp method (plus, though I couldn't put my finger on the exact reasons, this felt like a potentially problem-prone way of doing it).
from unittest import TestCase

class Test_Books(TestCase):
    def setUp(self):
        # a quick and easy way of making my variables available at the
        # class level without typing them all in
        ...

    def test_method_1(self):
        # setUp variables available here in their original state
        # ... mess about with the variables ...
        # reset variables to original state
        ...

    def test_method_2(self):
        # setUp variables available here in their original state
        # etc...
        ...

    def tearDown(self):
        # reset variables to original state without having to type them all in
        ...

class Books():
    def method_1(self):
        pass

    def method_2(self):
        pass
An alternative is to put the twenty variables into a separate class, set the values in that class's __init__, and then access the data through instance attributes; the only place the variables are set is the __init__, so the code is not duplicated.
class Data:
    def __init__(self):
        self.x = ...
        self.y = ...

class Test_Books(TestCase):
    def setUp(self):
        self.data = Data()

    def test_method_1(self):
        value = self.data.x  # get the data from the variable
This solution makes more sense if the twenty pieces of data are related to each other. Also if you have twenty pieces of data I would expect them to be related and so they should be combined in the real code not just in test.
What I would do is make the four test classes each a subclass of one base test class, which itself is a subclass of TestCase. Then put setUp and tearDown in the base class and the rest in the subclasses.
e.g.
class AbstractBookTest(TestCase):
    def setUp(self):
        ...

class Test_Book1(AbstractBookTest):
    def test_method_1(self):
        ...
An alternative is just to make one class instead of the four you have, which seems a bit more logical here unless you have a reason for the split.
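A runnable sketch of the base-class layout, with the twenty variables cut down to two invented ones for brevity:

```python
import unittest

class AbstractBookTest(unittest.TestCase):
    def setUp(self):
        # Shared fixture state: defined once here, rebuilt before every
        # test method in every subclass.
        self.title = "Moby-Dick"
        self.pages = 635

class TestBook1(AbstractBookTest):
    def test_mutation(self):
        self.pages += 1  # mess about with the variables freely...
        self.assertEqual(self.pages, 636)

class TestBook2(AbstractBookTest):
    def test_pristine_state(self):
        # ...each test still starts from the values set in setUp().
        self.assertEqual(self.pages, 635)
```

Because unittest creates a fresh instance and calls setUp() for every test method, no explicit tearDown reset is needed for plain instance attributes like these.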
