I am unhappy with TestCase.setUp(): If a test has a decorator, setUp() gets called outside the decorator.
I am not new to Python and I can help myself, but I am looking for a best-practice solution.
TestCase.setUp() feels like a way to handle stuff from before decorators were introduced in Python.
What is a clean solution to set up a test, if the setup should happen inside the decorator of the test method?
This would be a solution, but setUp() gets called twice:
class Test(unittest.TestCase):
    def setUp(self):
        ...

    @somedecorator
    def test_foo(self):
        self.setUp()
Example use case: somedecorator opens a database connection and it works like a with-statement: It can't be broken into two methods (setUp(), tearDown()).
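For illustration, a decorator of that shape might look roughly like this (a hypothetical sketch; the real somedecorator comes from another application, and open_database_connection is an assumed helper):

import functools

def somedecorator(func):
    # Hypothetical sketch: the decorator opens a resource around the call,
    # like a with-statement, so it cannot be split into setUp()/tearDown().
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        with open_database_connection() as connection:  # assumed helper, illustration only
            return func(*args, **kwargs)
    return wrapper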
Update
somedecorator is from a different application. I don't want to modify it. I could do some copy+paste, but that's not a good solution.
Ugly? Yes:
import inspect

class Test(unittest.TestCase):
    def actualSetUp(self):
        ...

    def setUp(self):
        if not any('test_foo' in frame for frame in inspect.stack()):
            self.actualSetUp()

    @somedecorator
    def test_foo(self):
        self.actualSetUp()
Works by inspecting the current stack and determining whether we are in any function called 'test_foo', so that should obviously be a unique name.
Probably cleaner to just create another TestCase class specifically for this test, where you can have no setUp method.
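For example, a minimal sketch of that (assuming test_foo is the only test that needs the decorator):

class TestFoo(unittest.TestCase):
    # No setUp() here, so nothing runs outside the decorator.

    @somedecorator
    def test_foo(self):
        # Do the setup work here, inside the decorated test.
        ...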
Related
I am trying to get documentation to build on a ReadTheDocs installation via Sphinx. The classes I am documenting inherit from a larger framework, which I cannot easily install and thus would like to mock. However, Mock seems to be overly greedy and also mocks the classes I would actually like to document. The code in question is as follows:
# imports that need to be mocked
from big.framework import a_class_decorator, ..., SomeBaseClass

@a_class_decorator("foo", "bar")
class ToDocument(SomeBaseClass):
    """ Lots of nice documentation here
    which will never appear
    """

    def a_function_that_is_being_documented(self):
        """ This will show up on RTD
        """
I hit the point where I make sure I don't blindly mock the decorator, but am instead explicit in my Sphinx conf.py. Otherwise, I follow RTD suggestions for mocking modules:
import sys
from unittest.mock import MagicMock

class MyMock(MagicMock):
    @classmethod
    def a_class_decorator(cls, classid, version):
        def real_decorator(theClass):
            print(theClass)
            return theClass
        return real_decorator

    @classmethod
    def __getattr__(cls, name):
        return MagicMock()

sys.modules['big.framework'] = MyMock()
Now I would expect the printout to show something referring to the class I want to document, e.g. <ToDocument ...>.
However, I always get a Mock for that class as well, <MagicMock spec='str' id='139887652701072'>, which of course does not have any of the documentation I am trying to build. Any ideas?
Turns out the problem was inheritance from a mocked class. Being explicit about the base class and creating an empty class

class SomeBaseClass:
    pass

to patch in from conf.py solved the problem.
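A minimal sketch of what that patching might look like in conf.py (assuming the MyMock class from above; SomeBaseClass is the name the documented code imports from big.framework):

import sys

class SomeBaseClass:
    # Real, empty stand-in so ToDocument inherits from a plain class,
    # not from a MagicMock.
    pass

mocked_framework = MyMock()                      # MyMock as defined above
mocked_framework.SomeBaseClass = SomeBaseClass   # patch in the real class
sys.modules['big.framework'] = mocked_framework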
I have a class with a dictionary that is used to cache responses from the server for a particular input. Since this is used for caching purposes, it is kept as a class variable.
class MyClass:
    cache_dict = {}

    def get_info_server(self, arg):
        if arg not in self.cache_dict:
            self.cache_dict[arg] = Client.get_from_server(arg)
        return self.cache_dict[arg]

    def do_something(self, arg):
        # Do something based on get_info_server(arg)
        ...
And when writing unit tests, since the dictionary is a class variable, the values are cached across test cases.
Test Cases
# Assume that Client is mocked.
def test_caching():
    m = MyClass()
    m.get_info_server('foo')
    m.get_info_server('foo')
    mock_client.get_from_server.assert_called_once_with('foo')

def test_do_something():
    m = MyClass()
    mock_client.get_from_server.return_value = 'bar'
    m.do_something('foo')  # This internally calls get_info_server('foo')
If test_caching executes first, the cached value will be some mock object. If test_do_something executes first, then the assertion that the test case is called exactly once will fail.
How do I make the tests independent of each other, besides manipulating the dictionary directly? Resetting the dictionary feels like it requires intimate knowledge of the inner workings of the code, and those inner workings might change later; all I need to verify is the API itself, not the implementation behind it.
You can't really avoid resetting your cache here. If you are unittesting this class, then your unittest will need to have an intimate knowledge of the inner workings of the class, so just reset the cache. You rarely can change how your class works without adjusting your unittests anyway.
If you feel that that still will create a maintenance burden, then make cache handling explicit by adding a class method:
class MyClass:
    cache_dict = {}

    @classmethod
    def _clear_cache(cls):
        # For testing only: a hook to clear the class-level cache.
        cls.cache_dict.clear()
Note that I still gave it a name with a leading underscore; this is not a method that a 3rd party should call, it is only there for tests. But now you have centralised clearing the cache, giving you control over how it is implemented.
If you are using the unittest framework to run your tests, clear the cache before each test in a TestCase.setUp() method. If you are using a different testing framework, that framework will have a similar hook. Clearing the cache before each test ensures that you always have a clean state.
Do take into account that your cache is not thread safe, if you are running tests in parallel with threading you'll have issues here. Since this also applies to the cache implementation itself, this is probably not something you are worried about right now.
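A minimal sketch of that hook being used from setUp() (assuming the tests live in a unittest.TestCase subclass, here called MyClassTests):

import unittest

class MyClassTests(unittest.TestCase):
    def setUp(self):
        # Start every test from a clean, empty cache.
        MyClass._clear_cache()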
You didn't put it in the question explicitly, but I'm assuming your test methods are in a subclass of unittest.TestCase called MyClassTests.
Explicitly set MyClass.cache_dict in the method under test. If it's just a dictionary, without any getters / setters for it, you don't need a Mock.
If you want to guarantee that every test method is independent, set MyClass.cache_dict = {} in MyClassTests.setUp().
You need to make use of Python's built-in unittest.TestCase and implement setUp and tearDown methods.
If you define setUp() and tearDown() in your tests, they will execute before and after each individual test method, respectively.
Example:
import unittest

# set up any global, consistent state here

class MyClassTests(unittest.TestCase):  # subclass unittest.TestCase here
    def setUp(self):
        # Prepare your state for each test. If this is not considered
        # "fiddling", use this method to set your cache to a fresh state each time.
        MyClass.cache_dict = {}

    ### Your test methods here

    def tearDown(self):
        # This will handle resetting the state, as needed.
        pass
Check out the docs for more info: https://docs.python.org/2/library/unittest.html
One thing I can suggest is to use setUp() and tearDown() methods in your test class.
from unittest import TestCase

class MyTest(TestCase):
    def setUp(self):
        self.m = MyClass()
        # anything else you need to load before testing

    def tearDown(self):
        self.m = None

    def test_caching(self):
        self.m.get_info_server('foo')
        self.m.get_info_server('foo')
        mock_client.get_from_server.assert_called_once_with('foo')
Code first:
from abc import abstractmethod

class SomeInterfaceishClass():
    @abstractmethod
    @staticmethod
    def foo(bar):
        pass

class SomeClass(SomeInterfaceishClass):
    @staticmethod
    def foo(bar):
        print("implementation here")

class EventHandler():
    @staticmethod
    @Multitasking.threaded
    def foo(bar):
        pass
I'm writing a frameworkish thingy where the end-user is forced to implement a few methods (only one here for simplicity). Therefore I have defined the SomeInterfaceishClass class with an abstract method. My problem is that I will not run the method from this class; instead, I'd like to run it from the EventHandler class. As you can see, it has a decorator to make it run async, and I think this is where the problem arises.
I know that I can call EventHandler.foo = getattr(self, 'foo') in the constructor of SomeInterfaceishClass, but when I do this the complete function gets overridden (it loses its decorator).
I'd like to keep the end-users code as clean as possible and therefore don't really want to add the decorator in SomeClass.
Is there any way to accomplish this? For example is there a way to add an implementation to a class method, rather than add a method to a class?
Just to be clear: I want to add it to the EventHandler class, not an instance of it.
Thank you all! :)
I have an object that needs to be initialised by reading a config file and environment variables. It has class methods, and I want to make sure the object is initialised before any classmethod is executed.
Is there any way to initialise all classes of this sort of nature? I will probably have quite a few of these in my code.
I'm coming from a Java/Spring background where simply putting @Service on top of the class or @PostConstruct over the initializer method would make sure it's called. If there's not a clean way to do this in normal Python, is there a framework that'll make this easier?
So the class looks something like this
class MyClass(object):
    def setup(self):
        # read variables from the environment and config files
        ...

    @classmethod
    def my_method(cls, params):
        # run some code based on params and variables initialised during setup
        ...
You could always use a simple singleton implementation, which sounds like what you're trying to do. This post has examples of how to do it and some thoughts about whether you really need a singleton.
Is there a simple, elegant way to define singletons?
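For instance, a minimal lazily-initialised singleton sketch (illustrative only, building on the setup() method from the question):

class MyClass(object):
    _instance = None

    @classmethod
    def instance(cls):
        # Create the singleton on first use, then reuse it everywhere.
        if cls._instance is None:
            cls._instance = cls()
            cls._instance.setup()  # read environment variables and config files once
        return cls._instance

    def setup(self):
        # read variables from the environment and config files
        ...

Any classmethod that needs the initialised state can then start by calling MyClass.instance().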
Option #2.
class MyClass(object):
    # No setup()
    setting_A = "foo"
    setting_B = "bar"
    dict_of_settings = {"foo": "bar"}
    # basically you can define some class variables here

    @classmethod
    def my_method(cls, params):
        print(cls.setting_B)  # etc
This option avoids the use of any global code at all. However it comes at the price that your settings are no longer nicely encapsulated within an instance.
Yes.
class MyClass(object):
    def __init__(self):  # instead of setup()
        # read variables from the environment and config files
        ...

    @classmethod
    def my_method(cls, params):
        # run some code based on params and variables initialised during setup
        ...

MyClass._instance = MyClass()
Basically, when you first import/load the file containing MyClass, it will run the __init__ constructor once for its internal _instance. That instance (singleton) will then be accessible from all the other class methods.
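For example, my_method could then read the state initialised at import time (a sketch; some_config_value is an illustrative attribute name, not part of the original code):

class MyClass(object):
    ...

    @classmethod
    def my_method(cls, params):
        # The singleton created at import time holds the parsed config.
        config_value = cls._instance.some_config_value  # illustrative attribute
        ...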
Problem
Yes, I know you shouldn't send arguments to a unittest, but in this case I have to so that it can access a piece of hardware within the current framework. For reasons whose details are irrelevant, I can't use a simple board = myBoard.getBoard() method within the tests, otherwise I wouldn't be here asking you all.
Horrible Attempt 1
class Test(object):
    @staticmethod
    def __call__(board):
        global _board
        _board = board
        logging.info('Entering a Unit Test')
        suite = unittest.TestLoader()
        suite = suite.loadTestsFromTestCase(TestCase)
        unittest.TextTestRunner().run(suite)
This method will be called upon to perform a unittest at some point. This is the way things are set out in the current framework; I'm just following the rules.
You can see here I assign a global variable (horrible). This can then be accessed by the test methods easily:
class TestCase(unittest.TestCase):
    def runTest(self):
        logging.critical('running')

    def setUp(self):
        logging.critical('setup')

    def test_Me(self):
        logging.critical('me')
The result is that it won't enter runTest, but it will run setUp before each test_ method. Pretty much what I'd like; I can deal with it not entering runTest.
Horrible Attempt 2
I'd like to do the above but without global variables so I tried this method:
class Test(object):
    @staticmethod
    def __call__(board):
        logging.info('Entering a basic Unit Test')
        suite = unittest.TestSuite()  # creates a suite for testing
        suite.addTest(BasicCase(board))
        unittest.TextTestRunner().run(suite)

class BasicCase(unittest.TestCase):
    def __init__(self, board):
        self.board = board
        logging.critical('init')
        super(BasicCase, self).__init__()

    def runTest(self):
        logging.critical('runtest')

    def setUp(self):
        logging.critical('reset')
        infra.Reset.__call__(self.board)  # allows me to pass the board in a much nicer way

    def test_Connect(self):
        logging.critical('connection')
The problem with the above is that it will obviously only do init, setUp and runTest, but it'll miss any test_ methods.
If I add:
suite.addTest(BasicCase(board).test_Connect)
Then it unnecessarily calls init twice, and I have to add an extra member variable to test_Connect to accept the test object. I can deal with this too, as I can return a list of the test_ methods, but it just seems like a pain to do. Also, I don't think it runs setUp every time it enters a new test_ method. Not ideal.
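For reference, the stock TestCase constructor takes the test method name, and unittest builds one instance per test_ method, running setUp before each one; a sketch of threading the board through while keeping that signature (an illustration, not the framework's required pattern):

class BasicCase(unittest.TestCase):
    def __init__(self, board, methodName='runTest'):
        # Forward the method name so one instance is built per test,
        # and setUp() still runs before each of them.
        super(BasicCase, self).__init__(methodName)
        self.board = board

# inside Test.__call__:
suite = unittest.TestSuite()
suite.addTest(BasicCase(board, 'test_Connect'))
unittest.TextTestRunner().run(suite)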
So ...
So is there some other method I can use that allows me to quickly run everything like a unittest but allows me to pass the board object?
If I should be using a different Python module, then please say, but I'm using unittest due to the lovely output and assert methods I get.