I've got a unittest test file containing four test classes, each of which is responsible for running tests on one specific class. Each test class makes use of exactly the same setUp and tearDown methods. The setUp method is relatively large, initializing about twenty different variables, while the tearDown method simply resets these twenty variables to their initial state.
Up to now I have been putting the twenty variables in each of the four setUp methods. This works, but is not easily maintained; if I decide to change one variable, I must change it in all four setUp methods. My search for a more elegant solution has failed, however. Ideally I'd like to enter my twenty variables once, call them up in each of my four setUp methods, then tear them down after each of my test methods. With this end in mind I tried putting the variables in a separate module and importing it in each setUp, but of course the variables were then only available inside the setUp method (plus, though I couldn't put my finger on the exact reasons, this felt like a potentially problem-prone way of doing it).
from unittest import TestCase

class Test_Books(TestCase):
    def setUp(self):
        # a quick and easy way of making my variables available at the class level
        # without typing them all in
        ...

    def test_method_1(self):
        # setUp variables available here in their original state
        # ... mess about with the variables ...
        # reset variables to original state
        ...

    def test_method_2(self):
        # setUp variables available here in their original state
        # etc...
        ...

    def tearDown(self):
        # reset variables to original state without having to type them all in
        ...

class Books:
    def method_1(self):
        pass

    def method_2(self):
        pass
An alternative is to put the twenty variables into a separate class, set the values in the class's __init__, and then access the data as instance attributes. That way the only place the variables are set is in __init__, and the code is not duplicated.
class Data:
    def __init__(self):
        self.x = ...
        self.y = ...

class Test_Books(TestCase):
    def setUp(self):
        self.data = Data()

    def test_method_1(self):
        value = self.data.x  # get the data from the attribute
This solution makes more sense if the twenty pieces of data are related to each other. And if you have twenty pieces of data, I would expect them to be related, in which case they should be combined in the real code too, not just in the tests.
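If the data really does belong together, one option (an assumption on my part, requiring Python 3.7+) is to group it in a dataclass, so the fields are declared exactly once; x and y below are stand-ins for the real variables:

from dataclasses import dataclass, field
from unittest import TestCase

@dataclass
class Data:
    # stand-in fields; replace with the real twenty variables
    x: int = 0
    y: list = field(default_factory=list)

class Test_Books(TestCase):
    def setUp(self):
        # a fresh Data instance per test, so no manual reset is needed
        self.data = Data()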
What I would do is make the four test classes each a subclass of one base test class, which is itself a subclass of TestCase. Then put setUp and tearDown in the base class and the rest in the subclasses.
e.g.
class AbstractBookTest(TestCase):
    def setUp(self):
        ...

class Test_Book1(AbstractBookTest):
    def test_method_1(self):
        ...
An alternative is to make just one class instead of the four you have, which seems a bit more logical here unless you have a reason for the split.
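Combining the two ideas, a minimal sketch of the shared base class (the variable names here are placeholders for the real twenty):

from unittest import TestCase

class AbstractBookTest(TestCase):
    def setUp(self):
        # the variables are declared once, here; setUp runs before every
        # test method, so each test starts from a clean state and an
        # explicit reset in tearDown becomes unnecessary
        self.title = "Example"
        self.pages = 100
        # ... the other eighteen variables ...

class Test_Book1(AbstractBookTest):
    def test_method_1(self):
        self.assertEqual(self.pages, 100)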
I have a question about multiple class inheritance in Python. I think I have already implemented it correctly; however, it is somehow not in line with my usual understanding of inheritance (the usage of super(), specifically), and I am not sure whether this could lead to errors, attributes not being updated, etc.
So, let me try to describe the basic problem clearly:
I have three classes: Base, First, and Second.
Both First and Second need to inherit from Base.
Second also inherits from First.
Base is an external module that provides certain base methods needed for First and Second to function correctly.
First is a base class for Second, containing methods that I would otherwise have to write down repeatedly in Second.
Second is the actual class that I use. It implements additional methods and attributes. Second is a class whose design may vary a lot, so I want to change it flexibly without having all the code from First written in Second.
The most important point about Second, however, is the following: as visible below, in Second's __init__ I first want to initialize Base and perform some operations that require methods from Base. After that, I would like to launch the operations in First's __init__, which manipulate some of the parameters instantiated in Second. For that, I call First's __init__ at the end of Second's __init__ body.
You can see how the variable a is manipulated throughout the initialization of Second. The current behavior is what I want, but the structure of my code looks somehow weird, which is why I am asking.
Why the hell do I want to do this? Think of the First class as having many methods and also performing many operations (on parameters from Second) in its __init__ body. I don't want all those methods in the body of Second and all those operations in Second's __init__ (here, the parameter a). First is a class that will rarely change, so it is better for clarity and compactness to move it out to another file, at least in my opinion ^^. Also, due to the sequence of calls in Second's __init__, I did not find another way to realize it.
Now the code:
class Base():
    def __init__(self):
        pass

    def base_method1(self):
        print('Base Method 1')

    def base_method2(self):
        pass
    # ...

class First(Base):
    def __init__(self):
        super().__init__()
        print('Add in Init')
        self.first_method1()

    def first_method1(self):
        self.a += 1.5

    def first_method2(self):
        pass
    # ...

class Second(First):
    def __init__(self, a):
        # set parameters
        self.a = a

        # call Base's __init__
        Base.__init__(self)

        # some operations that rely on Base methods
        self.base_method1()
        print(self.a)

        # call First's __init__ and perform the operations in it,
        # which must follow AFTER the body of Second's __init__
        First.__init__(self)
        print(self.a)

        # checking whether Second has inherited the method(s) from First
        print('Add by calling method')
        self.first_method1()
        print(self.a)

sec = Second(0)
The statement sec = Second(0) prints:
Base Method 1
0
Add in Init
1.5
Add by calling method
3.0
I hope it is more or less clear; if not, I am glad to clarify!
Thanks, I appreciate any comment!
Best, JZ
So - the basic problem here is that you are trying something for which multiple inheritance is not a solution in itself - however, there are ways to structure your code so that it works.
When using multiple inheritance properly in Python, you should call super().method() exactly once in each defined method (except in the root class), and you should not call specific versions of a method by hardcoding an ancestor, as you do with Base.__init__() in Second. For a start, with this design as is, Base.__init__() will run twice each time you instantiate Second.
The main problem in your assumptions lies in
I firstly want to inherit from Base and perform some operations that require methods from Base. Then, after that, I would like to launch the operations in the init of First, which manipulate some of the parameters that are instantiated in Second. For that, I inherit from First at the end of Second's init-body.
So - if you call super(), as you will have noticed, the method in First runs before the method in Base, and writing Second as class Second(Base, First) cannot change that. Python linearizes all ancestor classes when there is multiple inheritance, so that there is always a predictable, deterministic order in which each ancestor class shows up exactly once, used to find attributes and call methods (the "MRO", or Method Resolution Order) - and the only possible order with your arrangement is Second -> First -> Base. (In fact, class Second(Base, First) would raise a TypeError, because Python cannot place Base ahead of its own subclass First in the MRO.)
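You can inspect that linearization directly; a quick sketch using stripped-down versions of the classes from the question:

class Base:
    pass

class First(Base):
    pass

class Second(First):
    pass

# each ancestor appears exactly once, in a deterministic order:
print(Second.__mro__)
# (<class '__main__.Second'>, <class '__main__.First'>,
#  <class '__main__.Base'>, <class 'object'>)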
Now, if you really want to perform initialization of certain parts in Base prior to running the initialization in First, that is where "more than multiple inheritance" comes into play: you have to define "slots" (hook methods) in your class - that is a design decision you make yourself, nothing ready-made in the language: agree on a convention of methods that Base.__init__ will call at various stages of the initialization, so that they are activated at the proper time.
This way you could even skip having a First.__init__ method - just have some methods you know will be called on the child by Base.__init__, and override those as you need.
The language itself uses this strategy when it offers both a __new__ and an __init__ method: each one runs at a different stage of the initialization of a new instance.
Note that by doing it this way, you don't even need multiple inheritance for this case:
class Base():
    def __init__(self):
        # initial step
        self.base_method1()
        ...
        # first initialization step:
        self.init_1()
        # intermediate common initialization
        ...
        # second initialization step
        ...

    def init_1(self):
        pass

    def init_2(self):
        pass

    def base_method1(self):
        pass
    # ...

class First(Base):
    # __init__ actually unneeded here:
    # def __init__(self):
    #     super().__init__()

    def init_1(self):
        print('Add in Init')
        self.first_method1()
        return super().init_1()

    # init_2 not used

    def first_method1(self):
        self.a += 1.5

    def first_method2(self):
        pass
    # ...

class Second(First):
    def __init__(self, a):
        # set parameters
        self.a = a
        # some operations that rely on Base methods
        super().__init__()  # will call Base.__init__, which calls "base_method1"
        # if you need access to self.a _before_ it is changed by First, implement
        # the "init_1" slot in this "Second" class (done below).
        # At this point, the code in First that updates the attribute has already run
        print(self.a)

    def init_1(self):
        # this part of the code will be called by Base.__init__
        # _before_ First.init_1 is executed:
        print("Initial value of self.a", self.a)
        # delegate to the "init_1" stage of First:
        return super().init_1()

sec = Second(0)
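Under these assumptions, sec = Second(0) should print the initial value 0 (from the init_1 slot in Second), then 'Add in Init' (from First.init_1), and finally 1.5 back in Second.__init__, which is the ordering the question asked for, with single inheritance only.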
Here's my problem:
I have a class, and I have two objects of that class: ObjectOne and ObjectTwo.
I'd like my class to have certain methods for ObjectOne and different methods for ObjectTwo.
I'd also like to choose those methods from a variety depending on some condition.
And of course, I need to be able to call those methods from the outside code.
Here is how I see the solution (just logic, no code):
I make a default class, and I make a list of functions defined somewhere.
If 'some condition' is true, I construct a child class that takes one of those functions and adds it to the class as a method. Otherwise I add some default set of methods. Then I make ObjectOne from this child class.
The question is: can I do that at all? And how? And how do I call such a method once it is added? They would all surely be named differently...
I do not ask for a piece of working code here. If you could give me a hint on where to look or maybe a certain topic to learn, this would do just fine!
PS: In case you wonder, the context is this: I am making a simple game prototype, and my objects represent two game units (characters) that fight each other automatically, something like auto-chess. Each unit may have unique abilities and therefore should act (make decisions on the battlefield) depending on the abilities it has. At first I tried to make a unified decision-making routine that would include all possible abilities at once (such as: if hasDoubleStrike, else if..., etc.). But it turned out to be a very complex task, because there are tens of abilities overall and each unit may have any two of them, so the number of combinations is... vast. So now I am trying to distribute this logic over the separate units: each one would 'know' only about its own two abilities.
I mean, I believe this is what would generally be referred to as a bad idea, but... you could have an argument passed into the class's constructor and then define the behavior/existence of a method depending on that condition. Like so:
class foo():
    def __init__(self, condition):
        if condition:
            self.func = lambda: print('baz')
        else:
            self.func = lambda: print('bar')

if __name__ == '__main__':
    obj1 = foo(True)
    obj2 = foo(False)
    obj1.func()
    obj2.func()
Outputs:
baz
bar
You'd likely be better off just having different classes or setting up some sort of class hierarchy.
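Given the game context in the question, one common way to structure that hierarchy is composition: give each unit its ability objects and let its decision routine consult only those. A minimal sketch, where Ability, DoubleStrike, Heal, and Unit are all hypothetical names:

class Ability:
    """Base interface for one ability (hypothetical)."""
    def act(self, unit):
        raise NotImplementedError

class DoubleStrike(Ability):
    def act(self, unit):
        print(f"{unit.name} strikes twice")

class Heal(Ability):
    def act(self, unit):
        print(f"{unit.name} heals itself")

class Unit:
    def __init__(self, name, abilities):
        self.name = name
        self.abilities = abilities  # any two Ability instances

    def take_turn(self):
        # each unit only "knows" its own abilities; no giant if/else chain
        for ability in self.abilities:
            ability.act(self)

one = Unit("ObjectOne", [DoubleStrike(), Heal()])
one.take_turn()

This keeps the per-unit decision logic limited to the two abilities the unit actually owns, which is what the PS above is aiming for.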
So in the end the best solution was the classical factory method and a factory class. Like this:
import abc
import Actions  # a module that works as a library of standard actions

def make_creature(some_params):
    creature_factory = CreatureFactory()
    tempCreature = creature_factory.make_creature(some_params)
    return tempCreature

class CreatureFactory:
    def make_creature(self, some_params):
        ...
        if "foo" in some_params:
            return fooChildCreature()

class ParentCreature(metaclass=abc.ABCMeta):
    someStaticParams = 'abc'

    @abc.abstractmethod
    def decisionMaking(self):
        pass

class fooChildCreature(ParentCreature):
    def decisionMaking(self):
        Actions.foo_action()
        Actions.bar_action()
        # some creature-specific decision making here that calls
        # the same standard functions from 'Actions'

NewCreature = make_creature(some_params)
This is not ideal; it still requires a lot of manual work to define the decision making for the various kinds of creatures, but it is still WAY better than anything else. Thank you very much for this advice.
I have a class with a dictionary that is used to cache responses from a server for particular inputs. Since it is used for caching purposes, it is kept as a class variable.
class MyClass:
    cache_dict = {}

    def get_info_server(self, arg):
        if arg not in self.cache_dict:
            self.cache_dict[arg] = Client.get_from_server(arg)
        return self.cache_dict[arg]

    def do_something(self, arg):
        # Do something based on get_info_server(arg)
        ...
And when writing unit tests, since the dictionary is a class variable, the values are cached across test cases.
Test Cases
# Assume that Client is mocked.
def test_caching():
    m = MyClass()
    m.get_info_server('foo')
    m.get_info_server('foo')
    mock_client.get_from_server.assert_called_once_with('foo')

def test_do_something():
    m = MyClass()
    mock_client.get_from_server.return_value = 'bar'
    m.do_something('foo')  # This internally calls get_info_server('foo')
If test_caching executes first, the cached value will be some mock object. If test_do_something executes first, then the assertion that get_from_server was called exactly once will fail.
How do I make the tests independent of each other, short of manipulating the dictionary directly? That feels like it requires intimate knowledge of the inner workings of the code; what if the inner workings were to change later? All I want to verify is the API itself, without relying on the internals.
You can't really avoid resetting your cache here. If you are unit testing this class, then your unit tests will need some intimate knowledge of the inner workings of the class, so just reset the cache. You can rarely change how your class works without adjusting your unit tests anyway.
If you feel that this still creates a maintenance burden, make cache handling explicit by adding a class method:
class MyClass:
    cache_dict = {}

    @classmethod
    def _clear_cache(cls):
        # for testing only, a hook to clear the class-level cache
        cls.cache_dict.clear()
Note that I still gave it a name with a leading underscore; this is not a method that a 3rd party should call, it is only there for tests. But now you have centralised clearing the cache, giving you control over how it is implemented.
If you are using the unittest framework to run your tests, clear the cache before each test in a TestCase.setUp() method. If you are using a different testing framework, that framework will have a similar hook. Clearing the cache before each test ensures that you always have a clean state.
Do take into account that your cache is not thread-safe; if you are running tests in parallel with threading, you'll have issues here. Since this also applies to the cache implementation itself, it is probably not something you are worried about right now.
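Putting the pieces together, a sketch of the test side (assuming the _clear_cache hook above; 'mymodule' is a placeholder for wherever MyClass and Client actually live):

import unittest
from unittest import mock

from mymodule import MyClass  # 'mymodule' is a placeholder

class MyClassTests(unittest.TestCase):
    def setUp(self):
        # runs before every test method, so each test starts with an empty cache
        MyClass._clear_cache()

    def test_caching(self):
        with mock.patch('mymodule.Client') as mock_client:
            m = MyClass()
            m.get_info_server('foo')
            m.get_info_server('foo')
            mock_client.get_from_server.assert_called_once_with('foo')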
You didn't put it in the question explicitly, but I'm assuming your test methods are in a subclass of unittest.TestCase called MyClassTests.
Explicitly set MyClass.cache_dict in the method under test. If it's just a dictionary, without any getters / setters for it, you don't need a Mock.
If you want to guarantee that every test method is independent, set MyClass.cache_dict = {} in MyClassTests.setUp().
You need to make use of Python's built-in unittest.TestCase and implement the setUp and tearDown methods.
If you define setUp() and tearDown() in your tests, they will execute each time one of the individual test methods is called (before and after it, respectively).
Example:
# set up any global, consistent state here

# subclass unittest.TestCase here (the class name is just a placeholder):
class MyCacheTests(unittest.TestCase):

    def setUp(self):
        # prepare your state if needed for each test; if this is not
        # considered "fiddling", use this method to reset your cache to a
        # fresh state each time, e.g.:
        MyClass.cache_dict = {}

    ### Your test methods here

    def tearDown(self):
        # this will handle resetting the state, as needed
        ...
Check out the docs for more info: https://docs.python.org/2/library/unittest.html
One thing I can suggest is to use setUp() and tearDown() methods in your test class.
from unittest import TestCase

class MyTest(TestCase):
    def setUp(self):
        self.m = MyClass()
        # anything else you need to load before testing

    def tearDown(self):
        self.m = None

    def test_caching(self):
        self.m.get_info_server('foo')
        self.m.get_info_server('foo')
        mock_client.get_from_server.assert_called_once_with('foo')
I've written a module called Consumer.py containing a class (Consumer). This class is initialized using a configuration file that contains the different parameters it uses for computation, plus the name of a log queue used for logging.
I want to write unit tests for this class, so I've made a script called test_Consumer.py with a class called TestConsumerMethods(unittest.TestCase).
What I've done is create a new object of the Consumer class called cons, and then use that to call the class methods for testing. For example, Consumer has a simple method that checks whether a file exists in a given directory. The test I've made looks like this:
import unittest

import Consumer
from Consumer import Consumer

cons = Consumer('mockconfig.config', 'logque1')

class TestConsumerMethods(unittest.TestCase):
    def test_fileExists(self):
        self.assertEqual(cons.file_exists('./dir/', 'thisDoesntExist.config'), False)
        self.assertEqual(cons.file_exists('./dir/', 'thisDoesExist.config'), True)
Is this the correct way to test my class? I mean, ideally I'd like to just use the class methods without having to instantiate the class, so as to "isolate" the code, right?
Don't make a global object to test against, as it opens up the possibility that some state will get set on it by one test, and affect another.
Each test should run in isolation and be completely independent from others.
Instead, either create the object in your test, or have it automatically created for each test by putting it in the setUp method:
import unittest

import Consumer
from Consumer import Consumer

class TestConsumerMethods(unittest.TestCase):
    def setUp(self):
        self.cons = Consumer('mockconfig.config', 'logque1')

    def test_fileExists(self):
        self.assertEqual(self.cons.file_exists('./dir/', 'thisDoesntExist.config'), False)
        self.assertEqual(self.cons.file_exists('./dir/', 'thisDoesExist.config'), True)
As far as whether you actually have to instantiate your class at all, that depends on the implementation of the class. I think generally you'd expect to instantiate a class to test its methods.
I'm not sure if that's what you're looking for, but you could add your tests at the end of your file like this:
#!/usr/bin/python
...

class TestConsumerMethods(...):
    ...

if __name__ == "__main__":
    # add your tests here.
    ...
This way, by executing the file containing the class definition, you run the tests you put inside the if block; they will only be executed if you run the file itself directly, not if you import the class from it.
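For a unittest.TestCase subclass, the usual body of that if block is unittest.main(), which collects and runs the test methods in the module; a minimal sketch:

import unittest

class TestConsumerMethods(unittest.TestCase):
    def test_placeholder(self):
        # stand-in for a real test
        self.assertTrue(True)

if __name__ == "__main__":
    unittest.main()  # runs only when the file is executed directly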
Not sure if this is a dupe or not. Here it goes.
I need to write some Python code that looks like:
class TestClass:
    def test_case(self):
        def get_categories(self):
            return ["abc", "bcd"]
        # do the test here
and then have a test engine class that scans all these test classes, loads all the test_case functions, and for each one invokes get_categories to find out whether the test belongs to the group of interest for the specific run.
The problem is that get_categories is not seen as an attribute of test_case, and even if I manually assign it
class TestClass:
    def test_case(self):
        def get_categories(self):
            return ["abc", "bcd"]
        # do the test here
        test_case.get_categories = get_categories
this is only going to happen when test_case first runs, too late for me.
The reason why this function can't go on the class (or at least why I want it to also be available at the per-function level) is that a TestClass can have multiple test cases.
Since this is an already existing testing infrastructure, and the categories mechanism works (other than in the categories-on-function scenario, which is of lesser importance), a rewrite is not in the plans.
Language tricks dearly appreciated.
Nested functions don't become attributes, any more than any other assignment inside the function body does.
I suspect your test infrastructure is doing some severely weird things if this isn't supported (and it uses old-style classes!), but you could just do this:
class TestClass:
    def test_case(self):
        ...

    def _get_categories(self):
        return [...]

    test_case.get_categories = _get_categories
    del _get_categories
Class bodies are executable code like any other block.
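Since the assignment runs while the class body executes, the engine can read the categories at import time, without ever calling test_case; a sketch where categories_for is a hypothetical helper:

def categories_for(cls):
    """Map test-method names to their declared categories (hypothetical)."""
    result = {}
    for name, attr in vars(cls).items():
        if name.startswith("test") and callable(attr):
            getter = getattr(attr, "get_categories", None)
            # the attached function expects a `self`; any instance will do
            result[name] = getter(cls()) if getter else []
    return result

print(categories_for(TestClass))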
What you need is nested classes. Functions aren't made to do what you are trying to do, so you have to move up a notch. Function attributes are mainly used as markup, whereas classes can have anything you want.
class TestClass(object):
    class TestCase(object):
        @classmethod
        def get_categories(cls):
            return ['abc', 'efg']
Note that I used @classmethod so that you can use it without instantiating TestCase; modify it if you want to do test_case = TestCase().