A DRY way of writing similar unit tests in Python

I have some similar unit tests in Python.
They are so similar that only one argument changes:
class TestFoo(TestCase):
    def test_typeA(self):
        self.assertTrue(foo(bar=TYPE_A))

    def test_typeB(self):
        self.assertTrue(foo(bar=TYPE_B))

    def test_typeC(self):
        self.assertTrue(foo(bar=TYPE_C))

    ...
Obviously this is not very DRY, and with even 4-5 different options the code becomes very repetitive.
Now I could do something like this:
class TestFoo(TestCase):
    BAR_TYPES = (
        TYPE_A,
        TYPE_B,
        TYPE_C,
        ...
    )

    def _foo_test(self, bar_type):
        self.assertTrue(foo(bar=bar_type))

    def test_foo_bar_type(self):
        for bar_type in self.BAR_TYPES:
            self._foo_test(bar_type)
This works. However, when an exception gets raised, how will I know whether _foo_test failed with argument TYPE_A, TYPE_B, or TYPE_C?
Perhaps there is a better way of structuring these very similar tests?

What you are trying to do is essentially a parameterized test. This feature isn't included in the standard Django or Python unittest modules, but a number of libraries provide it: nose-parameterized, py.test, ddt.
My favorite so far is ddt: it most resembles NUnit/JUnit-style parameterized tests, is pretty lightweight, doesn't get in your way, and does not require a dedicated test runner (as nose-parameterized does). The way it helps you here is that it modifies the test name to include all parameters, so you can clearly see which case failed just by looking at the test name.
With ddt your example would look like this:
import ddt

@ddt.ddt
class TestProcessCreateAgencyOfferAndDispatch(TestCase):
    @ddt.data(TYPE_A, TYPE_B, TYPE_C)
    def test_foo_bar_type(self, type):
        self.assertTrue(foo(bar=type))
In this case the names will look like test_foo_bar_type__TYPE_A (technically, ddt constructs them as [test_name]__[repr(parameter_1)]__[repr(parameter_2)]).
As a bonus, it is much cleaner (no helper method), and you get three test methods instead of one. The advantage is that you can exercise various code paths in a method and get one test case per path (though a certain amount of thinking is needed; sometimes it's better to have a dedicated test for some code paths).

Most TestCase assertion methods, including assertTrue, take an optional msg argument.
If you change your BAR_TYPES tuple to include the variable names, then you can include this in the message that is shown when the assertion fails.
class TestProcessCreateAgencyOfferAndDispatch(TestCase):
    BAR_TYPES = (
        ('TYPE_A', TYPE_A),
        ('TYPE_B', TYPE_B),
        ('TYPE_C', TYPE_C),
        ...
    )

    def _foo_test(self, var_name, bar_type):
        self.assertTrue(foo(bar=bar_type), var_name)

    def test_foo_bar_type(self):
        for (var_name, bar_type) in self.BAR_TYPES:
            self._foo_test(var_name, bar_type)
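As an aside: on Python 3.4+ the standard library itself can report which parameter failed, via subTest. A minimal sketch, assuming the same foo and TYPE_* constants as above:

import unittest

class TestFoo(unittest.TestCase):
    BAR_TYPES = (
        ('TYPE_A', TYPE_A),
        ('TYPE_B', TYPE_B),
        ('TYPE_C', TYPE_C),
    )

    def test_foo_bar_type(self):
        for var_name, bar_type in self.BAR_TYPES:
            # each failing parameter is reported separately, e.g.
            # FAIL: test_foo_bar_type (...) (bar_type='TYPE_A')
            with self.subTest(bar_type=var_name):
                self.assertTrue(foo(bar=bar_type))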

Related

Mocking a decorator that uses a hardcoded global variable

When trying to unit test the code snippet below, I run into the rate limit that the decorator wrapping calc_something imposes. It seems that I can't override RAND_RATE in my unit tests, since by the time I import the module containing my implementation, the decorator has already wrapped my function. How can I solve this issue?
RAND_RATE = 20
RAND_PERIOD = 10

@limits(calls=RAND_RATE, period=RAND_PERIOD)
def calc_something():
    ...
Without knowing exactly what limits does, we don't know what (if anything) can be patched. Instead, leave the base implementation undecorated for use by the unit tests; calc_something is then defined separately as the result of applying limits manually.
RAND_RATE = 20
RAND_PERIOD = 10

def _do_calc():
    ...

# apply the decorator manually, instead of decorating calc_something
# directly with @limits(calls=RAND_RATE, period=RAND_PERIOD):
calc_something = limits(calls=RAND_RATE, period=RAND_PERIOD)(_do_calc)
Now in your tests, you can define any decorated version you like:
test_me = limits(10, 5)(my_module._do_calc)
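For completeness, a minimal sketch of what such a test might look like; my_module and the placeholder assertion are assumptions, not part of the original answer:

import my_module  # hypothetical module containing _do_calc

def test_calc_logic():
    # exercise the undecorated implementation directly; no rate limit applies
    result = my_module._do_calc()
    assert result is not None  # placeholder; the real check depends on _do_calc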

How to parametrize a parameter in pytest?

Sometimes with pytest we start to parametrize a test function:
import pytest

@pytest.mark.parametrize("user", [WebUser(), BusinessUser(), AdminUser()])
def test_foo(user):
    ...
And then realize that some of the parameters would themselves have possible variants. Of course for simple concepts and short lists of parameters the simplest way to do this is to duplicate:
import pytest

@pytest.mark.parametrize("user", [WebUser("bob", age=12), WebUser("alice", age=56), BusinessUser("v1"), BusinessUser("v2"), AdminUser()])
def test_foo(user):
    ...
However, this hurts test readability quite fast when the number of variants gets big and differs widely for each "kind" of first parameter. Is there a way to do this in a more elegant/modular way?
(Answering my own question to share at least one way of doing this; please excuse me for advertising my own lib here, but I really felt that this question was missing on SO. Feel free to propose other alternatives, this is a living topic!)
One way to do this is with the pytest-cases plugin.
import pytest
from pytest_cases import parametrize_with_cases

class UserCases:
    @pytest.mark.parametrize("name,age", [("bob", 12), ("alice", 56)])
    def user_web(self, name, age):
        return WebUser(name, age)

    @pytest.mark.parametrize("version", ["v1", "v2"])
    def user_business(self, version):
        return BusinessUser(version)

    def user_admin(self):
        return AdminUser()

@parametrize_with_cases("user", cases=UserCases, prefix="user_")
def test_foo(user):
    pass
yields:
test_foo[web-bob-12]
test_foo[web-alice-56]
test_foo[business-v1]
test_foo[business-v2]
test_foo[admin]
This is recursive, so you can use @parametrize_with_cases on any case function.
See pytest-cases documentation for details.
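For illustration, a hedged sketch of that recursion; AddressCases and the address keyword on WebUser are invented for the example:

from pytest_cases import parametrize_with_cases

class AddressCases:
    def address_local(self):
        return "127.0.0.1"

    def address_remote(self):
        return "10.0.0.1"

class UserCases:
    # a case function can itself be parametrized with cases
    @parametrize_with_cases("address", cases=AddressCases, prefix="address_")
    def user_web(self, address):
        return WebUser("bob", address=address)  # hypothetical constructor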

How to log the name of the test class, if the test method resides in a class common for all tests?

I have the following project structure:
/root
    /tests
        common_test_case.py
        test_case_1.py
        test_case_2.py
        ...
    project_file.py
    ...
Every test case test_case_... inherits from both unittest.TestCase and common_test_case.CommonTestCase. The CommonTestCase class contains test methods that should be executed by all the tests (each using data unique to that test, stored and accessed via self.something). If some specific tests are needed for a particular test case, they are added directly to that class.
Currently I am working on adding logging to my tests. Among other things, I would like to log the class a method was run from (since the approach above implies the same test method name across different classes). I would like to stick with the built-in logging module to achieve this.
I have tried the following LogRecord attributes: %(filename)s, %(module)s, %(pathname)s. However, for methods defined in common_test_case.py they all return the path/name of common_test_case.py, not the test module they were actually run from.
My questions are:
Is there a way to achieve what I am trying to, using only the built-in logging module?
Using some third-party/other module (I was thinking maybe some "hacky" solution with inspect)?
Is it possible to achieve this in Python at all?
Your question appears similar to this one, and is solved by:
self.id()
See the function definition here, which calls self.__class__ on the instance of the TestCase class that is instantiated. Given that you are using multiple inheritance, Python's multiple inheritance rules apply:
For most purposes, in the simplest cases, you can think of the search for attributes inherited from a parent class as depth-first, left-to-right, not searching twice in the same class where there is an overlap in the hierarchy.
This means that common_test_case.CommonTestCase will be searched first, then unittest.TestCase. If there is no id function in common_test_case.CommonTestCase, things work as if the class were derived only from unittest.TestCase. If you feel the need to add an id function to CommonTestCase, it could look something like this (if really necessary):
def id(self):
    if isinstance(self, unittest.TestCase):
        # delegate to unittest.TestCase.id(), the next class in the MRO
        return super().id()
The solution I've found (that does the trick, so far):
import inspect
class_called_from = inspect.stack()[1][0].f_locals['self'].__class__.__name__
I'm still wondering, though, if there is a "clearer" method, or if this is possible to achieve using the logging module.
Recipes, based on West's answer (tested on Python 3.6.1):
test_name = self.id().split('.')[-1]
class_called_from = self.id().split('.')[-2]
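Putting these together, a minimal sketch of how CommonTestCase might log the concrete subclass; the logger name and the log_step helper are assumptions:

import logging
import unittest

logger = logging.getLogger("tests")  # hypothetical logger name

class CommonTestCase(unittest.TestCase):
    def log_step(self, message):
        # self.id() resolves against the concrete subclass, e.g.
        # 'test_case_1.TestCase1.test_common_method', even for methods
        # defined here in the common base class
        class_called_from = self.id().split('.')[-2]
        logger.info("[%s] %s", class_called_from, message)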

Does Python provide support for Assumptions as pre-conditions?

Assumptions in Python unit tests
Does Python provide support for assumptions to be used as pre-conditions for tests, similar to what JUnit provides with its assumeThat(...) methods?
This is important because of the application of Hoare logic. To quote JUnit:
A set of methods useful for stating assumptions about the conditions in which a test is meaningful. A failed assumption does not mean the code is broken, but that the test provides no useful information. Assume basically means "don't run this test if these conditions don't apply". The default JUnit runner skips tests with failing assumptions. Custom runners may behave differently.
It seems that Python doesn't provide these out of the box in its unittest framework. I've tentatively built a proof of concept of my own approach by extending unittest.TestCase:
import unittest
from typing import Any

class LoggingTestCase(unittest.TestCase):
    def assumeTrue(self, expr: Any, msg: Any = None) -> None:
        # TestResult and InvalidAssumption are user-defined elsewhere in the POC
        try:
            super().assertTrue(expr, msg)
            self.test_result = TestResult.PASSED
        except AssertionError as e:
            self.test_result = TestResult.SKIPPED
            raise InvalidAssumption(e)
With this unit test of the behaviour:
class TestLoggingTestCase(LoggingTestCase):
    def test_assumeTrue(self):
        self.assertRaises(InvalidAssumption, self.assumeTrue, False)  # Passes as expected.
        self.assertRaises(InvalidAssumption, self.assumeTrue, True)   # Fails as expected.
This approach seems to exhibit the behaviour I want. Is this the best approach? Is there a better way to do this, or a third-party library to use? I'm looking for something better than wrapping all the base assertions this way to make my own assumptions.
I use pytest skipif for this purpose.
http://doc.pytest.org/en/latest/skipping.html
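A minimal sketch of that approach; the platform condition is just an illustrative assumption:

import sys
import pytest

@pytest.mark.skipif(sys.platform == "win32",
                    reason="test is only meaningful on POSIX systems")
def test_posix_behaviour():
    ...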

Giving parameters into TestCase from Suite in python

From the Python documentation (http://docs.python.org/library/unittest.html):
import unittest

class WidgetTestCase(unittest.TestCase):
    def setUp(self):
        self.widget = Widget('The widget')

    def tearDown(self):
        self.widget.dispose()
        self.widget = None

    def test_default_size(self):
        self.assertEqual(self.widget.size(), (50,50),
                         'incorrect default size')

    def test_resize(self):
        self.widget.resize(100,150)
        self.assertEqual(self.widget.size(), (100,150),
                         'wrong size after resize')
Here is how to invoke those test cases:
def suite():
    suite = unittest.TestSuite()
    suite.addTest(WidgetTestCase('test_default_size'))
    suite.addTest(WidgetTestCase('test_resize'))
    return suite
Is it possible to insert a parameter custom_parameter into WidgetTestCase, like this:
class WidgetTestCase(unittest.TestCase):
    def setUp(self, custom_parameter):
        self.widget = Widget('The widget')
        self.custom_parameter = custom_parameter
What I've done is, in the test_suite module, simply add:
WidgetTestCase.CustomParameter="some_address"
The simplest solutions are the best :)
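Spelled out, that might look like the following sketch; the default value and how setUp consumes it are assumptions based on the snippet above:

import unittest

class WidgetTestCase(unittest.TestCase):
    CustomParameter = None  # default; overridden from the suite module

    def setUp(self):
        self.widget = Widget('The widget')
        self.custom_parameter = self.CustomParameter

# in the test_suite module, before the suite runs:
WidgetTestCase.CustomParameter = "some_address"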
I've found a way to do this, but it's a bit of a kludge.
Basically, what I do is add to the TestCase an __init__ method which defines a 'default' parameter, and a __str__ so that we can distinguish the cases:
class WidgetTestCase(unittest.TestCase):
    def __init__(self, methodName='runTest'):
        self.parameter = default_parameter
        unittest.TestCase.__init__(self, methodName)

    def __str__(self):
        ''' Override this so that we know which instance it is '''
        return "%s(%s) (%s)" % (self._testMethodName, self.parameter, unittest._strclass(self.__class__))
Then in suite(), I iterate over my test parameters, replacing the default parameter with one specific to each test:
def suite():
    suite = unittest.TestSuite()
    for test_parameter in test_parameters:
        loadedtests = unittest.TestLoader().loadTestsFromTestCase(WidgetTestCase)
        for t in loadedtests:
            t.parameter = test_parameter
        suite.addTests(loadedtests)
    suite.addTests(unittest.TestLoader().loadTestsFromTestCase(OtherWidgetTestCases))
    return suite
where OtherWidgetTestCases are tests which don't need to be parameterised.
For instance, I have a bunch of tests on real data, each of which needs a suite of tests applied to it. I also have some synthetic data sets designed to test certain edge cases not normally present in the data, and I only need to apply certain tests to those, so they get their own tests in OtherWidgetTestCases.
This is something that has been on my mind recently. Yes, it is very possible to do. I called it scenario testing, but I think parameterized may be more accurate. I put a proof of concept up as a gist here. In short, it is a metaclass that allows you to define a scenario and run the tests against it repeatedly. With it, your example can look something like this:
class WidgetTestCase(unittest.TestCase):
    __metaclass__ = ScenarioMeta

    class widget_width(ScenerioTest):
        scenarios = [
            dict(widget_in=Widget("One Way"), expected_tuple=(50, 50)),
            dict(widget_in=Widget("Another Way"), expected_tuple=(100, 150))
        ]

        def __test__(self, widget_in, expected_tuple):
            self.assertEqual(widget_in.size, expected_tuple)
When run, the metaclass writes 2 separate tests out, so the output would be something like:
$ python myscerariotest.py -v
test_widget_width_0 (__main__.widget_width) ... ok
test_widget_width_1 (__main__.widget_width) ... ok
----------------------------------------------------------------------
Ran 2 tests in 0.001s
OK
As you can see, the scenarios are converted to tests at runtime.
I am not yet sure if this is even a good idea. I use it in tests where I have a lot of text-centric cases that repeat the same assertions on slightly different data, which helps me to catch the little edge cases. The classes in that gist do work, and I believe they accomplish what you are after.
Note that with some trickery the test cases can be given names and even pulled from an external source like a text file or database. It's not documented yet, but some digging around in the metaclass should get you started. There is also some more info and examples in my post here.
Edit
This is an ugly hack that I no longer support. The implementation should have been done as a subclass of TestCase, not as a hacked metaclass. Live and learn. An even better solution would be to use nose test generators, as sketched below.
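For reference, a nose test generator is a plain function that yields (callable, args...) tuples, one test per yield. A hedged sketch against the Widget example (nose-only; this does not work inside unittest.TestCase subclasses):

# nose collects each yielded (function, *args) tuple as a separate test
def check_size(widget, expected):
    assert widget.size() == expected

def test_widget_sizes():
    for name, expected in [("One Way", (50, 50)), ("Another Way", (100, 150))]:
        yield check_size, Widget(name), expected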
I don't believe so; the signature for setUp needs to be what unittest expects. As far as I know, setUp is automagically called within the test case's run method as setUp(), so you're not going to be able to pass in a parameter unless you override run to pass in the variable you want. But I think what you want defeats the purpose of unit testing. Don't try to apply a DRY philosophy here: each unit you're testing should be a part of a class, or even part of a function/method.
I don't think this is a good idea. Unit tests should be thorough enough that you test all functionality in your cases, so passing in different parameters shouldn't be required.
You mention you're passing in a www address; this is almost certainly not a good idea. What happens if you try to run the tests on a machine where the net connection is down? Your tests should be:
Automatic - they will run on all machines and platforms where your app is supported, without user intervention. They shouldn't rely on the external environment to pass. This means (amongst other things) that relying on a properly set up connection to the Internet is a bad idea. You can get around this by providing dummy data. Instead of passing in a URL to a resource, abstract away the data source and pass in a data stream or whatever. This is especially easy in Python, since you can make use of duck typing to present a stream-like object (Python frequently uses a "file-like" object for this very reason!); see the sketch after this list.
Thorough - your unit tests should have 100% code coverage and cover all possible situations. You want to test your code with multiple sites? Instead, test your code with all the possible features a site may include. Without knowing more about what your application does, I can't offer much advice on this point.
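As promised above, a minimal sketch of that substitution; parse_feed and the sample payload are invented for the example:

import io

def parse_feed(stream):
    # production code accepts any file-like object, so it never needs to
    # know whether the bytes came from a URL or from a test fixture
    return stream.read().splitlines()

def test_parse_feed_with_dummy_data():
    fake_response = io.StringIO("header\nrow1\nrow2")
    assert parse_feed(fake_response) == ["header", "row1", "row2"]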
Now, it looks like your tests are going to be heavily data-driven. There are many tools that allow you to define data sets for unit tests and load them in the tests; check out Python test fixtures, for example.
I realise that this isn't the answer you're looking for, but I think you'll have more joy in the long run if you follow these principles.
