Does Python provide support for Assumptions as pre-conditions?

Assumptions in Python unit tests
Does Python provide support for Assumptions to be used as pre-conditions for tests, similar to what JUnit provides with its assumeThat(...) methods for Java?
This is important because of the application of Hoare logic. To quote the JUnit documentation:
A set of methods useful for stating assumptions about the conditions in which a test is meaningful. A failed assumption does not mean the code is broken, but that the test provides no useful information. Assume basically means "don't run this test if these conditions don't apply". The default JUnit runner skips tests with failing assumptions. Custom runners may behave differently.
It seems that Python doesn't provide these out of the box in its unittest framework. I've tentatively put together a proof of concept by extending unittest.TestCase:
class LoggingTestCase(unittest.TestCase):
    def assumeTrue(self, expr: Any, msg: Any = ...) -> None:
        try:
            super().assertTrue(expr, msg)
            self.test_result = TestResult.PASSED
        except AssertionError as e:
            self.test_result = TestResult.SKIPPED
            raise InvalidAssumption(e)
With this unit test of the behaviour:
class TestLoggingTestCase(LoggingTestCase):
    def test_assumeTrue(self):
        self.assertRaises(InvalidAssumption, self.assumeTrue, False)  # Passes as expected.
        self.assertRaises(InvalidAssumption, self.assumeTrue, True)   # Fails as expected.
This approach seems to exhibit the behaviour I want. Is this the best approach, is there a better way to do this, or is there a 3rd party library I could use? I'm looking for something better than wrapping all the base assertions this way to make my own assumptions.

I use pytest skipif for this purpose.
http://doc.pytest.org/en/latest/skipping.html
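For example, a minimal sketch (the platform check is just an illustrative pre-condition; use whatever condition makes the test meaningful):
import sys
import pytest

# the test is reported as skipped, not failed, when the condition holds
@pytest.mark.skipif(sys.platform == "win32", reason="only meaningful on POSIX platforms")
def test_posix_only_behaviour():
    assert True  # real assertions about the unit under test go here
When the pre-condition can only be evaluated inside the test, calling pytest.skip("reason") in the test body has the same effect.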

Related

How to disable a try/except block during testing?

I wrote a cronjob that iterates through a list of accounts and performs some web call for them (shown below):
for account in self.ActiveAccountFactory():
    try:
        self.logger.debug('Updating %s', account.login)
        self.update_account_from_fb(account)
        self.check_balances()
        self.check_rois()
    except Exception,e:
        self.logger.exception(traceback.format_exc())
Because this job is run by Heroku once every 10 minutes, I do not want the entire job to fail just because one account is running into issues (it happens). I placed a try/except block here so that this task is "fault-tolerant".
However, I noticed that when I am testing, this try/except block is giving me cryptic problems because the task is allowed to continue executing even though there is some serious error.
What is the best way to disable a try/except block during testing?
I've thought about implementing the code directly, like this:
for account in self.ActiveAccountFactory():
    self.logger.debug('Updating %s', account.login)
    self.update_account_from_fb(account)
    self.check_balances()
    self.check_rois()
    self.logger.exception(traceback.format_exc())
in my test cases but then this makes my tests very clumsy as I am copying large amounts of code over.
What should I do?
First of all: don't swallow all exceptions using except Exception. It's bad design. So cut it out.
With that out of the way:
One thing you could do is set up a monkeypatch for the logger.exception method. Then you can handle the test however you see fit based on whether it was called: by creating a mock logger, a separate testing logger, or a custom testing logger class that stops the tests when certain exceptions occur. You could even choose to end the testing immediately by raising an error.
Here is an example using pytest's monkeypatch fixture. I like pytest's way of doing this because it already provides a predefined fixture for it and no boilerplate code is required. However, there are other ways to do this as well (such as using unittest.mock.patch from the unittest module).
I will call your class SomeClass. What we will do is create a patched version of your SomeClass object as a fixture. The patched version will not log to the logger; instead, it will have a mock logger. Anything that happens to the logger will be recorded in the mock logger for inspection later.
import pytest
import unittest.mock as mock  # on Python 2, use the external 'mock' package instead

@pytest.fixture
def SomeClassObj_with_patched_logger(monkeypatch):
    ##### SETUP PHASE ####
    # create the class instance you will test
    some_class_instance = SomeClass()
    # create a basic mock logger:
    mock_logger = mock.Mock(spec=LoggerClass)
    # patch the 'logger' attribute so that when it is used on
    # 'some_class_instance' (i.e. 'self.logger' inside the methods)
    # things are re-routed to mock_logger
    monkeypatch.setattr(some_class_instance, 'logger', mock_logger)
    # the object is now patched
    # we can send that patched object to any test we want
    # using the standard pytest fixture way of doing things
    yield some_class_instance
    ###### TEARDOWN PHASE #######
    # after the test has run, we can inspect what happened to
    # the mock logger like so:
    print('\n#### ', mock_logger.method_calls)
If call.exception appears in the method calls of the mock logger, you know that method was called. There are a lot of other ways you could handle this as well; this is just one.
If you're using the logging module, LoggerClass should just be logging.Logger. Alternatively, you can just do mock_logger = mock.Mock(). Or, you could create your own custom testing logger class that raises an exception when its exception method is called. The sky is the limit!
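For instance, a minimal sketch of that last idea (a hypothetical Logger subclass whose exception method re-raises instead of swallowing):
import logging

class RaisingTestLogger(logging.Logger):
    """A testing logger that refuses to swallow exceptions."""

    def exception(self, msg, *args, **kwargs):
        super().exception(msg, *args, **kwargs)
        raise  # re-raise the exception currently being handled

# register it before any loggers are created, e.g. in a conftest or test setup:
# logging.setLoggerClass(RaisingTestLogger)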
Use your patched object in any test like so:
def test_something(SomeClassObj_with_patched_logger):
    # no need to do the line below really, just getting
    # a shorter variable name
    my_obj = SomeClassObj_with_patched_logger
    #### DO STUFF WITH my_obj #####
If you are not familiar with pytest, see this training video for a little more in-depth information.
try...except blocks are difficult when you are testing because they catch and try to dispose of errors you would really rather see, as you have found out. While testing, for
except Exception as e:
(don't use Exception,e, it's not forward-compatible) substitute an exception type that is really unlikely to occur in your circumstances, such as
except AssertionError as e:
A text editor will do this for you (and reverse it afterwards) at the cost of a couple of mouse-clicks.
You can make callables test-aware by adding a _testing=False parameter. Use that to code alternate pathways in the callable for when it is being tested, then pass _testing=True when calling from a test file.
For the situation presented in this question, putting if _testing: raise in the exception handler would 'uncatch' the exception.
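A minimal sketch of that idea applied to the loop from the question (update_accounts is a hypothetical method name):
def update_accounts(self, _testing=False):
    # mirrors the cronjob loop from the question
    for account in self.ActiveAccountFactory():
        try:
            self.logger.debug('Updating %s', account.login)
            self.update_account_from_fb(account)
            self.check_balances()
            self.check_rois()
        except Exception:
            if _testing:
                raise  # 'uncatch' the exception so the test sees it
            self.logger.exception('Update failed for %s', account.login)
Production callers keep the default _testing=False and stay fault-tolerant; the test file calls obj.update_accounts(_testing=True) so that errors surface.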
Conditioning module-level code is trickier. To get special behavior when testing module mod in package pack, I put
_testing = False            # in `pack.__init__`
from pack import _testing   # in `pack.mod`
Then in test_mod I put something like:
import pack
pack._testing = True
from pack import mod

Is there a way to override default assert in pytest (python)?

I'd like to log some information to a file/database every time assert is invoked. Is there a way to override assert or register some sort of callback function to do this every time assert is invoked?
Regards
Sharad
Try overloading AssertionError instead of assert. The original assertion error is available in the exceptions module in Python 2 and the builtins module in Python 3.
import exceptions

class AssertionError:
    def __init__(self, *args, **kwargs):
        print("Log me!")
        raise exceptions.AssertionError
I don't think that would be possible. assert is a statement (and not a function) in Python and has predefined behavior. It's a language element and cannot simply be modified; changing the language cannot be the solution to a problem. The problem has to be solved using what the language provides.
There is one thing you can do, though. assert raises an AssertionError exception on failure, and this can be exploited to get the job done. Place the assert statement in a try/except block and do your callbacks inside the except clause. It isn't as good a solution as the one you are looking for, since you have to do this with every assert, but modifying a statement's behavior simply isn't possible.
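A minimal sketch of that wrapping idea (log_assertion_failure and the log file name are assumptions; substitute your own file or database write):
def log_assertion_failure(message):
    # hypothetical callback: record the failure somewhere
    with open("assert_log.txt", "a") as fh:
        fh.write(message + "\n")

def checked_assert(condition, message=""):
    try:
        assert condition, message
    except AssertionError:
        log_assertion_failure(message)
        raise  # preserve the normal failure behaviour
Tests then call checked_assert(x == y, "x should equal y") instead of using a bare assert.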
It is possible, because pytest is actually re-writing assert expressions in some cases. I do not know how to do it or how easy it is, but here is the documentation explaining when assert re-writing occurs in pytest:
https://docs.pytest.org/en/latest/assert.html
By default, if the Python version is greater than or equal to 2.6, py.test rewrites assert statements in test modules.
...
py.test rewrites test modules on import. It does this by using an import hook to write new pyc files.
Theoretically, you could look at the pytest code to see how they do it, and perhaps do something similar.
For further information, Benjamin Peterson wrote up Behind the scenes of py.test's new assertion rewriting: http://pybites.blogspot.com/2011/07/behind-scenes-of-pytests-new-assertion.html
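If logging failed comparison asserts is enough, a hedged sketch using pytest's documented pytest_assertrepr_compare hook in a conftest.py might look like this (the log file name is just an example):
# conftest.py
def pytest_assertrepr_compare(config, op, left, right):
    # pytest calls this hook for every *failing* comparison assert
    with open("assert_failures.log", "a") as fh:
        fh.write("assert failed: %r %s %r\n" % (left, op, right))
    return None  # keep pytest's default failure explanation
Note that this hook only fires for failing comparison asserts (assert a == b and similar), not for every assert that runs.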
I suggest using PyHamcrest. It has very beautiful matchers which can easily be reimplemented, and you can also write your own.

A DRY way of writing similar unit tests in python

I have some similar unit tests in python.
They are so similar that only one argument changes.
class TestFoo(TestCase):
    def test_typeA(self):
        self.assertTrue(foo(bar=TYPE_A))

    def test_typeB(self):
        self.assertTrue(foo(bar=TYPE_B))

    def test_typeC(self):
        self.assertTrue(foo(bar=TYPE_C))

    ...
Obviously this is not very DRY, and if you have even 4-5 different options the code is going to be very repetitive
Now I could do something like this
class TestFoo(TestCase):
    BAR_TYPES = (
        TYPE_A,
        TYPE_B,
        TYPE_C,
        ...
    )

    def _foo_test(self, bar_type):
        self.assertTrue(foo(bar=bar_type))

    def test_foo_bar_type(self):
        for bar_type in self.BAR_TYPES:
            self._foo_test(bar_type)
This works; however, when an exception gets raised, how will I know whether _foo_test failed with argument TYPE_A, TYPE_B or TYPE_C?
Perhaps there is a better way of structuring these very similar tests?
What you are trying to do is essentially a parameterized test. This feature isn't included in the standard Django or Python unittest modules, but a number of libraries provide it: nose-parameterized, py.test, ddt.
My favorite so far is ddt: it resembles NUnit/JUnit-style parameterized tests the most, is pretty lightweight, doesn't get in your way, and does not require a dedicated test runner (like nose-parameterized does). The way it helps you here is that it modifies the test name to include all parameters, so you can clearly see which test case failed just by looking at the test name.
With ddt your example would look like this:
import ddt

@ddt.ddt
class TestProcessCreateAgencyOfferAndDispatch(TestCase):
    @ddt.data(TYPE_A, TYPE_B, TYPE_C)
    def test_foo_bar_type(self, type):
        self.assertTrue(foo(bar=type))
In such case names will look like test_foo_bar_type__TYPE_A (technically, it constructs it something like [test_name]__[repr(parameter_1)]__[repr(parameter_2)]).
As a bonus, it is much cleaner (no helper method), and you get three test methods instead of one. The advantage here is that you can test various code paths in a method and get one test case per path (but a certain amount of thinking is needed; sometimes it's better to have a dedicated test for some code paths).
Most TestCase assertion methods, including assertTrue, take an optional msg argument.
If you change your BAR_TYPES tuple to include the variable names, then you can include this in the message that is shown when the assertion fails.
class TestProcessCreateAgencyOfferAndDispatch(TestCase):
    BAR_TYPES = (
        ('TYPE_A', TYPE_A),
        ('TYPE_B', TYPE_B),
        ('TYPE_C', TYPE_C),
        ...
    )

    def _foo_test(self, var_name, bar_type):
        self.assertTrue(foo(bar=bar_type), var_name)

    def test_foo_bar_type(self):
        for (var_name, bar_type) in self.BAR_TYPES:
            self._foo_test(var_name, bar_type)

Giving parameters into TestCase from Suite in python

From the Python documentation (http://docs.python.org/library/unittest.html):
import unittest

class WidgetTestCase(unittest.TestCase):
    def setUp(self):
        self.widget = Widget('The widget')

    def tearDown(self):
        self.widget.dispose()
        self.widget = None

    def test_default_size(self):
        self.assertEqual(self.widget.size(), (50,50),
                         'incorrect default size')

    def test_resize(self):
        self.widget.resize(100,150)
        self.assertEqual(self.widget.size(), (100,150),
                         'wrong size after resize')
Here is how to invoke those test cases:
def suite():
    suite = unittest.TestSuite()
    suite.addTest(WidgetTestCase('test_default_size'))
    suite.addTest(WidgetTestCase('test_resize'))
    return suite
Is it possible to insert a parameter custom_parameter into WidgetTestCase, like this?
class WidgetTestCase(unittest.TestCase):
    def setUp(self, custom_parameter):
        self.widget = Widget('The widget')
        self.custom_parameter = custom_parameter
What I've done is, in the test_suite module, just added:
WidgetTestCase.CustomParameter = "some_address"
The simplest solutions are the best :)
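Spelled out a little more, a sketch of that class-attribute approach (the None default is an assumption):
class WidgetTestCase(unittest.TestCase):
    CustomParameter = None  # default; overridden by the suite module

    def setUp(self):
        self.widget = Widget('The widget')
        self.custom_parameter = self.CustomParameter

# in the test_suite module, before building or running the suite:
WidgetTestCase.CustomParameter = "some_address"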
I've found a way to do this, but it's a bit of a kludge.
Basically, what I do is add to the TestCase an __init__ method which defines a 'default' parameter, and a __str__ so that we can distinguish cases:
class WidgetTestCase(unittest.TestCase):
    def __init__(self, methodName='runTest'):
        self.parameter = default_parameter
        unittest.TestCase.__init__(self, methodName)

    def __str__(self):
        ''' Override this so that we know which instance it is '''
        return "%s(%s) (%s)" % (self._testMethodName, self.parameter, unittest._strclass(self.__class__))
Then in suite(), I iterate over my test parameters, replacing the default parameter with one specific to each test:
def suite():
    suite = unittest.TestSuite()
    for test_parameter in test_parameters:
        loadedtests = unittest.TestLoader().loadTestsFromTestCase(WidgetTestCase)
        for t in loadedtests:
            t.parameter = test_parameter
        suite.addTests(loadedtests)
    suite.addTests(unittest.TestLoader().loadTestsFromTestCase(OtherWidgetTestCases))
    return suite
where OtherWidgetTestCases are tests which don't need to be parameterised.
For instance I have a bunch of tests on real data for which a suite of tests need to be applied to each, but I also have some synthetic data sets, designed to test certain edge cases not normally present in the data, and I only need to apply certain tests to those, so they get their own tests in OtherWidgetTestCases.
This is something that has been on my mind recently. Yes, it is very possible to do. I called it scenario testing, but I think parameterized may be more accurate. I put a proof of concept up as a gist here. In short, it is a metaclass that allows you to define a scenario and run the tests against it a bunch of times. With it, your example can be something like this:
class WidgetTestCase(unittest.TestCase):
    __metaclass__ = ScenarioMeta

    class widget_width(ScenerioTest):
        scenarios = [
            dict(widget_in=Widget("One Way"), expected_tuple=(50, 50)),
            dict(widget_in=Widget("Another Way"), expected_tuple=(100, 150))
        ]

        def __test__(self, widget_in, expected_tuple):
            self.assertEqual(widget_in.size, expected_tuple)
When run, the metaclass writes 2 separate tests out, so the output would be something like:
$ python myscerariotest.py -v
test_widget_width_0 (__main__.widget_width) ... ok
test_widget_width_1 (__main__.widget_width) ... ok
----------------------------------------------------------------------
Ran 2 tests in 0.001s
OK
As you can see the scenarios are converted to tests at runtime.
Now I am not yet sure if this is even a good idea. I use it in tests where I have a lot of text-centric cases that repeat the same assertions on slightly different data, which helps me to catch the little edge cases. But the classes in that gist do work, and I believe it accomplishes what you are after.
Note that with some trickery the test cases can be given names and even pulled from an external source like a text file or database. It's not documented yet, but some digging around in the metaclass should get you started. There is also some more info and examples in my post here.
Edit
This is an ugly hack that I do not support anymore. The implementation should have been done as a subclass of TestCase, not as a hacked metaclass. Live and learn. An even better solution would be to use nose generators.
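For reference, a minimal sketch of the nose generator style mentioned above (nose collects each yielded callable-plus-arguments tuple as a separate test; the Widget names and expected sizes are illustrative, borrowed from the example):
def check_widget_size(widget, expected_tuple):
    assert widget.size() == expected_tuple

def test_widget_sizes():
    # each yield becomes its own test in nose's output
    for name, expected in [("One Way", (50, 50)), ("Another Way", (100, 150))]:
        yield check_widget_size, Widget(name), expected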
I don't believe so; the signature for setUp needs to be what unittest is expecting. AFAIK, setUp is automagically called within the test case's run method as setUp(), so you're not going to be able to pass anything in unless you override run to pass in the variable you want. But I think what you want defeats the purpose of unit testing. Don't try to use a DRY philosophy with this: each unit you're testing should be a part of a class or even part of a function/method.
I don't think this is a good idea. Unit tests should be thorough enough that you test all functionality in your cases, so passing in different parameters shouldn't be required.
You mention you're passing in a www address - this is almost certainly not a good idea. What happens if you try and run the tests on a machine where the 'net connection is down? Your tests should be:
Automatic - they will run on all machines and platforms where your app is supported, without user intervention. They shouldn't rely on external environment to pass. This means (amongst other things) that relying on a properly set up connection to the Internet is a bad idea. You can get around this by providing dummy data. Instead of passing in a URL to a resource, abstract away the data source and pass in a data-stream or whatever. This is especially easy in python since you can make use of python's duck-typing to present a stream-like object (python frequently uses a "file-like" object for this very reason!).
Thorough - your unit tests should have 100% code coverage, and cover all possible situations. You want to test your code with multiple sites? Instead, test your code with all the possible features that a site may include. Without knowing more about what your application does, I can't offer much advice on this point.
Now, it looks like your tests are going to be heavily data-driven. There are many tools that allow you to define data-sets for unit tests and load them in the tests. Check out python test fixtures, for example.
I realise that this isn't the answer you're looking for, but I think you'll have more joy in the long-run if you follow these principles.

Python unittest with expensive setup

My test file is basically:
class Test(unittest.TestCase):
    def testOk(self):
        pass

if __name__ == "__main__":
    expensiveSetup()
    try:
        unittest.main()
    finally:
        cleanUp()
However, I do wish to run my test through Netbeans testing tools, and to do that I need unittests that don't rely on an environment setup done in main. Looking at Caching result of setUp() using Python unittest - it recommends using Nose. However, I don't think Netbeans supports this. I didn't find any information indicating that it does. Additionally, I am the only one here actually writing tests, so I don't want to introduce additional dependencies for the other 2 developers unless they are needed.
How can I do the setup and cleanup once for all the tests in my TestSuite?
The expensive setup here is creating some files with dummy data, as well as setting up and tearing down a simple xml-rpc server. I also have 2 test classes, one testing locally and one testing all methods over xml-rpc.
If you use Python >= 2.7 (or unittest2 for Python >= 2.4 & <= 2.6), the best approach would be to use
def setUpClass(cls):
    # ...
setUpClass = classmethod(setUpClass)
to perform some initialization once for all tests belonging to the given class.
And to perform the cleanup, use:
@classmethod
def tearDownClass(cls):
    # ...
See also the unittest standard library documentation on setUpClass and tearDownClass classmethods.
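Putting the two together for the setup described in the question, a minimal sketch (start_dummy_xmlrpc_server, create_dummy_data_files and remove_dummy_data_files are hypothetical stand-ins for your expensive setup):
import unittest

class ExpensiveSetupTest(unittest.TestCase):
    @classmethod
    def setUpClass(cls):
        # runs once, before any test in this class
        cls.server = start_dummy_xmlrpc_server()    # hypothetical helper
        cls.data_files = create_dummy_data_files()  # hypothetical helper

    @classmethod
    def tearDownClass(cls):
        # runs once, after all tests in this class have finished
        cls.server.shutdown()
        remove_dummy_data_files(cls.data_files)     # hypothetical helper

    def test_server_is_up(self):
        self.assertIsNotNone(self.server)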
This is what I do:
class TestSearch(unittest.TestCase):
    """General Search tests for...."""
    matcher = None
    counter = 0
    num_of_tests = None

    def setUp(self):  # pylint: disable-msg=C0103
        """Only instantiate the matcher once"""
        if self.matcher is None:
            self.__class__.matcher = Matcher()
            self.__class__.num_of_tests = len(filter(self.isTestMethod, dir(self)))
        self.__class__.counter = self.counter + 1

    def tearDown(self):  # pylint: disable-msg=C0103
        """And kill it when done"""
        if self.counter == self.num_of_tests:
            print 'KILL KILL KILL'
            del self.__class__.matcher
Sadly (because I do want my tests to be independent and deterministic), I do this a lot (because system tests that take less than 5 minutes are also important).
First of all, what S. Lott said. However, you do not want to do that. There is a reason setUp and tearDown are wrapped around each test: they help preserve the determinism of testing.
Otherwise, if some test places the system in a bad state, your next tests may fail. Ideally, each of your tests should be independent.
Also, if you insist on doing it this way, instead of writing self.runTest1(), self.runTest2() by hand, you might want to do a bit of introspection in order to find the methods to run.
Won't package-level initialization do it for you? From the Nose Wiki:
nose allows tests to be grouped into test packages. This allows package-level setup; for instance, if you need to create a test database or other data fixture for your tests, you may create it in package setup and remove it in package teardown once per test run, rather than having to create and tear it down once per test module or test case.
To create package-level setup and teardown methods, define setup and/or teardown functions in the __init__.py of a test package. Setup methods may be named setup, setup_package, setUp, or setUpPackage; teardown may be named teardown, teardown_package, tearDown or tearDownPackage. Execution of tests in a test package begins as soon as the first test module is loaded from the test package.
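For the setup described in the question, that might look roughly like this (a sketch, assuming the test modules live in a tests package run under nose):
# tests/__init__.py
def setup_package():
    # runs once per test run: create the dummy data files and
    # start the simple xml-rpc server
    ...

def teardown_package():
    # runs once per test run: remove the data files and stop the server
    ...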
You can save a flag recording whether expensiveSetup() has already run:
_expensive_setup_has_run = False

class ExpensiveSetupMixin(unittest.TestCase):
    def setUp(self):
        global _expensive_setup_has_run
        super(ExpensiveSetupMixin, self).setUp()
        if not _expensive_setup_has_run:
            expensiveSetup()
            _expensive_setup_has_run = True
Or some variation of this. Maybe ping the XML-RPC server and create a new one if it isn't answering.
But the unit-testing way, AFAIK, is to set up and tear down per test even if it is expensive.
I know nothing about NetBeans, but I thought I should mention zope.testrunner and its support for a nifty thing: layers. Basically, you do the test setup in separate classes and attach those classes to the tests. These classes can inherit from each other, forming a layering of setups. The test runner will then call each setup only once, saving its state in memory, and instead of setting up and tearing down again it will simply copy the relevant layer context as a setup.
This speeds up test setup a lot, and is used when you test Zope products and Plone, where the test setup often needs you to start a Plone CMS server, create a Plone site and add loads of content, a process that can take upwards of half a minute. Doing that for each test method is obviously impossible, but with layers it is done only once. This shortens the test setup and protects the test methods from each other, and therefore means that the testing continues to be deterministic.
So I don't know if zope.testrunner will work for you, but it's worth a try.
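A rough sketch of what a layer might look like for the xml-rpc setup from the question (start_xmlrpc_server and shutdown are hypothetical helpers):
import unittest

class ServerLayer(object):
    """zope.testrunner layer: setUp/tearDown run once, shared by every test that declares this layer."""

    @classmethod
    def setUp(cls):
        cls.server = start_xmlrpc_server()  # hypothetical expensive setup

    @classmethod
    def tearDown(cls):
        cls.server.shutdown()               # hypothetical cleanup

class TestOverXmlRpc(unittest.TestCase):
    layer = ServerLayer  # zope.testrunner sets the layer up once for all tests in this class

    def test_server_is_up(self):
        self.assertIsNotNone(ServerLayer.server)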
It should be possible to do this by defining startTestRun and stopTestRun on the unittest.TestResult class. See this answer: https://stackoverflow.com/a/64892396/2679740
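A hedged sketch of that idea, wrapping the hooks on unittest.TestResult and reusing expensiveSetup()/cleanUp() from the question:
import unittest

# keep references to the original hooks
_original_start = unittest.TestResult.startTestRun
_original_stop = unittest.TestResult.stopTestRun

def startTestRun(self):
    expensiveSetup()        # one-time setup before any test runs
    _original_start(self)

def stopTestRun(self):
    _original_stop(self)
    cleanUp()               # one-time teardown after the last test

unittest.TestResult.startTestRun = startTestRun
unittest.TestResult.stopTestRun = stopTestRun
Because every standard test run goes through a TestResult, this should run the setup once before the first test and the cleanup once after the last, regardless of how the tests are discovered.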
You can assure setUp and tearDown execute once if you have only one test method, runTest. This method can do whatever else it wants. Just be sure you don't have any methods with names that start with test.
class MyExpensiveTest( unittest.TestCase ):
    def setUp( self ):
        self.resource = owThatHurts()

    def tearDown( self ):
        self.resource.flush()
        self.resource.finish()

    def runTest( self ):
        self.runTest1()
        self.runTest2()

    def runTest1( self ):
        self.assertEquals(...)

    def runTest2( self ):
        self.assertEquals(...)
It doesn't automagically figure out what to run. If you add a test method, you also have to update runTest.
