I like the fact that the unit test class will print out a nice summary of what passed and failed, although I can't see in the official documentation how to customize that output.
I see the msg parameter for the asserts, which means I can print a descriptive message when an assert triggers (the test fails), but what if you want to include a summary on success?
Related
In Java, JUnit has some "assert" methods that, when they fail, will tell you something like "Assertion failed. Expected x but saw y".
What's an equivalent library in Python for that, as opposed to "assert x" producing "assertion failed" with no further information?
When you use the assert keyword you can add a customised error message:
assert your_statement_here, "Custom error message here"
Within your custom error message you can format any variables or data you need to show for debugging.
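For example, a minimal sketch (the variable names here are just placeholders):
expected = 5
actual = 2 + 2
assert actual == expected, f"Expected {expected} but saw {actual}"
# AssertionError: Expected 5 but saw 4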
However, if you need something more powerful, as @Asocia said, you will need unittest; specifically, the assert methods are what I think you are looking for, and the official documentation can help with that:
https://docs.python.org/3/library/unittest.html#assert-methods
The TestCase class provides several assert methods to check for and
report failures.
assertEqual(a, b) checks: a == b
assertNotEqual(a, b) checks: a != b
... and so on
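As a rough sketch of how this looks in practice (the test class and values here are made up for illustration):
import unittest

class ExampleTest(unittest.TestCase):
    def test_equal(self):
        x, y = 2, 3
        # On failure, unittest reports something like "AssertionError: 2 != 3",
        # followed by the optional msg if you pass one.
        self.assertEqual(x, y, "x and y should match")

if __name__ == '__main__':
    unittest.main()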
Assumptions in Python unit tests
Does Python provide support for assumptions to be used as preconditions for tests, similar to those provided by JUnit with its assumeThat(...) methods for Java?
This is important because of the application of Hoare logic. To quote JUnit:
A set of methods useful for stating assumptions about the conditions in which a test is meaningful. A failed assumption does not mean the code is broken, but that the test provides no useful information. Assume basically means "don't run this test if these conditions don't apply". The default JUnit runner skips tests with failing assumptions. Custom runners may behave differently.
It seems that Python doesn't provide these out of the box in its unittest framework. I've tentatively put together a proof of concept of my own approach by extending unittest.TestCase.
from typing import Any
import unittest

# TestResult and InvalidAssumption are my own enum/exception types.
class LoggingTestCase(unittest.TestCase):
    def assumeTrue(self, expr: Any, msg: Any = ...) -> None:
        try:
            super().assertTrue(expr, msg)
            self.test_result = TestResult.PASSED
        except AssertionError as e:
            self.test_result = TestResult.SKIPPED
            raise InvalidAssumption(e)
With this unit test of the behaviour:
class TestLoggingTestCase(LoggingTestCase):
    def test_assumeTrue(self):
        self.assertRaises(InvalidAssumption, self.assumeTrue, False)  # Passes as expected.
        self.assertRaises(InvalidAssumption, self.assumeTrue, True)   # Fails as expected.
This approach seems to exhibit the behaviour I want. Is this the best approach? Is there a better way to do this, or a third-party library to use? I'm looking for something better than wrapping all the base assertions this way to make my own assumptions.
I use pytest skipif for this purpose.
http://doc.pytest.org/en/latest/skipping.html
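A minimal sketch of how that looks (the condition and reason here are just an illustration):
import sys
import pytest

@pytest.mark.skipif(sys.platform == "win32", reason="test is meaningless on Windows")
def test_something():
    assert True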
I'm writing some unit tests for a server program which catches most exceptions but logs them, and I would like to make assertions on the logged output. I've found the testfixtures package useful to this end; for example:
import logging
import testfixtures
with testfixtures.LogCapture() as l:
    logging.info('Here is some info.')
l.check(('root', 'INFO', 'Here is some info.'))
According to the documentation, the check method will raise an error if the logger name, level, or message is not as expected.
I would like to perform a more 'flexible' kind of test in which I make assertions on the message using a wildcard for the other elements of the tuple. This less stringent assertion would look something like
l.check((*, *, 'Here is some info.'))
but this is not valid syntax. Is there any way to specify a 'wildcard' in the check method of the testfixtures.logcapture.LogCapture class?
The way to check messages only (which, as pointed out to me by the author, is actually described in the documentation) is to use the records attribute of the LogCapture class, which is a list of logging.LogRecord objects. So the appropriate assertion is:
assert l.records[-1].getMessage() == 'Here is some info.'
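If you want to ignore the logger name and level entirely, you can also scan all of the captured records, for example:
assert 'Here is some info.' in [r.getMessage() for r in l.records]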
I'd like to log some information to a file/database every time assert is invoked. Is there a way to override assert, or to register some sort of callback function to do this every time assert is invoked?
Try overloading AssertionError instead of assert. The original AssertionError is available in the exceptions module in Python 2 and the builtins module in Python 3.
import exceptions  # Python 2; on Python 3 the original exception lives in builtins

class AssertionError(exceptions.AssertionError):
    def __init__(self, *args, **kwargs):
        print("Log me!")  # log to your file/database here
        exceptions.AssertionError.__init__(self, *args, **kwargs)
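A rough usage sketch, assuming the shadowing class above is visible in the module whose asserts you want to log (note that recent CPython versions make the assert statement raise the built-in exception directly, so this trick is version-dependent):
def divide(a, b):
    assert b != 0, "b must be non-zero"
    return a / b

try:
    divide(1, 0)        # prints "Log me!" before the failure propagates
except AssertionError:  # resolves to the shadowing class defined above
    pass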
I don't think that would be possible. assert is a statement (not a function) in Python and has predefined behaviour. It's a language element and cannot simply be modified. Changing the language cannot be the solution to a problem; the problem has to be solved using what the language provides.
There is one thing you can do, though. assert raises an AssertionError exception on failure. This can be exploited to get the job done: place the assert statement in a try-except block and do your callbacks inside the except clause. It isn't as clean a solution as you are looking for, since you have to do this with every assert; modifying a statement's behaviour is something one shouldn't do.
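For example, a small sketch of that pattern (log_failure is just a placeholder name for whatever callback you want):
import logging

def log_failure(exc):
    logging.error("Assertion failed: %s", exc)  # or write to a file/database

def check_invariant(x):
    try:
        assert x > 0, "x must be positive"
    except AssertionError as e:
        log_failure(e)
        raise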
It is possible, because pytest actually rewrites assert statements in some cases. I do not know how to do it or how easy it is, but here is the documentation explaining when assert rewriting occurs in pytest:
https://docs.pytest.org/en/latest/assert.html
By default, if the Python version is greater than or equal to 2.6, py.test rewrites assert statements in test modules.
...
py.test rewrites test modules on import. It does this by using an import hook to write new pyc files.
Theoretically, you could look at the pytest code to see how they do it, and perhaps do something similar.
For further information, Benjamin Peterson wrote up Behind the scenes of py.test’s new assertion rewriting (http://pybites.blogspot.com/2011/07/behind-scenes-of-pytests-new-assertion.html).
I suggest using PyHamcrest. It has very beautiful matchers which can easily be reimplemented, and you can also write your own.
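For example, a minimal sketch using matchers that PyHamcrest ships with:
from hamcrest import assert_that, equal_to, contains_string

assert_that(2 + 2, equal_to(4))
assert_that("hello world", contains_string("world"))
# On failure, the error message describes both what was expected and what was actually found.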
From the Python documentation (http://docs.python.org/library/unittest.html):
import unittest

class WidgetTestCase(unittest.TestCase):
    def setUp(self):
        self.widget = Widget('The widget')

    def tearDown(self):
        self.widget.dispose()
        self.widget = None

    def test_default_size(self):
        self.assertEqual(self.widget.size(), (50,50),
                         'incorrect default size')

    def test_resize(self):
        self.widget.resize(100,150)
        self.assertEqual(self.widget.size(), (100,150),
                         'wrong size after resize')
Here is how to invoke those test cases:
def suite():
    suite = unittest.TestSuite()
    suite.addTest(WidgetTestCase('test_default_size'))
    suite.addTest(WidgetTestCase('test_resize'))
    return suite
Is it possible to insert a parameter custom_parameter into WidgetTestCase, like:
class WidgetTestCase(unittest.TestCase):
    def setUp(self, custom_parameter):
        self.widget = Widget('The widget')
        self.custom_parameter = custom_parameter
?
What I've done is, in the test_suite module, just add
WidgetTestCase.CustomParameter = "some_address"
The simplest solutions are the best :)
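For completeness, a sketch of how setUp might pick that class attribute up (keeping the names from above, with Widget assumed to exist):
class WidgetTestCase(unittest.TestCase):
    CustomParameter = None  # overridden from the test_suite module before the run

    def setUp(self):
        self.widget = Widget('The widget')
        self.custom_parameter = self.CustomParameter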
I've found a way to do this, but it's a bit of a kludge.
Basically, what I do is add to the TestCase an __init__ method which defines a 'default' parameter, and a __str__ so that we can distinguish cases:
class WidgetTestCase(unittest.TestCase):
    def __init__(self, methodName='runTest'):
        self.parameter = default_parameter  # a module-level default defined elsewhere
        unittest.TestCase.__init__(self, methodName)

    def __str__(self):
        ''' Override this so that we know which instance it is '''
        return "%s(%s) (%s)" % (self._testMethodName, self.parameter,
                                unittest._strclass(self.__class__))
Then in suite(), I iterate over my test parameters, replacing the default parameter with one specific to each test:
def suite():
    suite = unittest.TestSuite()
    for test_parameter in test_parameters:
        loadedtests = unittest.TestLoader().loadTestsFromTestCase(WidgetTestCase)
        for t in loadedtests:
            t.parameter = test_parameter
        suite.addTests(loadedtests)
    suite.addTests(unittest.TestLoader().loadTestsFromTestCase(OtherWidgetTestCases))
    return suite
where OtherWidgetTestCases are tests which don't need to be parameterised.
For instance, I have a bunch of tests on real data, each of which needs the full suite of tests applied, but I also have some synthetic data sets, designed to test certain edge cases not normally present in the data, and I only need to apply certain tests to those, so they get their own tests in OtherWidgetTestCases.
This is something that has been on my mind recently. Yes, it is very possible to do. I called it scenario testing, but I think parameterized may be more accurate. I put a proof of concept up as a gist here. In short, it is a metaclass that allows you to define a scenario and run the tests against it a bunch of times. With it, your example could look something like this:
class WidgetTestCase(unittest.TestCase):
    __metaclass__ = ScenarioMeta

    class widget_width(ScenerioTest):
        scenarios = [
            dict(widget_in=Widget("One Way"), expected_tuple=(50, 50)),
            dict(widget_in=Widget("Another Way"), expected_tuple=(100, 150))
        ]

        def __test__(self, widget_in, expected_tuple):
            self.assertEqual(widget_in.size, expected_tuple)
When run, the metaclass writes two separate tests out, so the output would be something like:
$ python myscerariotest.py -v
test_widget_width_0 (__main__.widget_width) ... ok
test_widget_width_1 (__main__.widget_width) ... ok
----------------------------------------------------------------------
Ran 2 tests in 0.001s
OK
As you can see the scenarios are converted to tests at runtime.
Now, I am not yet sure whether this is even a good idea. I use it in tests where I have a lot of text-centric cases that repeat the same assertions on slightly different data, which helps me catch the little edge cases. But the classes in that gist do work, and I believe it accomplishes what you are after.
Note that with some trickery the test cases can be given names and even pulled from an external source like a text file or database. It's not documented yet, but some digging around in the metaclass should get you started. There is also some more info and examples in my post here.
Edit
This is an ugly hack that I no longer support. The implementation should have been done as a subclass of TestCase, not as a hacked metaclass. Live and learn. An even better solution would be to use nose generators.
I don't believe so; the signature for setUp needs to be what unittest is expecting. As far as I know, setUp is automagically called within the test case's run method as setUp(), so you're not going to be able to pass anything in unless you override run to pass in the variable you want. But I think what you want defeats the purpose of unit testing. Don't try to apply a DRY philosophy here: each unit you're testing should be part of a class, or even part of a function/method.
I don't think this is a good idea. Unit tests should be thorough enough that you test all functionality in your cases, so passing in different parameters shouldn't be required.
You mention you're passing in a web address; this is almost certainly not a good idea. What happens if you try to run the tests on a machine where the net connection is down? Your tests should be:
Automatic - they will run on all machines and platforms where your app is supported, without user intervention. They shouldn't rely on the external environment to pass. This means (amongst other things) that relying on a properly set up connection to the Internet is a bad idea. You can get around this by providing dummy data: instead of passing in a URL to a resource, abstract away the data source and pass in a data stream or whatever. This is especially easy in Python, since you can make use of Python's duck typing to present a stream-like object (Python frequently uses a "file-like" object for this very reason!); see the sketch after this list.
Thorough - your unit tests should have 100% code coverage and cover all possible situations. You want to test your code with multiple sites? Instead, test your code with all the possible features that a site may include. Without knowing more about what your application does, I can't offer much advice on this point.
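For instance, a minimal sketch of swapping the URL for a file-like object (count_lines and the test data are invented for illustration):
import io

def count_lines(source):
    # source can be anything file-like: a real file, a network stream wrapper, or a test stub
    return sum(1 for _ in source)

def test_count_lines():
    # In production you might pass an open network resource; in the test, canned data.
    fake_data = io.StringIO("line one\nline two\n")
    assert count_lines(fake_data) == 2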
Now, it looks like your tests are going to be heavily data-driven. There are many tools that allow you to define data sets for unit tests and load them in the tests. Check out Python test fixtures, for example.
I realise that this isn't the answer you're looking for, but I think you'll have more joy in the long-run if you follow these principles.