Although I've been using Python for a number of years now, I've worked predominantly on personal projects and never needed to do unit testing before, so apologies for any obvious questions or wrong assumptions I might make.
My goal is to understand how I can write tests and possibly combine everything with a GitHub workflow to create some automation.
I've seen that failures and errors (which are conceptually different) raised locally are not treated differently once the tests run online. But before I go further, I have some doubts I want to clarify.
From reading online, my initial understanding is that a test should always SUCCEED, even if it contains errors or failures.
But if it succeeds, how can I then record a failure or an error? So I'm tempted to say I'm understanding this the wrong way?
I appreciate that in an Agile environment, some would say it's a controlled process, and errors can be intercepted while looking into the code. But I'm not sure this is the best approach. And this leads me to the second question.
Say I have a function accepting dates, and I know that it cannot accept anything else than that.
Would it make sense to do a test to say pass in strings (and get
a failure)?
Or should I test only for the expected circumstances?
Say case 1) is best practice; what should I do when running these tests? Should I let the test fail and get a long list of errors? Or should I decorate the tests with @pytest.mark.xfail() (a sort of soft fail, where I can use a try ... except)?
And a last question (for now): would an xfail decorator let the workflow automation consider the test as "passed"? Probably not, but at this stage there's so much confusion in my head that any clarity from experienced users would help.
Thanks for your patience in reading.
The question is a bit fuzzy, but I will have a shot.
The notion that tests should always succeed even if they have errors is probably a misunderstanding. Failing tests are errors and should be shown as such (with the exception of tests known to fail, but that is a special case, see below). From the comment, I guess what was actually meant is that the other tests shall continue to run even if one test fails - that certainly makes sense, especially in CI, where you want to get the whole picture.
If you have a function accepting dates and nothing else, it shall be tested that it indeed only accepts dates, and raises an exception or similar when an invalid value is given. What I meant in the comment is that if your software already ensures that only a date can be passed to that function, and this is also ensured via tests, it would not be necessary to test this again - but in general, yes, this should be tested.
So, to give a few examples: if your function is specified to raise an exception on invalid input, this has to be tested using something like pytest.raises - the test fails if no exception is raised. If your function shall handle invalid dates by logging an error, the test shall verify that the error is logged. If invalid input should just be ignored, the test shall ensure that no exception is raised and the state does not change.
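For illustration, here is a minimal sketch of the first case; the parse_date function and its contract are made up for the example:

import datetime

import pytest

def parse_date(value):
    # Hypothetical function under test: it accepts only ISO date strings.
    if not isinstance(value, str):
        raise TypeError("expected an ISO date string")
    return datetime.date.fromisoformat(value)

def test_parse_date_rejects_non_string():
    # The test passes only if the expected exception is actually raised.
    with pytest.raises(TypeError):
        parse_date(12345)

If parse_date silently accepted the integer, pytest.raises would report a failure.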
For xfail, I just refer you to the pytest documentation, where this is described nicely:
An xfail means that you expect a test to fail for some reason. A common example is a test for a feature not yet implemented, or a bug not yet fixed. When a test passes despite being expected to fail (marked with pytest.mark.xfail), it’s an xpass and will be reported in the test summary.
So a passing xfail test will indeed be shown as passed. You can easily test this yourself:
import pytest

@pytest.mark.xfail
def test_fails():
    assert False

@pytest.mark.xfail
def test_succeeds():
    assert True
gives something like:
============================= test session starts =============================
collecting ... collected 2 items
test_xfail.py::test_fails
test_xfail.py::test_succeeds
======================== 1 xfailed, 1 xpassed in 0.35s ========================
and the test is considered passed (e.g. has the exit code 0).
The unittest module is extremely good at detecting problems in code.
I understand the idea of isolating and testing parts of code with assertions:
self.assertEqual(web_page_view.func, web_page_url)
But besides these assertions, you might also have some logic before them, in the same test method, that could have problems.
I am wondering whether manual exception handling is ever something to take into account inside the methods of a TestCase subclass.
Because if I wrap a block in a try/except, then if something fails, the test returns OK and does not fail:
def test_simulate_requests(self):
    """
    Simulate requests to a url
    """
    try:
        response = self.client.get('/adress/of/page/')
        self.assertEqual(response.status_code, 200)
    except Exception as e:
        print("error: ", e)
Should exception handling be always avoided in such tests?
First part of answer:
As you correctly say, there needs to be some logic before the actual test. The code belonging to a unit test can be clustered into four parts (I use Meszaros' terminology in the following): setup, exercise, verify, teardown. Often the code of a test case is structured such that the code for the four parts is cleanly separated and comes in that precise order - this is called the four phase test pattern.
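As a rough sketch of the four phases in unittest terms (the example content is invented for illustration):

import unittest

class TestListAppend(unittest.TestCase):
    def test_append_adds_item(self):
        # setup: establish a well-defined context
        items = ["a"]
        # exercise: execute the functionality under test
        items.append("b")
        # verify: check the outcome
        self.assertEqual(items, ["a", "b"])
        # teardown: nothing to clean up in this sketch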
The exercise phase is the heart of the test, where the functionality is executed that shall be checked in the test. The setup ensures that this happens in a well defined context. So, what you have described is in this terminology the situation that during setup something fails. Which means, that the preconditions are not met which are required for a meaningful execution of the functionality that is to be tested.
This is a common situation and it means that you in fact need to be able to distinguish three outcomes of a test: A test can pass successfully, it can fail, or it can just be meaningless.
Fortunately, there is an answer for this in Python: you can skip tests, and if a test is skipped this is recorded, but neither as a failure nor as a success. Skipping tests would probably be a better way to handle the situation that you have shown in your example. Here is a small code piece that demonstrates one way of skipping tests:
import unittest

class TestException(unittest.TestCase):
    def test_skipTest_shallSkip(self):
        self.skipTest("Skipped because skipping shall be demonstrated.")
Second part of answer:
Your test seems to have some non-deterministic elements. The self.client.get can throw exceptions (but only sometimes - sometimes it doesn't). This means you do not have the execution context of the test under control. In unit testing, this is a situation you should try to avoid. Your tests should behave deterministically.
One typical way to achieve this is to isolate your code from the components that are responsible for the nondeterminism and during testing replace these components by mocks. The behaviour of the mocks is under full control of the test code. Thus, if your code uses some component for network accesses, you would mock that component. Then, in some test cases you can instruct the mock to simulate a successful network communication to see how your component handles this, and in other tests you instruct the mock to simulate a network failure to see how your component copes with this situation.
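As a rough sketch of that approach with unittest.mock (the client and its interface are stand-ins for whatever your code really uses):

import unittest
from unittest import mock

class TestPageFetch(unittest.TestCase):
    def test_successful_request(self):
        # The mock replaces the real network client, so the outcome is deterministic.
        client = mock.Mock()
        client.get.return_value = mock.Mock(status_code=200)
        response = client.get('/some/page/')
        self.assertEqual(response.status_code, 200)

    def test_network_failure(self):
        # Instruct the mock to simulate a network failure.
        client = mock.Mock()
        client.get.side_effect = ConnectionError("simulated failure")
        with self.assertRaises(ConnectionError):
            client.get('/some/page/')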
There are two "bad" states of a test: Failure (when one of the assertions fails) and Error (when the test code itself raises an exception - your case).
First of all, it goes without saying that it's better to build your test in such a way that it reaches its assertions.
If you need to assert that some tested code raises an exception, you should use with self.assertRaises(ExpectedError) (a sketch follows below)
If some code inside the test raises an exception, it's better to know it from an 'Error' result than to see 'OK, all tests have passed'
If your test logic really assumes that something can fail in the test itself and that this is normal behaviour, the test is probably wrong. Maybe you should use mocks (https://docs.python.org/3/library/unittest.mock.html) to imitate an API call or something else.
In your case, even if the test fails, you catch the failure with a broad except and say "OK, continue". Either way, the implementation is wrong.
Finally: no, there shouldn't be an except in your test cases.
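To make the assertRaises point concrete, here is a minimal sketch; fetch_page and its contract are invented for the example:

import unittest

def fetch_page(address):
    # Hypothetical function under test: it rejects malformed addresses.
    if not address.startswith('/'):
        raise ValueError("address must start with '/'")
    return '<html></html>'

class TestFetchPage(unittest.TestCase):
    def test_rejects_malformed_address(self):
        # The test fails unless ValueError is actually raised.
        with self.assertRaises(ValueError):
            fetch_page('not-a-valid-address')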
P.S. It's better to name your test functions test_<what_you_want_to_test>; in this case, probably test_successful_request would be OK.
I have been trying to get the hang of TDD and unit testing (in python, using nose) and there are a few basic concepts which I'm stuck on. I've read up a lot on the subject but nothing seems to address my issues - probably because they're so basic they're assumed to be understood.
The idea of TDD is that unit tests are written before the code they test. Unit tests should test small portions of code (e.g. functions) which, for the purposes of the test, are self-contained and isolated. However, this seems to me to be highly dependent on the implementation. During implementation, or during a later bugfix, it may become necessary to abstract some of the code into a new function. Should I then go through all my tests and mock out that function to keep them isolated? Surely in doing this there is a danger of introducing new bugs into the tests, and the tests will no longer test exactly the same situation?
From my limited experience in writing unit tests, it appears that completely isolating a function sometimes results in a test that is longer and more complicated than the code it is testing. So if the test fails, all it tells you is that there is a bug either in the code or in the test, but it's not obvious which. Not isolating it may mean a much shorter and easier-to-read test, but then it's not a unit test...
Often, once isolated, unit tests seem to be merely repeating the function. E.g. if there is a simple function which adds two numbers, then the test would probably look something like assert add(a, b) == a + b. Since the implementation is simply return a + b, what's the point in the test? A far more useful test would be to see how the function works within the system, but this goes against unit testing because it is no longer isolated.
My conclusion is that unit tests are good in some situations, but not everywhere, and that system tests are generally more useful. The approach this implies is to write system tests first, then, if they fail, isolate portions of the system into unit tests to pinpoint the failure. The problem with this, obviously, is that it's not so easy to test corner cases. It also means that the development is not fully test driven, as unit tests are only written as needed.
So my basic questions are:
Should unit tests be used everywhere, however small and simple the function?
How does one deal with changing implementations? I.e. should the implementation of the tests change continuously too, and doesn't this reduce their usefulness?
What should be done when the test gets more complicated than the code it's testing?
Is it always best to start with unit tests, or is it better to start with system tests, which at the start of development are much easier to write?
Regarding your conclusion first: unit tests and system tests (integration tests) both have their use, and in my opinion they are equally useful. During development I find it easier to start with unit tests, but for testing legacy code I find your approach, where you start with the integration tests, easier. I don't think there's a right or wrong way of doing this; the goal is to make a safety net that allows you to write solid and well-tested code, not the method itself.
I find it useful to think about each function as an API in this context. The unit test is testing the API, not the implementation. If the implementation changes, the test should remain the same, this is the safety net that allows you to refactor your code with confidence. Even if refactoring means taking part of the implementation out to a new function, I will say it's ok to keep the test as it is without stubbing or mocking the part that was refactored out. You will probably want a new set of tests for the new function however.
Unit tests are not a holy grail! Test code should be fairly simple in my opinion, and there should be little reason for the test code itself to fail. If the test becomes more complex than the function it tests, it probably means you need to refactor the code differently. An example from my own past: I had some code that took some input and produced some output stored as XML. Parsing the XML to verify that the output was correct caused a lot of complexity in my tests. However, realizing that the XML representation was not the point, I was able to refactor the code so that I could test the output without messing with the details of XML.
Some functions are so trivial that a separate test for them adds no value. In your example you're not really testing your code, but that the '+' operator in your language works as expected. This should be tested by the language implementer, not you. However that function won't need to get very much more complex before adding a test for it is worthwhile.
In short, I think your observations are very relevant and point towards a pragmatic approach to testing. Following some rigorous definition too closely will often get in the way, even though the definitions themselves may be necessary for the purpose of having a way to communicate about the ideas they convey. As said, the goal is not the method, but the result; which for testing is to have confidence in your code.
1) Should unit tests be used everywhere, however small and simple the function?
No. If a function has no logic in it (if, while-loops, appends, etc...) there's nothing to test.
This means that an add function implemented like:
def add(a, b):
    return a + b
has nothing to test. But if you really want to build a test for it, then:
assert add(a, b) == a + b # Worst test ever!
is the worst test one could ever write. The main problem is that the tested logic must NOT be reproduced in the testing code, because:
If there's a bug in there it will be reproduced as well.
You're no longer testing the function, but whether a + b works the same way in two different files.
So something like this would make more sense:
assert add(1, 2) == 3
But once again, this is just an example, and this add function shouldn't even be tested.
2) How does one deal with changing implementations?
It depends on what changes. Keep in mind that:
You're testing the API (roughly speaking, that for a given input you get a specific output/effect).
You're not repeating the production code in your testing code (as explained before).
So, unless you're changing the API of your production code, the testing code will not be affected in any way.
3) What should be done when the test gets more complicated than the code it's testing?
Yell at whoever wrote those tests! (And re-write them).
Unit tests are simple and don't have any logic in them.
4a) Is it always best to start with unit tests, or is it better to start with system tests?
If we are talking about TDD, then one shouldn't even have this problem, because even before writing one little tiny function, the good TDD developer would've written unit tests for it.
If you have already working code without tests whatsoever, I'd say that unit tests are easier to write.
4b) Which at the start of development are much easier to write?
Unit tests! You don't even have the skeleton of your code yet, so how could you write system tests?
I'm using Python's built-in unittest module and I want to write a few tests that are not critical.
I mean, if my program passes such tests, that's great! However, if it doesn't pass, it's not really a problem, the program will still work.
For example, my program is designed to work with a custom type "A". If it fails to work with "A", then it's broken. However, for convenience, most of it should also work with another type "B", but that's not mandatory. If it fails to work with "B", then it's not broken (because it still works with "A", which is its main purpose). Failing to work with "B" is not critical, I will just miss a "bonus feature" I could have.
Another (hypothetical) example is when writing an OCR. The algorithm should recognize most images from the tests, but it's okay if some of them fail. (And no, I'm not writing an OCR.)
Is there any way to write non-critical tests in unittest (or other testing framework)?
As a practical matter, I'd probably use print statements to indicate failure in that case. A more correct solution is to use warnings:
http://docs.python.org/library/warnings.html
You could, however, use the logging facility to generate a more detailed record of your test results (i.e. set your "B" class failures to write warnings to the logs).
http://docs.python.org/library/logging.html
Edit:
The way we handle this in Django is that we have some tests we expect to fail, and we have others that we skip based on the environment. Since we can generally predict whether a test SHOULD fail or pass (i.e. if we can't import a certain module, the system doesn't have it, and so the test won't work), we can skip failing tests intelligently. This means that we still run every test that will pass, and have no tests that "might" pass. Unit tests are most useful when they do things predictably, and being able to detect whether or not a test SHOULD pass before we run it makes this possible.
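A rough sketch of that pattern with unittest's skip decorators (the optional lxml dependency is just an example):

import unittest

try:
    import lxml  # hypothetical optional dependency
    HAS_LXML = True
except ImportError:
    HAS_LXML = False

class TestOptionalFeature(unittest.TestCase):
    @unittest.skipUnless(HAS_LXML, "lxml is not installed")
    def test_feature_that_needs_lxml(self):
        # Runs only in environments where the dependency is available.
        self.assertTrue(HAS_LXML)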
Asserts in unit tests are binary: they will work or they will fail; there's no middle ground.
Given that, to create those "non-critical" tests you should not use assertions when you don't want the tests to fail. You should do this carefully so you don't compromise the "usefulness" of the test.
My advice to your OCR example is that you use something to record the success rate in your tests code and then create one assertion like: "assert success_rate > 8.5", and that should give the effect you desire.
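A minimal sketch of that idea (the recognizer, the sample data, and the 0.85 threshold are made-up placeholders):

import unittest

def recognize(image):
    # Stand-in for the real OCR routine.
    return image["expected_text"]

TEST_IMAGES = [
    {"expected_text": "hello"},
    {"expected_text": "world"},
    {"expected_text": "again"},
]

class TestOcrSuccessRate(unittest.TestCase):
    def test_success_rate_above_threshold(self):
        successes = sum(
            1 for image in TEST_IMAGES
            if recognize(image) == image["expected_text"]
        )
        rate = successes / len(TEST_IMAGES)
        # Many non-critical checks collapse into one binary assertion.
        self.assertGreater(rate, 0.85)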
Thank you for the great answers. No single answer was really complete, so I'm writing here a combination of all the answers that helped me. If you like this answer, please vote up the people who were responsible for it.
Conclusions
Unit tests (or at least unit tests in the unittest module) are binary. As Guilherme Chapiewski says: they will work or they will fail; there's no middle ground.
Thus, my conclusion is that unit tests are not exactly the right tool for this job. It seems that unit tests are more concerned with "keep everything working, no failure is expected", and thus I can't (or it's not easy to) have non-binary tests.
So, unit tests don't seem to be the right tool if I'm trying to improve an algorithm or an implementation, because unit tests can't tell me how much better one version is compared to the other (supposing both of them are correctly implemented; then both will pass all unit tests).
My final solution
My final solution is based on ryber's idea and the code shown in wcoenen's answer. I'm basically extending the default TextTestRunner and making it less verbose. Then, my main code calls two test suites: the critical one using the standard TextTestRunner, and the non-critical one with my own less-verbose version.
import sys
import unittest

class _TerseTextTestResult(unittest._TextTestResult):
    def printErrorList(self, flavour, errors):
        for test, err in errors:
            #self.stream.writeln(self.separator1)
            self.stream.writeln("%s: %s" % (flavour, self.getDescription(test)))
            #self.stream.writeln(self.separator2)
            #self.stream.writeln("%s" % err)

class TerseTextTestRunner(unittest.TextTestRunner):
    def _makeResult(self):
        return _TerseTextTestResult(self.stream, self.descriptions, self.verbosity)

if __name__ == '__main__':
    sys.stderr.write("Running non-critical tests:\n")
    non_critical_suite = unittest.TestLoader().loadTestsFromTestCase(TestSomethingNonCritical)
    TerseTextTestRunner(verbosity=1).run(non_critical_suite)

    sys.stderr.write("\n")
    sys.stderr.write("Running CRITICAL tests:\n")
    suite = unittest.TestLoader().loadTestsFromTestCase(TestEverythingImportant)
    unittest.TextTestRunner(verbosity=1).run(suite)
Possible improvements
It would still be useful to know whether there is any testing framework with non-binary tests, as Kathy Van Stone suggested. I probably won't use it in this simple personal project, but it might be useful in future projects.
I'm not totally sure how unittest works, but most unit testing frameworks have something akin to categories. I suppose you could just categorize such tests, mark them to be ignored, and then run them only when you're interested in them. But I know from experience that ignored tests very quickly become just that: ignored tests that nobody ever runs, making them a waste of the time and energy it took to write them.
My advice is for your app to do, or do not, there is no try.
From unittest documentation which you link:
Instead of unittest.main(), there are other ways to run the tests with a finer level of control, less terse output, and no requirement to be run from the command line. For example, the last two lines may be replaced with:
suite = unittest.TestLoader().loadTestsFromTestCase(TestSequenceFunctions)
unittest.TextTestRunner(verbosity=2).run(suite)
In your case, you can create separate TestSuite instances for the critical and non-critical tests. You could control which suite is passed to the test runner with a command line argument. Test suites can also contain other test suites, so you can create big hierarchies if you want.
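A rough sketch of that idea (the class names and the --all flag are hypothetical):

import sys
import unittest

class TestCritical(unittest.TestCase):
    def test_core_feature(self):
        self.assertTrue(True)

class TestNonCritical(unittest.TestCase):
    def test_bonus_feature(self):
        self.assertTrue(True)

if __name__ == '__main__':
    loader = unittest.TestLoader()
    suite = unittest.TestSuite()
    suite.addTests(loader.loadTestsFromTestCase(TestCritical))
    # Include the non-critical tests only when asked for on the command line.
    if '--all' in sys.argv:
        suite.addTests(loader.loadTestsFromTestCase(TestNonCritical))
    unittest.TextTestRunner(verbosity=2).run(suite)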
Python 2.7 (and 3.1) added support for skipping some test methods or test cases, as well as marking some tests as expected failure.
http://docs.python.org/library/unittest.html#skipping-tests-and-expected-failures
Tests marked as expected failures won't be counted as failures in a TestResult.
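A minimal sketch of the expected-failure decorator:

import unittest

class TestKnownIssues(unittest.TestCase):
    @unittest.expectedFailure
    def test_known_bug(self):
        # Reported as an "expected failure", not counted as a failure.
        self.assertEqual(1, 2)

if __name__ == '__main__':
    unittest.main()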
There are some test systems that allow warnings rather than failures, but unittest is not one of them (I don't know which ones do, offhand), unless you want to extend it (which is possible).
You can make the tests so that they log warnings rather than fail.
Another way to handle this is to separate out the tests and only run them to get the pass/fail reports and not have any build dependencies (this depends on your build setup).
Take a look at Nose: http://somethingaboutorange.com/mrl/projects/nose/0.11.1/
There are plenty of command line options for selecting tests to run, and you can keep your existing unittest tests.
Another possibility is to create a "B" branch (you ARE using some sort of version control, right?) and have your unit tests for "B" in there. That way, you keep your release version's unit tests clean (Look, all dots!), but still have tests for B. If you're using a modern version control system like git or mercurial (I'm partial to mercurial), branching/cloning and merging are trivial operations, so that's what I'd recommend.
However, I think you're using tests for something they're not meant to do. The real question is, "How important to you is it that 'B' works?" Your test suite should only contain tests whose outcome you care about - tests that, if they fail, mean the code is broken. That's why I suggested only testing "B" in the "B" branch, since that would be the branch where you are developing the "B" feature.
You could test using a logger or print commands, if you like. But if you don't care enough about it being broken to have it flagged in your unit tests, I'd seriously question whether you care enough to test it at all. Besides, that adds needless complexity (extra variables to set the debug level, multiple testing vectors that are completely independent of each other yet operate within the same space, causing potential collisions and errors, etc.). Unless you're developing a "Hello, World!" app, I suspect your problem set is complicated enough without adding extra, unnecessary complications.
You could write your tests so that they count a success rate.
With OCR, you could throw 1000 images at the code and require that 95% are recognized successfully.
If your program must work with type A, then a failure there should fail the test. If it's not required to work with B, what is the value of such a test?
I'm trying to improve the number and quality of tests in my Python projects. One of the difficulties I've encountered as the number of tests increases is knowing what each test does and how it's supposed to help spot problems. I know that part of keeping track of tests is better unit test names (which has been addressed elsewhere), but I'm also interested in understanding how documentation and unit testing go together.
How can unit tests be documented to improve their utility when those tests fail in the future? Specifically, what makes a good unit test docstring?
I'd appreciate both descriptive answers and examples of unit tests with excellent documentation. Though I'm working exclusively with Python, I'm open to practices from other languages.
I document most of my unit tests with the method name exclusively:
testInitializeSetsUpChessBoardCorrectly()
testSuccessfulPromotionAddsCorrectPiece()
For almost 100% of my test cases, this clearly explains what the unit test is validating and that's all I use. However, in a few of the more complicated test cases, I'll add a few comments throughout the method to explain what several lines are doing.
I've seen a tool before (I believe it was for Ruby) that would generate documentation files by parsing the names of all the test cases in a project, but I don't recall the name. If you had test cases for a chess Queen class:
testCanMoveStraightUpWhenNotBlocked()
testCanMoveStraightLeftWhenNotBlocked()
the tool would generate an HTML doc with contents something like this:
Queen requirements:
- can move straight up when not blocked.
- can move straight left when not blocked.
Perhaps the issue isn't how best to write test docstrings, but how to write the tests themselves? Refactoring tests in such a way that they're self-documenting can go a long way, and your docstring won't go stale when the code changes.
There are a few things you can do to make the tests clearer:
clear & descriptive test method names (already mentioned)
test body should be clear and concise (self documenting)
abstract away complicated setup/teardown etc. in methods
more?
For example, if you have a test like this:
def test_widget_run_returns_0():
    widget = Widget(param1, param2, "another param")
    widget.set_option(True)
    widget.set_temp_dir("/tmp/widget_tmp")
    widget.destination_ip = "10.10.10.99"
    return_value = widget.run()
    assert return_value == 0
    assert widget.response == "My expected response"
    assert widget.errors == None
You might replace the setup statements with a method call:
def test_widget_run_returns_0():
    widget = create_basic_widget()
    return_value = widget.run()
    assert return_value == 0
    assert_basic_widget(widget)

def create_basic_widget():
    widget = Widget(param1, param2, "another param")
    widget.set_option(True)
    widget.set_temp_dir("/tmp/widget_tmp")
    widget.destination_ip = "10.10.10.99"
    return widget

def assert_basic_widget(widget):
    assert widget.response == "My expected response"
    assert widget.errors == None
Note that your test method is now composed of a series of method calls with intent-revealing names, a sort of DSL specific to your tests. Does a test like that still need documentation?
Another thing to note is that your test method is mainly at one level of abstraction. Someone reading the test method will see the algorithm is:
creating a widget
calling run on the widget
asserting the code did what we expect
Their understanding of the test method is not muddied by the details of setting up the widget, which is one level of abstraction lower than the test method.
The first version of the test method follows the Inline Setup pattern. The second version follows Creation Method and Delegated Setup patterns.
Generally I'm against comments, except where they explain the "why" of the code. Reading Uncle Bob Martin's Clean Code convinced me of this. There is a chapter on comments, and there is a chapter on testing. I recommend it.
For more on automated testing best practices, do check out xUnit Patterns.
The name of the test method should describe exactly what you are testing. The documentation should say what makes the test fail.
You should use a combination of descriptive method names and comments in the doc string. A good way to do it is including a basic procedure and verification steps in the doc string. Then if you run these tests from some kind of testing framework that automates running the tests and collecting results, you can have the framework log the contents of the doc string for each test method along with its stdout+stderr.
Here's a basic example:
class SimpleTestCase(unittest.TestCase):
    def testSomething(self):
        """ Procedure:
        1. Print something
        2. Print something else
        ---------
        Verification:
        3. Verify no errors occurred
        """
        print("something")
        print("something else")
Having the procedure with the test makes it much easier to figure out what the test is doing. And if you include the docstring with the test output it makes figuring out what went wrong when going through the results later much easier. The previous place I worked at did something like this and it worked out very well when failures occurred. We ran the unit tests on every checkin automatically, using CruiseControl.
When the test fails (which should happen before it ever passes), you should see the error message and be able to tell what's up. That only happens if you plan it that way.
It's entirely a matter of the naming of the test class, the test method, and the assert message. When a test fails and you can't tell what is up from these three clues, then rename some things or break up some test classes.
It doesn't happen if the name of the fixture is ClassXTests and the name of the test is TestMethodX and the error message is "expected true, returned false". That's a sign of sloppy test writing.
Most of the time you shouldn't have to read the test or any comments to know what has happened.