Is it OK to assert in unittest tearDown method? - python

I have a TestCase with multiple tests and need to assert a few conditions (the same for every test) at the end of each test. Is it OK to add these assertions to the tearDown() method, or is it a bad habit since they're not "cleaning" anything?
What would be the right way of doing this?

Asserting in your tearDown means you need to be careful that all the cleanup happens before the actual assertion; otherwise, if the assert fails and raises, the remaining cleanup code will never run.
If the assert is just one line, it may be fine to repeat it in every test method. If it is more than that, a dedicated helper method is a possibility; that method should not be a test of its own, i.e. it must not be recognized as a test by your test framework. A method decorator or class decorator may also be an alternative.
Overall the idea is that tearDown shouldn't do any testing and that explicit is better than implicit.
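As an illustration, here is a minimal sketch of the helper-method approach; the Widget class, the check_invariants name, and the invariant itself are invented for the example, not taken from the question:

import unittest

class Widget:
    """Hypothetical class under test."""
    def __init__(self):
        self.size = (1, 1)
        self.error_count = 0

    def resize(self, width, height):
        self.size = (width, height)

class WidgetTest(unittest.TestCase):
    def setUp(self):
        self.widget = Widget()

    def check_invariants(self):
        # Shared assertions: not collected as a test because the name
        # does not start with "test".
        self.assertEqual(self.widget.error_count, 0)

    def test_resize(self):
        self.widget.resize(10, 10)
        self.assertEqual(self.widget.size, (10, 10))
        self.check_invariants()  # called explicitly at the end of each test

if __name__ == "__main__":
    unittest.main()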

Hmm, I have never seen this before. Personally I wouldn't do it, because it doesn't belong there. I would do it via a decorator that runs the asserts for you at the end, and then decorate only the test functions that should have these asserts.
For an excellent introduction to Python decorators, see the answers to this question.
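A minimal sketch of that decorator approach (the decorator name and the shared condition are illustrative, not from the question):

import functools
import unittest

def with_common_asserts(test_method):
    """Run the test, then the shared assertions, so any decorated test
    fails if the common conditions do not hold."""
    @functools.wraps(test_method)
    def wrapper(self, *args, **kwargs):
        result = test_method(self, *args, **kwargs)
        self.assertEqual(self.errors, [])  # example of a shared condition
        return result
    return wrapper

class ExampleTest(unittest.TestCase):
    def setUp(self):
        self.errors = []

    @with_common_asserts
    def test_something(self):
        self.assertTrue(True)

if __name__ == "__main__":
    unittest.main()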

Related

Using Fixtures vs passing method as argument

I'm just learning Python and pytest and came across fixtures. Pardon the basic question, but I'm wondering what the advantage of using fixtures in Python is, given that you can already pass a method as an argument, for example:
def method1():
    return 'hello world'

def method2(methodToRun):
    result = methodToRun()
    return result

method2(method1)
What would be the advantage of passing a @pytest.fixture object as an argument instead?
One difference is that fixtures pass in the result of calling the function, not the function itself. That still doesn't answer why you'd want pytest.fixture instead of just calling the function manually, so I'll list a couple of things.
One reason is the global availability. After you write a fixture in conftest.py, you can use it in your whole test suite just by referencing its name and avoid duplicating it, which is nice.
If your fixture returns a mutable object, pytest also handles making a fresh call for each test, so tests that use the same fixture cannot affect each other through shared state. If pytest didn't do that by default, you'd have to do it by hand.
A big one is that the plugin system of pytest uses fixtures to make its functionality available. So if you are a web dev and want to have a mock-server for your tests, you just install pytest-localserver and now adding httpserver, httpsserver, and smtpserver arguments to your test functions will inject the fixtures from the library you just installed. This is incredibly convenient and intuitive, in particular when compared to injection mechanisms in other languages.
The bottom line is that it is useful to have a single way to include dependencies in your test suites, and pytest chooses a fixture mechanism that binds itself to function signatures by name. So while it really is no different from manually inserting the argument, the quality-of-life features pytest adds on top make it worth it.
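As a hedged sketch of the "global availability" point (the greeting fixture is invented; the file split is indicated with comments):

# conftest.py -- fixtures defined here are visible to the whole test suite
import pytest

@pytest.fixture
def greeting():
    # pytest calls this function for you and injects the *result*
    return 'hello world'

# test_greeting.py -- no import of the fixture is needed
def test_greeting_content(greeting):
    # pytest matches the argument name against known fixtures
    assert greeting == 'hello world'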
Fixtures are a way of centralizing your test variables and avoiding redundancy. If you are comfortable with the concept of dependency injection, the advantages are basically the same: pytest automatically binds your parameters to the available fixtures, so you build tests more quickly by simply asking for what you need.
Also, fixtures let you easily parametrize all of your tests at once, which avoids some cumbersome code if you were to do it by hand (see the sketch below, and more info in the documentation: https://docs.pytest.org/en/latest/fixture.html#parametrizing-fixtures).
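A short sketch of what fixture parametrization looks like (the backend names are invented for illustration):

import pytest

@pytest.fixture(params=['sqlite', 'postgres'])
def backend_name(request):
    # Every test that uses this fixture runs once per parameter.
    return request.param

def test_backend_is_supported(backend_name):
    assert backend_name in {'sqlite', 'postgres'}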
Some references:
Official documentation: https://docs.pytest.org/en/latest/fixture.html
Dependency injection: https://en.wikipedia.org/wiki/Dependency_injection

Is there any good reason to catch exceptions in unittest transactions?

The unittest module is extremely good at detecting problems in code.
I understand the idea of isolating and testing parts of code with assertions:
self.assertEqual(web_page_view.func, web_page_url)
But besides these assertions you also might have some logic before it, in the same test method, that could have problems.
I am wondering whether manual exception handling is ever something to take into account inside the methods of a TestCase subclass.
Because if I wrap a block in a try/except and something fails, the test returns OK and does not fail:
def test_simulate_requests(self):
    """
    Simulate requests to a url
    """
    try:
        response = self.client.get('/adress/of/page/')
        self.assertEqual(response.status_code, 200)
    except Exception as e:
        print("error: ", e)
Should exception handling be always avoided in such tests?
First part of answer:
As you correctly say, there needs to be some logic before the actual test. The code belonging to a unit-test can be clustered into four parts (I use Meszaros' terminology in the following): setup, exercise, verify, teardown. Often the code of a test case is structured such that the code for the four parts are cleanly separated and come in that precise order - this is called the four phase test pattern.
The exercise phase is the heart of the test, where the functionality to be checked is executed. The setup ensures that this happens in a well-defined context. So, in this terminology, what you have described is the situation where something fails during setup, which means that the preconditions required for a meaningful execution of the functionality under test are not met.
This is a common situation and it means that you in fact need to be able to distinguish three outcomes of a test: A test can pass successfully, it can fail, or it can just be meaningless.
Fortunately, there is an answer for this in Python: you can skip tests, and a skipped test is recorded as neither a failure nor a success. Skipping would probably be a better way to handle the situation you have shown in your example. Here is a small piece of code that demonstrates one way of skipping a test:
import unittest

class TestException(unittest.TestCase):
    def test_skipTest_shallSkip(self):
        self.skipTest("Skipped because skipping shall be demonstrated.")
Second part of answer:
Your test seems to have some non-deterministic elements: self.client.get can throw exceptions, but only sometimes. This means you do not have the context of the test execution under control. In unit testing this is a situation you should try to avoid; your tests should behave deterministically.
One typical way to achieve this is to isolate your code from the components that are responsible for the nondeterminism and during testing replace these components by mocks. The behaviour of the mocks is under full control of the test code. Thus, if your code uses some component for network accesses, you would mock that component. Then, in some test cases you can instruct the mock to simulate a successful network communication to see how your component handles this, and in other tests you instruct the mock to simulate a network failure to see how your component copes with this situation.
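As a hedged sketch of that idea, applied to something like the example above (fetch_status and the client interface are assumptions made for the illustration, not code from the question):

from unittest import TestCase
from unittest.mock import Mock

def fetch_status(client, url):
    """Hypothetical code under test: a thin wrapper around an HTTP client."""
    return client.get(url).status_code

class FetchStatusTest(TestCase):
    def test_fetch_status_success(self):
        # The mock puts the "network" fully under the test's control.
        client = Mock()
        client.get.return_value = Mock(status_code=200)
        self.assertEqual(fetch_status(client, '/adress/of/page/'), 200)
        client.get.assert_called_once_with('/adress/of/page/')

    def test_fetch_status_network_failure(self):
        # Deterministically simulate the failure case as well.
        client = Mock()
        client.get.side_effect = ConnectionError("simulated network failure")
        with self.assertRaises(ConnectionError):
            fetch_status(client, '/adress/of/page/')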
There are two "bad" states of a test: Failure (when one of the assertions fails) and Error (when the test code itself raises an unexpected exception, which is your case).
First of all, it goes without saying that it's better to build your test in such a way that it reaches its assertions.
If you need to assert some tested code raises an exception, you should use with self.assertRaises(ExpectedError)
If some code inside the test raises an exception - it's better to know it from 'Error' result than seeing 'OK all tests have passed'
If your test logic really assumes that something can fail in the test itself and it is normal behaviour - probably the test is wrong. May be you should use mocks (https://docs.python.org/3/library/unittest.mock.html) to imitate an api call or something else.
In your case, even if the test fails, you catch it with bare except and say "Ok, continue". Anyway the implementation is wrong.
Finally: no, there shouldn't be except in your test cases
P.S. it's better to call your test functions with test_what_you_want_to_test_name, in this case probably test_successful_request would be ok.
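Putting these points together, a hedged sketch of the same test without the try/except (assuming a Django-style TestCase, which is where self.client comes from; the expected exception in the second test is purely illustrative):

from django.test import TestCase  # assumed, as suggested by self.client in the question

class SimulateRequestsTest(TestCase):
    def test_successful_request(self):
        # No try/except: if client.get() raises, the run reports an Error
        # instead of silently printing and passing.
        response = self.client.get('/adress/of/page/')
        self.assertEqual(response.status_code, 200)

    def test_expected_exception(self):
        # When an exception *is* the expected behaviour, assert it explicitly.
        with self.assertRaises(ZeroDivisionError):
            1 / 0  # stand-in for the code expected to raise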

Fail test if the assert statement is missing [duplicate]

Today I had a failing test that happily succeeded, because I forgot a rather important line at the end:
assert actual == expected
I would like to have the machine catch this mistake in the future. Is there a way to make pytest detect if a test function does not assert anything, and consider this a test failure?
Of course, this needs to be a "global" configuration setting; annotating each test function with @fail_if_nothing_is_asserted would defeat the purpose.
This is one of the reasons why it really helps to write a failing test before writing the code to make the test pass. It's that one little extra sanity check for your code.
Also, the first time your test passes without actually writing the code to make it pass is a nice double-take moment too.

Non-test methods in a Python TestCase

OK, since Google search hasn't been helping me for a while now (even when using the correct keywords):
I have a class extending TestCase in which I want some auxiliary methods that are not executed as part of the tests themselves; they'll be used to generate mocked objects and other auxiliary things needed by almost every test.
I know I could use the @skip decorator so unittest doesn't run a particular test method, but that feels like an ugly hack for my purpose. Any tips?
Thanks in advance, community :D
I believe that you don't have to do anything. Your helper methods should just not start with test_.
The only methods that unittest will execute [1] are setUp, anything that starts with test, and tearDown [2], in that order. You can make helper methods and call them anything except for those three things, and they will not be executed by unittest.
You can think of setUp as __init__: if you're generating mock objects that are used by multiple tests, create them in setUp.
def setUp(self):
    self.mock_obj = MockObj()
[1]: This is not entirely true, but these are the main 3 groups of methods that you can concentrate on when writing tests.
[2]: For legacy reasons, unittest will execute both test_foo and testFoo, but test_foo is the preferred style these days. setUp and tearDown should appear as such.
The test runner will only directly execute methods beginning with test, so just make sure your helper methods' names don't begin with test.
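A minimal sketch combining both answers (the class, the mock object, and the helper name are placeholders):

import unittest
from unittest.mock import Mock

class ReportTest(unittest.TestCase):
    def setUp(self):
        # Shared objects that every test needs.
        self.mock_obj = Mock(name="MockObj")

    def _make_report(self, rows):
        # Helper method: never collected, because its name does not start with "test".
        self.mock_obj.rows = rows
        return {"rows": rows, "count": len(rows)}

    def test_report_count(self):
        report = self._make_report([1, 2, 3])
        self.assertEqual(report["count"], 3)

if __name__ == "__main__":
    unittest.main()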

Is having a unit test that is mostly mock verification a smell?

I have a class that connects three other services, specifically designed to make implementing the other services more modular, but the bulk of my unit test logic ends up being mock verification. Is there a way to redesign this to avoid that?
Python example:
class Input(object): pass
class Output(object): pass
class Finder(object): pass

class Correlator(object):
    def __init__(self, input, output, finder):
        self.input = input
        self.output = output
        self.finder = finder

    def run(self):
        self.finder.find(self.input.GetRows())
        self.output.print(self.finder)
I then have to mock input, output and finder. Even if I do make another abstraction and return something from Correlator.run(), it will still have to be tested as a mock.
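For reference, the kind of mock-verification test being described might look like this sketch; GetRows, find, and print come from the example above, while everything else (the test name, the row values) is assumed:

from unittest import TestCase
from unittest.mock import Mock

class CorrelatorRunTest(TestCase):
    def test_run_wires_the_three_services_together(self):
        # Correlator as defined in the example above.
        input_, output, finder = Mock(), Mock(), Mock()
        input_.GetRows.return_value = ['row1', 'row2']

        Correlator(input_, output, finder).run()

        # The whole test body is interaction verification:
        finder.find.assert_called_once_with(['row1', 'row2'])
        output.print.assert_called_once_with(finder)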
Just ask yourself: what exactly do you need to check in this particular test case? If that check does not rely on the other classes being anything more than dummies, then you are OK.
However, a lot of mocks is usually a smell, in the sense that you are probably trying to test integration without actually doing integration. So if you assume that because the class passes its tests with mocks it will also work fine with the real classes, then yes, you have to write some more tests.
Personally, I don't write many unit tests at all. I'm a web developer and I prefer functional tests that exercise the whole application via HTTP requests, the way users would. Your case may be different.
There's no reason to use only unit tests; maybe integration tests would be more useful here. Initialize all the objects properly, use the main class a bit, and assert on the (possibly complex) results. That way you'll test interfaces, output predictability, and other things that matter further up the stack. I've used this before, and found that something which is difficult to integration-test probably has too many attributes/parameters or too complicated or wrongly formatted output.
At a quick glance, it does look like the amount of mocking is becoming too large. If you're using a dynamic language (I'm assuming yes, since your example is in Python), I'd try to construct subclasses of the production classes with the most problematic methods overridden to return mocked data, so you'd get a mix of production and mocked code (see the sketch at the end of this answer). If your code path doesn't allow for instantiating the objects, I'd try monkey-patching in replacement methods that return mock data.
Whether or not this is a code smell also depends on the quality of the mocked data. In my experience, dropping into a debugger and copy-pasting known-correct data, or sniffing it from the network, is the preferred way of ensuring that.
Integration vs unit testing is also an economic question: how painful is it to replace unit tests with integration/functional tests? The larger the scale of your system, the more there is to gain from lightweight mocking, and hence unit tests.
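Here is a hedged sketch of that subclass-with-overrides idea (the Finder classes and the canned data are invented for illustration):

class ProductionFinder:
    def find(self, rows):
        # Imagine an expensive lookup against a real service here.
        raise RuntimeError("talks to the network")

class StubbedFinder(ProductionFinder):
    """Production class with only the problematic method overridden."""
    def find(self, rows):
        # Canned, deterministic result in place of the real lookup.
        return [row for row in rows if row]

def test_stubbed_finder_filters_empty_rows():
    finder = StubbedFinder()
    assert finder.find(['a', '', 'b']) == ['a', 'b']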
