While the unit testing philosophy is that tests should pass when run in any order, what if you're implementing an API where there is no other means of communicating with a server... and you need to test a certain very basic feature (such as delete) before you can do more complicated tasks? Is ordering the tests then reasonable?
If so, how can I do it with python's unittest module?
You already seem to realise that your unit tests should be independent. The only other reason I can see that you want to run the tests in some fixed order is that you want to stop running the suite if an early test fails. To do that, you can use the command-line option
-f, --failfast
Stop the test run on the first error or failure.
By the way, the tests are run in alphabetical order:
the order in which the various test cases will be run is determined by sorting the test function names with respect to the built-in ordering for strings.
(docs)
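For illustration, here is a minimal sketch (with hypothetical test names) of how the alphabetical ordering and failfast behaviour combine: a numbered naming scheme makes the basic test sort first, and failfast stops the run if it fails.

    import unittest

    class ServerApiTests(unittest.TestCase):
        # Test methods run in sorted name order, so test_1_delete
        # runs before test_2_complex_workflow.
        def test_1_delete(self):
            self.assertTrue(True)  # placeholder for the basic "delete" check

        def test_2_complex_workflow(self):
            self.assertTrue(True)  # placeholder for the more complex task

    if __name__ == "__main__":
        # failfast=True mirrors the -f/--failfast command-line option:
        # the run stops as soon as an earlier test fails.
        unittest.main(failfast=True)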
We maintain a module that imports a third party library. This library is only available on a specific python installation on a production server. We need to write unit tests for our module, but also want these tests to run on a separate python install (e.g. in a CI pipeline).
We are considering mocking this library. In theory, we think we can implement a combination of the solutions mentioned here and here. However, we have some reservations about this approach. Since a single mock instance is shared across all unit tests (per sys.modules['third_party_library'] = mock.MagicMock()), its behavior is likewise shared across all the tests. If we set side_effect on the mock in one test, another test could pick up the wrong side_effect if it runs in parallel. Moreover, expectations can break due to circumstances outside the bounds of the individual unit test that verifies this mock.
Is there a way to ensure that a mocked import is unique to each unit test?
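One possible approach (a sketch, assuming the hypothetical module name third_party_library and that the code under test looks the module up at call time) is to patch sys.modules per test with mock.patch.dict, so each test gets its own MagicMock and the patch is undone when the test exits:

    import sys
    import unittest
    from unittest import mock

    class IsolatedImportTests(unittest.TestCase):
        def test_with_isolated_mock(self):
            # A fresh MagicMock replaces the library only for this test;
            # patch.dict restores sys.modules afterwards, so side_effect
            # settings cannot leak into other tests.
            fake_lib = mock.MagicMock()
            fake_lib.do_something.side_effect = RuntimeError("boom")
            with mock.patch.dict(sys.modules, {"third_party_library": fake_lib}):
                import third_party_library  # resolves to the per-test mock
                with self.assertRaises(RuntimeError):
                    third_party_library.do_something()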
In my project, I use pytest to write unit test cases for my program. But later I found that there are many DB operations and ORM calls in my program.
I know unit tests should run fast, but what is the difference between unit testing and automated integration testing apart from speed?
Should I just use the database fixture instead of mocking them?
The main difference between unit tests and integration tests is that integration testing deals with the interactions between two or more "units". A unit test doesn't particularly care what happens with the code surrounding it, just as long as the code within the unit operates as it's designed to.
As for your second question, if you feel the database and fixtures in your unit test suite are taking too long to run, mocking is a great solution.
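As a rough illustration (the function and the names in it are hypothetical), a pytest-style test can replace the ORM session with a MagicMock so no database is touched:

    from unittest import mock

    def active_user_count(session):
        # Hypothetical function under test: counts active users via an ORM session.
        return session.query("User").filter_by(active=True).count()

    def test_active_user_count_with_mocked_session():
        # The session is a MagicMock, so no database or fixture is needed
        # and the test only exercises our own logic, keeping it fast.
        session = mock.MagicMock()
        session.query.return_value.filter_by.return_value.count.return_value = 3
        assert active_user_count(session) == 3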
I have a few functions for performing actions in python, and I am debating whether or not it makes sense to write unit tests for these.
There is one function that resolves a host using the socket module. I don't think it makes sense to write a test for the socket module itself, but seems like it may make sense to write a functional test to see if DNS is working.
I have another function that opens a file and puts the contents into a list.
Does anyone have any guidance or links on performing unit tests/functional tests for items like this? Most of what I have read pertains to application level testing only.
Thank you in advance.
First of all, if you don't have any tests at all, it's better to start with high-level end-to-end functional tests and work down to unit tests, gathering coverage statistics with every new test you write.
When writing a test for a function which uses system or network libraries, you usually want to isolate your test, making it as independent and straightforward as possible by mocking out system and network calls (see the Mock library).
By using the mock library, you can and should test how your application/function responds to situations where there is a socket or system error.
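For example, here is a minimal sketch (resolve_host is a hypothetical function) that patches socket.gethostbyname so the DNS failure path can be tested without touching the network:

    import socket
    import unittest
    from unittest import mock

    def resolve_host(hostname):
        # Hypothetical function under test: returns an IP or None on DNS failure.
        try:
            return socket.gethostbyname(hostname)
        except socket.gaierror:
            return None

    class ResolveHostTests(unittest.TestCase):
        @mock.patch("socket.gethostbyname", side_effect=socket.gaierror)
        def test_dns_failure_returns_none(self, mock_resolve):
            # The real resolver is never called; only our error handling is tested.
            self.assertIsNone(resolve_host("example.invalid"))

        @mock.patch("socket.gethostbyname", return_value="192.0.2.1")
        def test_successful_resolution(self, mock_resolve):
            self.assertEqual(resolve_host("example.com"), "192.0.2.1")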
Also see:
python unittest howto
How to Mock an HTTP request in a unit testing scenario in Python
An Introduction to Mocking in Python
Hope that helps.
I'm currently in the process of writing some unit tests I want to constantly run every few minutes. If any of them ever fail, I want to grab the errors that are raised and do some custom processing on them (sending out alerts, in my case). Is there a standard way of doing this? I've been looking at unittest.TestResult, but haven't found any good example usage. Ideas?
We use a continuous integration server, Jenkins, for such tasks. It has cron-like scheduling and can send an email when a build becomes unstable (a test fails). There is an extension to Python's unittest module to produce a JUnit-style XML report supported by Jenkins.
In the end, I wound up running the tests and returning the TestResult object. I then look at the failures attribute of that object and run post-processing on each test in the suite that failed. This works well enough for me, and lets me custom-design my post-processing.
For any extra metadata I need per test, I subclass unittest.TestResult and add whatever extra I need to the addFailure method.
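A rough sketch of that setup (the alerting hook is only a placeholder, and the discovery path is just an example):

    import unittest

    class AlertingResult(unittest.TestResult):
        # TestResult subclass that hooks addFailure to attach extra handling.
        def addFailure(self, test, err):
            super().addFailure(test, err)
            print(f"ALERT: {test.id()} failed")  # placeholder for a real alert

    def run_and_post_process(suite):
        result = AlertingResult()
        suite.run(result)
        # result.failures holds (test, formatted traceback) pairs,
        # which can be post-processed after the run.
        for test, traceback_text in result.failures:
            print(f"post-processing failure in {test.id()}")
        return result

    if __name__ == "__main__":
        run_and_post_process(unittest.defaultTestLoader.discover("."))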
I wrote an application server (using Python & Twisted) and I want to start writing some tests, but I do not want to use Twisted's Trial due to time constraints and not having time to play with it now. So here is what I have in mind: write a small test client that connects to the app server and makes the necessary requests (the communication protocol is some in-house XML), store the received XML in a static way, and then write some tests on that static data using unittest.
My question is: Is this a correct approach and if yes, what kind of tests are covered with this approach?
Also, using this method has several disadvantages, like not being able to access the database layer in order to build/rebuild the schema, and the question of when the test client should connect to the server: per each unit test, or before running the test suite?
You should use Trial. It really isn't very hard. Trial's documentation could stand to be improved, but if you know how to use the standard library unit test, the only difference is that instead of writing
import unittest
you should write
from twisted.trial import unittest
... and then you can return Deferreds from your test_ methods. Pretty much everything else is the same.
The one other difference is that instead of building a giant test object at the bottom of your module and then running
python your/test_module.py
you can simply define your test cases and then run
trial your.test_module
If you don't care about reactor integration at all, in fact, you can just run trial on a set of existing Python unit tests. Trial supports the standard library 'unittest' module.
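As a small sketch of the difference (the test itself is contrived): a Trial test method can return a Deferred, and Trial waits for it to fire before deciding whether the test passed.

    from twisted.internet import defer
    from twisted.trial import unittest

    class DeferredTests(unittest.TestCase):
        def test_returns_deferred(self):
            # Trial waits for the returned Deferred; the test fails if it errbacks.
            d = defer.succeed("hello")
            d.addCallback(self.assertEqual, "hello")
            return d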
"My question is: Is this a correct approach?"
It's what you chose. You made a lot of excuses, so I'm assuming that you're pretty well fixed on this course. It's not the best, but you've already listed all your reasons for doing it (and then asked follow-up questions on this specific course of action). "Correct" doesn't enter into it anymore, so there's no answer to this question.
"what kind of tests are covered with this approach?"
They call it "black-box" testing. The application server is a black box that has a few inputs and outputs, and you can't test any of its internals. It's considered one acceptable form of testing because it tests the bottom-line external interfaces for acceptable behavior.
If you have problems, it turns out to be useless for diagnostic work. You'll find that you also need to do white-box testing on the internal structures.
"not being able to access the database layer in order to build/rebuild the schema,"
Why not? This is Python. Write a separate tool that imports that layer and does database builds.
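For example, a tiny standalone script along these lines (the module and helper names are only assumptions about your code base) can rebuild the schema before a test run:

    # rebuild_schema.py -- hypothetical helper, not tied to any specific ORM
    from myapp import db_layer  # assumed name of your database module

    if __name__ == "__main__":
        db_layer.drop_all()    # assumed helpers exposed by your database layer
        db_layer.create_all()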
"when will the test client going to connect to the server: per each unit test or before running the test suite?"
Depends on the intent of the test. Depends on your use cases. What happens in the "real world" with your actual intended clients?
You'll want to test client-like behavior, making connections the way clients make connections.
Also, you'll want to test abnormal behavior, like clients dropping connections or doing things out of order, or unconnected.
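A sketch of the per-test variant (the address, protocol, and assertions are placeholders): setUp opens a fresh connection for every test, the way a short-lived real client would, and tearDown closes it.

    import socket
    import unittest

    SERVER_ADDRESS = ("localhost", 8000)  # assumed address of the app server

    class BlackBoxClientTests(unittest.TestCase):
        def setUp(self):
            # One connection per test mirrors a short-lived real client.
            self.sock = socket.create_connection(SERVER_ADDRESS, timeout=5)

        def tearDown(self):
            self.sock.close()

        def test_request_gets_response(self):
            # Placeholder exchange in the in-house XML protocol.
            self.sock.sendall(b"<request type='ping'/>")
            reply = self.sock.recv(4096)
            self.assertTrue(reply.startswith(b"<response"))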
I think you chose the wrong direction. It's true that the Trial docs are very light, but Trial is based on unittest and only adds some stuff to deal with the reactor loop and asynchronous calls (it's not easy to write tests that deal with Deferreds). All your tests that don't involve Deferreds/asynchronous calls will be exactly like normal unittest tests.
The trial command is a test runner (a bit like nose), so you don't have to write test suites for your tests. You will save time with it. On top of that, trial can output profiling and coverage information. Just run trial -h for more info.
But in any case, the first thing you should ask yourself is which kind of tests you need the most: unit tests, integration tests, or system tests (black-box). It's possible to do all of them with Trial, but it's not necessarily always the best fit.
I haven't used Twisted before, and the Twisted/Trial documentation isn't stellar from what I just saw, but it'll likely take you 2-3 days to correctly implement the test system you describe above. Now, like I said, I have no idea about Trial, but I guess you could probably get it working in 1-2 days, since you already have a Twisted application. Now, if Trial gives you more coverage in less time, I'd go with Trial.
But remember, this is just an answer from a very cursory look at the docs.