Django testing method, fixture or mock? - python

In my project, I use pytest to write unit test cases for my program. But later I found that there are many db operations and a lot of ORM stuff in my program.
I know unit tests should run fast, but what is the difference between unit testing and automated integration testing, apart from speed?
Should I just use the database fixture instead of mocking them?

The main difference between unit tests and integration tests is that integration testing deals with the interactions between two or more "units". A unit test doesn't particularly care what happens with the code surrounding it, as long as the code within the unit under test behaves as designed.
As for your second question, if you feel the database and fixtures in your unit test suite are taking too long to run, mocking is a great solution.
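For example, here is a minimal sketch of both approaches, assuming a Django project with a hypothetical myapp.models.Article model, a hypothetical myapp.services.count_published() helper that calls Article.objects.filter(published=True).count(), and pytest-django installed:

import pytest
from unittest import mock

from myapp.services import count_published   # hypothetical helper

# Integration-style test: uses a real test database via pytest-django's fixture.
@pytest.mark.django_db
def test_count_published_with_db():
    from myapp.models import Article          # hypothetical model
    Article.objects.create(title="a", published=True)
    assert count_published() == 1

# Unit-style test: the ORM call is mocked out, so no database is touched.
def test_count_published_with_mock():
    # Assumes services.py does `from myapp.models import Article`.
    with mock.patch("myapp.services.Article") as fake_article:
        fake_article.objects.filter.return_value.count.return_value = 1
        assert count_published() == 1

The first test exercises the real query but needs database setup on every run; the second stays fast and isolated but will not notice if the query itself is wrong.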

Related

Creating a unique mock for an import per unit test in python

We maintain a module that imports a third party library. This library is only available on a specific python installation on a production server. We need to write unit tests for our module, but also want these tests to run on a separate python install (e.g. in a CI pipeline).
We are considering mocking this library. In theory, we think that we can implement a combination of the solutions mentioned here and here. However, we have some reservations about this approach. Since a single mock instance is shared across all unit tests (per sys.modules['third_party_library'] = mock.MagicMock()), its behavior is likewise shared across all the tests. If we try to set side_effect on the mock in one test, another test could potentially use the wrong side_effect if it runs in parallel. Moreover, expectations can potentially break due to circumstances outside the bounds of an individual unit test that verifies this mock.
Is there a way to ensure that a mocked import is unique to each unit test?
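One possible way to combine both needs (a sketch only; our_module and some_call are illustrative names): keep a single module-level stub in sys.modules so the import succeeds on machines without the library, then replace the reference held by our_module with a fresh MagicMock in each test, so side_effect settings cannot leak between tests.

import sys
from unittest import mock

# Installed once, so `import third_party_library` succeeds on any machine.
sys.modules.setdefault("third_party_library", mock.MagicMock())

import our_module   # hypothetical module under test

def test_handles_library_error():
    # A fresh mock per test: the shared stub in sys.modules is never configured.
    with mock.patch.object(our_module, "third_party_library") as fake_lib:
        fake_lib.some_call.side_effect = RuntimeError("boom")
        # ... exercise our_module and assert it handles the error ...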

Python/Django unittest, how to handle outside calls?

I have read multiple times that one should use mock to mimic outside calls, and that there should be no calls made to any outside service, because your tests need to run regardless of outside services.
This totally makes sense....BUT
What about outside services changing? What good is a test that verifies my code works as it should, if I will never know when it breaks because the outside service was modified/updated/removed/deprecated/etc.?
How can I reconcile this? The pseudocode is below
import requests

def post_tweet():
    data = {"tweet": "tweetcontent"}
    # send the request to twitter (the endpoint here is a placeholder)
    response = requests.post("https://api.twitter.com/tweets", json=data)
    return response
If I mock this there is no way I will be notified that twitter changed their API and now I have to update my test...
There are different levels of testing.
Unit tests are testing, as you might guess from the name, a unit. That is, for example, a function or a method, maybe a class. If you interpret it more widely, it might include a view to be tested with Django's test client. Unit tests never test external stuff like libraries, dependencies or interfaces to other systems. These things will be mocked.
Integration tests are testing whether your interfaces and your usage of outside libraries, systems and APIs are implemented properly. If the dependency changes, you will notice and have to change your code and your unit tests.
There are other levels of tests as well, like behaviour tests, UI tests and usability tests. You should make sure to separate these test classes in your project.
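As a sketch of that split, assuming the post_tweet function above lives in a module called tweets.py: the unit test mocks the outside call, and a separate integration test (run less often, e.g. nightly) hits the real API and is the one that will break when Twitter changes.

from unittest import mock

import tweets   # hypothetical module containing post_tweet

# Unit test: no network access, runs anywhere, any time.
def test_post_tweet_returns_api_response():
    with mock.patch("tweets.requests.post") as fake_post:
        fake_post.return_value.status_code = 200
        response = tweets.post_tweet()
        fake_post.assert_called_once()
        assert response.status_code == 200

# Integration test: only run when the real service should be reachable.
# def test_post_tweet_against_real_api():
#     assert tweets.post_tweet().status_code == 200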

Writing unit tests for various os/system level checks

I have a few functions for performing actions in python, and I am debating whether or not it makes sense to write unit tests for these.
There is one function that resolves a host using the socket module. I don't think it makes sense to write a test for the socket module itself, but seems like it may make sense to write a functional test to see if DNS is working.
I have another function that opens a file and puts the contents into a list.
Does anyone have any guidance or links on performing unit tests/functional tests for items like this? Most of what I have read pertains to application level testing only.
Thank you in advance.
First of all, if you don't have tests at all, it is better to start with high-level end-to-end functional tests and work your way down to unit tests, gathering coverage statistics with every new test you write.
While writing a test for a function which uses system or network libraries, you usually want to isolate your test, to make it as independent and straightforward as possible, by mocking out system and network calls (see the Mock library).
By using the mock library you can, and should, test how your application/function responds to situations where there is a socket or system error.
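For instance, a small sketch (resolve_host here is a hypothetical stand-in for your own function) that checks the error path without touching real DNS:

import socket
from unittest import mock

def resolve_host(hostname):
    """Hypothetical function under test: resolve a hostname, or return None on failure."""
    try:
        return socket.gethostbyname(hostname)
    except socket.gaierror:
        return None

def test_resolve_host_handles_dns_failure():
    # Simulate a DNS error without needing a broken network.
    with mock.patch("socket.gethostbyname", side_effect=socket.gaierror):
        assert resolve_host("example.invalid") is None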
Also see:
python unittest howto
How to Mock an HTTP request in a unit testing scenario in Python
An Introduction to Mocking in Python
Hope that helps.

Python unittest philosophy and ordering

While the unit-testing philosophy is that tests can be run in any order and should still pass, what if you're implementing an API where there is no other means of communicating with a server... and you need to test a certain very basic feature (such as delete) before you can do more complicated tasks? Is ordering the tests then reasonable?
If so, how can I do it with python's unittest module?
You already seem to realise that your unit tests should be independent. The only other reason I can see for wanting to run the tests in some fixed order is that you want to stop running the suite if an early test fails. To do that, you can use the command-line option
-f, --failfast
Stop the test run on the first error or failure.
By the way, the tests are run in alphabetical order:
the order in which the various test cases will be run is determined by sorting the test function names with respect to the built-in ordering for strings.
(docs)
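If you really do need a fixed order despite that, one common (if inelegant) workaround is to number the method names so the alphabetical sort produces the order you want, combined with failfast so later tests don't run against a broken server state. A minimal sketch:

import unittest

class ApiLifecycleTest(unittest.TestCase):
    def test_01_delete(self):
        # Runs first because method names are sorted alphabetically.
        pass

    def test_02_complex_workflow(self):
        # Runs after test_01_delete.
        pass

if __name__ == "__main__":
    unittest.main(failfast=True)   # stop the run on the first error or failure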

How to correctly achieve test isolation with a stateful Python module?

The project I'm working on is business logic software wrapped up as a Python package. The idea is that various scripts or applications will import it, initialize it, then use it.
It currently has a top-level init() method that does the initialization and sets up various things; a good example is that it sets up SQLAlchemy with a db connection and stores the SA session for later access. It is stored in a subpackage of my project (namely myproj.model.Session, so other code can get a working SA session after importing the model).
Long story short, this makes my package a stateful one. I'm writing unit tests for the project and this stateful behaviour poses some problems:
tests should be isolated, but the internal state of my package breaks this isolation
I cannot test the main init() method since its behavior depends on the state
future tests will need to be run against the (not yet written) controller part with a well known model state (eg. a pre-populated sqlite in-memory db)
Should I somehow refactor my package because the current structure is not the Best (possible) Practice(tm)? :)
Should I leave it at that and setup/teardown the whole thing every time? If I'm going to achieve complete isolation that'd mean fully erasing and re-populating the db at every single test, isn't that overkill?
This question is really on the overall code & tests structure, but for what it's worth I'm using nose-1.0 for my tests. I know the Isolate plugin could probably help me but I'd like to get the code right before doing strange things in the test suite.
You have a few options:
Mock the database
There are a few trade offs to be aware of.
Your tests will become more complex, as you will have to do the setup, teardown and mocking of the connection. You may also want to do verification of the SQL/commands sent. It also tends to create an odd sort of tight coupling, which may cause you to spend additional time maintaining/updating tests when the schema or SQL changes.
This is usually the purest form of test isolation, because it removes a potentially large dependency from testing. It also tends to make tests faster and reduces the overhead of automating the test suite in, say, a continuous integration environment.
Recreate the DB with each Test
Trade offs to be aware of.
This can make your tests very slow depending on how much time it actually takes to recreate your database. If the dev database server is a shared resource, there will have to be additional initial investment in making sure each dev has their own db on the server. The server may become impacted depending on how often tests get run. There is additional overhead to running your test suite in a continuous integration environment, because it will need at least one db, possibly more (depending on how many branches are being built simultaneously).
The benefit has to do with actually running through the same code paths and similar resources that will be used in production. This usually helps to reveal bugs earlier, which is always a very good thing.
ORM DB swap
If you're using an ORM like SQLAlchemy, there is a possibility that you can swap the underlying database for a potentially faster in-memory database. This allows you to mitigate some of the negatives of both the previous options.
It's not quite the same database as will be used in production, but the ORM should help mitigate the risk that this difference obscures a bug. Typically the time to set up an in-memory database is much shorter than for one which is file-backed. It also has the benefit of being isolated to the current test run, so you don't have to worry about shared resource management or final teardown/cleanup.
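A rough sketch of that swap, assuming the project's tables hang off a SQLAlchemy declarative Base in myproj.model:

from sqlalchemy import create_engine
from sqlalchemy.orm import sessionmaker

from myproj.model import Base   # hypothetical declarative base for the project's tables

def make_test_session():
    # "sqlite://" is an in-memory database: created fresh here, gone when the test ends.
    engine = create_engine("sqlite://")
    Base.metadata.create_all(engine)
    return sessionmaker(bind=engine)()

Each test (or test module) can call make_test_session() and get an isolated, pre-created schema in milliseconds.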
Working on a project with a relatively expensive setup (IPython), I've seen an approach used where we call a get_ipython function, which sets up and returns an instance, while replacing itself with a function which returns a reference to the existing instance. Then every test can call the same function, but it only does the setup for the first one.
That saves doing a long setup procedure for every test, but occasionally it creates odd cases where a test fails or passes depending on what tests were run before. We have ways of dealing with that - a lot of the tests should do the same thing regardless of the state, and we can try to reset the object's state before certain tests. You might find a similar trade-off works for you.
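A minimal sketch of that self-replacing pattern (ExpensiveThing is a stand-in for whatever is costly to construct):

class ExpensiveThing:
    """Stand-in for an object that is expensive to set up."""

def get_instance():
    # First call: build the instance, then rebind this name to a function that
    # simply returns the existing instance (mirrors IPython's get_ipython trick).
    global get_instance
    instance = ExpensiveThing()
    get_instance = lambda: instance
    return instance

Every test calls get_instance(); only the first one pays the setup cost, which is also exactly why state can leak between tests if it isn't reset.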
Mock is a simple and powerful tool to achieve some isolation. There is a nice video from PyCon 2011 which shows how to use it. I recommend using it together with py.test, which reduces the amount of code required to define tests and is still very, very powerful.
