Python unit tests: run a method only up to a certain point - python

I'm new to testing with Python and am not sure if this is even possible.
I have a relatively long method that accepts an input, does some processing then sends the data to an API.
I would like to write a test that passes the input data to the method and runs the processing on it, but does NOT send it to the API. So basically, run the method up to a certain point but not to the end.
Unfortunately I'm not even sure where to start so I can't really provide relevant sample code - It would just be a standard unit test that runs a method with input and asserts the output.

You are taking the wrong approach. What you want to do is execute your test isolated from your external API function calls. Just mock your API calls: that means running your test with the API calls replaced by mock methods. You don't need to change the code under test; you can use a patch decorator to replace the API calls with mock objects. See the unittest.mock documentation and examples.
unittest.mock is very powerful, and can look a bit daunting, or at least a bit puzzling, at the beginning. Take your time to understand the kinds of things you can do with mocks in the documentation. A very simple example of one of the possibilities (in some test code):
from unittest.mock import patch

@patch('myproject.db.api.os.path.exists')
def test_init_db(self, mock_exists):
    ...
    # the mocked function will always return False
    mock_exists.return_value = False
    # now calls to myproject.db.api.os.path.exists in the code
    # under test behave as if the db file does not exist
    ...
So you can probably bypass your external API calls (all of them or some of them) with ease. You don't even have to specify API results if you don't want to: mocks exhibit "plastic" behaviour.
If you create a mock and call an arbitrary method on it that you haven't even defined (think of the API methods you want to isolate), it will run fine and simply return another mock object. That is, it will do nothing, but its client code will still run as if it did. So you can run your tests while effectively disabling the parts you want.
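Applied to the question above, a minimal sketch could look like this (the module and function names myapp.processing, process_and_send and send_to_api are assumptions, not your actual code):

from unittest import TestCase
from unittest.mock import patch

from myapp.processing import process_and_send

class TestProcessing(TestCase):
    @patch('myapp.processing.send_to_api')
    def test_processing_without_sending(self, mock_send):
        result = process_and_send({'field': 'value'})
        # the processing ran, but nothing actually reached the API
        mock_send.assert_called_once()
        self.assertIsNotNone(result)  # assert on whatever the processing should produce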

Related

Testing an Airflow DAG end-to-end with fixed input from within a Python unit test while mocking configuration

So, I have a DAG and some specified input, and I expect a certain output. I want to run all operators end to end.
Currently I'm trying to write a unit test in Python that will mock some configuration,
and then I also want to overwrite some input parameters in dag_run.conf.
This seems to be cumbersome, since I have not yet found a way to both mock Variable.get and BaseHook.get_connection and trigger a DAG in the Python unit test.
If I use from airflow.api.client.local_client import Client then I cannot mock Variable.get and the hooks.
How would I best do that?
Is there some further documentation on how Airflow works internally (I'm suspecting it starts different processes that overwrite the mocks?)
Or maybe I'm just not doing this the way it was intended to be done?
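One possible sketch, not a verified answer: on Airflow 2.5+, DAG.test() runs all tasks in the current process, so patches applied with unittest.mock stay in effect (unlike triggering through the local client, which hands the run off to the scheduler and executor). The DAG id, patch targets and values below are placeholders:

from unittest import TestCase
from unittest.mock import patch

from airflow.models import DagBag

class TestMyDag(TestCase):
    def test_dag_end_to_end(self):
        dag = DagBag().get_dag('my_dag')  # placeholder DAG id
        with patch('airflow.models.Variable.get', return_value='fake-value'), \
             patch('airflow.hooks.base.BaseHook.get_connection') as mock_conn:
            mock_conn.return_value.host = 'localhost'
            # run_conf takes the role of dag_run.conf (the parameter name may differ by version)
            dag.test(run_conf={'my_param': 'value'})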

Python/Django unittest, how to handle outside calls?

I have read multiple times that one should use mock to mimic outside calls, and that no calls should be made to any outside service, because your tests need to run regardless of outside services.
This totally makes sense....BUT
What about outside services changing? What good is a test, testing that my code works like it should if I will never know when it breaks because of the outside service being modified/updated/removed/deprecated/etc...
How can I reconcile this? The pseudocode is below
import requests

def post_tweet(content):
    data = {"tweet": content}
    # send request to twitter and receive the response
    response = requests.post("https://api.twitter.com/...", json=data)
    return response
If I mock this there is no way I will be notified that twitter changed their API and now I have to update my test...
There are different levels of testing.
Unit tests are testing, as you might guess from the name, a unit, which is for example a function or a method, maybe a class. If you interpret it more broadly, it might include a view tested with Django's test client. Unit tests never test external things like libraries, dependencies or interfaces to other systems; these will be mocked.
Integration tests check whether your interfaces and your usage of outside libraries, systems and APIs are implemented properly. If a dependency changes, you will notice and have to change your code and unit tests.
There are other levels of tests as well, like behaviour tests, UI tests and usability tests. You should make sure to separate these classes of tests in your project.
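To make the distinction concrete: a unit test mocks the HTTP call, while a separate integration test actually hits the service and will break when the external API changes. A minimal sketch, assuming post_tweet lives in a hypothetical module myapp.twitter and uses the requests library:

import unittest
from unittest.mock import patch

from myapp.twitter import post_tweet

class PostTweetUnitTest(unittest.TestCase):
    @patch('myapp.twitter.requests.post')
    def test_payload_is_built_correctly(self, mock_post):
        post_tweet('hello')
        # the unit test only verifies our own logic, not twitter's API
        _, kwargs = mock_post.call_args
        self.assertEqual(kwargs['json'], {'tweet': 'hello'})

class PostTweetIntegrationTest(unittest.TestCase):
    def test_against_real_api(self):
        # run this separately (e.g. nightly); it fails if twitter changes or
        # removes the endpoint, which addresses the concern in the question
        response = post_tweet('hello')
        self.assertEqual(response.status_code, 200)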

How to approach this unit-testing issue based on an external class?

I have a class Node (add_node.py) in which I create nodes that connect to a websocket. Now I want to write unit-tests for checking whether or not the connection was successful, and for checking the response of the server.
So I created a node_tests.py file with the following content:
import unittest
import json
import re
from add_node import Node

class TestNodes(unittest.TestCase):
    def test_node_creation(self):
        self.node = Node(a='1', b='2', c=True)
        self.response = json.loads(self.node.connect())
        self.assertIn('ok', self.response['r'])

    def test_node_c(self):
        self.assertTrue(self.response['c'])

if __name__ == '__main__':
    unittest.main()
The first test is working but the second is failing because there is no 'response' attribute. So how could I approach this problem?
Also, is it OK to do it the way I'm doing it? Importing the class and writing multiple tests within the same TestCase class?
The point of a unit test is to verify the functionality of a single isolated unit. Exactly what a unit is can be debated. Some would argue it's a single public method on a single class. I think most would agree that is not a very useful definition though. I like to think of them as use-cases. Your tests should do one use-case, and then make one assertion about the results from that use-case. This means that sometimes it's OK to let classes collaborate, and sometimes it's better to isolate a class and use test doubles for its dependencies.
Knowing when to isolate is something you learn over time. I think the most important points to consider are that every test you write should
Fully define the context in which it's run (without depending on global state or previously run tests)
Be fast, a few milliseconds tops (this means you don't touch external resources like the file system, a web server or some database)
Not test a bunch of other things that are covered by other tests.
This third point is probably the hardest to balance. Obviously several tests will run the same code if they're making different assertions on the same use-case. You should keep the tests small though. Let's say we want to test the cancellation process of orders in an e-commerce application. In this case we probably don't want to import and set up all the classes used to create, verify, etc. an order before we cancel it. In that case it's probably a better idea to just create an order object manually or maybe use a test double.
In your case, if what you actually want to do is to test the real connection and the responses the real server gives, you don't want a unit test, you want an integration test.
If what you actually want is to test the business logic of your client class, however, you should probably create a fake socket/server where you can define the responses yourself, as well as whether or not the connection is successful. This allows you to test that your client behaves correctly depending on its communication with the remote server, without actually having to depend on the remote server in your test suite.
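For the test file above, that could look roughly like this (a sketch; it assumes Node.connect() is the only place the real socket is touched, and the canned JSON is invented for illustration):

import json
import unittest
from unittest.mock import patch

from add_node import Node

class TestNodes(unittest.TestCase):
    @patch.object(Node, 'connect')
    def test_node_creation(self, mock_connect):
        # fake the server response instead of opening a real websocket
        mock_connect.return_value = json.dumps({'r': 'ok', 'c': True})
        node = Node(a='1', b='2', c=True)
        response = json.loads(node.connect())
        # each test builds its own context instead of relying on another test
        self.assertIn('ok', response['r'])
        self.assertTrue(response['c'])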

Writing unit tests for various os/system level checks

I have a few functions for performing actions in python, and I am debating whether or not it makes sense to write unit tests for these.
There is one function that resolves a host using the socket module. I don't think it makes sense to write a test for the socket module itself, but it seems like it may make sense to write a functional test to see if DNS is working.
I have another function that opens a file and puts the contents into a list.
Does anyone have any guidance or links on performing unit tests/functional tests for items like this? Most of what I have read pertains to application level testing only.
Thank you in advance.
First of all, if you don't have tests at all, it is better to start with high-level end-to-end functional tests and work your way down to unit tests, gathering coverage statistics with every new test you write.
When writing a test for a function which uses system or network libraries, you usually want to isolate your test and make it as independent and straightforward as possible by mocking out the system and network calls (see the mock library).
By using the mock library you can, and should, test how your application/function responds to situations where there is a socket or system error.
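For the host-resolution function, for example, a sketch could patch socket.gethostbyname to simulate a DNS failure (the module name resolver, the function resolve_host and its return-None-on-failure behaviour are assumptions):

import socket
import unittest
from unittest.mock import patch

from resolver import resolve_host

class TestResolveHost(unittest.TestCase):
    @patch('resolver.socket.gethostbyname')
    def test_resolve_host_handles_dns_failure(self, mock_resolve):
        # simulate DNS being unavailable
        mock_resolve.side_effect = socket.gaierror('Name or service not known')
        self.assertIsNone(resolve_host('example.com'))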
Also see:
python unittest howto
How to Mock an HTTP request in a unit testing scenario in Python
An Introduction to Mocking in Python
Hope that helps.

Custom onFailure Call in Unittest?

I'm currently in the process of writing some unit tests I want to constantly run every few minutes. If any of them ever fail, I want to grab the errors that are raised and do some custom processing on them (sending out alerts, in my case). Is there a standard way of doing this? I've been looking at unittest.TestResult, but haven't found any good example usage. Ideas?
We use a continuous integration server, Jenkins, for such tasks. It has cron-like scheduling and can send an email when a build becomes unstable (a test fails). There is an extension to Python's unittest module to produce JUnit-style XML reports, which Jenkins supports.
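One such extension is the unittest-xml-reporting package (xmlrunner); a minimal sketch of wiring it in, assuming that package:

import unittest
import xmlrunner  # from the unittest-xml-reporting package

if __name__ == '__main__':
    # writes JUnit-style XML files that Jenkins can pick up
    unittest.main(testRunner=xmlrunner.XMLTestRunner(output='test-reports'))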
In the end, I wound up running the test and returning the TestResult object. I then look at the failures attribute of that object and run post-processing on each test in the suite that failed. This works well enough for me and lets me custom-design my post-processing.
For any extra metadata per test that I need, I subclass unittest.TestResult and add anything extra I need to the addFailure method.
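A sketch of that approach (send_alert is a placeholder for whatever custom processing you do):

import unittest

def send_alert(message):
    # placeholder: replace with your real alerting (email, Slack, etc.)
    print('ALERT:', message)

class AlertingTestResult(unittest.TestResult):
    def addFailure(self, test, err):
        super().addFailure(test, err)
        # err is an (exc_type, exc_value, traceback) tuple
        send_alert(f'{test.id()} failed: {err[1]}')

if __name__ == '__main__':
    suite = unittest.defaultTestLoader.discover('tests')
    result = AlertingTestResult()
    suite.run(result)
    # result.failures holds (test, formatted_traceback) pairs for further post-processing
    for failed_test, traceback_text in result.failures:
        print(failed_test.id(), traceback_text)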
