I'm trying to test a sent signal and its providing_args. The signal is triggered inside the contact_question_create view, just after form submission.
My TestCase looks something like this:
def test_form_should_post_proper_data_via_signal(self):
    form_data = {'name': 'Jan Nowak'}
    signals.question_posted.send(sender='test', form_data=form_data)

    @receiver(signals.question_posted, sender='test')
    def question_posted_listener(sender, form_data):
        self.name = form_data['name']

    eq_(self.name, 'Jan Nowak')
Is this the proper way to test this signal? Any better ideas?
The simplest way to do what you asked, as of 2015:
from unittest.mock import patch

@patch('full.path.to.signals.question_posted.send')
def test_question_posted_signal_triggered(self, mock):
    form = YourForm()
    form.cleaned_data = {'name': 'Jan Nowak'}
    form.save()

    # Check that your signal was called.
    self.assertTrue(mock.called)

    # Check that your signal was called only once.
    self.assertEqual(mock.call_count, 1)
# Do whatever else, like actually checking if your signal logic did well.
And with that, you just tested that your signal was properly triggered.
I have an alternative suggestion using the mock library, which is now part of the standard library as unittest.mock in Python 3 (if you're using Python 2, you'll have to pip install mock).
try:
    from unittest.mock import MagicMock
except ImportError:
    from mock import MagicMock

def test_form_should_post_proper_data_via_signal(self):
    """
    Assert the signal is sent with the proper arguments.
    """
    # Create handler
    handler = MagicMock()
    signals.question_posted.connect(handler, sender='test')

    # Post the form or do what it takes to send the signal
    form_data = {'name': 'Jan Nowak'}
    signals.question_posted.send(sender='test', form_data=form_data)

    # Assert the signal was called only once with the args
    handler.assert_called_once_with(signal=signals.question_posted, form_data=form_data, sender='test')
The essential part of the suggestion is to mock a receiver, then test whether or not your signal is being sent to that receiver, and called only once. This is great, especially if you have custom signals, or you've written methods that send signals and you want to ensure in your unit tests that they are being sent.
I've resolved the problem myself. I think the best solution is the following:
def test_form_should_post_proper_data_via_signal(self):
    # define the local listener
    def question_posted_listener(sender, form_data, **kwargs):
        self.name = form_data['name']

    # prepare fake data
    form_data = {'name': 'Jan Nowak'}

    # connect & send the signal
    signals.question_posted.connect(question_posted_listener, sender='test')
    signals.question_posted.send(sender='test', form_data=form_data)

    # check results
    eq_(self.name, 'Jan Nowak')
The purpose of this isn't to test the underlying signalling mechanism, but rather to ensure, as an important unit test, that whatever signal your method is supposed to emit is actually emitted with the proper arguments. In this case it seems a little trivial, since it's an internal Django signal, but imagine if you wrote the method that was emitting a custom signal.
You need to:
assert a signal was emitted with the proper arguments,
a specific number of times, and
in the appropriate order.
You can use the mock_django app, which provides a mock for signals.
Example:
from mock import call

def test_install_dependency(self):
    with mock_signal_receiver(post_app_install) as install_receiver:
        self.env.install(self.music_app)
        self.assertEqual(install_receiver.call_args_list, [
            call(signal=post_app_install, sender=self.env,
                 app=self.ukulele_app),
            call(signal=post_app_install, sender=self.env,
                 app=self.music_app),
        ])
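mock_django is a third-party package that appears unmaintained these days; the same ordered assertions can be made with plain unittest.mock. Here is a minimal, Django-free sketch of the call_args_list idea (the signal and app names below are hypothetical placeholders, not a real API; in a real Django test you would connect the MagicMock to the signal instead of calling it directly):

```python
from unittest.mock import MagicMock, call

# Stand-in receiver; in a real test, connect this to your signal.
install_receiver = MagicMock()

# Simulate the signal firing twice, in order.
install_receiver(signal='post_app_install', sender='env', app='ukulele')
install_receiver(signal='post_app_install', sender='env', app='music')

# call_args_list records both the arguments and the order of the calls,
# so a single assertion covers arguments, count, and order.
assert install_receiver.call_args_list == [
    call(signal='post_app_install', sender='env', app='ukulele'),
    call(signal='post_app_install', sender='env', app='music'),
]
```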
Why do you test your framework? Django already has unit tests for its signal dispatcher. If you don't believe your framework is fine, just attach its unit tests to your test runner.
For my part, I wouldn't test that the signal is sent. I would test the intended effect of the signals processing.
In my use case, the signals are used to update a Produit.qte attribute when, say, Order.qte_shipped is updated. (E.g. when we fill an order, I want Order.qte_shipped to be subtracted from the qte of the corresponding product for that order.)
Thus I do something like this in signals.py:
@receiver(post_save, sender='orders.Order')
@disable_for_loaddata
def quantity_adjust_order(sender, **kwargs):
    # retrieve the corresponding product for that order
    # subtract Order.qte_shipped from Produit.qte
    # save the updated Produit
    ...
What I actually test is that Produit.qte is correctly updated when I ship an Order. I do not test that the signal works; that's just one of the things that COULD explain why test_order_ship_updates_product() failed.
I somewhat agree with what @Piotr Czapla said; you're kind of trying to test the framework. Test the effect on your code instead.
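To make the idea concrete, here is a Django-free sketch of testing the effect rather than the signal. Produit and Order are plain hypothetical classes standing in for the real models, and the quantity adjustment is inlined where the post_save receiver would normally run:

```python
# Hypothetical stand-ins for the Django models; the test asserts the
# observable effect (produit.qte), not signal delivery.
class Produit:
    def __init__(self, qte):
        self.qte = qte

class Order:
    def __init__(self, produit, qte_shipped):
        self.produit = produit
        self.qte_shipped = qte_shipped

    def ship(self):
        # In the real app this adjustment happens in a post_save receiver.
        self.produit.qte -= self.qte_shipped

def test_order_ship_updates_product():
    produit = Produit(qte=10)
    order = Order(produit, qte_shipped=3)
    order.ship()
    assert produit.qte == 7

test_order_ship_updates_product()
```

If the receiver is later broken or disconnected, this test fails; whether the breakage is in the signal wiring or the handler logic is then a debugging question, not a testing one.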
I have a function registered as an event listener on a SQLAlchemy model, as shown in the code snippets below (not fully functional, as I don't show the db fixture), which should be enough to explain the problem.
root/myapp/models.py:
class MyModel:
    id = Column(UUID, primary_key=True)
    value = ''

    @classmethod
    def register_hook(cls, hook_fn):
        event.listen(cls, "after_update", hook_fn, propagate=True)
root/myapp/app.py:
from models import MyModel

def hook_fn(mapper, connection, target):
    print('fired hook!')

MyModel.register_hook(hook_fn)
root/test/conftest.py:
@pytest.fixture
def patched_hook_fn(mocker):
    with mocker.patch("root.myapp.app.hook_fn") as patched:
        yield patched
root/test/tests.py:
def test_hook_fires_on_change(db, patched_hook_fn):
    model = MyModel(value="initial")
    db.session.commit()

    model.value = "changed"
    db.session.commit()  # hook fires here

    assert patched_hook_fn.called  # assert fails
What I'd like to know is:
Why doesn't the patched function get called?
Is there a simple way, in a debug session, to see where I should be patching in the mocker.patch("root.myapp.app.hook_fn") line?
It doesn't get called because you've already registered the unpatched version with the event system. SQLAlchemy does not read the value at root.myapp.app.hook_fn every time the event is fired, so even if you later set root.myapp.app.hook_fn = some_other_function (which is what patch is doing), it has no visible effect.
The way to fix this is to simply force your app to read the value every time the event is fired, by introducing a level of indirection:
MyModel.register_hook(lambda *args: hook_fn(*args))
This takes advantage of the way Python resolves names at call time: the lambda looks up hook_fn in its enclosing module each time it runs, so changing root.myapp.app.hook_fn changes what the lambda calls.
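The late-binding trick can be demonstrated without SQLAlchemy. In this sketch, a plain list stands in for the event registry, and a throwaway module object is patched the same way mocker.patch patches root.myapp.app:

```python
import types
from unittest.mock import patch

# A throwaway module standing in for root.myapp.app.
app = types.ModuleType("app")

def hook_fn():
    return "real"

app.hook_fn = hook_fn

# The "event registry": direct registration captures the function
# object once; the lambda looks up app.hook_fn at call time.
registry = [app.hook_fn, lambda: app.hook_fn()]

with patch.object(app, "hook_fn", return_value="patched"):
    direct = registry[0]()    # still the original function object
    indirect = registry[1]()  # sees the patched attribute

assert direct == "real"
assert indirect == "patched"
```

This is exactly why patching root.myapp.app.hook_fn in the original test had no effect: the unpatched function object was already stored inside SQLAlchemy's event system.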
As for your second question, there's no straightforward way to figure out what to patch: to patch the registered hook directly you would need to find where it is stored in SQLAlchemy's internals, and depending on those internals, even in your tests, is quite fragile.
I am having an issue while testing whether a Celery task was executed successfully. My task calls a reusable function that sends a request to another server:
@shared_task
def my_task():
    send_request_to_server()
I am using mock to check whether send_request_to_server() was called. The task is triggered via a Django Admin changelist action, so the final test looks like this:
@override_settings(CELERY_TASK_ALWAYS_EAGER=True)
@mock.patch('helpers.send_request_to_server')
def my_test(self, mocked_function):
    change_url = reverse('admin:app_model_changelist')
    response = self.client.post(change_url, {'action': 'mark', '_selected_action': [1]}, follow=True)
    self.assertTrue(mocked_function.called)
I am 100% sure that this test at some point calls the send_request_to_server() function, since the function also creates a file, which is quite easy to notice. But the mocked function's called attribute still holds the value False. I am also quite certain the mock decorator path is correct, since a simplified test passes without any problems:
@override_settings(CELERY_TASK_ALWAYS_EAGER=True)
@mock.patch('helpers.send_request_to_server')
def my_test(self, mocked_function):
    send_request_to_server()
    self.assertTrue(mocked_function.called)
Even though the function is called, what could cause mocked_function.called to be False? Maybe it has something to do with multiple threads? Thank you!
I am a beginner with pytest in Python and am trying to write test cases for the following method, which returns the user's address when a correct id is passed and otherwise raises a custom error, BadId.
def get_user_info(id: str, host='127.0.0.1', port=3000) -> str:
    uri = 'http://{}:{}/users/{}'.format(host, port, id)
    result = Requests.get(uri).json()
    address = result.get('user', {}).get('address', None)
    if address:
        return address
    else:
        raise BadId
Can someone help me with this? Also, can you suggest the best resources for learning pytest? TIA
Your test regimen might look something like this.
First I suggest creating a fixture to be used in your various method tests. The fixture sets up an instance of your class to be used in your tests rather than creating the instance in the test itself. Keeping tasks separated in this way helps to make your tests both more robust and easier to read.
from my_package import MyClass
import pytest

@pytest.fixture
def a_test_object():
    return MyClass()
You can pass the test object to your series of method tests:
def test_something(a_test_object):
    # do the test
    ...
However, if your test object requires some resources during setup (such as a connection, a database, or a file), you can mock it instead to avoid setting up the resources for the test. See this talk for some helpful info on how to do that.
By the way: if you need to test several different states of the user defined object being created in your fixture, you'll need to parametrize your fixture. This is a bit of a complicated topic, but the documentation explains fixture parametrization very clearly.
The other thing you need to do is make sure any .get calls to Requests are intercepted. This is important because it allows your tests to be run without an internet connection, and ensures they do not fail as a result of a bad connection, which is not the thing you are trying to test.
You can intercept Requests.get by using the monkeypatch feature of pytest. All that is required is to include monkeypatch as an input parameter to the test regimen functions.
You can employ another fixture to accomplish this. It might look like this:
import Requests
import pytest

@pytest.fixture
def patched_requests(monkeypatch):
    # store a reference to the old get method
    old_get = Requests.get

    def mocked_get(uri, *args, **kwargs):
        '''A method replacing Requests.get

        Returns either a mocked response object (with json method)
        or the default response object if the uri doesn't match
        one of those that have been supplied.
        '''
        _, id = uri.split('/users/', 1)
        try:
            # attempt to get the correct mocked json method
            json = dict(
                with_address1 = lambda: {'user': {'address': 123}},
                with_address2 = lambda: {'user': {'address': 456}},
                no_address = lambda: {'user': {}},
                no_user = lambda: {},
            )[id]
        except KeyError:
            # fall back to default behavior
            obj = old_get(uri, *args, **kwargs)
        else:
            # create a mocked requests object
            mock = type('MockedReq', (), {})()
            # assign mocked json to requests.json
            mock.json = json
            # assign obj to mock
            obj = mock
        return obj

    # finally, patch Requests.get with the patched version
    monkeypatch.setattr(Requests, 'get', mocked_get)
This looks complicated until you understand what is happening: we have simply made some mocked json objects (represented by dictionaries) with pre-determined user ids and addresses. The patched version of Requests.get simply returns an object, of type MockedReq, with the corresponding mocked .json() method when one of those ids is requested.
Note that Requests will only be patched in tests that actually use the above fixture, e.g.:
def test_something(patched_requests):
    # use patched Requests.get
    ...
Any test that does not use patched_requests as an input parameter will not use the patched version.
Also note that you could monkeypatch Requests within the test itself, but I suggest doing it separately. If you are using other parts of the requests API, you may need to monkeypatch those as well. Keeping all of this stuff separate is often going to be easier to understand than including it within your test.
Write your various method tests next. You'll need a different test for each aspect of your method. In other words, you will usually write a different test for the instance in which your method succeeds, and another one for testing when it fails.
First we test method success with a couple test cases.
@pytest.mark.parametrize('id, result', [
    ('with_address1', 123),
    ('with_address2', 456),
])
def test_get_user_info_success(patched_requests, a_test_object, id, result):
    address = a_test_object.get_user_info(id)
    assert address == result
Next we can test for raising the BadId exception using the pytest.raises feature. Note that since an exception is raised, there is no result input parameter for the test function.
@pytest.mark.parametrize('id', [
    'no_address',
    'no_user',
])
def test_get_user_info_failure(patched_requests, a_test_object, id):
    from my_package import BadId
    with pytest.raises(BadId):
        address = a_test_object.get_user_info(id)
Also be sure to check out Brian Okken's book and Bruno Oliveira's book. They are both very helpful for learning pytest.
I'm not sure if this is an IntelliJ thing or not (I'm using the built-in test runner), but I have a class whose logging output I'd like to appear in the test case that I am running. I hope the example code gives enough scope; if not, I can edit to include more.
Basically, the log.info() call in the Matching class never shows up in my test runner console. Is there something I need to configure on the class that extends TestCase?
Here's the class in matching.py:
class Matching(object):
    """
    The main compliance matching logic.
    """
    request_data = None

    def __init__(self, matching_request):
        """
        Set matching request information.
        """
        self.request_data = matching_request

    def can_matching_run(self):
        raise Exception("Not implemented yet.")

    def run_matching(self):
        log.info("Matching started at {0}".format(datetime.now()))
Here is the test:
class MatchingServiceTest(IntegrationTestBase):

    def __do_matching(self, client_name, date_range):
        """
        Pull control records from the control table, and compare against
        program-generated matching data from the non-control table.

        The ``client_name`` dictates which model to use. Data is compared
        within a mock ``date_range``.
        """
        from matching import Matching, MatchingRequest

        # Run the actual matching service for client.
        match_request = MatchingRequest(client_name, date_range)
        matcher = Matching(match_request)
        matcher.run_matching()
Well, I do not see where you initialize the log object, but I presume you do that somewhere and add a Handler to it (StreamHandler, FileHandler, etc.). This means that during your tests this does not occur, so you would have to do it in the test. Since you did not post that part of the code, I can't give an exact solution:
import logging

log = logging.getLogger("your-logger-name")
log.addHandler(logging.StreamHandler())
log.setLevel(logging.DEBUG)
That said, tests generally should not print anything to stdout. It's better to use a FileHandler, and you should design your tests so that they fail if something goes wrong; that's the whole point of automated tests, and it means you won't have to inspect the output manually. If they fail, you can then check the logs to see if they contain useful debugging information.
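If what you actually want is to assert on the log output rather than watch it scroll by, unittest has provided assertLogs since Python 3.4. A minimal sketch, assuming a logger named "matching" (the logger name and message here are placeholders):

```python
import logging
import unittest

log = logging.getLogger("matching")  # hypothetical logger name

class MatchingLogTest(unittest.TestCase):
    def test_logs_matching_started(self):
        # assertLogs attaches its own handler, so no StreamHandler
        # configuration is needed just for the test.
        with self.assertLogs("matching", level="INFO") as captured:
            log.info("Matching started")
        self.assertIn("Matching started", captured.output[0])

# run the single test programmatically
result = MatchingLogTest("test_logs_matching_started").run()
assert result.wasSuccessful()
```

This also makes the log output part of the test's pass/fail criteria instead of something to eyeball in the runner console.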
Hope this helps.
Read more on logging here.
I had a problem with post_save being called twice, and I spent a lot of time checking the imports, as had been mentioned. I confirmed that the import happens only once, so there is no question of multiple registrations. Besides, I'm using a unique dispatch_uid in the signal registration, which per the documentation should have solved the problem. It did not. I looked more carefully and saw that the signal handler gets called on .create() as well as on .save(). Why for create?
The only way I could get it to work is by relying on the hack below inside my signal handler
created = False
# Workaround: signal is emitted on both create and save
if 'created' in kwargs:
    if kwargs['created']:
        created = True

# If the signal comes from object creation, return
if created:
    return
This is a follow-up to the question Django post save signal getting called twice despite uid.
Because "creation" is instantiation plus saving.
create(**kwargs)
A convenience method for creating an object and saving it all in one step. Thus:
p = Person.objects.create(first_name="Bruce", last_name="Springsteen")
and:
p = Person(first_name="Bruce", last_name="Springsteen")
p.save(force_insert=True)
are equivalent.