I am having an issue while testing whether a Celery task was executed successfully. My task calls another reusable function that sends a request to another server:
@shared_task
def my_task():
    send_request_to_server()
I am using mock to check whether the function send_request_to_server() was called. The task is triggered via a Django admin changelist action, so the final test looks like this:
@override_settings(CELERY_TASK_ALWAYS_EAGER=True)
@mock.patch('helpers.send_request_to_server')
def my_test(self, mocked_function):
    change_url = reverse('admin:app_model_changelist')
    response = self.client.post(change_url, {'action': 'mark', '_selected_action': [1]}, follow=True)
    self.assertTrue(mocked_function.called)
I am 100% sure that this test at some point calls the send_request_to_server() function, since that function also creates a file and it is quite easy to notice that. But the mock's called attribute still holds the value False. I am also quite certain that the path in the mock decorator is correct, since a simplified test passes without any problems:
@override_settings(CELERY_TASK_ALWAYS_EAGER=True)
@mock.patch('helpers.send_request_to_server')
def my_test(self, mocked_function):
    send_request_to_server()
    self.assertTrue(mocked_function.called)
Even though the function is clearly called, what could cause mocked_function.called to be False? Maybe it has something to do with multiple threads? Thank you!
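One common cause of exactly this symptom (this is an assumption about the project layout, not something stated above): mock.patch only rebinds the name in the module you point it at. If the task module does from helpers import send_request_to_server, it keeps its own reference to the original function, and patching helpers.send_request_to_server never touches that reference. A minimal sketch of patching the name where the task looks it up, assuming a hypothetical app.tasks module:

@override_settings(CELERY_TASK_ALWAYS_EAGER=True)
@mock.patch('app.tasks.send_request_to_server')  # hypothetical path: patch where the name is used
def my_test(self, mocked_function):
    change_url = reverse('admin:app_model_changelist')
    self.client.post(change_url, {'action': 'mark', '_selected_action': [1]}, follow=True)
    self.assertTrue(mocked_function.called)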
I have a function which is registered as an event on a SQLAlchemy model, as shown in the code snippets below (not fully functional, since I don't show the db fixture, but it should be enough to explain the problem).
root/myapp/models.py:
class MyModel:
    id = Column(UUID, primary_key=True)
    value = ''

    @classmethod
    def register_hook(cls, hook_fn):
        event.listen(cls, "after_update", hook_fn, propagate=True)
root/myapp/app.py:
from models import MyModel

def hook_fn(mapper, connection, target):
    print('fired hook!')

MyModel.register_hook(hook_fn)
root/test/conftest.py:
@pytest.fixture
def patched_hook_fn(mocker):
    with mocker.patch("root.myapp.app.hook_fn") as patched:
        yield patched
root/test/tests.py:
def test_hook_fires_on_change(db, patched_hook_fn):
    model = MyModel(value="initial")
    db.session.commit()
    model.value = "changed"
    db.session.commit()  # hook fires here
    assert patched_hook_fn.called  # assert fails
What I'd like to know is:
Why doesn't the patched function get called?
Is there a simple way, in a debug session, to see what I should be patching in the with mocker.patch("myapp.app.hook_fn") as patched line?
It doesn't get called because you've already registered the unpatched version with the event system. SQLAlchemy does not read the value at root.myapp.app.hook_fn every time the event is fired, so even if you later set root.myapp.app.hook_fn = some_other_function (which is what patch is doing), it has no visible effect.
The way to fix this is to simply force your app to read the value every time the event is fired, by introducing a level of indirection:
MyModel.register_hook(lambda *args, **kwargs: hook_fn(*args, **kwargs))
This takes advantage of the way Python resolves names at call time: the lambda looks up hook_fn anew on every call, so rebinding root.myapp.app.hook_fn (which is what patch does) changes which function actually gets invoked.
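A tiny self-contained sketch (plain Python, no SQLAlchemy) of the late-binding behaviour the lambda relies on:

def hook_fn():
    return 'original'

registered_direct = hook_fn              # stores the function object itself
registered_indirect = lambda: hook_fn()  # looks the name up on every call

def hook_fn():                           # rebind the name, which is what patch does
    return 'patched'

print(registered_direct())    # 'original' -- still the old object
print(registered_indirect())  # 'patched'  -- the lambda resolves hook_fn now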
As for your second question, there's no straightforward way to figure out what you need to patch, because patching it directly would require finding where the callback is stored in the internals of SQLAlchemy, and depending on that in your tests would be quite fragile.
I have a view that creates new users in my Django project.
I am applying the @sensitive_post_parameters decorator to that view to make sure the password isn't logged if there is an unhandled exception or something like that (as indicated in the comments in the source code: https://docs.djangoproject.com/en/2.0/_modules/django/views/decorators/debug/).
When I test the view, I would like to make sure that this protection of the sensitive information is still in place (i.e. that I didn't delete the decorator from the view by mistake or something).
I am aware that, since the decorator is applied to my function, I can't test it directly from the view tests.
But, for example, with the #login_required decorator, I can test its effects with assertRedirects (as explained here How to test if a view is decorated with "login_required" (Django)).
I have been searching for a way to do that, but I can't find one that works.
I thought of something like this:
def test_sensitive_post_parameters(self):
    request = RequestFactory().post('create_user', data={})
    my_sensitive_parameters = ['password']
    self.assertEqual(
        request.sensitive_post_parameters,
        my_sensitive_parameters
    )
but that gives me an
AttributeError: 'WSGIRequest' object has no attribute 'sensitive_post_parameters'
Any help would be appreciated.
Even if this is telling me I shouldn't be attempting to test this, I would really like to, since it seems like important behaviour that I should make sure remains in my code as it is modified later.
You have created a request using RequestFactory, but you have not actually used it. To test the effect of your view, you need to import the view and call it.
from myapp.views import create_user
def test_sensitive_post_parameters(self):
    request = RequestFactory().post('create_user', data={})
    response = create_user(request)
    my_sensitive_parameters = ['password']
    self.assertEqual(
        request.sensitive_post_parameters,
        my_sensitive_parameters
    )
I am a beginner with pytest in Python and am trying to write test cases for the following method, which gets the user's address when a correct id is passed and otherwise raises a custom BadId error.
def get_user_info(id: str, host='127.0.0.1', port=3000) -> str:
    uri = 'http://{}:{}/users/{}'.format(host, port, id)
    result = Requests.get(uri).json()
    address = result.get('user', {}).get('address', None)
    if address:
        return address
    else:
        raise BadId
Can someone help me with this, and can you also suggest the best resources for learning pytest? TIA
Your test regimen might look something like this.
First I suggest creating a fixture to be used in your various method tests. The fixture sets up an instance of your class to be used in your tests rather than creating the instance in the test itself. Keeping tasks separated in this way helps to make your tests both more robust and easier to read.
from my_package import MyClass
import pytest

@pytest.fixture
def a_test_object():
    return MyClass()
You can pass the test object to your series of method tests:
def test_something(a_test_object):
    # do the test
    ...
However if your test object requires some resources during setup (such as a connection, a database, a file, etc etc), you can mock it instead to avoid setting up the resources for the test. See this talk for some helpful info on how to do that.
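For example, a fixture that hands back a MagicMock in place of the real object might look like this (a sketch; the canned return value is made up):

from unittest import mock
import pytest
from my_package import MyClass

@pytest.fixture
def a_mocked_test_object():
    # a stand-in that records calls instead of touching real resources
    fake = mock.MagicMock(spec=MyClass)
    fake.get_user_info.return_value = '123 Example Street'
    return fake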
By the way: if you need to test several different states of the user defined object being created in your fixture, you'll need to parametrize your fixture. This is a bit of a complicated topic, but the documentation explains fixture parametrization very clearly.
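A quick illustration of a parametrized fixture (assuming, purely for illustration, that MyClass accepted a keyword option to vary):

@pytest.fixture(params=[3000, 4000])
def a_parametrized_test_object(request):
    # every test using this fixture runs once per value in params
    return MyClass(port=request.param)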
The other thing you need to do is make sure any .get calls to Requests are intercepted. This is important because it allows your tests to be run without an internet connection, and ensures they do not fail as a result of a bad connection, which is not the thing you are trying to test.
You can intercept Requests.get by using the monkeypatch feature of pytest. All that is required is to include monkeypatch as an input parameter to the test regimen functions.
You can employ another fixture to accomplish this. It might look like this:
import Requests
import pytest

@pytest.fixture
def patched_requests(monkeypatch):
    # store a reference to the old get method
    old_get = Requests.get

    def mocked_get(uri, *args, **kwargs):
        '''A method replacing Requests.get

        Returns either a mocked response object (with json method)
        or the default response object if the uri doesn't match
        one of those that have been supplied.
        '''
        _, id = uri.split('/users/', 1)
        try:
            # attempt to get the correct mocked json method
            json = dict(
                with_address1=lambda: {'user': {'address': 123}},
                with_address2=lambda: {'user': {'address': 456}},
                no_address=lambda: {'user': {}},
                no_user=lambda: {},
            )[id]
        except KeyError:
            # fall back to default behavior
            obj = old_get(uri, *args, **kwargs)
        else:
            # create a mocked requests response object
            mock = type('MockedReq', (), {})()
            # assign the selected json method to the mocked response
            mock.json = json
            obj = mock
        return obj

    # finally, patch Requests.get with the patched version
    monkeypatch.setattr(Requests, 'get', mocked_get)
This looks complicated until you understand what is happening: we have simply made some mocked json objects (represented by dictionaries) with pre-determined user ids and addresses. The patched version of Requests.get simply returns an object of type MockedReq, with the corresponding mocked .json() method, when one of those ids is requested.
Note that Requests will only be patched in tests that actually use the above fixture, e.g.:
def test_something(patched_requests):
    # use the patched Requests.get
    ...
Any test that does not use patched_requests as an input parameter will not use the patched version.
Also note that you could monkeypatch Requests within the test itself, but I suggest doing it separately. If you are using other parts of the requests API, you may need to monkeypatch those as well. Keeping all of this stuff separate is often going to be easier to understand than including it within your test.
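For comparison, the in-test variant mentioned above might look roughly like this (a sketch; the fake address value is made up, and Requests is assumed to be imported in the test module as above):

def test_get_user_info_inline_patch(monkeypatch, a_test_object):
    class FakeResponse:
        @staticmethod
        def json():
            return {'user': {'address': 789}}

    # replace Requests.get only for the duration of this test
    monkeypatch.setattr(Requests, 'get', lambda uri, *args, **kwargs: FakeResponse())
    assert a_test_object.get_user_info('any-id') == 789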
Write your various method tests next. You'll need a different test for each aspect of your method. In other words, you will usually write a different test for the instance in which your method succeeds, and another one for testing when it fails.
First we test method success with a couple test cases.
@pytest.mark.parametrize('id, result', [
    ('with_address1', 123),
    ('with_address2', 456),
])
def test_get_user_info_success(patched_requests, a_test_object, id, result):
    address = a_test_object.get_user_info(id)
    assert address == result
Next we can test for raising the BadId exception using the with pytest.raises feature. Note that since an exception is raised, there is not a result input parameter for the test function.
@pytest.mark.parametrize('id', [
    'no_address',
    'no_user',
])
def test_get_user_info_failure(patched_requests, a_test_object, id):
    from my_package import BadId
    with pytest.raises(BadId):
        address = a_test_object.get_user_info(id)
As posted in my comment, there are also some additional resources to help you learn more about pytest. In particular, be sure to check out Brian Okken's book and Bruno Oliveira's book; they are both very helpful for learning pytest.
I have several Celery tasks I'm executing within a Django view (more specifically within Django Rest Framework's perform_create method).
What I'm trying to achieve is to immediately (that is, as soon as the task has an id/is in the results backend) access the TaskResult object and do something with it, like this:
tasks = [do_something.s(a) for a in (1, 2, 3, 4,)]
results = group(*tasks).apply_async()

for result in results.children:
    task = TaskResult.objects.get(task_id=result.task_id)
    do_something_with_task_object(task)
Now, this fails with django_celery_results.models.DoesNotExist: TaskResult matching query does not exist.
I have not tried it yet, but I could make this work with something like the following snippet. That strikes me as plain wrong and ugly, though, and it also waits until the tasks are finished:
while not all([TaskResult.objects.filter(task_id=t.task_id).exists() for t in results.children]):
    pass
Is there some way to make this work in a nice and clean fashion?
It turns out that a) the moment you ask a question on StackOverflow, you're able to answer it yourself and b) Django transaction management does everything you need.
If you wrap the call to apply_async in an atomic block, all is fine, e.g.
from django.db import transaction

with transaction.atomic():
    results = group(*tasks).apply_async()

TaskResult.objects.get(task_id=results.children[0].task_id)
I don't know if it worked for everyone, but with django-celery-results==2.2.0, the transaction as a context manager doesn't seem to work anymore.
On the other hand, in a post_save signal, it seems ok.
# models.py
from django.db import transaction

@receiver(post_save, sender=TaskResult)
def after_task_result(sender, instance, created, **kwargs):
    if created:
        transaction.on_commit(lambda: do_something())
However, with the signal I lose the view's local variables that are not passed along when the model instance is created. In that case, it is still the ugly code that works best.
# views.py
while not TaskResult.objects.filter(task_id=task.id).exists():
    pass
task = TaskResult.objects.get(task_id=task.id)
# do something more complex with local variables
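If you do end up polling like this, it may be worth putting a cap and a short sleep on the loop so a result that never appears cannot spin the request forever (a sketch; the timings are arbitrary):

import time

deadline = time.monotonic() + 10          # give up after ten seconds
while not TaskResult.objects.filter(task_id=task.id).exists():
    if time.monotonic() > deadline:
        raise TimeoutError('TaskResult for %s never appeared' % task.id)
    time.sleep(0.1)                       # avoid a tight busy loop
task = TaskResult.objects.get(task_id=task.id)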
I'm trying to write some unit tests for my flask app that uses OpenID for authentication. Since it seems like there's no way to log in the test client via OpenID (I asked this question but it hasn't received any responses: Flask OpenID unittest), I was thinking of overriding g.user in my test, so I tried the code snippet from http://flask.pocoo.org/docs/testing/#faking-resources-and-context and it works as expected.
Unfortunately, when using flask-login, g.user is overridden in the before_request wrapper that sets
g.user = current_user
current_user is anonymous, so my test case is broken. One fix is to skip the before_request wrapper code when in test mode, but it seems lame to need test-specific logic in production code. I've tried messing around with the request context too, but g.user still gets overridden eventually. Any ideas for a clean way to solve this?
The official documentation has an example in "Faking Resources and Context":
You first need to make sure that g.user is only set if it does not exist yet. You can do this using getattr. Here's a slightly modified example:
@APP.before_request
def set_user():
    user = getattr(g, 'user', None)
    if user is None:
        g.user = concrete_implementation()
By using getattr we give ourselves the chance to "inject" something during testing. If we did not do this, we would overwrite the variable with the concrete implementation even after the unit tests had injected a value.
The next thing we do is hook into the appcontext_pushed signal and set g.user to a testable value. This happens before the before_request hook, so by the time that is called, getattr will return our test value and the concrete implementation is skipped:
from contextlib import contextmanager
from flask import appcontext_pushed, g

@contextmanager
def user_set(app, user):
    def handler(sender, **kwargs):
        g.user = user
    with appcontext_pushed.connected_to(handler, app):
        yield
And with this little helper we can inject something whenever we need to use a testable value:
def test_user_me():
    with user_set(app, 'our-testable-user'):
        c = app.test_client()
        resp = c.get('/protected-resource')
        assert resp.data == '...'
Based on this other question, "In a Flask unit-test, how can I mock objects on the request-global `g` object?", what worked for me is the following:
In my app I refactored the login logic contained in the before_request hook into a separate function. Then I patched that function in my tests so that it returns the specific user I want to use for a bunch of tests. The before_request hook still runs in the tests, but by patching the function it invokes I can avoid the actual login process; a sketch of what I mean follows below.
I am not sure it is the cleanest way but I think it is better than adding test-only logic to your before_request; it is just a refactoring.
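A rough sketch of the idea (the module names, get_current_user, and the route are my own, not from the real app):

# app.py
from flask import Flask, g
from flask_login import current_user

app = Flask(__name__)

def get_current_user():
    # the real login lookup lives in one small function
    return current_user

@app.before_request
def set_user():
    g.user = get_current_user()

# tests.py
from unittest import mock
from app import app

def test_protected_resource():
    # before_request still runs, but it now sees our injected user
    with mock.patch('app.get_current_user', return_value='our-testable-user'):
        client = app.test_client()
        resp = client.get('/protected-resource')  # hypothetical protected route
        assert resp.status_code == 200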