I am working with a class in Python that is part of a bigger program. The class calls several different methods.
If there is an error in one of the methods I would like the code to keep running, but after the program has finished I want to be able to see which methods had potential errors in them.
Below is roughly how I am structuring it at the moment. This solution doesn't scale very well as more methods are added. Is there a better way to provide feedback (after the code has fully run) as to which of the methods had a potential error?
class Class():
    def __init__(self):
        try:
            self.method_1()
        except:
            self.error_method1 = "Yes"
        try:
            self.method_2()
        except:
            self.error_method2 = "Yes"
        try:
            self.method_3()
        except:
            self.error_method3 = "Yes"
Although you could use sys.exc_info() to retrieve information about an exception when one occurs, as I mentioned in a comment, doing so may not be required since Python's standard try/except mechanism seems adequate.
Below is a runnable example showing how to provide "feedback" later about the execution of several methods of a class. The approach uses a decorator function, so it should scale well, since the same decorator can be applied to as many of the class's methods as desired.
from contextlib import contextmanager
from functools import wraps
import sys
from textwrap import indent


def provide_feedback(method):
    """ Decorator to trap exceptions and add messages to feedback. """
    @wraps(method)
    def wrapped_method(self, *args, **kwargs):
        try:
            return method(self, *args, **kwargs)
        except Exception as exc:
            self._feedback.append(
                '{!r} exception occurred in {}()'.format(exc, method.__qualname__))
    return wrapped_method
class Class():
    def __init__(self):
        with self.feedback():
            self.method_1()
            self.method_2()
            self.method_3()

    @contextmanager
    def feedback(self):
        self._feedback = []
        try:
            yield
        finally:
            # Example of what could be done with any exception messages.
            # They could instead be appended to some higher-level container.
            if self._feedback:
                print('Feedback:')
                print(indent('\n'.join(self._feedback), ' '))

    @provide_feedback
    def method_1(self):
        raise RuntimeError('bogus')

    @provide_feedback
    def method_2(self):
        pass

    @provide_feedback
    def method_3(self):
        raise StopIteration('Not enough foobar to go around')
inst = Class()
Output:
Feedback:
RuntimeError('bogus') exception occurred in Class.method_1()
StopIteration('Not enough foobar to go around') exception occurred in Class.method_3()
I have a class that caches a method:
import functools

class Foo:
    def __init__(self, package: str):
        self.is_installed = functools.lru_cache()(self.is_installed)

    def is_installed(self):
        # implementation here
        ...
And code that calls the method by looping over instances of the class:
try:
    if Foo('package').is_installed():
        ...  # handling elided in the question
except Exception as e:
    print('Could not install')
else:
    print('Installed properly')
I am trying to test this code by mocking the is_installed method to throw an exception.
@patch.object(Foo, 'is_installed')
def test_exception_installing_bear(self, mock_method):
    mock_method.side_effect = Exception('Something bad')
    # code to assert 'could not install' in stdout
But it does not work. The exception is not thrown and the assertion fails. On the other hand, the output shows that it installed properly. I think it has something to do with the caching. What am I doing wrong?
See the documentation for unittest.TestCase.assertRaises.
Alternative:
with self.assertRaises(Exception):
    mock_args = {'side_effect': Exception}
    with mock.patch('foo.Foo.is_installed', **mock_args):
        Foo('package').is_installed()
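For completeness, a fuller test along those lines might look like this. It is only a sketch: it assumes Foo is importable from a module named foo and that the calling code prints to stdout exactly as shown in the question.
import io
import unittest
from contextlib import redirect_stdout
from unittest import mock

from foo import Foo  # assumed module name


class TestInstall(unittest.TestCase):
    def test_exception_installing(self):
        # Patch before the instance is created, so the lru_cache wrapper
        # built in __init__ wraps the mock instead of the real method.
        with mock.patch.object(Foo, 'is_installed',
                               side_effect=Exception('Something bad')):
            buf = io.StringIO()
            with redirect_stdout(buf):
                try:
                    if Foo('package').is_installed():
                        pass
                except Exception:
                    print('Could not install')
                else:
                    print('Installed properly')
            self.assertIn('Could not install', buf.getvalue())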
This pattern is from the django docs:
class SimpleTest(unittest.TestCase):
    def test_details(self):
        client = Client()
        response = client.get('/customer/details/')
        self.assertEqual(response.status_code, 200)
From: https://docs.djangoproject.com/en/1.8/topics/testing/tools/#default-test-client
If the test fails, the error message does not help very much. For example, if the status_code is 302, then I see 302 != 200.
The question is now: where does the wrong HTTPResponse get created?
I would like to see the stack trace of the interpreter at the point where the wrong HTTPResponse object gets created.
I read the docs for Django's assertions but found no matching method.
Update
This is a general question: how do I see the wanted information immediately if the assertion fails? Since these assertions (self.assertEqual(response.status_code, 200)) are common, I don't want to start debugging each time.
Update 2016
I had the same idea again and found the current answer not 100% easy to use. I wrote a new answer, which has a simple-to-use solution (a subclass of the Django web client): Django: assertEqual(response.status_code, 200): I want to see useful stack of functions calls
I think it could be achieved by creating a TestCase subclass that monkeypatches django.http.response.HttpResponseBase.__init__() to record a stack trace and store it on the Response object, then writing an assertResponseCodeEquals(response, status_code=200) method that prints the stored stack trace on failure to show where the Response was created.
I could actually really use a solution for this myself, and might look at implementing it.
Update:
Here's a v1 implementation, which could use some refinement (e.g. only printing the relevant lines of the stack trace).
import mock

from traceback import extract_stack, format_list

from django.test.testcases import TestCase
from django.http.response import HttpResponseBase

orig_response_init = HttpResponseBase.__init__

def new_response_init(self, *args, **kwargs):
    orig_response_init(self, *args, **kwargs)
    self._init_stack = extract_stack()

class ResponseTracebackTestCase(TestCase):
    @classmethod
    def setUpClass(cls):
        cls.patcher = mock.patch.object(HttpResponseBase, '__init__', new_response_init)
        cls.patcher.start()

    @classmethod
    def tearDownClass(cls):
        cls.patcher.stop()

    def assertResponseCodeEquals(self, response, status_code=200):
        self.assertEqual(response.status_code, status_code,
            "Response code was '%s', expected '%s'" % (
                response.status_code, status_code,
            ) + '\n' + ''.join(format_list(response._init_stack))
        )

class MyTestCase(ResponseTracebackTestCase):
    def test_index_page_returns_200(self):
        response = self.client.get('/')
        self.assertResponseCodeEquals(response, 200)
How do I see the traceback if the assertion fails, without debugging?
If the assertion fails, there isn't a traceback. The client.get() hasn't failed; it just returned a different response than you were expecting.
You could use pdb to step through the client.get() call and see why it is returning the unexpected response.
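For example, you could pause right before the request and step into client.get() to see where the response comes from (a minimal sketch based on the test from the question):
import pdb

class SimpleTest(unittest.TestCase):
    def test_details(self):
        client = Client()
        pdb.set_trace()  # pause here, then press 's' to step into client.get()
        response = client.get('/customer/details/')
        self.assertEqual(response.status_code, 200)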
Maybe this could work for you:
class SimpleTest(unittest.TestCase):
    @override_settings(DEBUG=True)
    def test_details(self):
        client = Client()
        response = client.get('/customer/details/')
        self.assertEqual(response.status_code, 200, response.content)
Using @override_settings to set DEBUG=True will give you the stacktrace just as if you were running an instance in DEBUG mode.
Secondly, in order to see the content of the response, you need to either print it, log it using the logging module, or add it as the message for the assert method. Without a debugger, once the assert has failed it is usually too late to print anything useful.
You can also configure logging and add a handler that saves messages in memory, and print all of that, either in a custom assert method or in a custom test runner.
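A rough sketch of that idea, collecting log records in memory and attaching them to the assertion message (the handler class and the assertStatus helper are illustrative names, not Django APIs):
import logging
import unittest

class InMemoryLogHandler(logging.Handler):
    """Collects formatted log records so they can be shown when an assert fails."""
    def __init__(self):
        super(InMemoryLogHandler, self).__init__()
        self.records = []

    def emit(self, record):
        self.records.append(self.format(record))

class LoggingTestCase(unittest.TestCase):
    def setUp(self):
        self.log_handler = InMemoryLogHandler()
        # django.request is where Django logs 4xx/5xx responses
        logging.getLogger('django.request').addHandler(self.log_handler)
        self.addCleanup(
            logging.getLogger('django.request').removeHandler, self.log_handler)

    def assertStatus(self, response, status_code=200):
        self.assertEqual(response.status_code, status_code,
                         '\n'.join(self.log_handler.records))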
I was inspired by the solution that @Fush proposed, but my code was using assertRedirects, which is a longer method and a bit too much code to duplicate without feeling bad about myself.
I spent a bit of time figuring out how I could just call super() for each assert and came up with this. I've included two example assert methods - they would all basically be the same. Maybe some clever soul can think of some metaclass magic that does this for all methods that take 'response' as their first argument.
from bs4 import BeautifulSoup
from django.test.testcases import TestCase

class ResponseTracebackTestCase(TestCase):

    def _display_response_traceback(self, e, content):
        soup = BeautifulSoup(content)
        assert False, u'\n\nOriginal Traceback:\n\n{}'.format(
            soup.find("textarea", {"id": "traceback_area"}).text
        )

    def assertRedirects(self, response, *args, **kwargs):
        try:
            super(ResponseTracebackTestCase, self).assertRedirects(response, *args, **kwargs)
        except Exception as e:
            self._display_response_traceback(e, response.content)

    def assertContains(self, response, *args, **kwargs):
        try:
            super(ResponseTracebackTestCase, self).assertContains(response, *args, **kwargs)
        except Exception as e:
            self._display_response_traceback(e, response.content)
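A test case using the subclass could then look like this (the URL, form data and redirect target are placeholders, only meant to show that the overridden assertRedirects is picked up automatically):
class MyViewTests(ResponseTracebackTestCase):
    def test_login_redirect(self):
        # hypothetical view: on failure, the original traceback from the
        # error page is dumped instead of a bare assertion message
        response = self.client.post('/login/', {'username': 'bob', 'password': 'secret'})
        self.assertRedirects(response, '/dashboard/')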
I subclassed the Django web client to get this:
Usage
def test_foo(self):
    ...
    MyClient().get(url, assert_status=200)
Implementation
import mock

from django.test import Client
from django.http.response import HttpResponseBase

orig_response_init = HttpResponseBase.__init__


class HTTPResponseStatusCodeAssertionError(AssertionError):
    # definition assumed; the original answer uses this name without showing it
    pass


class MyClient(Client):
    def generic(self, method, path, data='',
                content_type='application/octet-stream', secure=False,
                assert_status=None,
                **extra):
        if assert_status:
            return self.assert_status(assert_status, super(MyClient, self).generic,
                                      method, path, data, content_type, secure, **extra)
        return super(MyClient, self).generic(method, path, data, content_type, secure, **extra)

    @classmethod
    def assert_status(cls, status_code, method_pointer, *args, **kwargs):
        assert hasattr(method_pointer, '__call__'), \
            'Method pointer needed, looks like the result of a method call: %r' % (method_pointer)

        def new_init(self, *args, **kwargs):
            orig_response_init(self, *args, **kwargs)
            if not status_code == self.status_code:
                raise HTTPResponseStatusCodeAssertionError('should=%s is=%s' % (status_code, self.status_code))

        def reraise_exception(*args, **kwargs):
            raise

        with mock.patch('django.core.handlers.base.BaseHandler.handle_uncaught_exception', reraise_exception):
            with mock.patch.object(HttpResponseBase, '__init__', new_init):
                return method_pointer(*args, **kwargs)
Conclusion
This results in a long exception if an HTTP response with a wrong status code was created. If you are not afraid of long exceptions, you will see the root of the problem very quickly. That's what I want, so I am happy.
Credits
This was based on other answers to this question.
I'm trying to test whether the application retries the task.
@celery.task(bind=False, default_retry_delay=30)
def convert_video(gif_url, webhook):
    try:
        # doing something
        VideoManager().convert(gif_url)
        return
    except Exception as exc:
        raise convert_video.retry(exc=exc)
And I'm mocking it in the test:
@patch('src.video_manager.VideoManager.convert')
@patch('requests.post')
def test_retry_failed_task(self, mock_video_manager, mock_requests):
    mock_video_manager.return_value = {'webm': 'file.webm', 'mp4': 'file.mp4', 'ogv': 'file.ogv', 'snapshot': 'snapshot.png'}
    mock_video_manager.side_effect = Exception('some error')
    server.convert_video.retry = MagicMock()
    server.convert_video('gif_url', 'http://www.company.com/webhook?attachment_id=1234')
    server.convert_video.retry.assert_called_with(ANY)
And I'm getting this error
TypeError: exceptions must be old-style classes or derived from BaseException, not MagicMock
Which is obvious, but I don't know how else to do it to test whether the method is being called.
I haven't gotten it to work with just the built-in retry, so I have to use a mock with the side effect of the real Retry; this makes it possible to catch it in a test.
I've done it like this:
from celery.exceptions import Retry
from mock import MagicMock
from nose.plugins.attrib import attr

# Set it for every task call (or per task below with @patch)
task.retry = MagicMock(side_effect=Retry)

# @patch('task.retry', MagicMock(side_effect=Retry))
def test_task(self):
    with assert_raises(Retry):
        task()  # Note, no delay or things like that

# and the task, I don't know if it works without bind.
@celery.task(bind=True)
def task(self):
    raise self.retry()
If anyone knows how I can get rid of the extra step in mocking the Retry "exception" I'd be happy to hear it!
from mock import patch
import pytest

@patch('tasks.convert_video.retry')
@patch('tasks.VideoManager')
def test_retry_on_exception(mock_video_manger, mock_retry):
    mock_video_manger.convert.side_effect = error = Exception()

    with pytest.raises(Exception):
        tasks.convert_video('foo', 'bar')

    mock_retry.assert_called_with(exc=error)
You're also missing some things in your task:
@celery.task(bind=False, default_retry_delay=30)
def convert_video(gif_url, webhook):
    try:
        return VideoManager().convert(gif_url)
    except Exception as exc:
        convert_video.retry(exc=exc)
The answers here didn't help me, so I dived even deeper into celery's code and found a hack that works for me:
def test_celery_retry(monkeypatch):
    # so the retry will be eager
    monkeypatch.setattr(celery_app.conf, 'task_always_eager', True)
    # so celery won't try to raise an error and actually retry
    monkeypatch.setattr(celery.app.task.Context, 'called_directly', False)
    task.delay()
For me it worked to patch celery.app.task.Task.request. This way I could also simulate later retries (e.g. to test that the task is retried multiple times).
Using pytest and unittest.mock.patch() this looks like:
@mock.patch("celery.app.task.Task.request")
def test_celery_task_retry(mock_request):
    # Override called_directly so that Task.retry() produces a Retry exception.
    mock_request.called_directly = False
    # Simulate the 42nd retry.
    mock_request.retries = 42

    with pytest.raises(celery.exceptions.Retry) as retry_exc:
        task()

    assert retry_exc.value.when > 0
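For reference, a task of the kind this test exercises might look roughly like this; the app instance, do_work() and the backoff policy are assumptions, not taken from the question:
import celery

celery_app = celery.Celery('example')  # assumed app instance


def do_work():
    raise RuntimeError('transient failure')  # hypothetical unit of work


@celery_app.task(bind=True, max_retries=None)
def task(self):
    try:
        do_work()
    except Exception as exc:
        # Exponential backoff: the countdown grows with the retry count,
        # which is why the test above can assert retry_exc.value.when > 0.
        raise self.retry(exc=exc, countdown=2 ** self.request.retries)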
I have a hierarchy and code similar to this:
class FrontendException(Exception):
    pass

class BackendException(Exception):
    pass

class BackendRequest:
    def exec(self):
        raise BackendException()

class Frontend:
    def cmd_a(self):
        BackendRequest().exec()

    def cmd_b(self):
        BackendRequest().exec()
The goal is to let developers work with Frontend objects and exceptions inside the cmd_x functions of Frontend.
Basically, I need a place to handle the common BackendException types and raise FrontendException instead. For example:
class Frontend:
    def cmd_a(self):
        try:
            BackendRequest().exec()
        except BackendException as e:
            raise FrontendException()
And this would be repeated in every cmd_x function! It's so ugly! And it deals with Backend internals! I want to remove the repeated exception handling.
Any suggestions?
By the way, here is my own solution, but I find it ugly too, so look at it only after you have tried to suggest something. Maybe you can suggest something about my solution instead.
class BaseFrontend:
    def exec_request(self, req):
        try:
            return req.exec()
        except BackendException as e:
            raise FrontendException

class Frontend(BaseFrontend):
    def cmd_a(self):
        return self.exec_request(BackendRequest())

    def cmd_b(self):
        return self.exec_request(BackendRequest())
EDIT: OK, yes, I know I don't need to create a lot of classes to build a simple API. But let's look at what I need as the result:
class APIManager:
    def cmd_a(self): ...
    def cmd_b(self): ...
This manager needs to access an HTTP REST service to perform each command. So if I get an error during a REST request, I need to raise an APIManagerException - I can't let the raw pycurl exception escape, because the APIManager user doesn't know what pycurl is and will be confused by a pycurl error when he passes a wrong ID as the argument of cmd_x.
So I need to raise informative exceptions for some common cases. Let it be just one exception - APIManagerException. But I don't want to repeat the try...except block each time, in each command, for each pycurl request. In fact, I want to handle some errors in the commands (the cmd_x functions), not parse pycurl errors.
You can create a decorator that wraps all Frontend calls, catches BackendExceptions, and raises FrontendException if they are thrown. (Honestly, though, it's not clear why Frontend and Backend are classes and not a set of functions.) See below:
class FrontendException(Exception):
    pass

class BackendException(Exception):
    pass

class BackendRequest:
    def exec(self):
        raise BackendException()

class Frontend:
    def back_raiser(func):
        def wrapped(*args, **kwargs):
            try:
                func(*args, **kwargs)
            except BackendException:
                raise FrontendException
        return wrapped

    @back_raiser
    def cmd_a(self):
        BackendRequest().exec()

    @back_raiser
    def cmd_b(self):
        BackendRequest().exec()
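For what it's worth, a quick check of the decorated methods (using the classes above) would be:
frontend = Frontend()
try:
    frontend.cmd_a()
except FrontendException:
    print('BackendException was translated into FrontendException')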