Many third-party Python libraries raise custom exceptions. Many of these exceptions have their own dependencies and side effects. Consider, for example, the following situation:
class ThirdPartyException(BaseException):
    def __init__(self):
        print("I do something arcane and expensive upon construction.")
        print("Maybe I have a bunch of arguments that can't be None, too.")

    def state(self) -> bool:
        # In real life, this could be True or False
        return True
Let's say, moreover, that I absolutely have to handle this exception, and to do it, I need to look at the exception's state. If I want to write tests to examine the behavior when this exception is handled, I must have the ability to create a ThirdPartyException. But I may not even be able to figure out how, let alone how to do it cheaply.
If this weren't an Exception and I wanted to write tests, I would immediately reach for MagicMock. But I cannot figure out how to use MagicMock with an exception.
How do I test the error handling cases in the following code, ideally using py.test?
def error_causing_thing():
    raise ThirdPartyException()

def handle_error_conditionally():
    try:
        error_causing_thing()
    except ThirdPartyException as e:
        if e.state():
            return "Some non-error value"
        else:
            return "A different non-error value"
I know this is a stale question, and I am not using pytest, but I had a similar issue with unittest and just found a solution that someone else may find helpful: patch the custom exception, passing the new keyword any class that is a subclass of Exception (or is the Exception class itself):
import unittest
from unittest import TestCase
from unittest.mock import patch
# ... other imports required for these unit tests

class MockCustomException(Exception):
    def state(self):
        return self.__class__.state_return_value

class MyTestCase(TestCase):
    def setUp(self):
        custom_exception_patcher = patch(
            'path.to.CustomException',
            new=MockCustomException
        )
        custom_exception_patcher.start()  # start patcher
        self.addCleanup(custom_exception_patcher.stop)  # stop patch after test

    def test_when_state_true(self):
        MockCustomException.state_return_value = True
        self.assertEqual(handle_error_conditionally(), "Some non-error value")

    def test_when_state_false(self):
        MockCustomException.state_return_value = False
        self.assertEqual(handle_error_conditionally(), "A different non-error value")
This method can also be used on a per-test basis, using patch as a decorator or as a context manager:
# ... imports, etc

class MockCustomExceptionStateTrue(Exception):
    def state(self):
        return True

@patch('path.to.CustomException', new=MockCustomExceptionStateTrue)
def test_with_decorator_patch(self):
    self.assertEqual(handle_error_conditionally(), "Some non-error value")

def test_with_context_manager(self):
    class MockCustomException(Exception):
        def state(self):
            return True

    with patch('path.to.CustomException', new=MockCustomException):
        self.assertEqual(handle_error_conditionally(), "Some non-error value")
Hopefully this is helpful for someone! After briefly looking through the pytest docs, it looks like monkeypatch offers functionality similar to unittest's patch.
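For pytest users, a rough sketch of the same idea with the built-in monkeypatch fixture might look like the following. This is only a sketch: 'path.to.CustomException' is the placeholder target from the example above, and handle_error_conditionally is the function under test from the question.

class MockCustomException(Exception):
    state_return_value = True

    def state(self):
        return self.__class__.state_return_value

def test_when_state_true(monkeypatch):
    # Swap the real exception class for the mock, for this test only.
    monkeypatch.setattr("path.to.CustomException", MockCustomException)
    MockCustomException.state_return_value = True
    assert handle_error_conditionally() == "Some non-error value"

def test_when_state_false(monkeypatch):
    monkeypatch.setattr("path.to.CustomException", MockCustomException)
    MockCustomException.state_return_value = False
    assert handle_error_conditionally() == "A different non-error value"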
I was writing a test using the pytest library, and I need to test a method which takes another function as an argument.
from typing import Callable

class Certificate:
    def upload(self, upload_fn: Callable):
        try:
            if self.file_name:
                upload_fn(self.file_name)
                return
            raise ValueError("File name doesn't exist")
        except Exception as e:
            raise e
Now, I created a dummy mock function which I pass when calling the upload method, but I am not sure how to verify that upload_fn was called.
I am trying to achieve something like this
def test_certificate_upload(certificate):
    certificate.upload(some_mock_fn)
    assert some_mock_fn.called_once() == True
EDIT: Currently I am testing it in the following way, but I think there can be a better approach.
def mock_upload(f_name):
    """Just an empty mock method."""

def mock_upload_raise_error(f_name):
    raise Exception("some error")

def test_certificate_upload_raise_exception(certificate):
    with pytest.raises(Exception) as e:
        certificate.upload(mock_upload_raise_error)
PS: A limitation of this approach is that we can't assert whether the method was called, how many times it was called, or with what params it was called.
Also, we have to create extra dummy mock methods for different scenarios.
You can mock it:
def mock_get(self, *args):
    return "Result I want"

@mock.patch(upload, side_effect=mock_get)
def test_certificate_upload(certificate):
    certificate.upload(some_mock_fn)
    assert function_name() == Return_data
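For what it's worth, a minimal sketch of what the question seems to be after would pass a MagicMock as the callable, so the call can be asserted directly. The certificate fixture and its file_name attribute are assumptions carried over from the question:

import pytest
from unittest.mock import MagicMock

def test_certificate_upload_calls_upload_fn(certificate):
    mock_fn = MagicMock()
    certificate.upload(mock_fn)
    # MagicMock records every call, so we can assert count and arguments.
    mock_fn.assert_called_once_with(certificate.file_name)

def test_certificate_upload_propagates_errors(certificate):
    mock_fn = MagicMock(side_effect=Exception("upload failed"))
    with pytest.raises(Exception):
        certificate.upload(mock_fn)

This removes the need for separate hand-written dummy functions for each scenario.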
I usually declare a base exception for my modules which does nothing; from that one I derive custom errors that can carry additional custom data. AFAIK this is the Right Way™ to use exceptions in Python.
I also tend to build a human-readable message from that custom info and pass it along, so I can refer to that message in error handlers. This is an example:
# this code is meant to be compatible with Python-2.7.x

class MycoolmoduleException(Exception):
    '''base Mycoolmodule Exception'''

class TooManyFoo(MycoolmoduleException):
    '''got too many Foo things'''
    def __init__(self, foo_num):
        self.foo_num = foo_num
        msg = "someone passed me %d Foos" % foo_num
        super(TooManyFoo, self).__init__(msg)

# .... somewhere else ....
try:
    do_something()
except Exception as exc:
    tell_user(exc.message)

# real world example using Click
@click.command()
@click.pass_context
def foo(ctx):
    '''do something'''
    try:
        # ... try really hard to do something useful ...
    except MycoolmoduleException as exc:
        click.echo(exc.message, err=True)
        ctx.exit(-1)
Now, when I run that code through pylint-2.3.1 it complains about my use of MycoolmoduleException.message:
coolmodule.py:458:19: E1101: Instance of 'MycoolmoduleException' has no 'message' member (no-member)
That kind of code always worked for me (both in Python2 and Python3) and hasattr(exc, 'message') in the same code returns True, so why is pylint complaining? And/or: how could that code be improved?
(NB: the same happens if I try to catch the built-in Exception instead of my own MycoolmoduleException.)
I am working with a class in Python that is part of a bigger program. The class calls several different methods.
If there is an error in one of the methods, I would like the code to keep running, but after the program has finished, I want to be able to see which methods had errors in them.
Below is roughly how I am structuring it at the moment, and this solution doesn't scale very well with more methods. Is there a better way to provide feedback (after the code has been fully run) as to which of the methods had an error?
class Class():
    def __init__(self):
        try:
            self.method_1()
        except:
            self.error_method1 = "Yes"
        try:
            self.method_2()
        except:
            self.error_method2 = "Yes"
        try:
            self.method_3()
        except:
            self.error_method3 = "Yes"
Although you could use sys.exc_info() to retrieve information about an exception when one occurs, as I mentioned in a comment, doing so may not be required, since Python's standard try/except mechanism seems adequate.
Below is a runnable example showing how to do so in order to provide "feedback" later about the execution of several methods of a class. This approach uses a decorator function, so it should scale well, since the same decorator can be applied to as many of the class's methods as desired.
from contextlib import contextmanager
from functools import wraps
import sys
from textwrap import indent

def provide_feedback(method):
    """ Decorator to trap exceptions and add messages to feedback. """
    @wraps(method)
    def wrapped_method(self, *args, **kwargs):
        try:
            return method(self, *args, **kwargs)
        except Exception as exc:
            self._feedback.append(
                '{!r} exception occurred in {}()'.format(exc, method.__qualname__))
    return wrapped_method

class Class():
    def __init__(self):
        with self.feedback():
            self.method_1()
            self.method_2()
            self.method_3()

    @contextmanager
    def feedback(self):
        self._feedback = []
        try:
            yield
        finally:
            # Example of what could be done with any exception messages.
            # They could instead be appended to some higher-level container.
            if self._feedback:
                print('Feedback:')
                print(indent('\n'.join(self._feedback), ' '))

    @provide_feedback
    def method_1(self):
        raise RuntimeError('bogus')

    @provide_feedback
    def method_2(self):
        pass

    @provide_feedback
    def method_3(self):
        raise StopIteration('Not enough foobar to go around')

inst = Class()
Output:
Feedback:
RuntimeError('bogus') exception occurred in Class.method_1()
StopIteration('Not enough foobar to go around') exception occurred in Class.method_3()
I'm trying to test if the application is retrying.
@celery.task(bind=False, default_retry_delay=30)
def convert_video(gif_url, webhook):
    try:
        # doing something
        VideoManager().convert(gif_url)
        return
    except Exception as exc:
        raise convert_video.retry(exc=exc)
And I'm mocking it in the test:
@patch('src.video_manager.VideoManager.convert')
@patch('requests.post')
def test_retry_failed_task(self, mock_video_manager, mock_requests):
    mock_video_manager.return_value = {'webm': 'file.webm', 'mp4': 'file.mp4', 'ogv': 'file.ogv', 'snapshot': 'snapshot.png'}
    mock_video_manager.side_effect = Exception('some error')
    server.convert_video.retry = MagicMock()
    server.convert_video('gif_url', 'http://www.company.com/webhook?attachment_id=1234')
    server.convert_video.retry.assert_called_with(ANY)
And I'm getting this error
TypeError: exceptions must be old-style classes or derived from BaseException, not MagicMock
Which is obvious, but I don't know how else to test whether the retry method is being called.
I haven't gotten it to work with just the built-in retry, so I use a mock with the real Retry exception as its side effect; this makes it possible to catch it in a test.
I've done it like this:
from celery.exceptions import Retry
from mock import MagicMock
from nose.plugins.attrib import attr

# Set it for every task call (or per task below with @patch)
task.retry = MagicMock(side_effect=Retry)

# @patch('task.retry', MagicMock(side_effect=Retry))
def test_task(self):
    with assert_raises(Retry):
        task()  # Note, no delay or things like that

# and the task, I don't know if it works without bind.
@celery.task(bind=True)
def task(self):
    raise self.retry()
If anyone knows how I can get rid of the extra step in mocking the Retry "exception" I'd be happy to hear it!
from mock import patch
import pytest

@patch('tasks.convert_video.retry')
@patch('tasks.VideoManager')
def test_retry_on_exception(mock_video_manager, mock_retry):
    mock_video_manager.convert.side_effect = error = Exception()
    with pytest.raises(Exception):
        tasks.convert_video('foo', 'bar')
    mock_retry.assert_called_with(exc=error)
You're also missing some stuff in your task:
@celery.task(bind=False, default_retry_delay=30)
def convert_video(gif_url, webhook):
    try:
        return VideoManager().convert(gif_url)
    except Exception as exc:
        convert_video.retry(exc=exc)
The answers here didn't help me, so I dived even deeper into celery's code and found a hack that works for me:
def test_celery_retry(monkeypatch):
    # so the retry will be eager
    monkeypatch.setattr(celery_app.conf, 'task_always_eager', True)
    # so celery won't try to raise an error and actually retry
    monkeypatch.setattr(celery.app.task.Context, 'called_directly', False)

    task.delay()
For me it worked to patch celery.app.task.Task.request. This way I could also simulate later retries (e.g., to test that the task is retried multiple times).
Using pytest and unittest.mock.patch() this looks like:
#mock.patch("celery.app.task.Task.request")
def test_celery_task_retry(mock_request):
# Override called_directly so that Task.retry() produces a Retry exception.
mock_request.called_directly = False
# Simulate the 42nd retry.
mock_request.retries = 42
with pytest.raises(celery.exceptions.Retry) as retry_exc:
task()
assert retry_exc.value.when > 0
I have a hierarchy like this and similar code:
class FrontendException(Exception):
    pass

class BackendException(Exception):
    pass

class BackendRequest:
    def exec(self):
        raise BackendException()

class Frontend:
    def cmd_a(self):
        BackendRequest().exec()

    def cmd_b(self):
        BackendRequest().exec()
The goal is to let a developer work with Frontend objects and Frontend exceptions inside the cmd_x functions of Frontend.
Basically, I need a place to handle common BackendException types and raise FrontendException instead. For example:
class Frontend:
    def cmd_a(self):
        try:
            BackendRequest().exec()
        except BackendException as e:
            raise FrontendException()
And this will be repeated in each cmd_x function! It's so ugly! And it forces every command to deal with Backend internals! I want to remove the repeated exception handling.
Any suggestions?
Btw, here is my solution, but I find it ugly too, so look at it only after you've tried to suggest something. Maybe you'll have suggestions about my solution as well.
class BaseFrontend:
    def exec_request(self, req):
        try:
            return req.exec()
        except BackendException as e:
            raise FrontendException

class Frontend(BaseFrontend):
    def cmd_a(self):
        return self.exec_request(BackendRequest())

    def cmd_b(self):
        return self.exec_request(BackendRequest())
EDIT: OK, yes, I know, I don't need to create a lot of classes to build a simple API. But let's see what I need as the result:
class APIManager:
    def cmd_a(self): ...
    def cmd_b(self): ...
This manager needs to access an HTTP REST service to perform each command. So, if I get an error during the REST request, I need to raise an APIManagerException - I can't leak the raw pycurl exception, because the APIManager user doesn't know what pycurl is and will be confused by getting a pycurl error after passing a wrong ID as the argument of cmd_x.
So I need to raise informative exceptions for some common cases. Let it be just one exception - APIManagerException. But I don't want to repeat the try...except block each time, in each command, for each pycurl request. In fact, I want to handle some errors in the commands (the cmd_x functions), not parse pycurl errors.
You can create a decorator that wraps all Frontend calls, catches BackendExceptions, and raises FrontendException if they are thrown. (Honestly, though, it's not clear why Frontend and Backend are classes and not a set of functions.) See below:
class FrontendException(Exception):
    pass

class BackendException(Exception):
    pass

class BackendRequest:
    def exec(self):
        raise BackendException()

class Frontend:
    def back_raiser(func):
        def wrapped(*args, **kwargs):
            try:
                func(*args, **kwargs)
            except BackendException:
                raise FrontendException
        return wrapped

    @back_raiser
    def cmd_a(self):
        BackendRequest().exec()

    @back_raiser
    def cmd_b(self):
        BackendRequest().exec()
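A brief usage sketch, assuming the definitions above: callers of the wrapped commands now only ever see FrontendException:

fe = Frontend()
try:
    fe.cmd_a()
except FrontendException:
    # The BackendException raised inside cmd_a was translated by back_raiser.
    print("BackendException was translated into FrontendException")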