I usually declare a base exception for my modules which does nothing; from that one I derive custom errors that can carry additional custom data. AFAIK this is the Right Way™ to use exceptions in Python.
I'm also used to building a human-readable message from that custom info and passing it along, so I can refer to that message in error handlers. Here is an example:
# this code is meant to be compatible with Python-2.7.x
class MycoolmoduleException(Exception):
    '''base Mycoolmodule Exception'''

class TooManyFoo(MycoolmoduleException):
    '''got too many Foo things'''
    def __init__(self, foo_num):
        self.foo_num = foo_num
        msg = "someone passed me %d Foos" % foo_num
        super(TooManyFoo, self).__init__(msg)

# .... somewhere else ....
try:
    do_something()
except Exception as exc:
    tell_user(exc.message)
# real world example using Click
@click.command()
@click.pass_context
def foo(ctx):
    '''do something'''
    try:
        # ... try really hard to do something useful ...
    except MycoolmoduleException as exc:
        click.echo(exc.message, err=True)
        ctx.exit(-1)
Now, when I run that code through pylint-2.3.1 it complains about my use of MycoolmoduleException.message:
coolmodule.py:458:19: E1101: Instance of 'MycoolmoduleException' has no 'message' member (no-member)
That kind of code always worked for me (both in Python2 and Python3) and hasattr(exc, 'message') in the same code returns True, so why is pylint complaining? And/or: how could that code be improved?
(NB: the same happens if I try to catch the built-in Exception instead of my own MycoolmoduleException.)
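For what it's worth, here is one way the code could be made lint-clean (a sketch of mine, not part of the original question): Exception.message was deprecated in Python 2.6 and removed in Python 3, and pylint 2.x itself runs only under Python 3 and models exceptions accordingly, so it reports the attribute as missing. Storing the text explicitly on the base class, or simply using str(exc), avoids the no-member warning:

class MycoolmoduleException(Exception):
    '''base Mycoolmodule Exception'''
    def __init__(self, message=''):
        super(MycoolmoduleException, self).__init__(message)
        self.message = message  # explicit attribute, so pylint can see it

try:
    do_something()
except MycoolmoduleException as exc:
    tell_user(str(exc))  # str(exc) is portable across Python 2 and 3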
Related
I have a Python class in which I open files and read out data. If some criteria are not met, I raise an error, but before that I specify the error by giving the object an attribute: self.Error = specification. But since raising the error undoes everything in the try block, I can't access it. This happens in the __init__ function, so the created object doesn't even exist.
Here's the necessary code:
class MyClass:
    def __init__(self):
        # do something
        if this_or_that:
            self.Error = specification
            raise MyCustomError

try:
    object = MyClass()
except MyCustomError:
    print(object.Error)
I get: NameError: name 'object' is not defined
Just for clarification:
I have defined MyCustomError; the variable names are just for better understanding (in the real code I use good ones and they are all defined). I need this clarification because an error can be raised on different lines.
So here's my question:
Is there something like try/except that does NOT undo everything when an error is raised? Or am I just stupid and there is a much easier method for achieving this?
If you are raising an exception in the initializer, you should not rely on the object being created to get error information to the caller. This is where you should use the exception itself to pass that information:
class MyCustomError(Exception):
    pass

class MyClass:
    def __init__(self):
        # do something
        if this_or_that:
            raise MyCustomError(specification)  # put the spec in the exception itself

try:
    object = MyClass()
except MyCustomError as e:
    print(e)  # the spec is in the exception object
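A small variant (my own sketch, not part of the original answer): if the handler needs the specification as data rather than as text, the exception can also keep it as an attribute:

class MyCustomError(Exception):
    def __init__(self, specification):
        super().__init__(specification)     # message shown by str(e)/print(e)
        self.specification = specification  # structured access for handlers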
You are trying to reference an object that cannot exist. Let me explain:
If an error occurs while you try to initialise an object, that object will not be initialised. So if you try to access it when it was never initialised, you will get an error.
try:
    object = MyClass()   # initialising the object succeeded, object exists
except:                  # initialising failed, object does not exist
    print(object.Error)  # NameError, since object was never created
Try/except doesn't undo anything; it just stops executing the try block when an error occurs.
Raising an error doesn't undo anything. Have a look at the docs.
As your output states, the object is not defined. This is because when you raise an error in __init__, the initialiser of your class fails, and a failed initialiser does not return an object.
I think this is what you're looking for:
class MyClass:
    def __init__(self):
        # do initialisation stuff
        pass

    def other_method(self):
        # do something
        if this_or_that:
            self.Error = specification
            raise MyCustomError(specification)

object = MyClass()
try:
    object.other_method()
except MyCustomError as e:
    print(e)
    print(object.Error)
It's not a beautiful solution but it should work:
errorcode = None

class MyClass:
    def __init__(self):
        global errorcode
        # do something
        if this_or_that:
            errorcode = specification
            raise MyCustomError

try:
    object = MyClass()
except MyCustomError:
    print(errorcode)
Given your question I think the following should fit your use case well.
class MyClass:
    def __init__(self):
        # Do something
        try:
            if this_or_that:
                self.Error = specification
                raise MyCustomError
        except MyCustomError as e:
            # Handle your custom error however you like
            pass

object = MyClass()
In the above case you mitigate the risk of instantiation failing because of the custom exception being raised, by handling that behaviour within MyClass.__init__ itself.
This is also a much cleaner solution in terms of keeping the logic related to instantiating MyClass objects contained within the class's __init__ function, i.e. you won't have to wrap every instantiation of this class in a try/except block throughout your code.
Most third-party Python libraries throw custom exceptions. Many of these exceptions have their own dependencies and side effects. Consider, for example, the following situation:
class ThirdPartyException(BaseException):
    def __init__(self):
        print("I do something arcane and expensive upon construction.")
        print("Maybe I have a bunch of arguments that can't be None, too.")

    def state(self) -> bool:
        # In real life, this could be True or False
        return True
Let's say, moreover, that I absolutely have to handle this exception, and to do it, I need to look at the exception's state. If I want to write tests to examine the behavior when this exception is handled, I must have the ability to create a ThirdPartyException. But I may not even be able to figure out how, let alone how to do it cheaply.
If this weren't an Exception and I wanted to write tests, I would immediately reach for MagicMock. But I cannot figure out how to use MagicMock with an exception.
How do I test the error handling cases in the following code, ideally using py.test?
def error_causing_thing():
    raise ThirdPartyException()

def handle_error_conditionally():
    try:
        error_causing_thing()
    except ThirdPartyException as e:
        if e.state():
            return "Some non-error value"
        else:
            return "A different non-error value"
I know this is a stale question, and I am not using pytest, but I had a similar issue with unittest and just found a solution that someone else may find helpful. I added a patch for my custom exception, with the new keyword set to a class that is a subclass of Exception (or Exception itself):
import unittest
from unittest import TestCase
from unittest.mock import patch
# ... other imports required for these unit tests

class MockCustomException(Exception):
    def state(self):
        return self.__class__.state_return_value

class MyTestCase(TestCase):
    def setUp(self):
        custom_exception_patcher = patch(
            'path.to.CustomException',
            new=MockCustomException
        )
        custom_exception_patcher.start()  # start patcher
        self.addCleanup(custom_exception_patcher.stop)  # stop patch after each test

    def test_when_state_true(self):
        MockCustomException.state_return_value = True
        self.assertEqual(handle_error_conditionally(), "Some non-error value")

    def test_when_state_false(self):
        MockCustomException.state_return_value = False
        self.assertEqual(handle_error_conditionally(), "A different non-error value")
This method can also be used on a per-test basis, by using patch as a decorator or as a context manager:
# ... imports, etc.

class MockCustomExceptionStateTrue:
    def state(self):
        return True

@patch('path.to.CustomException', new=MockCustomExceptionStateTrue)
def test_with_decorator_patch(self):
    self.assertEqual(handle_error_conditionally(), "Some non-error value")

def test_with_context_manager(self):
    class MockCustomException:
        def state(self):
            return True

    with patch('path.to.CustomException', new=MockCustomException):
        self.assertEqual(handle_error_conditionally(), "Some non-error value")
Hopefully this is helpful for someone! After briefly looking through the pytest docs, it looks like monkeypatching is similar to the unittest patch functionality.
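A rough pytest translation of the same idea (my sketch, not from the original answer; the patch target path.to.CustomException and handle_error_conditionally are taken from the snippets above):

class MockCustomException(Exception):
    state_return_value = True

    def state(self):
        return self.state_return_value

def test_when_state_true(monkeypatch):
    # swap the real exception class for the mock in the module under test
    monkeypatch.setattr('path.to.CustomException', MockCustomException)
    MockCustomException.state_return_value = True
    assert handle_error_conditionally() == "Some non-error value"

def test_when_state_false(monkeypatch):
    monkeypatch.setattr('path.to.CustomException', MockCustomException)
    MockCustomException.state_return_value = False
    assert handle_error_conditionally() == "A different non-error value"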
I'm trying to test if the application is retrying.
@celery.task(bind=False, default_retry_delay=30)
def convert_video(gif_url, webhook):
    # doing something
    VideoManager().convert(gif_url)
    return
except Exception as exc:
    raise convert_video.retry(exc=exc)
And this is how I'm mocking it in the test:
@patch('src.video_manager.VideoManager.convert')
@patch('requests.post')
def test_retry_failed_task(self, mock_video_manager, mock_requests):
    mock_video_manager.return_value = {'webm': 'file.webm', 'mp4': 'file.mp4', 'ogv': 'file.ogv', 'snapshot': 'snapshot.png'}
    mock_video_manager.side_effect = Exception('some error')
    server.convert_video.retry = MagicMock()
    server.convert_video('gif_url', 'http://www.company.com/webhook?attachment_id=1234')
    server.convert_video.retry.assert_called_with(ANY)
And I'm getting this error:
TypeError: exceptions must be old-style classes or derived from BaseException, not MagicMock
Which is obvious, but I don't know how else to test whether the method is being called.
I haven't gotten it to work with just the built-in retry, so I use a mock with the real Retry as its side effect; this makes it possible to catch it in a test.
I've done it like this:
from celery.exceptions import Retry
from mock import MagicMock
from nose.plugins.attrib import attr

# Set it for every task call (or per task below with @patch)
task.retry = MagicMock(side_effect=Retry)

# @patch('task.retry', MagicMock(side_effect=Retry))
def test_task(self):
    with assert_raises(Retry):
        task()  # Note, no delay or things like that

# and the task; I don't know if it works without bind.
@celery.task(bind=True)
def task(self):
    raise self.retry()
If anyone knows how I can get rid of the extra step in mocking the Retry "exception" I'd be happy to hear it!
from mock import patch
import pytest

@patch('tasks.convert_video.retry')
@patch('tasks.VideoManager')
def test_retry_on_exception(mock_video_manager, mock_retry):
    mock_video_manager.convert.side_effect = error = Exception()
    with pytest.raises(Exception):
        tasks.convert_video('foo', 'bar')
    mock_retry.assert_called_with(exc=error)
you're also missing some stuff in your task:
@celery.task(bind=False, default_retry_delay=30)
def convert_video(gif_url, webhook):
    try:
        return VideoManager().convert(gif_url)
    except Exception as exc:
        convert_video.retry(exc=exc)
The answers here didn't help me, so I dived even deeper into celery's code and found a hack that works for me:
def test_celery_retry(monkeypatch):
    # so the retry will run eagerly
    monkeypatch.setattr(celery_app.conf, 'task_always_eager', True)
    # so celery won't just raise an error and will actually retry
    monkeypatch.setattr(celery.app.task.Context, 'called_directly', False)
    task.delay()
For me it worked to patch celery.app.task.Task.request. This way I could also simulate later retries (e.g. to test that the task is retried multiple times).
Using pytest and unittest.mock.patch() this looks like:
@mock.patch("celery.app.task.Task.request")
def test_celery_task_retry(mock_request):
    # Override called_directly so that Task.retry() produces a Retry exception.
    mock_request.called_directly = False
    # Simulate the 42nd retry.
    mock_request.retries = 42

    with pytest.raises(celery.exceptions.Retry) as retry_exc:
        task()

    assert retry_exc.value.when > 0
I'm working with an external service which reports errors by code.
I have the list of error codes and the associated messages. Say, the following categories exist: authentication error, server error.
What is the smartest way to implement these errors in Python so I can always lookup an error by code and get the corresponding exception object?
Here's my straightforward approach:
class AuthError(Exception):
    pass

class ServerError(Exception):
    pass

map = {
    1: AuthError,
    2: ServerError
}

def raise_code(code, message):
    """ Raise an exception by code """
    raise map[code](message)
Would like to see better solutions :)
Your method is correct, except that map should be renamed something else (e.g. ERROR_MAP) so it does not shadow the builtin of the same name.
You might also consider making the function return the exception rather than raising it:
def error(code, message):
    """ Return an exception by code """
    return ERROR_MAP[code](message)

def foo():
    raise error(code, message)
By placing the raise statement inside foo, you'd raise the error closer to where it occurred, and there would be one or two fewer lines to trace through if the stack trace is printed.
Another approach is to create a polymorphic base class which, when instantiated, actually produces an instance of the subclass that has the matching code.
This is implemented by traversing __subclasses__() of the parent class and comparing the error code to the one defined on each subclass. If a match is found, that class is used instead.
Example:
class CodeError(Exception):
    """ Base class """
    code = None  # error code

    def __new__(cls, code, *args):
        # Pick the appropriate subclass
        for E in cls.__subclasses__():
            if E.code == code:
                C = E
                break
        else:
            C = cls  # fall back to the base class
        return super(CodeError, cls).__new__(C, code, *args)

    def __init__(self, code, message):
        super(CodeError, self).__init__(message)

# Subclasses with error codes
class AuthError(CodeError):
    code = 1

class ServerError(CodeError):
    code = 2

CodeError(1, 'Wrong password')  # -> AuthError
CodeError(2, 'Failed')          # -> ServerError
With this approach, it's trivial to associate error message presets, and even map one class to multiple codes with a dict.
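For instance, here is a rough sketch of those two extensions (my own illustration, not part of the original answer): each subclass declares the codes it answers to plus a preset message, and __new__ matches on membership instead of equality.

class CodeError(Exception):
    """ Base class: subclasses declare the codes they handle """
    codes = ()   # error codes handled by this class
    preset = ''  # default human-readable message

    def __new__(cls, code, *args):
        for E in cls.__subclasses__():
            if code in E.codes:
                cls = E
                break
        return super(CodeError, cls).__new__(cls, code, *args)

    def __init__(self, code, message=None):
        super(CodeError, self).__init__(message or self.preset)
        self.code = code

class AuthError(CodeError):
    codes = (1,)
    preset = 'Authentication failed'

class ServerError(CodeError):
    codes = (2, 3, 4)  # several service codes map onto one class
    preset = 'The server reported an internal error'

CodeError(1, 'Wrong password')  # -> AuthError('Wrong password')
CodeError(3)                    # -> ServerError('The server reported an internal error')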
I have a hierarchy like this and similar code:
class FrontendException(Exception):
    pass

class BackendException(Exception):
    pass

class BackendRequest:
    def exec(self):
        raise BackendException()

class Frontend:
    def cmd_a(self):
        BackendRequest().exec()

    def cmd_b(self):
        BackendRequest().exec()
The goal is to let a developer work with Frontend objects and Frontend exceptions only, through the cmd_x functions of Frontend.
Basically, I need a place to handle the common BackendException types and raise FrontendException instead. For example:
class Frontend:
    def cmd_a(self):
        try:
            BackendRequest().exec()
        except BackendException as e:
            raise FrontendException()
And this would have to be repeated in every cmd_x function! It's so ugly, and it deals with Backend things. I want to remove the repeated exception handling.
Any suggestions?
By the way, here is my own solution, but I find it ugly too, so look at it only after you have tried to suggest something. Maybe you can also suggest something about my solution.
class BaseFrontend:
    def exec_request(self, req):
        try:
            return req.exec()
        except BackendException as e:
            raise FrontendException

class Frontend(BaseFrontend):
    def cmd_a(self):
        return self.exec_request(BackendRequest())

    def cmd_b(self):
        return self.exec_request(BackendRequest())
EDIT: OK, yes, I know I don't need to create a lot of classes to build a simple API. But let's see what I need as a result:
class APIManager:
    def cmd_a(self): ...
    def cmd_b(self): ...
This manager needs to access an HTTP REST service to perform each command. So, if I get an error during a REST request, I need to raise an APIManagerException; I can't let the raw pycurl exception escape, because the APIManager user doesn't know what pycurl is and will be confused by a pycurl error if he passes a wrong ID as the argument of cmd_x.
So I need to raise informative exceptions for some common cases. Let it be just one exception, APIManagerException. But I don't want to repeat the try...except block each time, in each command, around each pycurl request. In fact, I want to handle some errors in the commands (the cmd_x functions), not parse pycurl errors.
You can create a decorator that wraps all Frontend calls, catches BackendExceptions, and raises FrontendException when they occur. (Honestly, though, it's not clear why Frontend and Backend are classes and not a set of functions.) See below:
class FrontendException(Exception):
    pass

class BackendException(Exception):
    pass

class BackendRequest:
    def exec(self):
        raise BackendException()

class Frontend:
    def back_raiser(func):
        def wrapped(*args, **kwargs):
            try:
                return func(*args, **kwargs)
            except BackendException:
                raise FrontendException
        return wrapped

    @back_raiser
    def cmd_a(self):
        BackendRequest().exec()

    @back_raiser
    def cmd_b(self):
        BackendRequest().exec()
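A quick usage sketch (my addition, not from the original answer): every wrapped command now surfaces backend failures as FrontendException, so callers only ever see Frontend-level errors.

try:
    Frontend().cmd_a()
except FrontendException:
    print("the backend request failed")

In real code you would typically also apply functools.wraps(func) to wrapped, so the commands keep their names and docstrings.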