Is there a way to handle exceptions automatically with Python Click?

Click's exception handling documentation mentions that certain kinds of exceptions such as Abort, EOFError and KeyboardInterrupt are automatically handled gracefully by the framework.
For the application I'm writing, there are a lot of points from which exceptions could be generated. Terminating the application is the right step, but printing the stack trace isn't. I could always manually do this:
@cli.command()
def somecommand():
    try:
        pass  # ...
    except Exception as e:
        click.echo(e)
However, is there a way to have Click handle all exceptions automatically?

In our CLI, all commands are grouped under a single command group. This allowed us to implement some behavior that needed to be executed for each command. One part of that is the exception handling.
Our entry point looks something like this:
@click.group()
@click.pass_context
def entry_point(ctx):
    ctx.obj = {"example": "This could be the configuration"}
We use it to run global code, e.g. to configure the context, but you can also define an empty function that does nothing. Other commands can be added to this command group either by using the @entry_point.command() decorator or by calling entry_point.add_command(cmd), as sketched below.
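For illustration (the status and version commands here are made up, not part of our CLI), both registration styles look like this:
import click


@click.group()
@click.pass_context
def entry_point(ctx):
    ctx.obj = {"example": "This could be the configuration"}


# style 1: register via the group's decorator
@entry_point.command()
def status():
    click.echo("everything is fine")


# style 2: define the command independently and attach it afterwards
@click.command()
def version():
    click.echo("1.0.0")


entry_point.add_command(version)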
For the exception handling, we wrap the entry_point in another method that handles the exceptions:
def safe_entry_point():
    try:
        entry_point()
    except Exception as e:
        click.echo(e)
In setup.py, we configure the entry point for the CLI and point it to the wrapper:
entry_points={
    'console_scripts': [
        'cli = my.package:safe_entry_point'
    ]
}
The commands of the CLI can be executed through its command group: e.g. cli command.
There might be more elegant solutions out there, but this is how we solved it. It introduces a command group as the highest-level element of your CLI, but it allows us to handle all exceptions in a single place without duplicating the error handling in each and every command.
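If, as in the question, terminating the application is the right step, you probably also want a non-zero exit status rather than just echoing the message; a small variation of the wrapper above (the exit code 1 is an arbitrary choice):
import sys

import click


def safe_entry_point():
    try:
        entry_point()
    except Exception as e:
        click.echo(str(e), err=True)  # print the error to stderr
        sys.exit(1)                   # terminate with a non-zero status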

If you only want to handle exceptions for certain CLI commands, you can use a dedicated decorator for that.
Here's an example:
import click
from functools import wraps, partial


class NumberTooLarge(Exception):
    pass


def catch_exception(func=None, *, handle):
    if not func:
        return partial(catch_exception, handle=handle)

    @wraps(func)
    def wrapper(*args, **kwargs):
        try:
            return func(*args, **kwargs)
        except handle as e:
            raise click.ClickException(e)
    return wrapper


@click.command()
@click.option("--count", default=1, help="Number of greetings.")
@catch_exception(handle=(NumberTooLarge, ValueError))
def hello(count):
    """Simple program that greets NAME for a total of COUNT times."""
    if count > 100:
        raise NumberTooLarge('count cannot be greater than 100')
    if count < 0:
        raise ValueError('count too small')
    click.echo('Great choice!')


if __name__ == "__main__":
    hello()
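A quick way to check what the decorator does is Click's test runner (also used further down on this page); this sketch assumes the hello command defined above:
from click.testing import CliRunner

runner = CliRunner()
result = runner.invoke(hello, ['--count', '200'])
# click.ClickException exits with code 1 and echoes "Error: <message>";
# depending on the Click version the text lands in result.output or result.stderr
assert result.exit_code == 1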

Related

why is pylint complaining about missing Exception.message?

I usually declare a base exception for my modules which does nothing; from that one I derive custom errors that could carry additional custom data. AFAIK this is the Right Way™ to use exceptions in Python.
I'm also used to build a human readable message from that custom info and pass it along, so I can refer to that message in error handlers. This is an example:
# this code is meant to be compatible with Python-2.7.x
class MycoolmoduleException(Exception):
    '''base Mycoolmodule Exception'''

class TooManyFoo(MycoolmoduleException):
    '''got too many Foo things'''
    def __init__(self, foo_num):
        self.foo_num = foo_num
        msg = "someone passed me %d Foos" % foo_num
        super(TooManyFoo, self).__init__(msg)
# .... somewhere else ....
try:
    do_something()
except Exception as exc:
    tell_user(exc.message)

# real world example using Click
@click.command()
@click.pass_context
def foo(ctx):
    '''do something'''
    try:
        pass  # ... try really hard to do something useful ...
    except MycoolmoduleException as exc:
        click.echo(exc.message, err=True)
        ctx.exit(-1)
Now, when I run that code through pylint-2.3.1 it complains about my use of MycoolmoduleException.message:
coolmodule.py:458:19: E1101: Instance of 'MycoolmoduleException' has no 'message' member (no-member)
That kind of code always worked for me (both in Python2 and Python3) and hasattr(exc, 'message') in the same code returns True, so why is pylint complaining? And/or: how could that code be improved?
(NB: the same happens if I try to catch the built-in Exception instead of my own MycoolmoduleException)

Better way to use try except block

I have a requirement to execute multiple Python statements, and a few of them might fail during execution. Even if some fail, I want the rest of them to be executed.
Currently, I am doing:
try:
    wx.StaticBox.Destroy()
    wx.CheckBox.Disable()
    wx.RadioButton.Enable()
except:
    pass
If any one of the statements fails, the except clause runs and the remaining statements in the try block are skipped. But what I need is for all three statements to be attempted even if one of them fails.
How can I do this in Python?
Use a for loop over the methods you wish to call, eg:
for f in (wx.StaticBox.Destroy, wx.CheckBox.Disable, wx.RadioButton.Enable):
    try:
        f()
    except Exception:
        pass
Note that we're using except Exception here - that's generally much more likely what you want than a bare except.
If an exception occurs during a try block, the rest of the block is skipped. You should use three separate try clauses for your three separate statements.
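Spelled out without the loop, the three separate try clauses look like this:
try:
    wx.StaticBox.Destroy()
except Exception:
    pass

try:
    wx.CheckBox.Disable()
except Exception:
    pass

try:
    wx.RadioButton.Enable()
except Exception:
    pass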
Added in response to comment:
Since you apparently want to handle many statements, you could use a wrapper method to check for exceptions:
def mytry(functionname):
    try:
        functionname()
    except Exception:
        pass
Then call the wrapper with your function as input:
mytry(wx.StaticBox.Destroy)
I would recommend creating a context manager class that suppresses any exception and logs what was caught.
Please look at the code below; I would welcome any improvement to it.
class catch_exception:
    def __init__(self, raising=True):
        self.raising = raising

    def __enter__(self):
        pass

    def __exit__(self, type, value, traceback):
        # type is None when the block finished without raising
        if type is not None and issubclass(type, Exception):
            self.raising = False
            print("Type: ", type, " Log me to error log file")
        return not self.raising

def staticBox_destroy():
    print("staticBox_destroy")
    raise TypeError("Passing through")

def checkbox_disable():
    print("checkbox_disable")
    raise ValueError("Passing through")

def radioButton_enable():
    print("radioButton_enable")
    raise ValueError("Passing through")

if __name__ == "__main__":
    with catch_exception() as cm:
        staticBox_destroy()
    with catch_exception() as cm:
        checkbox_disable()
    with catch_exception() as cm:
        radioButton_enable()
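As one possible improvement, roughly the same suppress-and-log behaviour can be expressed with contextlib.contextmanager instead of a hand-written class; a sketch (the helper name and the logging call are just placeholders for the "log me" step above):
import contextlib
import logging


@contextlib.contextmanager
def catch_and_log(*exceptions):
    """Suppress the given exception types, logging whatever was caught."""
    try:
        yield
    except exceptions as exc:
        logging.exception("suppressed: %r", exc)


with catch_and_log(TypeError, ValueError):
    staticBox_destroy()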

Use argparse with Setuptools entry_points

I'm writing a script which I want to distribute using Setuptools. I have added this script to the entry_points section in my setup.py.
From the setuptools docs:
The functions you specify are called with no arguments, and their return value is passed to sys.exit(), so you can return an errorlevel or message to print to stderr.
Since the method will return instead of exit, it becomes more testable. For testability purposes I accept arguments in the method, defaulting to sys.argv. So far so good.
The problem arises when argparse is added to the mix. When argparse fails to parse args it calls sys.exit. Now I would really prefer that argparse doesn't do this as this is handled by the setuptools wrapper. The first thing I could think of to fix this is to override the argparse.ArgumentParser but then I saw this:
# ===============
# Exiting methods
# ===============
def exit(self, status=0, message=None):
    if message:
        self._print_message(message, _sys.stderr)
    _sys.exit(status)

def error(self, message):
    """error(message: string)

    Prints a usage message incorporating the message to stderr and
    exits.

    If you override this in a subclass, it should not return -- it
    should either exit or raise an exception.
    """
    self.print_usage(_sys.stderr)
    self.exit(2, _('%s: error: %s\n') % (self.prog, message))
So the docstring states I should not return and stick with raising an exception. How should I solve this?
The main method, in case I didn't explain it thoroughly enough:
def main(args=sys.argv):
    parser = ArgumentParser(prog='spam')
    # parser is configured here
    parsed = parser.parse_args(args)
    # parsed args are used here
The reason you don't want to return from error is that the parser will continue parsing. Some errors are raised near the end (e.g. about unparsed strings), but others can occur early (e.g. a bad type for the first argument string). The behavior of parse_args is unpredictable if you return from the error method. Normally you want the parser to quit and return control to your code.
What you want to do is wrap the parse_args() call in a try: except SystemExit: block. I often use test scripts like this:
for test in ['-o FILE',
             ...
             ]:
    print(test)
    try:
        print(parser.parse_args(test.split()))
    except SystemExit:
        pass
You could override error and/or exit to raise other kinds of exceptions. They could also bypass the usage message. But one way or another you need to trap the exception in your wrapper.
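As an illustration of that (the subclass name, the exception type and the --count option below are made up, not part of the question), you can raise from error and turn the exception into a return value, which the console_scripts wrapper then hands to sys.exit():
from argparse import ArgumentParser


class RaisingArgumentParser(ArgumentParser):
    def error(self, message):
        # raise instead of calling sys.exit(2) like the stock parser does
        raise ValueError(message)


def main(args=None):  # None lets parse_args fall back to sys.argv[1:]
    parser = RaisingArgumentParser(prog='spam')
    parser.add_argument('--count', type=int)
    try:
        parsed = parser.parse_args(args)
    except ValueError as exc:
        # the returned string is passed to sys.exit() by setuptools
        return 'spam: error: %s' % exc
    # ... use parsed here ...
    return 0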
If you're starting on a fresh project or have time for some refactoring, then you might consider using the Click library. Click has both setuptools integration and 'testability' as features, among other considerations.
Here's an example test snippet from the docs that both creates a mini command-line interface and tests it immediately:
import click
from click.testing import CliRunner

def test_hello_world():
    @click.command()
    @click.argument('name')
    def hello(name):
        click.echo('Hello %s!' % name)

    runner = CliRunner()
    result = runner.invoke(hello, ['Peter'])
    assert result.exit_code == 0
    assert result.output == 'Hello Peter!\n'

Using errno with assertRaises in Unit Test

I'm using assertRaises in my unit test to test the raising of specific exceptions.
assertRaises(IOError, testToRun, passedValues)
Though some of the exceptions I need to capture have specific error numbers (errno), so instead of catching the base exception I'd like to check the specific error number relating to that exception. Something like this, though it obviously doesn't work :)
assertRaises(IOError.errno(2), testToRun, passedValue)
To get around this, when I want to capture specifically numbered exceptions I've been using:
try:
    testToRun(passedValues)
except IOError as e:
    if e.errno == 2:
        pass
    else:
        raise
I'm sure it's not perfect, but it works. I was wondering, though, if it is possible to use assertRaises to do the same thing in a more compact way.
Thanks.
Since Python 2.7 it's possible to use assertRaises as a context manager:
with self.assertRaises(SomeException) as cm:
    do_something()

the_exception = cm.exception
self.assertEqual(the_exception.error_code, 3)
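Applied to the errno case from the question, that becomes:
with self.assertRaises(IOError) as cm:
    testToRun(passedValues)

self.assertEqual(cm.exception.errno, 2)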
You could also create a new TestCase function using your current code:
import unittest

def assertRaisesErrNo(self, exc, errno, f, *args, **kwargs):
    try:
        f(*args, **kwargs)
    except exc as e:
        # re-raise if the exception carries a different errno
        if e.errno != errno:
            raise
    else:
        self.fail('%s with errno %d not raised' % (exc.__name__, errno))

unittest.TestCase.assertRaisesErrNo = assertRaisesErrNo
Then use it like any other assert method:
class TestSomething(unittest.TestCase):
    def test_something(self):
        self.assertRaisesErrNo(IOError, 2, myfunction)
You could also turn this into a context manager fairly easily using contextlib.contextmanager
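For example, a sketch of that idea (the helper name and the failure message are made up):
import contextlib


@contextlib.contextmanager
def assert_raises_errno(testcase, exc, errno):
    """Fail the test unless `exc` is raised with the given errno."""
    try:
        yield
    except exc as e:
        testcase.assertEqual(e.errno, errno)
    else:
        testcase.fail('%s with errno %d not raised' % (exc.__name__, errno))


# usage inside a test method:
#     with assert_raises_errno(self, IOError, 2):
#         testToRun(passedValues)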

Python argparse and controlling/overriding the exit status code

Apart from tinkering with the argparse source, is there any way to control the exit status code should there be a problem when parse_args() is called, for example, a missing required switch?
I'm not aware of any mechanism to specify an exit code on a per-argument basis. You can catch the SystemExit exception raised on .parse_args() but I'm not sure how you would then ascertain what specifically caused the error.
EDIT: For anyone coming to this looking for a practical solution, the following is the situation:
ArgumentError() is raised appropriately when arg parsing fails. It is passed the argument instance and a message
ArgumentError() does not store the argument as an instance attribute, despite being passed (which would be convenient)
It is possible to re-raise the ArgumentError exception by subclassing ArgumentParser, overriding .error() and getting hold of the exception from sys.exc_info()
All that means the following code - whilst ugly - allows us to catch the ArgumentError exception, get hold of the offending argument and error message, and do as we see fit:
import argparse
import sys

class ArgumentParser(argparse.ArgumentParser):
    def _get_action_from_name(self, name):
        """Given a name, get the Action instance registered with this parser.
        If only it were made available in the ArgumentError object. It is
        passed as its first arg...
        """
        container = self._actions
        if name is None:
            return None
        for action in container:
            if '/'.join(action.option_strings) == name:
                return action
            elif action.metavar == name:
                return action
            elif action.dest == name:
                return action

    def error(self, message):
        exc = sys.exc_info()[1]
        if exc:
            exc.argument = self._get_action_from_name(exc.argument_name)
            raise exc
        super(ArgumentParser, self).error(message)

## usage:
parser = ArgumentParser()
parser.add_argument('--foo', type=int)
try:
    parser.parse_args(['--foo=d'])
except argparse.ArgumentError as exc:
    print(exc.message, '\n', exc.argument)
Not tested in any useful way. The usual don't-blame-me-if-it-breaks indemnity applies.
All the answers nicely explain the details of argparse implementation.
Indeed, as proposed in the PEP (and pointed out by Rob Cowie), one should inherit from ArgumentParser and override the behavior of the error or exit methods.
In my case I just wanted to replace the usage printout with a full help printout in case of an error:
class ArgumentParser(argparse.ArgumentParser):
    def error(self, message):
        self.print_help(sys.stderr)
        self.exit(2, '%s: error: %s\n' % (self.prog, message))
With this override in place, the main code stays minimal:
# Parse arguments.
args = parser.parse_args()
# On error this will print help and cause exit with explanation message.
Perhaps catching the SystemExit exception would be a simple workaround:
import argparse

parser = argparse.ArgumentParser()
parser.add_argument('foo')
try:
    args = parser.parse_args()
except SystemExit:
    print("do something else")
Works for me, even in an interactive session.
Edit: Looks like @Rob Cowie beat me to the switch. Like he said, this doesn't have very much diagnostic potential, unless you want to get silly and try to glean info from the traceback.
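That said, if the numeric status is all you need, the SystemExit instance does carry it in its code attribute (argparse uses 2 for parse errors):
try:
    args = parser.parse_args()
except SystemExit as exc:
    # argparse called sys.exit(2); exc.code holds that status
    print("argparse wanted to exit with status", exc.code)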
As of Python 3.9, this is no longer so painful. You can now handle this via the new argparse.ArgumentParser exit_on_error instantiation argument. Here is an example (slightly modified from the python docs: argparse#exit_on_error):
parser = argparse.ArgumentParser(exit_on_error=False)
parser.add_argument('--integers', type=int)
try:
    parser.parse_args('--integers a'.split())
except argparse.ArgumentError:
    print('Catching an argumentError')
    exit(-1)
You'd have to tinker. Look at argparse.ArgumentParser.error, which is what gets called internally. Or you could make the arguments non-mandatory, then check and exit outside argparse.
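The second suggestion could look something like this (the option name and the status code 3 are arbitrary):
import argparse
import sys

parser = argparse.ArgumentParser()
parser.add_argument('--foo')  # deliberately not marked as required
args = parser.parse_args()

if args.foo is None:
    parser.print_usage(sys.stderr)
    sys.exit(3)  # exit with whatever status code you prefer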
You can use one of the exiting methods: http://docs.python.org/library/argparse.html#exiting-methods. It should already handle situations where the arguments are invalid, however (assuming you have defined your arguments properly).
Using invalid arguments:
% [ $(./test_argparse.py> /dev/null 2>&1) ] || { echo error }
error # exited with status code 2
I needed a simple method to catch an argparse error at application start and pass the error to a wxPython form. Combining the best answers from above resulted in the following small solution:
import argparse

# subclass ArgumentParser to catch an error message and prevent application closing
class MyArgumentParser(argparse.ArgumentParser):
    def __init__(self, *args, **kwargs):
        super(MyArgumentParser, self).__init__(*args, **kwargs)
        self.error_message = ''

    def error(self, message):
        self.error_message = message

    def parse_args(self, *args, **kwargs):
        # catch SystemExit exception to prevent closing the application
        result = None
        try:
            result = super().parse_args(*args, **kwargs)
        except SystemExit:
            pass
        return result

# testing -------
my_parser = MyArgumentParser()
my_parser.add_argument('arg1')
my_parser.parse_args()

# check for an error
if my_parser.error_message:
    print(my_parser.error_message)
running it:
>python test.py
the following arguments are required: arg1
Since argparse's error is a method that calls sys.exit rather than a catchable exception class, it's not possible to try/except all "unrecognized arguments" errors directly. If you want to do so, you need to override the parser's error function:
def print_help(errmsg):
    print(errmsg.split(' ')[0])

parser.error = print_help
args = parser.parse_args()
On invalid input it will now print:
unrecognized
