Python 3: override argparse error exit code

I'm creating a program as an assignment for school, and I'm all done with it except for one thing.
We have to make the program exit with different codes depending on how the execution went. In my program I'm processing options using "argparse", and with built-in actions like "version" I've managed to override the exit code, but if I pass an option that doesn't exist it won't work: it prints the "unrecognized arguments" message and exits with code "0", and I need it to exit with code 1. Is there any way to do this? It's been driving me nuts; I've struggled with it for days now...
Thanks in advance!
/feeloor

Use sys.exit(returnCode) to exit with a particular code. Note that on Linux machines you need to do an 8-bit right shift on a wait-style status in order to get the actual return code.
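For example (a rough sketch; the shift applies to wait-style statuses such as the value os.system returns on Linux, not to sys.exit itself):
import os
import sys

# On Linux, os.system() returns a wait-style status: the child's exit code
# sits in the high byte, so an 8-bit right shift recovers it.
status = os.system("exit 3")
print(status >> 8)            # 3, same as os.WEXITSTATUS(status)

# Exit this script with status 1; a shell can read it back with `echo $?`.
sys.exit(1)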

To achieve something like this, inherit from argparse.ArgumentParser and reimplement the exit method (or perhaps the error method if you like).
For example:
import argparse

class Parser(argparse.ArgumentParser):
    # the default status on the parent class is 0; we're
    # changing it to be 1 here ...
    def exit(self, status=1, message=None):
        return super().exit(status, message)

From the Python argparse documentation
https://docs.python.org/3/library/argparse.html#exiting-methods
16.4.5.9. Exiting methods
ArgumentParser.exit(status=0, message=None)
This method terminates the program, exiting with the specified status and, if given, it prints a message before that.
ArgumentParser.error(message)
This method prints a usage message including the message to the standard error and terminates the program with a status code of 2.
They both get a message, and pass it on. error adds usage and passes it on to exit. You can customize both in a subclassed Parser.
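Since error() exits with status 2 (as the documentation above notes), changing exit's default alone won't change the unrecognized-option case; to get status 1 there you can override error() itself. A rough sketch, assuming Python 3:
import argparse
import sys

class MyParser(argparse.ArgumentParser):
    def error(self, message):
        # same behaviour as the stock error(), but exit with status 1
        self.print_usage(sys.stderr)
        self.exit(1, '%s: error: %s\n' % (self.prog, message))

parser = MyParser()
parser.add_argument('--test')
parser.parse_args(['--bogus'])   # prints usage plus the error, then exits with code 1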
There are also examples of error catching and redirection in the unittest file, test/test_argparse.py.
A problem with using a try/except wrapper is that the error information is written to sys.stderr, and not incorporated in the sys.exc_info.
In [117]: try:
   .....:     parser.parse_args(['ug'])
   .....: except:
   .....:     print('execinfo:', sys.exc_info())
   .....:
usage: ipython3 [-h] [--test TEST] [--bar TEST] test test
ipython3: error: the following arguments are required: test
execinfo: (<class 'SystemExit'>, SystemExit(2,), <traceback object at 0xb31fb34c>)
The exit number is available in the exc_info, but not the message.
One option is to redirect sys.stderr at the same time as I do that try/except block.
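For instance, a minimal sketch of that redirection using contextlib.redirect_stderr (the parser setup here is just illustrative):
import argparse
import contextlib
import io

parser = argparse.ArgumentParser()
parser.add_argument('test')

buf = io.StringIO()
try:
    # capture argparse's error output while trapping the SystemExit
    with contextlib.redirect_stderr(buf):
        parser.parse_args([])              # missing the required 'test' argument
except SystemExit as exc:
    print('exit status:', exc.code)        # 2
    print('captured message:', buf.getvalue())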
Here's an example of changing the exit method and wrapping the call in a try block:
In [155]: def altexit(status, msg):
   .....:     print(status, msg)
   .....:     raise ValueError(msg)
   .....:
In [156]: parser.exit=altexit
In [157]: try:
   .....:     parser.parse_args(['--ug','ug','ug'])
   .....: except ValueError:
   .....:     msg = sys.exc_info()[1]
   .....:
usage: ipython3 [-h] [--test TEST] [--bar TEST] test test
2 ipython3: error: unrecognized arguments: --ug
In [158]: msg
Out[158]: ValueError('ipython3: error: unrecognized arguments: --ug\n')
Python lets me replace methods of existing objects. I don't recommend this in production code, but it is convenient when trying ideas. I capture the Error (my choice of ValueError is arbitrary), and save the message for later display or testing.
Generally the type of error (e.g. TypeError, ValueError, etc.) is part of the public API, but the text of the error is not. It can be refined from one Python release to the next without much notification, so you test for message details at your own risk.

I solved the problem by catching SystemExit and determining which error occurred by simply testing and comparing.
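Something along these lines (a rough sketch of that approach):
import argparse
import sys

parser = argparse.ArgumentParser()
# add arguments here ...

try:
    args = parser.parse_args()
except SystemExit as exc:
    # argparse has already printed its message; map any non-zero status to 1
    sys.exit(1 if exc.code else 0)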
Thanks for all the help guys!

Related

How to continue execution of a Python module which calls a failed C++ function?

I have a python file (app.py) which makes a call to a function as follows:
answer = fn1()
The fn1() is actually written in C++ and I've built a wrapper so that I can use it in Python.
The fn1() can either return a valid result, or it may sometimes fail and terminate. Now the issue is that at the times when fn1() fails and aborts, the calling file (i.e. app.py) also terminates and does not go forward to the error handling part.
I would like the calling file to move to my error handling part (i.e. 'except' and 'finally') if fn1() aborts and dumps core. Is there any way to achieve this?
From the OP:
The C++ file that I have built wrapper around aborts in case of exception and dumps core. Python error code is not executed
This was not evident in your question. To catch this sort of error, you can use the signal.signal function in the python standard library (relevant SO answer).
import signal

def sig_handler(signum, frame):
    print("segfault")

signal.signal(signal.SIGSEGV, sig_handler)

answer = fn1()
You basically wrote the answer in your question. Use a try/except/finally block. Refer also to the Python 3 documentation on error handling.
try:
    answer = fn1()
except Exception:  # you can use an exception more specific to your failing code
    pass  # do stuff
finally:
    pass  # do stuff
What you need to do is to catch the exception in your C++ function and then convert it to a python exception and return that to the python code.

Is it possible to change PyTest's assert statement behaviour in Python

I am using Python assert statements to match actual and expected behaviour. I don't really have control over these, because if there is an error the test case aborts. I want to take control of the assertion error and decide whether or not to abort the test case on a failed assert.
I also want to add something like: if there is an assertion error, the test case should be paused and the user can resume it at any moment.
I have no idea how to do this.
Code example (we are using pytest here):
import pytest

def test_abc():
    a = 10
    assert a == 10, "some error message"
Below is my expectation
When assert throws an AssertionError, I should have the option of pausing the test case, debugging, and resuming later. For pause and resume I will use the tkinter module. I will make an assert function as below:
import tkinter
import tkinter.messagebox

top = tkinter.Tk()

def _assertCustom(assert_statement, pause_on_fail=0):
    # assert_statement will be something like: assert a == 10, "Some error"
    # pause_on_fail will be derived from a global file where I can change it at runtime
    if pause_on_fail == 1:
        try:
            eval(assert_statement)
        except AssertionError as e:
            tkinter.messagebox.showinfo(e)
            eval(assert_statement)
            # Above is to raise the assertion error again to fail the testcase
    else:
        eval(assert_statement)
Going forward I would have to replace every assert statement with this function, as in:
import pytest

def test_abc():
    a = 10
    # Suppose some code and below is the assert statement
    _assertCustom("assert a == 10, 'error message'")
This is too much effort for me, as I would have to make changes at thousands of places where I have used assert. Is there any easy way to do this in pytest?
Summary: I need something that lets me pause the test case on failure and then resume after debugging. I know about tkinter and that is the reason I have used it. Any other ideas are welcome.
Note: the above code is not tested yet; there may be small syntax errors too.
Edit: Thanks for the answers. Extending this question a little further now: what if I want to change the behaviour of assert? Currently, when there is an assertion error, the test case exits. What if I want to choose whether the test case exits on a particular assert failure or not? I don't want to write a custom assert function as mentioned above, because that way I would have to change it in a number of places.
You are using pytest, which gives you ample options to interact with failing tests. It gives you command-line options and several hooks to make this possible. I'll explain how to use each and where you could make customisations to fit your specific debugging needs.
I'll also go into more exotic options that would allow you to skip specific assertions entirely, if you really feel you must.
Handle exceptions, not assert
Note that a failing test doesn't normally stop pytest; it only exits early if you explicitly tell it to exit after a certain number of failures. Also, tests fail because an exception is raised; assert raises AssertionError, but that's not the only exception that'll cause a test to fail! You want to control how exceptions are handled, not alter assert.
However, a failing assert will end the individual test. That's because once an exception is raised outside of a try...except block, Python unwinds the current function frame, and there is no going back on that.
I don't think that that's what you want, judging by your description of your _assertCustom() attempts to re-run the assertion, but I'll discuss your options further down nonetheless.
Post-mortem debugging in pytest with pdb
For the various options to handle failures in a debugger, I'll start with the --pdb command-line switch, which opens the standard debugging prompt when a test fails (output elided for brevity):
$ mkdir demo
$ touch demo/__init__.py
$ cat << EOF > demo/test_foo.py
> def test_ham():
>     assert 42 == 17
> def test_spam():
>     int("Vikings")
> EOF
$ pytest demo/test_foo.py --pdb
[ ... ]
test_foo.py:2: AssertionError
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> entering PDB >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> /.../demo/test_foo.py(2)test_ham()
-> assert 42 == 17
(Pdb) q
Exit: Quitting debugger
[ ... ]
With this switch, when a test fails pytest starts a post-mortem debugging session. This is essentially exactly what you wanted; to stop the code at the point of a failed test and open the debugger to take a look at the state of your test. You can interact with the local variables of the test, the globals, and the locals and globals of every frame in the stack.
Here pytest gives you full control over whether or not to exit after this point: if you use the q quit command then pytest exits the run too; using c for continue returns control to pytest and the next test is executed.
Using an alternative debugger
You are not bound to the pdb debugger for this; you can set a different debugger with the --pdbcls switch. Any pdb.Pdb() compatible implementation would work, including the IPython debugger implementation, or most other Python debuggers (the pudb debugger requires the -s switch is used, or a special plugin). The switch takes a module and class, e.g. to use pudb you could use:
$ pytest -s --pdb --pdbcls=pudb.debugger:Debugger
You could use this feature to write your own wrapper class around Pdb that simply returns immediately if the specific failure is not something you are interested in. pytest uses Pdb() exactly like pdb.post_mortem() does:
p = Pdb()
p.reset()
p.interaction(None, t)
Here, t is a traceback object. When p.interaction(None, t) returns, pytest continues with the next test, unless p.quitting is set to True (at which point pytest then exits).
Here is an example implementation that prints out that we are declining to debug and returns immediately, unless the test raised ValueError, saved as demo/custom_pdb.py:
import pdb, sys

class CustomPdb(pdb.Pdb):
    def interaction(self, frame, traceback):
        if sys.last_type is not None and not issubclass(sys.last_type, ValueError):
            print("Sorry, not interested in this failure")
            return
        return super().interaction(frame, traceback)
When I use this with the above demo, this is output (again, elided for brevity):
$ pytest test_foo.py -s --pdb --pdbcls=demo.custom_pdb:CustomPdb
[ ... ]
def test_ham():
> assert 42 == 17
E assert 42 == 17
test_foo.py:2: AssertionError
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> entering PDB >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
Sorry, not interested in this failure
F
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> traceback >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
def test_spam():
> int("Vikings")
E ValueError: invalid literal for int() with base 10: 'Vikings'
test_foo.py:4: ValueError
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> entering PDB >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> /.../test_foo.py(4)test_spam()
-> int("Vikings")
(Pdb)
The above introspects sys.last_type to determine if the failure is 'interesting'.
However, I can't really recommend this option unless you want to write your own debugger using tkInter or something similar. Note that that is a big undertaking.
Filtering failures; pick and choose when to open the debugger
The next level up is the pytest debugging and interaction hooks; these are hook points for behaviour customisations, to replace or enhance how pytest normally handles things like handling an exception or entering the debugger via pdb.set_trace() or breakpoint() (Python 3.7 or newer).
The internal implementation of this hook is responsible for printing the >>> entering PDB >>> banner above as well, so using this hook to prevent the debugger from running means you won't see this output at all. You can have your own hook then delegate to the original hook when a test failure is 'interesting', and so filter test failures independently of the debugger you are using! You can access the internal implementation by name; the internal hook plugin for this is named pdbinvoke. To prevent it from running you need to unregister it, but save a reference so we can call it directly as needed.
Here is a sample implementation of such a hook; you can put this in any of the locations plugins are loaded from; I put it in demo/conftest.py:
import pytest

@pytest.hookimpl(trylast=True)
def pytest_configure(config):
    # unregister returns the unregistered plugin
    pdbinvoke = config.pluginmanager.unregister(name="pdbinvoke")
    if pdbinvoke is None:
        # no --pdb switch used, no debugging requested
        return
    # get the terminalreporter too, to write to the console
    tr = config.pluginmanager.getplugin("terminalreporter")
    # create our own plugin
    plugin = ExceptionFilter(pdbinvoke, tr)
    # register our plugin; pytest will then start calling our plugin hooks
    config.pluginmanager.register(plugin, "exception_filter")

class ExceptionFilter:
    def __init__(self, pdbinvoke, terminalreporter):
        # provide the same functionality as pdbinvoke
        self.pytest_internalerror = pdbinvoke.pytest_internalerror
        self.orig_exception_interact = pdbinvoke.pytest_exception_interact
        self.tr = terminalreporter

    def pytest_exception_interact(self, node, call, report):
        if not call.excinfo.errisinstance(ValueError):
            self.tr.write_line("Sorry, not interested!")
            return
        return self.orig_exception_interact(node, call, report)
The above plugin uses the internal TerminalReporter plugin to write out lines to the terminal; this makes the output cleaner when using the default compact test status format, and lets you write things to the terminal even with output capturing enabled.
The example registers the plugin object for the pytest_exception_interact hook via another hook, pytest_configure(), but makes sure it runs late enough (using @pytest.hookimpl(trylast=True)) to be able to un-register the internal pdbinvoke plugin. When the hook is called, the example tests against the call.excinfo object; you can also check the node or the report.
With the above sample code in place in demo/conftest.py, the test_ham test failure is ignored, only the test_spam test failure, which raises ValueError, results in the debug prompt opening:
$ pytest demo/test_foo.py --pdb
[ ... ]
demo/test_foo.py F
Sorry, not interested!
demo/test_foo.py F
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> traceback >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
def test_spam():
> int("Vikings")
E ValueError: invalid literal for int() with base 10: 'Vikings'
demo/test_foo.py:4: ValueError
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> entering PDB >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> /.../demo/test_foo.py(4)test_spam()
-> int("Vikings")
(Pdb)
To re-iterate, the above approach has the added advantage that you can combine this with any debugger that works with pytest, including pudb, or the IPython debugger:
$ pytest demo/test_foo.py --pdb --pdbcls=IPython.core.debugger:Pdb
[ ... ]
demo/test_foo.py F
Sorry, not interested!
demo/test_foo.py F
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> traceback >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
def test_spam():
> int("Vikings")
E ValueError: invalid literal for int() with base 10: 'Vikings'
demo/test_foo.py:4: ValueError
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> entering PDB >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> /.../demo/test_foo.py(4)test_spam()
1 def test_ham():
2 assert 42 == 17
3 def test_spam():
----> 4 int("Vikings")
ipdb>
It also has much more context about what test was being run (via the node argument) and direct access to the exception raised (via the call.excinfo ExceptionInfo instance).
Note that specific pytest debugger plugins (such as pytest-pudb or pytest-pycharm) register their own pytest_exception_interact hooks. A more complete implementation would have to loop over all plugins in the plugin manager to override arbitrary plugins automatically, using config.pluginmanager.list_name_plugin and hasattr() to test each plugin.
Making failures go away altogether
While this gives you full control over failed-test debugging, it still leaves the test as failed even if you opted not to open the debugger for a given test. If you want to make failures go away altogether, you can make use of a different hook: pytest_runtest_call().
When pytest runs tests, it runs each test via the above hook, which is expected to return None or raise an exception. From this a report is created, optionally a log entry is created, and if the test failed, the aforementioned pytest_exception_interact() hook is called. So all you need to do is change the result that this hook produces; instead of raising an exception, it should just not return anything at all.
The best way to do that is to use a hook wrapper. Hook wrappers don't have to do the actual work, but instead are given a chance to alter what happens to the result of a hook. All you have to do is add the line:
outcome = yield
in your hook wrapper implementation and you get access to the hook result, including the test exception via outcome.excinfo. This attribute is set to a tuple of (type, instance, traceback) if an exception was raised in the test. Alternatively, you could call outcome.get_result() and use standard try...except handling.
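A bare-bones sketch of that wrapper shape (illustrative only; assumed to live in a conftest.py):
import pytest

@pytest.hookimpl(hookwrapper=True)
def pytest_runtest_call(item):
    outcome = yield                      # the actual test runs here
    if outcome.excinfo is not None:      # (type, instance, traceback) if it raised
        print("test raised:", outcome.excinfo[1])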
So how do you make a failed test pass? You have 3 basic options:
You could mark the test as an expected failure, by calling pytest.xfail() in the wrapper.
You could mark the item as skipped, which pretends that the test was never run in the first place, by calling pytest.skip().
You could remove the exception, by using the outcome.force_result() method; set the result to an empty list here (meaning: the registered hook produced nothing but None), and the exception is cleared entirely.
What you use is up to you. Do make sure to check the result for skipped and expected-failure tests first as you don't need to handle those cases as if the test failed. You can access the special exceptions these options raise via pytest.skip.Exception and pytest.xfail.Exception.
Here's an example implementation which marks failed tests that don't raise ValueError, as skipped:
import pytest

@pytest.hookimpl(hookwrapper=True)
def pytest_runtest_call(item):
    outcome = yield
    try:
        outcome.get_result()
    except (pytest.xfail.Exception, pytest.skip.Exception, pytest.exit.Exception):
        raise  # already xfailed, skipped or explicit exit
    except ValueError:
        raise  # not ignoring
    except (pytest.fail.Exception, Exception):
        # turn everything else into a skip
        pytest.skip("[NOTRUN] ignoring everything but ValueError")
When put in conftest.py the output becomes:
$ pytest -r a demo/test_foo.py
============================= test session starts =============================
platform darwin -- Python 3.8.0, pytest-3.10.0, py-1.7.0, pluggy-0.8.0
rootdir: ..., inifile:
collected 2 items
demo/test_foo.py sF [100%]
=================================== FAILURES ===================================
__________________________________ test_spam ___________________________________
def test_spam():
> int("Vikings")
E ValueError: invalid literal for int() with base 10: 'Vikings'
demo/test_foo.py:4: ValueError
=========================== short test summary info ============================
FAIL demo/test_foo.py::test_spam
SKIP [1] .../demo/conftest.py:12: [NOTRUN] ignoring everything but ValueError
===================== 1 failed, 1 skipped in 0.07 seconds ======================
I used the -r a flag to make it clearer that test_ham was skipped now.
If you replace the pytest.skip() call with pytest.xfail("[XFAIL] ignoring everything but ValueError"), the test is marked as an expected failure:
[ ... ]
XFAIL demo/test_foo.py::test_ham
reason: [XFAIL] ignoring everything but ValueError
[ ... ]
and using outcome.force_result([]) marks it as passed:
$ pytest -v demo/test_foo.py # verbose to see individual PASSED entries
[ ... ]
demo/test_foo.py::test_ham PASSED [ 50%]
It's up to you which one you feel fits your use case best. For skip() and xfail() I mimicked the standard message format (prefixed with [NOTRUN] or [XFAIL]) but you are free to use any other message format you want.
In all three cases pytest will not open the debugger for tests whose outcome you altered using this method.
Altering individual assert statements
If you want to alter assert tests within a test, then you are setting yourself up for a whole lot more work. Yes, this is technically possible, but only by rewriting the very code that Python is going to execute at compile time.
When you use pytest, this is actually already being done. Pytest rewrites assert statements to give you more context when your asserts fail; see this blog post for a good overview of exactly what is being done, as well as the _pytest/assertion/rewrite.py source code. Note that that module is over 1k lines long, and requires that you understand how Python's abstract syntax trees work. If you do, you could monkeypatch that module to add your own modifications there, including surrounding the assert with a try...except AssertionError: handler.
However, you can't just disable or ignore asserts selectively, because subsequent statements could easily depend on state (specific object arrangements, variables set, etc.) that a skipped assert was meant to guard against. If one assert tests that foo is not None, and a later line relies on foo.bar existing, then you will simply run into an AttributeError there, etc. Do stick to re-raising the exception if you need to go this route.
I'm not going to go into further detail on rewriting asserts here, as I don't think this is worth pursuing, not given the amount of work involved, and with post-mortem debugging giving you access to the state of the test at the point of assertion failure anyway.
Note that if you do want to do this, you don't need to use eval() (which wouldn't work anyway; assert is a statement, so you'd need to use exec() instead), nor would you have to run the assertion twice (which can lead to issues if the expression used in the assertion altered state). You would instead embed the ast.Assert node inside an ast.Try node, and attach an except handler that uses an empty ast.Raise node to re-raise the exception that was caught.
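A rough, self-contained sketch of that idea (this is not pytest's rewriter; it only shows wrapping an ast.Assert in an ast.Try whose handler re-raises):
import ast

class WrapAsserts(ast.NodeTransformer):
    def visit_Assert(self, node):
        handler = ast.ExceptHandler(
            type=ast.Name(id='AssertionError', ctx=ast.Load()),
            name=None,
            # a bare `raise` re-raises the caught exception; a real hook would
            # do its own handling before (or instead of) this
            body=[ast.Raise(exc=None, cause=None)],
        )
        wrapped = ast.Try(body=[node], handlers=[handler], orelse=[], finalbody=[])
        return ast.copy_location(wrapped, node)

tree = ast.parse("assert 1 == 2, 'boom'")
tree = ast.fix_missing_locations(WrapAsserts().visit(tree))
try:
    exec(compile(tree, '<rewritten>', 'exec'))
except AssertionError as exc:
    print('the assert still fails after rewriting:', exc)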
Using the debugger to skip assertion statements
The Python debugger actually lets you skip statements, using the j / jump command. If you know up front that a specific assertion will fail, you can use this to bypass it. You could run your tests with --trace, which opens the debugger at the start of every test, then issue a j <line after assert> to skip it when the debugger is paused just before the assert.
You can even automate this. Using the above techniques you can build a custom debugger plugin that
uses the pytest_runtest_call() hook to catch the AssertionError exception
extracts the 'offending' line number from the traceback, and perhaps with some source code analysis determines the line numbers before and after the assertion required to execute a successful jump
runs the test again, but this time using a Pdb subclass that sets a breakpoint on the line before the assert, automatically executes a jump to the line after it when the breakpoint is hit, and then issues a c continue.
Or, instead of waiting for an assertion to fail, you could automate setting breakpoints for each assert found in a test (again using source code analysis; you can trivially extract line numbers for ast.Assert nodes in an AST of the test), execute the test using scripted debugger commands, and use the jump command to skip the assertion itself. You'd have to make a trade-off: run all tests under a debugger (which is slow, as the interpreter has to call a trace function for every statement) or only apply this to failing tests and pay the price of re-running those tests from scratch.
Such a plugin would be a lot of work to create, I'm not going to write an example here, partly because it wouldn't fit in an answer anyway, and partly because I don't think it is worth the time. I'd just open up the debugger and make the jump manually. A failing assert indicates a bug in either the test itself or the code-under-test, so you may as well just focus on debugging the problem.
You can achieve exactly what you want, without any code modification at all, with pytest --pdb.
With your example:
import pytest

def test_abc():
    a = 9
    assert a == 10, "some error message"
Run with --pdb:
py.test --pdb
collected 1 item
test_abc.py F
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> traceback >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
def test_abc():
a = 9
> assert a == 10, "some error message"
E AssertionError: some error message
E assert 9 == 10
test_abc.py:4: AssertionError
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> entering PDB >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> /private/tmp/a/test_abc.py(4)test_abc()
-> assert a == 10, "some error message"
(Pdb) p a
9
(Pdb)
As soon as a test fails, you can debug it with the builtin python debugger. If you're done debugging, you can continue with the rest of the tests.
If you're using PyCharm then you can add an Exception Breakpoint to pause execution whenever an assert fails. Select View Breakpoints (CTRL-SHIFT-F8) and add an on-raise exception handler for AssertionError. Note that this may slow down the execution of the tests.
Otherwise, if you don't mind pausing at the end of each failing test (just before it errors) rather than at the point the assertion fails, then you have a few options. Note however that by this point various cleanup code, such as closing files that were opened in the test, might have already been run. Possible options are:
You can tell pytest to drop you into the debugger on errors using the --pdb option.
You can define the following decorator and decorate each relevant test function with it. (Apart from logging a message, you could also start a pdb.post_mortem at this point, or even an interactive code.interact with the locals of the frame where the exception originated, as described in this answer.)
from functools import wraps
import tkinter.messagebox

def pause_on_assert(test_func):
    @wraps(test_func)
    def test_wrapper(*args, **kwargs):
        try:
            test_func(*args, **kwargs)
        except AssertionError as e:
            tkinter.messagebox.showinfo(e)
            # re-raise exception to make the test fail
            raise
    return test_wrapper

@pause_on_assert
def test_abc():
    a = 10
    assert a == 2, "some error message"
If you don't want to manually decorate every test function, you can instead define an autouse fixture that inspects sys.last_value:
import sys
import pytest
import tkinter.messagebox

@pytest.fixture(scope="function", autouse=True)
def pause_on_assert():
    yield
    if hasattr(sys, 'last_value') and isinstance(sys.last_value, AssertionError):
        tkinter.messagebox.showinfo(sys.last_value)
One simple solution, if you're willing to use Visual Studio Code, could be to use conditional breakpoints.
This would allow you to set up your assertions, for example:
import pytest

def test_abc():
    a = 10
    assert a == 10, "some error message"
Then add a conditional breakpoint on your assert line that only breaks when your assertion fails (for the example above, the breakpoint condition would be a != 10).

Handle invalid arguments with argparse in Python

I am using argparse to parse command-line arguments, and by default, on receiving invalid arguments, it prints a help message and exits. Is it possible to customize the behavior of argparse when it receives invalid arguments?
Generally I want to catch all invalid arguments and do stuff with them. I am looking for something like:
parser = argparse.ArgumentParser()
# add some arguments here
try:
    parser.parse_args()
except InvalidArgvsError, iae:
    print "handle this invalid argument '{arg}' my way!".format(arg=iae.get_arg())
So that I can have:
>> python example.py --invalid some_text
handle this invalid argument 'invalid' my way!
You might want to use parse_known_args and then take a look at the second item in the tuple to see what arguments were not understood.
That said, I believe this will only help with extra arguments, not expected arguments that have invalid values.
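For example, a small sketch (the option names here are made up for illustration):
import argparse

parser = argparse.ArgumentParser()
parser.add_argument("--known", help="an option the parser does know about")

# parse_known_args() does not error out on unrecognised options; it returns
# them in the second element of the result tuple instead.
args, unknown = parser.parse_known_args(["--known", "1", "--invalid", "some_text"])
print(args)      # Namespace(known='1')
print(unknown)   # ['--invalid', 'some_text']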
Some previous questions:
Python argparse and controlling/overriding the exit status code
I want Python argparse to throw an exception rather than usage
and probably more.
The argparse documentation talks about using parse_known_args. This returns a list of arguments that it does not recognize. That's a handy way of dealing with one type of error.
It also talks about writing your own error and exit methods. That error that you don't like passes through those 2 methods. The proper way to change those is to subclass ArgumentParser, though you can monkey-patch an existing parser. The default versions are at the end of the argparse.py file, so you can study what they do.
A third option is to try/except the SystemExit.
try:
    parser = argparse.ArgumentParser()
    args = parser.parse_args()
except SystemExit:
    exc = sys.exc_info()[1]
    print(exc)
This way, error/exit still produce the error message (to sys.stderr) but you can block exit and go on and do other things.
1649:~/mypy$ python stack38340252.py -x
usage: stack38340252.py [-h]
stack38340252.py: error: unrecognized arguments: -x
2
One of the earlier questions complained that parser.error does not get much information about the error; it just gets a formatted message:
def myerror(message):
    print('error message')
    print(message)

parser = argparse.ArgumentParser()
parser.error = myerror
args = parser.parse_args()
displays
1705:~/mypy$ python stack38340252.py -x
error message
unrecognized arguments: -x
You could parse that message to find out that -x is the unrecognized string. As an improvement over earlier versions, it can list multiple arguments:
1705:~/mypy$ python stack38340252.py foo -x abc -b
error message
unrecognized arguments: foo -x abc -b
Look up self.error to see all the cases that can trigger an error message. If you need more ideas, focus on a particular type of error.
===============
The unrecognized arguments error is produced by parse_args, which calls parse_known_args and raises this error if the list of extras is not empty. So its special information is the list of strings that parse_known_args could not handle.
parse_known_args, for its part, calls self.error if it traps an ArgumentError. Generally those are produced when a specific argument (Action) has problems. But _parse_known_args also calls self.error if a required Action is missing, or if there's a mutually-exclusive-group error. choices can produce a different error, as can type.
You can try subclassing argparse.ArgumentParser() and overriding the error method.
From the argparse source:
def error(self, message):
    """error(message: string)

    Prints a usage message incorporating the message to stderr and
    exits.

    If you override this in a subclass, it should not return -- it
    should either exit or raise an exception.
    """
    self.print_usage(_sys.stderr)
    self.exit(2, _('%s: error: %s\n') % (self.prog, message))
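For example, a rough sketch that raises an ordinary exception instead of exiting, so the caller can catch it (the choice of ValueError here is arbitrary):
import argparse

class ThrowingParser(argparse.ArgumentParser):
    def error(self, message):
        # raise instead of printing usage and exiting
        raise ValueError(message)

parser = ThrowingParser()
try:
    parser.parse_args(['--invalid', 'some_text'])
except ValueError as exc:
    print("handle this invalid argument my way:", exc)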
Since the error code 2 is reserved for internal docker usage, I'm using the following to parse arguments in scripts inside docker containers:
import argparse
import sys

ERROR_CODE = 1

class DockerArgumentParser(argparse.ArgumentParser):
    def error(self, message):
        """error(message: string)

        Prints a usage message incorporating the message to stderr and
        exits.

        If you override this in a subclass, it should not return -- it
        should either exit or raise an exception.

        Overrides the error method of the parent class to exit with error
        code 1, since the default value is reserved for internal docker usage.
        """
        self.print_usage(sys.stderr)
        args = {'prog': self.prog, 'message': message}
        self.exit(ERROR_CODE, '%(prog)s: error: %(message)s\n' % args)

Kill the interpreter with argparse

I'm testing out some argparse code. I wanted to have an optional argument which collects any number of inputs from a list of choices. So, I wrote:
import argparse
modules = ["geo", "loc"]
parser = argparse.ArgumentParser()
parser.add_argument("--modules", nargs='*', choices=modules)
With this set up, I'm reliably able to kill the interpreter completely.
It works fine if you pass a valid set of arguments:
>>> parser.parse_args("--module geo loc geo".split())
Namespace(modules=['geo', 'loc', 'geo'])
But if you pass in a malformed argument, it kills Python completely:
>>> parser.parse_args("--module geo metro".split())
usage: [-h] [--modules [{geo,loc} [{geo,loc} ...]]]
: error: argument --modules: invalid choice: 'metro' (choose from 'geo', 'loc')
PS C:\Users\myname\mycode>
My question is two-fold:
Is this expected behavior? If so, what is the reasoning for this?
Will I be okay using this code, since I don't mind if my program dies with ill-formed arguments? Or is there some compelling reason to avoid this?
As a note, I am using Python2.7 on Windows 7.
Yes, this is intended, and documented:
While parsing the command line, parse_args() checks for a variety of errors, including ambiguous options, invalid types, invalid options, wrong number of positional arguments, etc. When it encounters such an error, it exits and prints the error along with a usage message:
The idea is that, if the user gives an invalid option or argument which you don't know how to handle, the best option is to give up instead of second-guessing the user's actual intentions.
If you don't mind, then it should be ok, right? Unless you know a reason to implement different behavior, your program is completely consistent with all well-behaved command line tools on all platforms.
If you do want to implement different behavior, catch the SystemExit exception that parse_args might raise.
(The only program that I can think of that behaves differently from the way I just described is the version control tool Git, which does try to guess what the user meant and prints its guesses. It then still exits, though.)
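To catch that SystemExit, a rough sketch using the parser from the question:
import argparse

modules = ["geo", "loc"]
parser = argparse.ArgumentParser()
parser.add_argument("--modules", nargs='*', choices=modules)

try:
    args = parser.parse_args("--modules geo metro".split())
except SystemExit as exc:
    # argparse has already printed the usage/error message to stderr;
    # for argument errors exc.code is 2
    print("parse_args tried to exit with status", exc.code)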
argparse is designed for use when your Python script is run from a command line. That's why invalid arguments cause the program to quit.
This behavior is consistent with virtually all shell (bash/sh/dos/etc.) utilities. Invalid command line args cause the program to quit with an error string and (optionally) a usage message.

python unittest - Using 'buffer' option to suppress stdout - how do I do it?

In the unittest docs [ http://docs.python.org/2/library/unittest.html#unittest.main ], I see the following method signature described:
unittest.main([module[, defaultTest[, argv[, testRunner[, testLoader[, exit[, verbosity[, failfast[, catchbreak[, buffer]]]]]]]]]])
The last option is "buffer". The docs explain the following about this option:
The failfast, catchbreak and buffer parameters have the same effect as the same-name command-line options.
The docs for the command-line options [ http://docs.python.org/2/library/unittest.html#command-line-options ] explain 'buffer' as follows:
-b, --buffer
The standard output and standard error streams are buffered during the test run. Output during a passing test is discarded. Output is echoed normally on test fail or error and is added to the failure messages.
I have the following demo code which does not exhibit the behavior that would be expected:
import unittest2

class DemoTest(unittest2.TestCase):
    def test_one(self):
        self.assertTrue(True)

    def test_two(self):
        self.assertTrue(True)

if __name__ == '__main__':
    test_program = unittest2.main(verbosity=0, buffer=True, exit=False)
The output of this program is:
----------------------------------------------------------------------
Ran 2 tests in 0.000s
OK
In fact, I get the same output if I change the last line in my program to:
test_program = unittest2.main(verbosity=0, buffer="hello", exit=False)
What am I doing wrong? (I tried using unittest instead of unittest2, but it made no difference.)
The point is that the buffer option affects stdout written inside your tests, not unittest2's own output. That is to say, you will see the difference if you add a line like
print "Suppress me!"
to any test method: that output will appear on stdout if you choose buffer=False, while it will be suppressed if you set it to True.
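For example, a small sketch of that difference (using unittest2 as in the question; the stock unittest module behaves the same way here):
import unittest2

class NoisyTest(unittest2.TestCase):
    def test_noisy(self):
        print("Suppress me!")   # captured and discarded when buffer=True and the test passes
        self.assertTrue(True)

if __name__ == '__main__':
    # Flip buffer to False (or make the test fail) and the print shows up on stdout.
    unittest2.main(verbosity=0, buffer=True, exit=False)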
As I explained in the comment, buffer just buffers the output from the tested code. This means you only get output from unittest2 itself. It's working perfectly. (In your case, it's also working trivially—your code doesn't print anything out, so there's nothing for buffer to buffer, which is why you get the same result without it.)
If you don't want any output from unittest2 either, you can always run the script with a shell command line that redirects to /dev/null, or import unittest2 from a script that redirects sys.stdout.
But usually you actually want to read that stdout, not just discard it. Even if you don't want to log it anywhere, you want to check that the last line is "OK", so you can send an electric shock to your programming team or whatever you do on failure. Otherwise, what's the point of running the tests via cron?
