Crashing the logging formatter? - python

I have a logging component in my program. The setup of the formatter is straightforward:
sh.setFormatter(logging.Formatter("%(asctime)s - %(message)s"))
I notice that my program is having problems. After a certain point, the formatter reverts to the default configuration (i.e., ignores the formatting I supplied). On closer inspection it seems that I am crashing it by sending a message that throws a UnicodeDecodeError when rendered into the string. But I can't seem to fix it.
I wrapped the logging call:
try:
    my_logger.info(msg)
except UnicodeDecodeError:
    pass
Which "catches" the exception, but the logger is still pooched.
Any thoughts?

Any idea what input is causing the UnicodeDecodeError? Ample printing of variables would help! If you want to move on upon receiving that error, you should wrap the calls to the formatter in a try..except block.
try:
    # log stuff
    my_logger.info(msg)
except UnicodeDecodeError:
    # handle the exception and move on
    pass
It would be helpful to see some more code and some of your input data to give you a more clear response.

Take a look at this: http://wiki.python.org/moin/UnicodeDecodeError.
You probably have some string that can't be decoded.

A user of my product had this issue. Go into logging/__init__.py and add some print statements to dump record.__dict__. If you see unicode in the asctime, that could be your issue.
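A more durable fix than patching the stdlib is to make the formatter itself defensive, so one bad record can't wedge the handler. A minimal sketch (modern Python 3 syntax; the original report is from Python 2, but the pattern carries over):

```python
import logging

class SafeFormatter(logging.Formatter):
    """Formatter that never lets one bad message break the handler."""

    def format(self, record):
        try:
            return super().format(record)
        except (UnicodeDecodeError, UnicodeEncodeError):
            # Fall back to a repr of the raw message so logging keeps working
            record.msg = repr(record.msg)
            record.args = None
            return super().format(record)

logger = logging.getLogger("safe_demo")
logger.setLevel(logging.INFO)
handler = logging.StreamHandler()
handler.setFormatter(SafeFormatter("%(asctime)s - %(message)s"))
logger.addHandler(handler)
logger.info("still formatted with the custom format")
```

Because the fallback happens inside format(), the handler stays configured and later records keep the custom format.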


How to catch when *any* error occurs in a whole script? [duplicate]

How do you best handle multiple levels of methods in a call hierarchy that raise exceptions, so that if it is a fatal error the program will exit (after displaying an error dialog)?
I'm basically coming from Java. There I would simply declare any methods as throws Exception, re-throw it and catch it somewhere at the top level.
However, Python is different. My Python code basically looks like the below.
EDIT: added much simpler code...
Main entry function (plugin.py):
def main(catalog):
    print "Executing main(catalog)... "
    # instantiate generator
    gen = JpaAnnotatedClassGenerator(options)
    # run generator
    try:
        gen.generate_bar()  # doesn't bubble up
    except ValueError as error:
        Utilities.show_error("Error", error.message, "OK", "", "")
        return
    # ... usually do the real work here if no error
JpaAnnotatedClassGenerator class (engine.py):
class JpaAnnotatedClassGenerator:
    def generate_bar(self):
        self.generate_value_error()

    def generate_value_error(self):
        raise ValueError("generate_value_error() raised an error!")
I'd like the exception to be re-thrown up through each caller until it reaches the outermost try-except, which displays an error dialog with the exception's message.
QUESTION:
How is this best done in Python? Do I really have to repeat try-except for every method being called?
BTW: I am using Python 2.6.x and I cannot upgrade due to being bound to MySQL Workbench that provides the interpreter (Python 3 is on their upgrade list).
If you don't catch an exception, it bubbles up the call stack until someone does. If no one catches it, the runtime will get it and die with the exception error message and a full traceback. IOW, you don't have to explicitly catch and reraise your exception everywhere - which would actually defeat the whole point of having exceptions. Actually, despite being primarily used for errors / unexpected conditions, exceptions are first and foremost a control flow tool allowing you to break out of the normal execution flow and pass control (and some information) to any arbitrary place up in the call stack.
From this POV your code seems mostly correct (caveat: I didn't bother reading the whole thing, just had a quick look), except (no pun intended) for a couple of points:
First, you should define your own specific exception class(es) instead of using the builtin ValueError (you can inherit from it if it makes sense to you) so you're sure you only catch the exact exceptions you expect (quite a few layers "under" your own code could raise a ValueError that you didn't expect).
Then, you may (or may not, depending on how your code is used) also want to add a catch-all top-level handler in your main() function so you can properly log (using the logging module) all errors and possibly free resources, do some cleanup etc. before your process dies.
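Such a top-level handler might be sketched like this (KnownPluginError and do_work are hypothetical names standing in for your own exception class and generator call, not from the original code):

```python
import logging

logger = logging.getLogger("plugin")

class KnownPluginError(Exception):
    """Hypothetical app-specific error class."""

def do_work():
    # stand-in for the real generator call
    raise KnownPluginError("demo failure")

def main():
    try:
        do_work()
    except KnownPluginError as err:
        # expected failure: show it to the user and return cleanly
        return "dialog: %s" % err
    except Exception:
        # last-chance handler: log the full traceback, then re-raise so
        # the process still dies with a useful stack trace
        logger.exception("unhandled error, aborting")
        raise

print(main())  # → dialog: demo failure
```

Only the outermost layer needs the try-except; everything below it just raises.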
As a side note, you may also want to learn and use proper string formatting, and - at least if performance is an issue - avoid duplicate constant calls like this:
elif AnnotationUtil.is_embeddable_table(table) and AnnotationUtil.is_secondary_table(table):
# ...
elif AnnotationUtil.is_embeddable_table(table):
# ...
elif AnnotationUtil.is_secondary_table(table):
# ...
Given Python's very dynamic nature, neither the compiler nor runtime can safely optimize those repeated calls (the method could have been dynamically redefined between calls), so you have to do it yourself.
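The fix is simply to call each predicate once and branch on the cached results. A sketch with a hypothetical stand-in for AnnotationUtil that counts its calls:

```python
class CountingUtil:
    """Hypothetical stand-in for AnnotationUtil that counts its calls."""

    def __init__(self):
        self.calls = 0

    def is_embeddable_table(self, table):
        self.calls += 1
        return "embed" in table

    def is_secondary_table(self, table):
        self.calls += 1
        return "secondary" in table

def classify(table, util):
    # each predicate runs exactly once; the elif chain reuses the results
    embeddable = util.is_embeddable_table(table)
    secondary = util.is_secondary_table(table)
    if embeddable and secondary:
        return "both"
    elif embeddable:
        return "embeddable"
    elif secondary:
        return "secondary"
    return "plain"

util = CountingUtil()
print(classify("embeddable_table", util), util.calls)  # → embeddable 2
```

With the original elif chain, the same input would have triggered three predicate calls instead of two.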
EDIT:
When trying to catch the error in the main() function, exceptions DON'T bubble up, but when I use this pattern one level deeper, bubbling-up seems to work.
You can easily check that it works correctly with a simple MCVE:
def deeply_nested():
    raise ValueError("foo")

def nested():
    return deeply_nested()

def firstline():
    return nested()

def main():
    try:
        firstline()
    except ValueError as e:
        print("got {}".format(e))
    else:
        print("you will not see me")

if __name__ == "__main__":
    main()
It appears the software that supplies the Python env is somehow treating the main plugin file in a wrong way. Looks like I will have to check with the MySQL Workbench guys.
Uhu... Even embedded, the exception mechanism should still work as expected - at least for the part of the call stack that depends on your main function (can't tell what happens further up the call stack). But given how MySQL treats errors (what about having your data silently truncated?), I wouldn't be especially surprised if they hacked the runtime to silently swallow any error in plugin code xD
It is fine for errors to bubble up
Python's exceptions are unchecked, meaning you have no obligation to declare or handle them. Even if you know that something may raise, only catch the error if you intend to do something with it. It is fine to have exception-transparent layers, which gracefully abort as an exception bubbles through them:
def logged_get(map: dict, key: str):
    result = map[key]  # this may raise, but there is no state to corrupt
    # the following is not meaningful if an exception occurred
    # it is fine for it to be skipped by the exception bubbling up
    print(map, '[%s]' % key, '=>', result)
    return result
In this case, logged_get will simply forward any KeyError (and others) that are raised by the lookup.
If an outer caller knows how to handle the error, it can do so.
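For instance, a sketch of an outer layer that does know what a missing key should mean (the get_with_default wrapper is a hypothetical name):

```python
def logged_get(map: dict, key: str):
    result = map[key]  # this may raise, but there is no state to corrupt
    print(map, '[%s]' % key, '=>', result)
    return result

def get_with_default(map, key, default):
    # only this layer decides what a missing key means
    try:
        return logged_get(map, key)
    except KeyError:
        return default

print(get_with_default({"a": 1}, "b", 0))  # → 0
```

The inner function stays exception-transparent; the decision is made exactly once, where the context exists to make it.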
So, just call self.create_collection_embeddable_class_stub the way you do.
It is fine for errors to kill the application
Even if nothing handles an error, the interpreter does. You get a stack trace, showing what went wrong and where. Fatal errors of the kind "only happens if there is a bug" can "safely" bubble up to show what went wrong.
In fact, exiting the interpreter and assertions use this mechanism as well.
>>> assert 2 < 1, "This should never happen"
Traceback (most recent call last):
  File "<string>", line 1, in <module>
AssertionError: This should never happen
For many services, you can use this even in deployment - for example, systemd would log that for a Linux system service. Only try to suppress errors for the outside if security is a concern, or if users cannot handle the error.
It is fine to use precise errors
Since exceptions are unchecked, you can use arbitrarily many without overstraining your API. This allows you to use custom errors that signal different levels of problems:
class DBProblem(Exception):
    """Something is wrong about our DB..."""

class DBEntryInconsistent(DBProblem):
    """A single entry is broken"""

class DBInconsistent(DBProblem):
    """The entire DB is foobar!"""
It is generally a good idea not to re-use builtin errors, unless your use-case actually matches their meaning. This allows you to handle errors precisely when needed:
try:
    gen.generate_classes(catalog)
except DBEntryInconsistent:
    logger.error("aborting due to corrupted entry")
    sys.exit(1)
except DBInconsistent as err:
    logger.error("aborting due to corrupted DB")
    Utility.inform_db_support(err)
    sys.exit(1)
# do not handle ValueError, KeyError, MemoryError, ...
# they will show up as a stack trace

Can warnings warn without returning out of a function?

Is there any way for a warning raised by warnings.warn() to be caught by a caller while the rest of the code after the warn() call still executes? The problem I am having is that function b will warnings.warn() if something happens, and then I want the rest of that function to finish its job and return a list of what it actually did. If a warning was thrown, I want to catch it, email it to someone, and continue on when I call that function from another module, but that isn't happening. Here is what it looks like in code:
import warnings

def warn_function(arg_1):
    if arg_1 > 10:
        warnings.warn("Your argument was greater than 10.")
    return arg_1 - 5

with warnings.catch_warnings():
    warnings.filterwarnings("error")
    try:
        answer = warn_function(20)
    except Warning:
        print("A warning was thrown")
    finally:
        print(answer)
Yes, warnings can warn without exiting out of a function. But the way you're trying to do things just isn't going to work.
Using catch_warnings with the "error" action means you're explicitly asking Python to raise every warning as an exception. And the Python exception model doesn't have any way to resume from the point where an exception was thrown.
You can reorganize your code to provide explicit ways to "do the rest" after each possible warning, but for non-trivial cases you either end up doing a ton of work, or building a hacky continuation-passing mechanism.
The right way to handle your use case is logging.captureWarnings. This way, all warnings go to a logger named 'py.warnings' instead of through the normal warning path. You can then configure a log handler that sends these warnings to someone via email, and you're done.
And of course once you've built this, you can use the exact same handler to get emails sent from high-severity log messages to other loggers, or to add in runtime configuration so you can turn up and down the email threshold without deploying a whole new build of the server, and so on.
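A minimal sketch of that setup. In production you would attach a logging.handlers.SMTPHandler with your real mailhost and addresses; since those would be placeholders here, a small in-memory handler stands in for the email handler:

```python
import logging
import warnings

logging.captureWarnings(True)  # route warnings.warn() calls into logging

py_warnings = logging.getLogger("py.warnings")

# Stand-in for the real SMTPHandler: collect warning messages in a list
records = []

class ListHandler(logging.Handler):
    def emit(self, record):
        records.append(record.getMessage())

py_warnings.addHandler(ListHandler())

def warn_function(arg_1):
    if arg_1 > 10:
        warnings.warn("Your argument was greater than 10.")
    return arg_1 - 5

answer = warn_function(20)  # the warning is logged; the function still returns
print(answer)  # → 15
```

Unlike the "error" filter, nothing is raised, so execution continues past the warn() call while the handler sees every warning.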
If you're not already using logging, it may be easier to hook warnings manually. As the warnings introduction explains:
The printing of warning messages is done by calling showwarning(), which may be overridden; the default implementation of this function formats the message by calling formatwarning(), which is also available for use by custom implementations.
Yes, Python is encouraging you to monkeypatch a stdlib module. The code to do this looks something like:
import warnings

def showwarning(message, category, filename, lineno, file=None, line=None):
    fmsg = warnings.formatwarning(message, category, filename, lineno, line)
    # send fmsg by email

warnings.showwarning = showwarning

redirecting integration error to file, python

I am using odeint from scipy.integrate in python. Sometimes I get integration errors like,
lsoda-- at current t (=r1), mxstep (=i1) steps
taken on this call before reaching tout
in above message, i1 = 500
in above message, r1 = 0.4082154636630D-03
I would like to NOT print those errors on the screen. Is there any way to print them directly to some error file? I just don't want them printed on the screen, since I am printing something else there in a big loop; they should go automatically to the error file.
Thanks
If these messages are printed on stderr, you can capture it and redirect to a file. A minimal implementation is
import sys
sys.stderr = open('the_log_file_for_errors', 'w')
Another, more complex, way is to encapsulate the code that can give the error in a try...except block; in the except block you can log the error to a file with some more details (like input params and so on) to check afterwards.
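A slightly safer variant of the stderr redirect scopes it to just the integration call with contextlib.redirect_stderr, so the rest of your loop still prints to the screen. One caveat: lsoda's messages come from compiled Fortran code writing to the OS-level file descriptor, which may bypass sys.stderr entirely; redirecting fd 2 with os.dup2 would be needed for those. A sketch with a stand-in message:

```python
import sys
from contextlib import redirect_stderr

# Anything written to sys.stderr inside the block lands in the file
# instead of on the screen.
with open('integration_errors.log', 'w') as err_file:
    with redirect_stderr(err_file):
        print('pretend lsoda warning', file=sys.stderr)

with open('integration_errors.log') as f:
    print(f.read().strip())  # → pretend lsoda warning
```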

How to ignore all potential exceptions in Python?

I'm writing a custom backup script in Python. Sometimes the mkdir function or the print function or whatever function are failing for diverse reasons. Such exceptions stop the whole script and halt the backup in the middle, which is very frustrating. So far, I've managed these problems by adding try: ... except: ... statements and properly managing these exceptions. However, one day some other statement or function might raise an exception as well because of some other reason that hasn't been triggered yet.
Is there a way to tell a script to proceed anyway? An equivalent of wrapping every single statement of the code in a try: ... except: pass clause? A log would be better of course.
I've noticed that when programming with GUI toolkits like Tkinter, the application keeps running even if exceptions are raised. Is it possible to accomplish this type of thing with the console?
There actually is a module that is supposed to do exactly that: https://github.com/ajalt/fuckitpy.
Although it was obviously written as a joke. I cannot imagine a situation where doing something like that is a good idea. God, I can't believe I'm even suggesting that as a solution.
What you should do instead is identify what lines of code can produce what kind of errors, and handle those errors properly. There are only so many places where errors can actually happen - mostly while interfacing with outside systems, including the filesystem, network, user input etc. And remember that actually failing is often better than continuing to "work" and messing up your data, files and so on. Exceptions are there for a reason; they are not a result of Guido's malice.
Python has no way of doing that, and for good reasons.
It seems you're confused about what it means to write "robust" software: a robust program is not a program that is hard to kill and that will keep running no matter what, but a program that handles edge cases properly. Keeping running is NOT enough... keeping running while doing sensible things is the key point.
Unfortunately there's no way to do reasonable things automatically; you have to think on a case-by-case basis about how to handle each error.
Beware that if a program has a lot of catch blocks, it's rarely a good program. Exceptions are meant to be raised in a lot of places and caught almost nowhere.
Note also that every catch is potentially a source of bugs... for example:
try:
    print my_dict[foo()]
except KeyError:
    ...
cannot distinguish whether the KeyError comes from accessing a non-existing key in my_dict or whether it instead escaped from foo(). The two cases should rarely be handled the same way...
Better is to write:
key = foo()
if key in my_dict:
    print my_dict[key]
else:
    ...
so that only the side case of missing key in my_dict is handled and instead a KeyError exception will stop the program (stopping a program when you're not sure of what it's doing is the only reasonable thing to do).
Python has BaseException as the root of the exception hierarchy. You can catch the base class and ignore it, and that will cover all exceptions (note that this also swallows KeyboardInterrupt and SystemExit, which you usually want to let through).
try:
    pass  # ... your code here ...
except BaseException as exp:
    print "A General Exception Occurred"
try:
    pass  # code segment
except:
    pass
The bare except with pass will ignore all exceptions.
Normally, this one should catch everything:
try:
    ....
except:
    pass
The only problem is that you don't get the exception object with this syntax, but that was not asked for in this case.
You can add a general except block like @Kanwar Saad proposed. The question is: can you continue with your program in a valid state after the exception has been raised?
From the Zen of Python:
Errors should never pass silently.
Unless explicitly silenced.
Trying to catch all the exceptions you know about is, in my opinion, the best way to go here. If you cannot explicitly catch an exception, you should not try to work around it. You (and your users) should know what exactly went wrong; otherwise your code might become a nightmare to debug.
If you are worried about losing backup data maybe you could do something like this:
def save_unfinished_backup():
    # try to find a graceful exit without losing any data
    pass

try:
    # some code
    pass
except OSError:
    # handle OS errors
    pass
except Exception:
    save_unfinished_backup()
    raise
This way you get both: a chance to fend off data loss, and the exact error to debug it.
I hope this helps!
On a funny note: You could also use the fuckit module, which silences ALL errors, including syntax errors. Do not, ever, use this in production code though.
This should work perfectly. It will not print the "foo", but you will reach the print("bar") without a crash.
import fuckit

with fuckit:
    prnt("foo")  # the NameError is swallowed
print("bar")
New answer for new Gen...
Python now ships with contextlib.suppress(), which tells the interpreter to suppress the indicated exceptions while running the block.
It can be easily imported and used as below:
from contextlib import suppress

with suppress(ValueError):
    int('this wont catch')
print('yea')
The above will work: it won't raise the ValueError from converting an invalid int string to an int, and execution continues to the print.
It's cleaner than third-party libraries.
Happy Hacking

filter out exception messages from unittest output

Is it possible to scrap all the exception text that comes as output from using unittest?
I.e., if I have a bunch of tests, and some of them throw exceptions, the unittest module takes it upon itself to print in red (in IDLE at least) all the exceptions. Is there a way to just not print the exceptions (but leave in any text I print using the print keyword)?
For example, I have text to print in a tearDownClass() function, and while I'd like that to print, it'd be nice if it wasn't followed by 30 lines of red exception text. Is this possible?
If I understand you right, you just want a self-defined logger, right?
Put all the unit tests in a big try-except block and catch all the exceptions. Then print them out as you like.
...
try:
    def test1(unit.tests):
        pass
    def test2(unit.tests):
        pass
except Exception, e:
    print 'here is the exception message', repr(e)
    # Use your own function to deal with the print output or whatever you want here
...
So, according to comment-41597224, you want to deliberately wipe useful output because you feel it's not your problem.
In that case, replace/make relevant changes to Lib\unittest\result.py:_exc_info_to_string or a method that uses it that applies to your specific case (probably, addError or addFailure).
Alternatively, you can pipe the output to an independent script/command that would postprocess it with regexes.
If results happen to be written to stdout while exceptions to stderr, it's as simple as 2>nul at the command line.
But I still advise against this. You do care what exceptions you get, because:
they might turn out to be a result of YOUR mistake as well as the student's
you could reply with the verbatim output rather than just "pass/fail", which is both less work for you and gives the student a better hint for fixing it
You can get the best of both worlds if you make it so that you get BOTH the (filtered) summary and the opportunity to see the full output if you suspect something is not right.
