Python callback handlers - better error messages?

I'm using the stomp.py library to receive JSON messages over a network. I've adapted the simple example they give here, which uses a callback to provide message handling.
But I made a simple error when I modified that callback: for example, I called json.load() instead of json.loads() when trying to parse my JSON string.
class MyListener(object):
    def on_message(self, headers, message):
        data = json.load(message)  ## Should be .loads() for a string!
Usually that would be fine: it would raise an AttributeError and I'd see a traceback. But in this case Python prints:
No handlers could be found for logger "stomp.py"
... no traceback, no crash, and that was all. Very confusing to debug and work out what I did wrong! I was expecting at least the normal traceback, along the lines of:
Traceback (most recent call last):
File "./ncl/stomp.py-3.1.3/stompJSONParser.py", line 32, in <module>
[etc etc ...]
... rather than it borking the whole listener. I guess it's because that happens on a different thread?
Now that I've worked out that this is what a runtime error in the callback looks like, I at least know I've done something wrong when it appears. But if it just spews that one-line message for every mistake I make, rather than giving me a useful traceback, it makes coding rather difficult.
What causes this? And what could I do to get the regular, more verbose traceback back?

Looks like it's expecting a log handler from the Python logging module to be set up in order to capture output. There are lots of possible configurations for logging, but for simple debugging I would use something along the lines of:
import logging
logging.basicConfig(level=logging.DEBUG)
That should capture all output of log level DEBUG and above. Read the logging docs for more info :)

Instructions for getting a logger (which is what is being asked for directly) can be found here, but the verbose traceback is suppressed.
If you take a look at the code which is calling on_message, you'll notice that the call sits in a try block without an except.
Line 703 is where the method is actually called:
notify_func = getattr(listener, 'on_%s' % frame_type)
notify_func(headers, body)
which is in method __notify (declared on line 639):
def __notify(self, frame_type, headers=None, body=None):
These are the places where it is not in a try block:
line 331 (for the connected event)
line 426 (for the send event)
line 743 (for disconnected)
But the place where a message triggers the call is line 727, inside a try block:
# line 719
try:
    # ....
    self.__notify(frame_type, headers, body)

In the end I grabbed the logger by name and set a StreamHandler on it:
import logging
log = logging.getLogger('stomp.py')
strh = logging.StreamHandler()
strh.setLevel(logging.ERROR)
log.addHandler(strh)
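
Another option is to make the callback itself defensive. This is only a sketch (the body of on_message stands in for your own parsing code), but wrapping the body in a try/except guarantees you see the full traceback no matter how the library's internal logger is configured:
import json
import traceback

class MyListener(object):
    def on_message(self, headers, message):
        try:
            data = json.loads(message)
            # ... handle data here ...
        except Exception:
            # print the full traceback ourselves instead of relying on
            # whatever handler stomp.py's logger has (or hasn't) got
            traceback.print_exc()
            raise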

Related

Exceptions: logging the traceback only once

I am scratching my head about the best practice for getting the traceback into the logfile only once. Please note that in general I know how to get the traceback into the log.
Let's assume I have a big program consisting of various modules and functions that are imported, so that it can have quite some depth and the logger is set up properly.
Whenever an exception may occur I do the following:
try:
    do_something()
except MyError as err:
    log.error("The error MyError occurred", exc_info=err)
    raise
Note that the traceback is written to the log via the option exc_info=err.
My problem is that when everything gets a bit more complex and nested, I lose control over how often this traceback is written to the log, and it gets quite messy.
An example of the situation with my current solution for this problem is as follows:
from other_module import other_f

def main():
    try:
        # do something
        val = other_f()
    except (AlreadyLoggedError1, AlreadyLoggedError2, AlreadyLoggedError3):
        # The error was caught within other_f() or deeper and was
        # already logged with traceback info where it occurred.
        # After logging it was re-raised, as in the example above.
        # I do not want to log it again, so it is just re-raised.
        raise
    except BroaderException as err:
        # I cannot expect to have thought of all exceptions,
        # so in case something unexpected happened
        # I want the traceback logged here,
        # since the error has not been logged yet.
        log.error("An unexpected error occurred", exc_info=err)
        raise
The problem with this solution is that I need to keep track of all the exceptions that are already logged myself; the except (AlreadyLoggedError1, AlreadyLoggedError2, ...) line gets arbitrarily long and has to be repeated at every level between main() and the position where the error actually occurred.
So my question is: is there some better (Pythonic) way of handling this? To be more specific: I want to attach the information that the exception was already logged to the exception itself, so that I do not have to account for it with an extra except block as in the example above.
The solution normally used for larger applications is for the low-level code not to do error handling itself if the error is just going to be logged; instead, put the exception logging/handling at the highest level in the code possible, since exceptions will bubble up as far as needed. For example, libraries that send errors to services like New Relic and Sentry don't need you to instrument each small part of your code that might throw an error; they are set up to catch any exception and send it to a remote service for aggregation and tracking.
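A minimal sketch of that pattern (do_work() here is just a stand-in for your own code): the low-level functions simply raise, and the traceback is logged exactly once, at the top level.
import logging
import sys

log = logging.getLogger(__name__)

def do_work():
    # stand-in for your real code; it just raises and never logs tracebacks
    raise ValueError("something went wrong deep down")

def main():
    do_work()

if __name__ == "__main__":
    logging.basicConfig(filename="app.log", level=logging.INFO)
    try:
        main()
    except Exception:
        # the single place where the traceback ends up in the log
        log.exception("Unhandled error")
        sys.exit(1)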

Can warnings warn without returning out of a function?

Is there any way for a warnings.warn() call to be caught by the caller while still executing the rest of the code after the warn() call? The problem I am having is that function b will call warnings.warn() if something happens, and then I want the rest of that function to finish its job and return a list of what it actually did. If a warning was thrown, I want to catch it, email it to someone, and continue on when I call that function from another module, but that isn't happening. Here is what it looks like in code:
import warnings

def warn_function(arg_1):
    if arg_1 > 10:
        warnings.warn("Your argument was greater than 10.")
    return arg_1 - 5

with warnings.catch_warnings():
    warnings.filterwarnings("error")
    try:
        answer = warn_function(20)
    except Warning:
        print("A warning was thrown")
    finally:
        print(answer)
Yes, warnings can warn without exiting out of a function. But the way you're trying to do things just isn't going to work.
Using catch_warnings with the "error" action means you're explicitly asking Python to raise every warning as an exception. And the Python exception model doesn't have any way to resume from the point where an exception was thrown.
You can reorganize your code to provide explicit ways to "do the rest" after each possible warning, but for non-trivial cases you either end up doing a ton of work or building a hacky continuation-passing mechanism.
The right way to handle your use case is logging.captureWarnings. This way, all warnings go to a logger named 'py.warnings' instead of through the normal warning path. You can then configure a log handler that sends these warnings to someone via email, and you're done.
And of course once you've built this, you can use the exact same handler to get emails sent from high-severity log messages to other loggers, or to add in runtime configuration so you can turn up and down the email threshold without deploying a whole new build of the server, and so on.
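A minimal sketch of that setup, reusing the question's warn_function; the SMTP host and addresses below are placeholders you would replace with your own mail server details:
import logging
import logging.handlers
import warnings

# route all warnings.warn() calls to the 'py.warnings' logger
logging.captureWarnings(True)

warn_logger = logging.getLogger('py.warnings')

# placeholder SMTP details, assumed for illustration only
smtp_handler = logging.handlers.SMTPHandler(
    mailhost='localhost',
    fromaddr='app@example.com',
    toaddrs=['admin@example.com'],
    subject='Warning from my application',
)
warn_logger.addHandler(smtp_handler)

def warn_function(arg_1):
    if arg_1 > 10:
        warnings.warn("Your argument was greater than 10.")
    return arg_1 - 5

# the warning is handed to the logger (and emailed) and the function
# still runs to completion, so answer is 15
answer = warn_function(20)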
If you're not already using logging, it may be easier to hook warnings manually. As the warnings introduction explains:
The printing of warning messages is done by calling showwarning(), which may be overridden; the default implementation of this function formats the message by calling formatwarning(), which is also available for use by custom implementations.
Yes, Python is encouraging you to monkeypatch a stdlib module. The code to do this looks something like:
import warnings

def showwarning(message, category, filename, lineno, file=None, line=None):
    fmsg = warnings.formatwarning(message, category, filename, lineno, line)
    # send fmsg by email

warnings.showwarning = showwarning

redirecting integration error to file, python

I am using odeint from scipy.integrate in Python. Sometimes I get integration errors like:
lsoda-- at current t (=r1), mxstep (=i1) steps
taken on this call before reaching tout
in above message, i1 = 500
in above message, r1 = 0.4082154636630D-03
I would like NOT to print those errors on the screen. Is there any way to print them directly to some error file? I just don't want them printed on the screen, as I am printing something else there in a big loop (and automatically to the result file).
Thanks
If these messages are printed on stderr, you can capture stderr and redirect it to a file. A minimal implementation is:
import sys
sys.stderr = open('the_log_file_for_errors', 'w')
Another, more complex, way is to wrap the code that can produce the error in a try...except block; in the except block you can log the error to a file with some more details (like the input parameters and so on) to check afterwards.
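A minimal sketch combining both suggestions, assuming the lsoda messages really do go through Python's sys.stderr (output coming straight from the Fortran layer may bypass it); rhs below is a hypothetical right-hand side used only for illustration:
import sys
from scipy.integrate import odeint

def rhs(y, t):
    # hypothetical right-hand side, only for illustration
    return -2.0 * y

t = [0.0, 0.5, 1.0]

old_stderr = sys.stderr
sys.stderr = open('integration_errors.log', 'w')
try:
    solution = odeint(rhs, 1.0, t)
except Exception as exc:
    # record unexpected failures together with the inputs for later inspection
    sys.stderr.write("odeint failed for t=%s: %r\n" % (t, exc))
finally:
    sys.stderr.close()
    sys.stderr = old_stderr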

python logging close and application exit

I am using the logging module in an application and it occurred to me that it would be neat if the logging module supported a method which would gracefully close file handles etc and then close the application.
For example:
logger = logging.getLogger('my_app')
logger.fatal("We're toast!")
the fatal method (or some such) would then:
1. log the message as normal
2. call logging.shutdown()
3. call sys.exit(1)
Thoughts?
Does something like this exist?
Is this a bad idea?
Why do I want this?
Well there are a few places in my code where I want the app to die and it seems a waste to keep repeating the code to do 2 and 3.
Perhaps not the cleanest solution, but this springs to mind:
try:
    # Your main function
    main()
except:
    logger.exception('some message')
    sys.exit(1)
And in the actual code, just raise any exception.
Although that will give you a different logger. If it's just about the shutdown part, just use try/finally:
try:
    # do whatever here
    main()
finally:
    logging.shutdown()
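
If the goal is just to avoid repeating steps 2 and 3 everywhere, a small wrapper of your own (a sketch, not something the logging module provides) also does the job:
import logging
import sys

logger = logging.getLogger('my_app')

def fatal(message, exit_code=1):
    # log the message, flush and close all handlers, then exit
    logger.critical(message)
    logging.shutdown()
    sys.exit(exit_code)

# usage:
# fatal("We're toast!")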

Crashing the logging formatter?

I have a logging component in my program. The setup of the formatter is straightforward:
sh.setFormatter(logging.Formatter("%(asctime)s - %(message)s"))
I notice that my program is having problems. After a certain point, the formatter reverts to the default configuration (i.e., it ignores the formatting I supplied). On closer inspection it seems that I am crashing it by sending a message that throws a UnicodeDecodeError when rendered into the string. But I can't seem to fix it.
I wrapped the logging call:
try:
    my_logger.info(msg)
except UnicodeDecodeError:
    pass
Which "catches" the exception, but the logger is still pooched.
Any thoughts?
Any idea what input is causing the UnicodeDecodeError? Ample printing of variables would help! If you want to move on upon receiving that error, you should wrap the calls to the formatter in a try..except block.
try:
    # log stuff
    my_logger.info(msg)
except UnicodeDecodeError:
    # handle the exception and move on
    pass
It would be helpful to see some more code and some of your input data to give you a more clear response.
Take a look at this: http://wiki.python.org/moin/UnicodeDecodeError.
You probably have some string that can't be decoded.
A user of my product had this issue. Go into logging/__init__.py and add some print statements to print record.__dict__. If you see unicode in the asctime, that could be your issue.
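One way to avoid crashing the formatter in the first place, assuming the offending messages are undecoded byte strings (this is a sketch of a workaround, not a fix inside the library), is to decode them explicitly before handing them to the logger:
import logging

sh = logging.StreamHandler()
sh.setFormatter(logging.Formatter("%(asctime)s - %(message)s"))
my_logger = logging.getLogger(__name__)
my_logger.addHandler(sh)
my_logger.setLevel(logging.INFO)

def safe_info(msg):
    # decode with replacement characters instead of letting the
    # formatter hit a UnicodeDecodeError on an arbitrary byte string
    if isinstance(msg, bytes):
        msg = msg.decode('utf-8', errors='replace')
    my_logger.info(msg)

safe_info(b'caf\xe9')  # hypothetical byte string that is not valid UTF-8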
