I am trying to log uncaught exceptions in Python. To do that, I am setting sys.excepthook in my __init__.py as follows.
import sys
import logging
import traceback

def log_except_hook(*exc_info):
    logger = logging.getLogger(__name__)
    text = "".join(traceback.format_exception(*exc_info))
    logger.critical(f"Unhandled exception:\n{text}")

sys.excepthook = log_except_hook
My problem is that when I run a process and an exception occurs, the module name in the logged exception is the name of the folder containing the code (src in this case), instead of the name of the function in which it occurred (foo_produce_an_error in this example). Please find an example below:
2019-10-09 18:55:48,638 src CRITICAL: Unhandled exception:
Traceback (most recent call last):
File "/Users/ivallesp/Projects/Project_Folder/main.py", line 109, in <module>
foo_function(7)
File "/Users/ivallesp/Projects/Project_Folder/src/new_module.py", line 8, in foo_function
foo_produce_an_error(x)
File "/Users/ivallesp/Projects/Project_Folder/src/new_module.py", line 12, in foo_produce_an_error
x / 0
ZeroDivisionError: division by zero
How can I make logging show the module and function name where the error occurred in the first log line?
You haven't provided enough information to answer the question - for example, how you've configured logging (specifically, the format string/formatter used). I can illustrate how to achieve the desired result in general, with an example. Suppose you have a failing function in a module failfunc:
# failfunc.py
def the_failing_func():
    1 / 0
Then, your main script might be:
# logtest_ue.py
import logging
import sys

from failfunc import the_failing_func

def log_except_hook(*exc_info):
    logger = logging.getLogger(__name__)
    tb = exc_info[-1]
    # get the bottom-most traceback entry
    while tb.tb_next:
        tb = tb.tb_next
    modname = tb.tb_frame.f_globals.get('__name__')
    funcname = tb.tb_frame.f_code.co_name
    logger.critical('Unhandled in module %r, function %r: %s',
                    modname, funcname, exc_info[1], exc_info=exc_info)

sys.excepthook = log_except_hook

def main():
    the_failing_func()

if __name__ == '__main__':
    logging.basicConfig(format='%(levelname)s %(message)s')
    sys.exit(main())
When this is run, it prints:
CRITICAL Unhandled in module 'failfunc', function 'the_failing_func': division by zero
Traceback (most recent call last):
File "logtest_ue.py", line 23, in <module>
sys.exit(main())
File "logtest_ue.py", line 19, in main
the_failing_func()
File "/home/vinay/projects/scratch/failfunc.py", line 2, in the_failing_func
1 / 0
ZeroDivisionError: division by zero
Note the slightly simpler way of getting a traceback into the log, using the exc_info keyword parameter. Also note that in this case, the normal module and function name (which could be displayed using %(module)s and %(funcName)s in a format string) would be those of the place making the logging call, i.e. the module containing the sys.excepthook, not the place where the exception actually occurred. To get the latter, you have to use the traceback object as illustrated: walk to the bottom-most frame (where the exception was actually raised) and take the module and function names from that frame.
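As a sketch of that approach, you could pass the bottom-most frame's details via extra and reference them in a format string. The field names real_module and real_funcName are my own invention, not standard LogRecord attributes, which is why a dedicated logger and handler are used here (any record formatted by this handler must supply those fields):

import logging
import sys

# Dedicated logger/handler for uncaught exceptions, so the custom fields
# 'real_module' and 'real_funcName' are only required on records that carry them.
uncaught_logger = logging.getLogger('uncaught')
handler = logging.StreamHandler()
handler.setFormatter(logging.Formatter(
    '%(levelname)s %(real_module)s.%(real_funcName)s: %(message)s'))
uncaught_logger.addHandler(handler)
uncaught_logger.propagate = False  # keep these records off the root handlers

def log_except_hook(*exc_info):
    tb = exc_info[-1]
    while tb.tb_next:  # walk to the frame where the exception was raised
        tb = tb.tb_next
    uncaught_logger.critical(
        '%s', exc_info[1], exc_info=exc_info,
        extra={
            'real_module': tb.tb_frame.f_globals.get('__name__'),
            'real_funcName': tb.tb_frame.f_code.co_name,
        })

sys.excepthook = log_except_hook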
The module name is hard to get, but you can have the filename and line number. They are contained in exc_info, which you don't need to format yourself: you can just pass it to the log call and use a formatter to display it all. Here's some example code to give you an idea how this is done:
import sys
import logging
import traceback

def log_except_hook(*exc_info):
    logger = logging.getLogger(__name__)
    handler = logging.StreamHandler()
    # 'fname' and 'lnum' are custom fields supplied via `extra` below; the
    # formatted traceback is appended automatically because exc_info is passed,
    # so it doesn't need a placeholder in the format string.
    handler.setFormatter(logging.Formatter('%(asctime)s %(fname)s:%(lnum)s %(message)s'))
    logger.addHandler(handler)
    frame = traceback.extract_tb(exc_info[2])[-1]  # the entry where the exception was raised
    fname = frame.filename
    lnum = frame.lineno
    logger.critical("Unhandled exception:", exc_info=exc_info,
                    extra={'fname': fname, 'lnum': lnum})

sys.excepthook = log_except_hook
Related
I'm using Python 3.9 and the logging module. It seems that when I make use of a QueueHandler and the stack_info feature, the stack trace information prints twice. Anyone know how to avoid this duplication?
This works correctly without the QueueHandler:
import logging
import time

stream_handler = logging.StreamHandler()
lo = logging.getLogger()
lo.addHandler(stream_handler)
lo.error("This is an example error", stack_info=True)
time.sleep(1)
This includes the stack trace twice, and the only change I made was to insert a QueueHandler/QueueListener:
import logging
import logging.handlers
import queue
import time

stream_handler = logging.StreamHandler()
q = queue.Queue(-1)
queue_handler = logging.handlers.QueueHandler(q)
lo = logging.getLogger()
lo.addHandler(queue_handler)
queue_listener = logging.handlers.QueueListener(q, stream_handler)
queue_listener.start()
lo.error("This is an example error", stack_info=True)
time.sleep(1)
Output with the QueueHandler/QueueListener arrangement is the same as without, except the stack trace appears twice:
# ./logtest-q.py
This is an example error
Stack (most recent call last):
File "/home/jcharles/./logtest-q.py", line 19, in <module>
lo.error ("This is an example error", stack_info = True)
Stack (most recent call last):
File "/home/jcharles/./logtest-q.py", line 19, in <module>
lo.error ("This is an example error", stack_info = True)
I'm trying to write a module to use in different scripts:
import logging
from logging.handlers import RotatingFileHandler

_logger_name = "Nagios"
_print_format = "%(asctime)s - %(levelname)s - %(message)s"
_level = logging.DEBUG

class Log():
    def __init__(self, log_file, logger_name=_logger_name, level=_level):
        self.log_file = log_file
        self.logger_name = logger_name
        self.level = level

    def getLog(self):
        """
        Return the logging object
        """
        _logger = logging.getLogger(self.logger_name)
        _logger.setLevel(self.level)
        _logger.addHandler(self._rotateLog())
        return _logger

    def _rotateLog(self):
        """
        Rotate the log files if they exceed the size
        """
        rh = RotatingFileHandler(self.log_file,
                                 maxBytes=20*1024*1024, backupCount=2)
        formatter = logging.Formatter(_print_format)
        rh.setFormatter(formatter)
        return rh

log = Log("kdfnknf").getLog()
log("hello")
I see the following error:
Traceback (most recent call last):
File "nagiosLog.py", line 45, in <module>
log("hello")
TypeError: 'Logger' object is not callable
Any idea why I'm getting this error?
When I debugged it using pdb, I do see that it returns the object, and printing dir(log) I don't see the Logger module in it.
Am I missing something here?
log("Hello")
This is wrong.
Correct is
log.info("Hello")
log must be printed with logging level i.e. info/error/warning
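Applied to the code in the question:

log = Log("kdfnknf").getLog()
log.info("hello")  # use a level method such as info(), not log("hello")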
See the logging docs:
You have to use one of the level methods; you can't just call the Logger object:
Logger.info(msg, *args, **kwargs)
Logs a message with level INFO on
this logger. The arguments are interpreted as for debug().
or
Logger.warning(msg, *args, **kwargs)
Logs a message with level WARNING on this logger. The arguments are interpreted as for debug().
so instead, do:
log.info("Test info level logging...")
My unittest module breaks when testing my main file because my main file references a logger that was not initialized.
We have the following simple example.
logger_main.py:
import logging

def some_func():
    logger.info(__name__ + " started ")
    # Do something
    logger.info(__name__ + " finished ")
    return 1

if __name__ == '__main__':
    logging.basicConfig(level=logging.INFO)
    logger = logging.getLogger(__name__)
    some_func()
logger_main_tests.py:
import unittest
import logging

from logger_main import some_func

class Test(unittest.TestCase):
    def setUp(self):
        logging.basicConfig(level=logging.DEBUG)
        logger = logging.getLogger(__name__)

    def testName(self):
        self.assertEqual(some_func(), 1)

if __name__ == "__main__":
    unittest.main()
logger_main.py runs as expected; however, logger_main_tests.py gives the following error.
Finding files... done.
Importing test modules ... done.
======================================================================
ERROR: testName (logger_main_tests.Test)
----------------------------------------------------------------------
Traceback (most recent call last):
File "C:\workspaces\PythonPlayground\DataStoreHierarchy\logger_main_tests.py", line 11, in testName
self.assertEqual(some_func(), 1)
File "C:\workspaces\PythonPlayground\DataStoreHierarchy\logger_main.py", line 4, in some_func
logger.info(__name__ + " started ")
NameError: name 'logger' is not defined
----------------------------------------------------------------------
Ran 1 test in 0.003s
FAILED (errors=1)
The error makes sense: some_func() is trying to use a logger that doesn't exist in its scope. I would like to figure out how to set up my unit tests with a logger (set at the DEBUG level) so that any logger.info or logger.debug statement inside my functions (such as some_func()) in logger_main.py is printed at the appropriate level.
Move this line outside of your if __name__ == '__main__' block.
logger = logging.getLogger(__name__)
The main block defines where the logs go, but it should not be used to define the logger. Your module should declare the logger in its global context.
You can (and should) define as many loggers as you need; it is common to have one per file or class. Declare it after your imports so it is available anywhere in the code that follows.
import logging

logger = logging.getLogger(__name__)

def some_func():
    .....
This is because the logger is only being defined when __name__ == "__main__".
Generally you'll define one logger per file, something like:
logger = logging.getLogger(__name__)
At the top of the file, after the imports.
Also, notice that the logger defined in Test.setUp is local to the function, so it won't do anything.
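Putting both answers together, logger_main.py becomes a sketch like the following, and the tests then work unchanged:

# logger_main.py, revised: the logger now lives at module level
import logging

logger = logging.getLogger(__name__)

def some_func():
    logger.info(__name__ + " started ")
    # Do something
    logger.info(__name__ + " finished ")
    return 1

if __name__ == '__main__':
    # Configuration (not logger creation) stays in the entry point
    logging.basicConfig(level=logging.INFO)
    some_func()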
Consider this code:
import logging
print "print"
logging.error("log")
I get:
print
ERROR:root:log
Now, if I import a third-party module at the beginning of the previous code and rerun it, I get only:
print
There are some previous questions about this, but in my case I cannot touch the module I'm importing.
The code of the third-party module is here: http://atlas-sw.cern.ch/cgi-bin/viewcvs-atlas.cgi/offline/DataManagement/DQ2/dq2.clientapi/lib/dq2/clientapi/DQ2.py?view=markup, but my question is more general: regardless of the module I'm importing, I want logging to work cleanly in the expected way.
Some (non-working) proposed solutions:
from dq2.clientapi.DQ2 import DQ2
import logging
del logging.root.handlers[:]

from dq2.clientapi.DQ2 import DQ2
import logging
logging.disable(logging.NOTSET)
logs = logging.getLogger('root')
logs.error("Some error")
The next one works, but produces some additional errors:
from dq2.clientapi.DQ2 import DQ2
import logging
reload(logging)
I get:
print
ERROR:root:log
Error in atexit._run_exitfuncs:
Traceback (most recent call last):
File "/afs/cern.ch/sw/lcg/external/Python/2.6.5/x86_64-slc5-gcc43- opt/lib/python2.6/atexit.py", line 24, in _run_exitfuncs
func(*targs, **kargs)
File "/afs/cern.ch/sw/lcg/external/Python/2.6.5/x86_64-slc5-gcc43-opt/lib/python2.6/logging/__init__.py", line 1509, in shutdown
h.close()
File "/afs/cern.ch/sw/lcg/external/Python/2.6.5/x86_64-slc5-gcc43-opt/lib/python2.6/logging/__init__.py", line 705, in close
del _handlers[self]
KeyError: <logging.StreamHandler instance at 0x2aea031f7248>
Error in sys.exitfunc:
Traceback (most recent call last):
File "/afs/cern.ch/sw/lcg/external/Python/2.6.5/x86_64-slc5-gcc43-opt/lib/python2.6/atexit.py", line 24, in _run_exitfuncs
func(*targs, **kargs)
File "/afs/cern.ch/sw/lcg/external/Python/2.6.5/x86_64-slc5-gcc43-opt/lib/python2.6/logging/__init__.py", line 1509, in shutdown
h.close()
File "/afs/cern.ch/sw/lcg/external/Python/2.6.5/x86_64-slc5-gcc43-opt/lib/python2.6/logging/__init__.py", line 705, in close
del _handlers[self]
KeyError: <logging.StreamHandler instance at 0x2aea031f7248>
from dq2.clientapi.DQ2 import DQ2
import logging
logger = logging.getLogger(__name__)
ch = logging.StreamHandler()
logger.addHandler(ch)
logger.error("log")
It depends on what the other module is doing; e.g. if it's calling logging.disable then you can call logging.disable(logging.NOTSET) to reset it.
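For example:

import logging

# If the imported module called logging.disable(logging.CRITICAL) (or similar),
# this resets the threshold so records of every level pass again:
logging.disable(logging.NOTSET)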
You could try reloading the logging module:
from importlib import reload
logging.shutdown()
reload(logging)
The problem is this will leave the third-party module with its own copy of logging in an unusable state, so could cause more problems later.
To completely clear existing logging configuration from the root logger, this might work:
root = logging.getLogger()
list(map(root.removeHandler, root.handlers[:]))  # iterate over a copy: removing mutates the list
list(map(root.removeFilter, root.filters[:]))
However, this doesn't reset to the "default"; it clears everything. You'd then have to add a StreamHandler yourself to achieve what you want.
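For instance, a minimal sketch that clears the root logger and then restores a basicConfig-style stderr handler:

import logging

root = logging.getLogger()
for handler in root.handlers[:]:  # copy: removeHandler mutates the list
    root.removeHandler(handler)
for filt in root.filters[:]:
    root.removeFilter(filt)

# Restore the usual default behaviour (stderr handler, WARNING threshold):
logging.basicConfig()
logging.error("log")  # prints "ERROR:root:log" again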
Here is a more complete solution that doesn't invalidate any loggers. It should work unless some module does something strange, like holding a direct reference to a filter or a handler.
import logging

def reset_logging():
    manager = logging.root.manager
    manager.disable = logging.NOTSET  # undo any logging.disable() call
    for logger in manager.loggerDict.values():
        if isinstance(logger, logging.Logger):
            logger.setLevel(logging.NOTSET)
            logger.propagate = True
            logger.disabled = False
            logger.filters.clear()
            handlers = logger.handlers.copy()
            for handler in handlers:
                # Copied from `logging.shutdown`.
                try:
                    handler.acquire()
                    handler.flush()
                    handler.close()
                except (OSError, ValueError):
                    pass
                finally:
                    handler.release()
                logger.removeHandler(handler)
Needless to say, you must set up your logging after running reset_logging().
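For example:

reset_logging()
logging.basicConfig(level=logging.INFO)  # reconfigure from a clean slate
logging.getLogger(__name__).info("logging works again")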
I want to check for errors in a particular background file, but the standard error stream is being controlled by the program in the foreground, so the errors in the file in question are not being displayed. I can use the logging module and write output to a file, though. I was wondering how I can use this to log all exceptions, errors and their tracebacks.
It's probably a bad idea to log any exception thrown within the program, since Python uses exceptions also for normal control flow.
Therefore you should only log uncaught exceptions. You can easily do this using a logger's exception() method, once you have an exception object.
To handle all uncaught exceptions, you can either wrap your script's entry point in a try...except block, or install a custom exception handler by re-assigning sys.excepthook:
import logging
import sys

logger = logging.getLogger('mylogger')
# Configure logger to write to a file...

def my_handler(type, value, tb):
    # Pass the exception info explicitly; inside an excepthook there is no
    # active exception, so logger.exception() would have nothing to pick up.
    logger.error("Uncaught exception: {0}".format(str(value)),
                 exc_info=(type, value, tb))

# Install exception handler
sys.excepthook = my_handler

# Run your main script here:
if __name__ == '__main__':
    main()
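The other option mentioned above, wrapping the entry point, looks something like this sketch:

import logging
import sys

logger = logging.getLogger('mylogger')

if __name__ == '__main__':
    try:
        main()
    except Exception:
        # Inside an except block, exception() picks up the active exception
        # and logs the full traceback automatically.
        logger.exception("Uncaught exception")
        sys.exit(1)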
import sys
import logging
import traceback

# log uncaught exceptions
def log_exceptions(type, value, tb):
    # Use error() rather than exception() here: inside an excepthook there is
    # no active exception for exception() to attach.
    for line in traceback.TracebackException(type, value, tb).format(chain=True):
        logging.error(line.rstrip())
    logging.error(value)
    sys.__excepthook__(type, value, tb)  # calls the default excepthook

sys.excepthook = log_exceptions
Inspired by the main answer + How to write to a file, using the logging Python module? + Print original exception in excepthook, here is how to write the full traceback into a file test.log, just as it would be printed to the console:
import logging, sys, traceback

logger = logging.getLogger('logger')
fh = logging.FileHandler('test.log')
logger.addHandler(fh)

def exc_handler(exctype, value, tb):
    # The traceback is formatted manually here, so log it with error();
    # exception() would try to read a (non-existent) active exception.
    logger.error(''.join(traceback.format_exception(exctype, value, tb)))

sys.excepthook = exc_handler

print("hello")
1/0