I have written this piece of code to get the namespace from an XML document.
I am trying to handle an exception and write the full traceback to the log; however, the traceback is not being written to the log along with my custom message (though I can see it on screen).
I believe I am missing something in the logger's handler config. Is there any specific configuration needed for this? Below is my logger config so far.
Any help will be appreciated!
import logging
import sys

logger = logging.getLogger(__name__)
hdlr = logging.FileHandler(r'C:\link.log')
formatter = logging.Formatter('%(asctime)s %(levelname)s %(message)s')
hdlr.setFormatter(formatter)
logger.addHandler(hdlr)
logger.setLevel(logging.INFO)

def get_ns(xmlroot):
    """Retrieve the XML document's namespace."""
    try:
        logger.info("Trying to get XML namespace detail")
        nsmap = xmlroot.nsmap.copy()  # lxml element's prefix -> URI mapping
        logger.info("Creating XML Namespace Object, {0}".format(nsmap))
        nsmap['xmlns'] = nsmap.pop(None)  # raises KeyError if there is no default namespace
    except (KeyError, SystemExit):
        logging.exception("XML file does not contain namespace, Halting Program! ")
        sys.exit()
    else:
        for ns in nsmap.values():
            logger.info("Retrieved XML Namespace {0}".format(ns))
        return ns
Output on screen:

ERROR:root:XML file does not contain namespace, Halting Program!
Traceback (most recent call last):
  File "C:\link.log", line 28, in get_ns
    nsmap['xmlns'] = nsmap.pop(None)
KeyError: None
Change
logging.exception("XML files does not contain namespace, Halting Program! ")
to
logger.exception("XML files does not contain namespace, Halting Program! ")
since it is logger that you have configured to write to the file C:\link.log.
Using logging.exception goes through the root logger instead, which outputs to the console by default.
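A minimal, self-contained sketch of the difference (using a temporary file in place of C:\link.log, and a made-up logger name for the demo):

```python
import logging
import os
import tempfile

# Reproduce the question's setup against a temporary path.
log_path = os.path.join(tempfile.mkdtemp(), "link.log")
logger = logging.getLogger("namespace.demo")
hdlr = logging.FileHandler(log_path)
hdlr.setFormatter(logging.Formatter('%(asctime)s %(levelname)s %(message)s'))
logger.addHandler(hdlr)
logger.setLevel(logging.INFO)

try:
    {}.pop(None)                          # raises KeyError, like nsmap.pop(None)
except KeyError:
    logging.exception("via root logger")  # root logger: console only, not the file
    logger.exception("via named logger")  # named logger: reaches link.log

hdlr.close()
with open(log_path) as f:
    contents = f.read()
print("via named logger" in contents)     # True - with the full traceback
print("via root logger" in contents)      # False
```

Only the record sent through the configured logger (including its traceback) ends up in the file; the root-logger record never reaches it.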
Related
I am using try-except statements that do logging, but even though the log file is being generated, no log records are being written to it.
Initially my code worked, but it was structured incorrectly for production: try-except statements that ran scripts and, upon their failure, pushed log statements to a log.
I was told "Imports -> Functions -> Run functions" and "Functions should have try-except logging, not the other way around".
I've reduced my code for this question to isolate the problem: the script opens a JSON file, and that part works. The logging is the only issue.
Where am I going wrong?
After rearranging the code, the script still runs, but the logging does not.
import logging

LOG_FORMAT = "%(levelname)s %(asctime)s - %(message)s"
logging.basicConfig(filename='C:\\Users\\MWSF\\Documents\\log_test.log',
                    level=logging.INFO,
                    format=LOG_FORMAT)
logger = logging.getLogger()

def get_filepath():
    '''Find the file path, which is partly defined by the file name.'''
    try:
        return "C:\\Users\\MWSF\\Documents\\filename.json"
    except Exception:
        logger.error("Error occured while determining current JSON file path.")
    else:
        logger.info("Successfully determined current JSON file path.")

path = get_filepath()
Intended result: A function that opens a specified file and a log named log_test.log that has this information:
INFO 2019-04-26 14:52:02,260 - Imported current JSON file.
Actual result: a function that opens a specified file, and a log named log_test.log that is empty.
Put the return in the "else" clause rather than under "try". As written, the return causes the function to exit before the logging ever runs.
def get_filepath():
    '''Find the file path, which is partly defined by the file name.'''
    try:
        #return "C:\\Users\\MWSF\\Documents\\filename.json"
        path = "C:\\Users\\MWSF\\Documents\\filename.json"
    except Exception:
        logger.error("Error occured while determining current JSON file path.")
    else:
        logger.info("Successfully determined current JSON file path.")
        return path
Sample log_test.log:
INFO 2019-04-29 12:58:53,329 - Successfully determined current JSON file path.
I'm using Jupyter Notebook to develop some Python. This is my first stab at logging errors and I'm having an issue where no errors are logged to my error file.
I'm using:
import logging
logger = logging.getLogger('error')
logger.propagate = False
hdlr = logging.FileHandler("error.log")
formatter = logging.Formatter('%(asctime)s %(message)s')
hdlr.setFormatter(formatter)
logger.addHandler(hdlr)
logger.setLevel(logging.DEBUG)
to create the logger.
I'm then using a try/except block whose exception is triggered on purpose (for testing) by querying a column that doesn't exist in my database:
try:
    some_bad_call_to_the_database('start', group_id)
except Exception as e:
    logging.exception("an error I would like to log")
    print traceback.format_exc(limit=1)
and the exception is called as can be seen from my output in my notebook:
Traceback (most recent call last):
File "<ipython-input-10-ef8532b8e6e0>", line 19, in <module>
some_bad_call_to_the_database('start',group_id)
InternalError: (1054, u"Unknown column 'start_times' in 'field list'")
However, error.log is not being written to. Any thoughts would be appreciated.
try:
    some_bad_call_to_the_database('start', group_id)
except Exception as e:
    logger.info("an error I would like to log")

For more information please look at the docs: https://docs.python.org/2/library/logging.html
Have a great day!
You're using logging.exception, which delegates to the root logger, instead of using logger.exception, which would use yours (and write to error.log).
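To illustrate, here is a small self-contained sketch of the question's setup (a temporary file stands in for error.log, and a raised RuntimeError stands in for the database error): once the configured logger's exception method is used, the traceback does land in the file, even with propagate set to False.

```python
import logging
import os
import tempfile

log_path = os.path.join(tempfile.mkdtemp(), "error.log")
logger = logging.getLogger("error.demo")
logger.propagate = False            # as in the question: don't pass records to root
hdlr = logging.FileHandler(log_path)
hdlr.setFormatter(logging.Formatter('%(asctime)s %(message)s'))
logger.addHandler(hdlr)
logger.setLevel(logging.DEBUG)

try:
    raise RuntimeError("Unknown column 'start_times' in 'field list'")
except Exception:
    logger.exception("an error I would like to log")  # traceback is appended

hdlr.close()
with open(log_path) as f:
    text = f.read()
print("Traceback" in text)   # True
```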
I'm trying to use the python logging module to create a RotatingFileHandler for my program. My log handler logs output to the file: /var/log/pdmd.log and the basic functionality seems to work and log output as desired.
However, I'm trying to format my log string with this format:
"%(levelname)s %(asctime)s %(funcName)s %(lineno)d %(message)s"
But only the message portion of the exception is getting logged. Here is my code to set up the logger:
# class variable declared at the beginning of the class declaration
log = logging.getLogger("PdmImportDaemon")

def logSetup(self):
    FORMAT = "%(levelname)s %(asctime)s %(funcName)s %(lineno)d %(message)s"
    logging.basicConfig(format=FORMAT)
    #logging.basicConfig(level=logging.DEBUG)
    # by setting our logger to the DEBUG level (lowest level) we will include all other levels by default
    self.log.setLevel(logging.DEBUG)
    # set up the rotating file handler to automatically increment the log file name when the max size is reached
    self.log.addHandler(
        logging.handlers.RotatingFileHandler('/var/log/pdmd.log', mode='a',
                                             maxBytes=50000, backupCount=5))
Now, when I run a method and make the program output to the log with the following code:
def dirIterate(self):
    try:
        raise Exception("this is my exception, trying some cool output stuff here!")
    except Exception, e:
        self.log.error(e)
        raise e
And the output in the pdmd.log file is just the exception text and nothing else. For some reason, the formatting is not being respected; I expected:
ERROR 2013-09-03 06:53:18,416 dirIterate 89 this is my exception, trying some cool output stuff here!
Any ideas as to why the format that I set up in logging.basicConfig is not being respected?
You have to set the formatter on the handler too.
When you run basicConfig(), you are configuring a new handler for the root logger only.
The custom handler you add to your own logger gets no formatter, so it falls back to the default format, which is just the message.
Replace
self.log.addHandler( logging.handlers.RotatingFileHandler('/var/log/pdmd.log', mode='a', maxBytes=50000, backupCount=5) )
with:
rothnd = logging.handlers.RotatingFileHandler('/var/log/pdmd.log', mode='a',
                                              maxBytes=50000, backupCount=5)
rothnd.setFormatter(logging.Formatter(FORMAT))
self.log.addHandler(rothnd)
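A runnable sketch of the fix (a temporary path stands in for /var/log/pdmd.log, and a module-level function replaces the class method for the demo):

```python
import logging
import logging.handlers
import os
import tempfile

FORMAT = "%(levelname)s %(asctime)s %(funcName)s %(lineno)d %(message)s"
log_path = os.path.join(tempfile.mkdtemp(), "pdmd.log")

log = logging.getLogger("PdmImportDaemonDemo")
log.setLevel(logging.DEBUG)

rothnd = logging.handlers.RotatingFileHandler(log_path, mode='a',
                                              maxBytes=50000, backupCount=5)
rothnd.setFormatter(logging.Formatter(FORMAT))  # the line the question is missing
log.addHandler(rothnd)

def dir_iterate():
    try:
        raise Exception("this is my exception, trying some cool output stuff here!")
    except Exception as e:
        log.error(e)

dir_iterate()
rothnd.close()
with open(log_path) as f:
    line = f.read()
print(line)   # e.g. ERROR 2013-09-03 06:53:18,416 dir_iterate 21 this is my exception, ...
```

With the formatter attached to the handler itself, the level name, timestamp, function name, and line number all appear in the file.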
I have the following method to handle logging in my Python program:
def createLogger(logger, logLang):
    """
    Set up the logger.
    """
    log_format = logging.Formatter("%(asctime)s - %(name)s - %(levelname)s - %(message)s")
    file_handler = logging.FileHandler(filename=os.path.join(OUT_DIR_LOGS, logLang + '-userdynamics.log'))
    file_handler.setFormatter(log_format)
    logger.setLevel(logging.INFO)
    logger.addHandler(file_handler)
This is a large data-collection code base, and to avoid quota constraints on the remote server I have implemented the following gzip-and-tar procedure:
def gzipLogs(lang):
    """
    Compress and tar.
    """
    # Compress the logfile and remove the old logfile
    original_filePath = OUT_DIR_LOGS + "/" + lang + "-userdynamics.log"
    gzip_filePath = OUT_DIR_LOGS + "/" + lang + "-userdynamics.gz"
    with open(original_filePath, 'rb') as original_file:
        with gzip.open(gzip_filePath, 'wb') as zipped_file:
            zipped_file.writelines(original_file)
    os.remove(original_filePath)
    # Compress language folders that have data
    folder_path = OUT_DIR + "/" + lang
    tar_file = tarfile.open(folder_path + ".tgz", "w:gz")
    # add a timestamp to the archive name
    tar_file.add(folder_path, arcname=NOW + "_" + lang)
    tar_file.close()
    # delete the original folder
    shutil.rmtree(folder_path)
I do my data-gathering in a nested for loop, and I call the logger as follows:
for something in somethings:
    for item in items:
        log = logging.getLogger()
        # Call the logging configuration function.
        createLogger(log, lang)
Everything works fine, but when it executes, .nfs file residuals are left behind after the file is deleted, so the quota problem remains as it was.
So I added the following code segment to close the logging file handlers, but with it I now end up getting the error shown further down.
Code to close the log files:
unclosed_logs = list(log.handlers)
for uFile in unclosed_logs:
    print uFile
    log.removeHandler(uFile)
    uFile.flush()
    uFile.close()
The above code ends up giving me this error:
Traceback (most recent call last):
  File "/somefilepath/SomePythonFile.py", line 529, in <module>
    main()
  File "/somefilepath/SomePythonFile.py", line 521, in main
    gzipLogs(lang)
  File "/somefilepath/SomePythonFile.py", line 61, in gzipLogs
    with gzip.open(gzip_filePath, 'wb') as zipped_file:
AttributeError: GzipFile instance has no attribute '__exit__'
This is how the main method looks with the handler-closing code segment:
for something in somethings:
    for item in items:
        log = logging.getLogger()
        # Call the logging configuration function.
        createLogger(log, lang)
        unclosed_logs = list(log.handlers)
        for uFile in unclosed_logs:
            print uFile
            log.removeHandler(uFile)
            uFile.flush()
            uFile.close()
What am I doing wrong? Am I handling the logger incorrectly, or am I closing the file too soon?
There are a number of things which could cause problems:
You should only configure logging (e.g. set levels, add handlers) in one place in your program, ideally under the if __name__ == '__main__' clause. You seem not to be doing this. Note that you can use WatchedFileHandler together with an external rotator to rotate your log files - e.g. logrotate provides both rotation and compression functionality.
Your error relating to __exit__ has nothing to do with logging - it's probably a Python version problem. GzipFile only became usable in a with statement in Python 2.7 / 3.2 - in older versions, you will get this error message if you try to use a GzipFile in a with statement.
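The handler-stacking problem from the first point can be shown in a few lines (the logger name and in-memory stream are made up for the demo):

```python
import io
import logging

def create_logger(logger):
    # Each call adds ANOTHER handler to the same logger object.
    handler = logging.StreamHandler(io.StringIO())
    handler.setFormatter(logging.Formatter("%(levelname)s %(message)s"))
    logger.addHandler(handler)
    logger.setLevel(logging.INFO)

log = logging.getLogger("dup.demo")
for _ in range(3):            # configuring inside a loop, as in the question
    create_logger(log)

print(len(log.handlers))      # 3 - every record would now be emitted three times
```

Because getLogger returns the same object every time, each pass through the loop stacks another handler onto it, so each record is written once per handler; configuring once, up front, avoids this.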
After some research I realized that the server I was executing the file on was running Python 2.6, and in 2.6 the gzip module did not support the with statement. The answer to this question is to either switch to Python 2.7 or change the implementation to old-fashioned file opening with a try/finally block.
try:
    inOriginalFile = open(original_filePath, 'rb')
    outGZipFile = gzip.open(gzip_filePath, 'wb')
    try:
        outGZipFile.writelines(inOriginalFile)
    finally:
        outGZipFile.close()
        inOriginalFile.close()
except IOError as e:
    logging.error("Unable to open gzip files, GZIP FAILURE")
This is how I fixed this problem.
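Another option that should work on Python 2.6 as well is contextlib.closing, which supplies the missing __exit__ by calling close() on exit. This is not from the answer above, just an alternative sketch (the paths and file contents are invented for the example):

```python
import contextlib
import gzip
import os
import tempfile

workdir = tempfile.mkdtemp()
original_filePath = os.path.join(workdir, "en-userdynamics.log")
gzip_filePath = os.path.join(workdir, "en-userdynamics.gz")
with open(original_filePath, "wb") as f:
    f.write(b"some log line\n")

# closing() wraps any object with a close() method, so it works even
# where GzipFile itself cannot be used directly in a with statement.
with open(original_filePath, "rb") as original_file:
    with contextlib.closing(gzip.open(gzip_filePath, "wb")) as zipped_file:
        zipped_file.writelines(original_file)
os.remove(original_filePath)

with gzip.open(gzip_filePath, "rb") as f:
    data = f.read()
print(data)   # b'some log line\n'
```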
When I have lots of different modules using the standard python logging module, the following stack trace does little to help me find out where, exactly, I had a badly formed log statement:
Traceback (most recent call last):
File "/usr/lib/python2.6/logging/__init__.py", line 768, in emit
msg = self.format(record)
File "/usr/lib/python2.6/logging/__init__.py", line 648, in format
return fmt.format(record)
File "/usr/lib/python2.6/logging/__init__.py", line 436, in format
record.message = record.getMessage()
File "/usr/lib/python2.6/logging/__init__.py", line 306, in getMessage
msg = msg % self.args
TypeError: not all arguments converted during string formatting
I'm only starting to use Python's logging module, so maybe I am overlooking something obvious. I'm not sure if the stack trace is useless because I am using greenlets, or if this is normal for the logging module, but any help would be appreciated. I'd be willing to modify the source - anything to make the logging library actually give a clue as to where the problem lies.
Rather than editing installed Python code, you can also find the errors like this:

def handleError(record):
    raise RuntimeError(record)

handler.handleError = handleError

where handler is one of the handlers that is giving the problem. Now when the format error occurs you'll see the location.
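A self-contained version of this trick (the logger name is made up, and the logger.info call is deliberately malformed - it passes an extra argument with no %s placeholder):

```python
import logging

def handle_error(record):
    # Re-raise instead of printing, so the offending log call surfaces.
    raise RuntimeError(record.msg)

handler = logging.StreamHandler()
handler.handleError = handle_error    # patch only this handler
logger = logging.getLogger("fmt.demo")
logger.addHandler(handler)
logger.setLevel(logging.DEBUG)
logger.propagate = False

caught = None
try:
    logger.info("a", "oops")    # malformed: no %s placeholder for the argument
except RuntimeError as exc:
    caught = exc
print("caught bad log call:", caught)
```

The RuntimeError escapes from emit() instead of being swallowed, so the resulting traceback points straight at the bad call site.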
The logging module is designed to stop bad log messages from killing the rest of the code, so the emit method catches errors and passes them to a method handleError. The easiest thing for you to do would be to temporarily edit /usr/lib/python2.6/logging/__init__.py, and find handleError. It looks something like this:
def handleError(self, record):
    """
    Handle errors which occur during an emit() call.

    This method should be called from handlers when an exception is
    encountered during an emit() call. If raiseExceptions is false,
    exceptions get silently ignored. This is what is mostly wanted
    for a logging system - most users will not care about errors in
    the logging system, they are more interested in application errors.
    You could, however, replace this with a custom handler if you wish.
    The record which was being processed is passed in to this method.
    """
    if raiseExceptions:
        ei = sys.exc_info()
        try:
            traceback.print_exception(ei[0], ei[1], ei[2],
                                      None, sys.stderr)
            sys.stderr.write('Logged from file %s, line %s\n' % (
                record.filename, record.lineno))
        except IOError:
            pass    # see issue 5971
        finally:
            del ei
Now temporarily edit it. Inserting a simple raise at the start should ensure the error gets propagated up your code instead of being swallowed. Once you've fixed the problem, just restore the logging code to what it was.
This is not really an answer to the question, but hopefully it will help other beginners with the logging module like me.
My problem was that I replaced all occurrences of print with logging.info,
so a valid line like print('a', a) became logging.info('a', a), when it should have been logging.info('a %s' % a) instead.
This was also hinted at in "How to traceback logging errors?", but it doesn't come up in searches.
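For reference, the fix is to pass the value as a lazy %-style argument (or do the interpolation yourself); a tiny sketch:

```python
import logging

logging.basicConfig(level=logging.INFO)
a = 42

# Wrong: logging.info('a', a) -> TypeError, silently swallowed by handleError.
# Right: let the logging module interpolate lazily:
logging.info('a %s', a)       # string is only built if the record is emitted
logging.info('a %s' % a)      # eager: also works, but always builds the string
```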
Alternatively you can create a formatter of your own, but then you have to include it everywhere.

class DebugFormatter(logging.Formatter):
    def format(self, record):
        try:
            return super(DebugFormatter, self).format(record)
        except:
            print "Unable to format record"
            print "record.filename", record.filename
            print "record.lineno", record.lineno
            print "record.msg", record.msg
            print "record.args:", record.args
            raise

FORMAT = '%(levelname)s %(filename)s:%(lineno)d %(message)s'
formatter = DebugFormatter(FORMAT)
handler = logging.StreamHandler()
handler.setLevel(logging.DEBUG)
handler.setFormatter(formatter)
logger = logging.getLogger(__name__)
logger.setLevel(logging.DEBUG)
logger.addHandler(handler)
I had the same problem. Such a traceback can also arise from a wrong attribute name in the format string, so when creating a format for a log file, check the attribute names against the Python documentation: https://docs.python.org/3/library/logging.html#formatter-objects