TimedRotatingFileHandler in logging module - python

I have the following code:
import logging
import logging.handlers

root_logger = logging.getLogger()
fh = logging.handlers.TimedRotatingFileHandler('log_', when='midnight')
root_logger.addHandler(fh)
logging.error('This is an error message')
Expected output: a file named "log_2014-06-20", but I got "log_". Why? What am I doing wrong? How can I fix it?

The time-based suffix is applied when rotation happens, not before. If you use a filename of e.g. myapp.log, that name is used until rollover, at which point the file is renamed with a time-based suffix and a new file called myapp.log is created. Logging then continues to the new file until the next rollover.
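To see this without waiting for midnight, here is a minimal sketch (not from the original answer) that forces the rollover by hand; doRollover() is normally called internally at the scheduled time:

import logging
import logging.handlers

logger = logging.getLogger()
fh = logging.handlers.TimedRotatingFileHandler('log_', when='midnight')
logger.addHandler(fh)

logger.error('written to log_')
fh.doRollover()  # normally triggered at midnight; forced here for illustration
logger.error('written to a fresh log_; the old record now lives in log_.YYYY-MM-DD')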

Related

Why isn't my python log populating despite the rest of the function running properly?

I am using try-except statements that do logging, but even though the log file is being generated, no log entries are written.
Initially my code worked, but it was structured incorrectly for production: try-except statements that tried running scripts and, on failure, pushed log statements to a log.
I was told "Imports -> Functions -> Run functions" + "Functions should have try-except logging, not the other way around".
I've modified my code for this question to isolate the problem: the script opens a JSON file, and that part works. The logging is the only issue.
Where am I going wrong?
When I rearrange the code, the script still runs, except for the logging portion.
import logging

LOG_FORMAT = "%(levelname)s %(asctime)s - %(message)s"
logging.basicConfig(filename='C:\\Users\\MWSF\\Documents\\log_test.log',
                    level=logging.INFO,
                    format=LOG_FORMAT)
logger = logging.getLogger()

def get_filepath():
    '''Find the file path, which is partly defined by the file name.'''
    try:
        return "C:\\Users\\MWSF\\Documents\\filename.json"
    except Exception:
        logger.error("Error occurred while determining current JSON file path.")
    else:
        logger.info("Successfully determined current JSON file path.")

path = get_filepath()
Intended result: A function that opens a specified file and a log named log_test.log that has this information:
INFO 2019-04-26 14:52:02,260 - Imported current JSON file.
Actual result: a function that opens a specified file and a log named log_test.log that is created but stays empty.
Put the return on the "else" clause rather than under "try". A return under "try" exits the function before the "else" clause runs, so the logging never happens.
def get_filepath():
    '''Find the file path, which is partly defined by the file name.'''
    try:
        #return "C:\\Users\\MWSF\\Documents\\filename.json"
        path = "C:\\Users\\MWSF\\Documents\\filename.json"
    except Exception:
        logger.error("Error occurred while determining current JSON file path.")
    else:
        logger.info("Successfully determined current JSON file path.")
        return path
Sample log_test.log:
INFO 2019-04-29 12:58:53,329 - Successfully determined current JSON file path.
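A minimal demonstration of the underlying rule, separate from the question's code: a return inside try leaves the function immediately, so the else clause never runs:

def demo():
    try:
        return 'early'   # exits here; the else block below never runs
    except Exception:
        pass
    else:
        print('never reached')

demo()   # prints nothing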

How to change the logger associated with logging.captureWarnings()?

In my application, I'm using logging.captureWarnings(True) to make sure any DeprecationWarning gets logged in the normal application log.
This works well, but results in logs like:
WARNING [py.warnings] c:\some\path...
It seems from the documentation that:
If capture is True, warnings issued by the warnings module will be redirected to the logging system. Specifically, a warning will be formatted using warnings.formatwarning() and the resulting string logged to a logger named 'py.warnings' with a severity of WARNING.
So that is all to be expected. But I'd like to change the logger associated with such warnings (to use the one my application provides, so that when looking at the logs one can tell where the DeprecationWarning comes from).
Is there a way to change the associated logger?
I just did some more investigation and found a perfect way to achieve that:
Looking at the source code for logging.captureWarnings():
def captureWarnings(capture):
    """
    If capture is true, redirect all warnings to the logging package.
    If capture is False, ensure that warnings are not redirected to logging
    but to their original destinations.
    """
    global _warnings_showwarning
    if capture:
        if _warnings_showwarning is None:
            _warnings_showwarning = warnings.showwarning
            warnings.showwarning = _showwarning
    else:
        if _warnings_showwarning is not None:
            warnings.showwarning = _warnings_showwarning
            _warnings_showwarning = None
It seems one can just change warnings.showwarning to point to another callable that will do whatever logging job you want (or anything else for that matter).
The expected prototype for warnings.showwarning seems to be:
def _show_warning(message, category, filename, lineno, file=None, line=None):
    """Hook to write a warning to a file; replace if you like."""
    if file is None:
        file = sys.stderr
    try:
        file.write(formatwarning(message, category, filename, lineno, line))
    except IOError:
        pass  # the file (probably stderr) is invalid - this warning gets lost.
It seems logging.captureWarnings() actually sets the callable to logging._showwarning:
def _showwarning(message, category, filename, lineno, file=None, line=None):
    """
    Implementation of showwarnings which redirects to logging, which will first
    check to see if the file parameter is None. If a file is specified, it will
    delegate to the original warnings implementation of showwarning. Otherwise,
    it will call warnings.formatwarning and will log the resulting string to a
    warnings logger named "py.warnings" with level logging.WARNING.
    """
    if file is not None:
        if _warnings_showwarning is not None:
            _warnings_showwarning(message, category, filename, lineno, file, line)
    else:
        s = warnings.formatwarning(message, category, filename, lineno, line)
        logger = getLogger("py.warnings")
        if not logger.handlers:
            logger.addHandler(NullHandler())
        logger.warning("%s", s)
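Building on that, a minimal sketch of a replacement hook; the logger name "myapp" is a placeholder for whatever logger your application provides:

import logging
import warnings

logger = logging.getLogger("myapp")  # placeholder: your application's logger

def log_warning(message, category, filename, lineno, file=None, line=None):
    # Format the warning the way the stdlib does, then route it to our
    # own logger instead of 'py.warnings'. Unlike the stdlib hook, this
    # sketch ignores the file parameter.
    s = warnings.formatwarning(message, category, filename, lineno, line)
    logger.warning("%s", s)

warnings.showwarning = log_warning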

python logging not printing datetime and other format values

I'm trying to use the python logging module to create a RotatingFileHandler for my program. My log handler writes to the file /var/log/pdmd.log, and the basic functionality seems to work: output is logged as desired.
However, I'm trying to format my log string with this format:
"%(levelname)s %(asctime)s %(funcName)s %(lineno)d %(message)s"
But only the message portion of the exception is getting logged. Here is my code to set up the logger:
# class variable declared at the beginning of the class declaration
log = logging.getLogger("PdmImportDaemon")

def logSetup(self):
    FORMAT = "%(levelname)s %(asctime)s %(funcName)s %(lineno)d %(message)s"
    logging.basicConfig(format=FORMAT)
    #logging.basicConfig(level=logging.DEBUG)
    self.log.setLevel(logging.DEBUG)  # by setting our logger to the DEBUG level (lowest level) we will include all other levels by default
    # setup the rotating file handler to automatically increment the log file name when the max size is reached
    self.log.addHandler(logging.handlers.RotatingFileHandler('/var/log/pdmd.log', mode='a', maxBytes=50000, backupCount=5))
Now, when I run a method and make the program output to the log with the following code:
def dirIterate(self):
    try:
        raise Exception("this is my exception, trying some cool output stuff here!")
    except Exception, e:
        self.log.error(e)
        raise e
And the output in the pdmd.log file is just the exception text and nothing else. For some reason, the formatting is not being respected; I expected:
ERROR 2013-09-03 06:53:18,416 dirIterate 89 this is my exception, trying some cool output stuff here!
Any ideas as to why the formatting that I setup in my logging.basicConfig is not being respected?
You have to set the format on the Handler too.
When you run basicConfig(), you are configuring a new Handler for the root Logger only.
Your custom Handler gets no formatter, so it falls back to the default (message only).
Replace
self.log.addHandler( logging.handlers.RotatingFileHandler('/var/log/pdmd.log', mode='a', maxBytes=50000, backupCount=5) )
with:
rothnd = logging.handlers.RotatingFileHandler('/var/log/pdmd.log', mode='a', maxBytes=50000, backupCount=5)
rothnd.setFormatter(logging.Formatter(FORMAT))
self.log.addHandler(rothnd)
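A quick self-contained check of the same fix (using a relative log path so it runs anywhere; names are taken from the question):

import logging
import logging.handlers

FORMAT = "%(levelname)s %(asctime)s %(funcName)s %(lineno)d %(message)s"
log = logging.getLogger("PdmImportDaemon")
log.setLevel(logging.DEBUG)

rothnd = logging.handlers.RotatingFileHandler('pdmd.log', mode='a', maxBytes=50000, backupCount=5)
rothnd.setFormatter(logging.Formatter(FORMAT))
log.addHandler(rothnd)

log.error('now formatted with level, time, function and line number')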

Python: File Handler Issue: Delete file without leaving .nfs files

I have the following method to handle logging in my Python program:
def createLogger(logger, logLang):
    """
    Setting up logger
    """
    log_format = logging.Formatter("%(asctime)s - %(name)s - %(levelname)s - %(message)s")
    file_handler = logging.FileHandler(filename=(os.path.join(OUT_DIR_LOGS, logLang + '-userdynamics.log')))
    file_handler.setFormatter(log_format)
    logger.setLevel(logging.INFO)
    logger.addHandler(file_handler)
This is a large data-collection code base, and to avoid quota constraints on the remote server I have implemented the following gzip-and-tar procedure:
def gzipLogs(lang):
    """
    Compressing and tar
    """
    # Compressing logfiles and removing the old logfile
    original_filePath = OUT_DIR_LOGS + "/" + lang + "-userdynamics.log"
    gzip_filePath = OUT_DIR_LOGS + "/" + lang + "-userdynamics.gz"
    with open(original_filePath, 'rb') as original_file:
        with gzip.open(gzip_filePath, 'wb') as zipped_file:
            zipped_file.writelines(original_file)
    os.remove(original_filePath)
    # Compressing language folders that has data
    folder_path = OUT_DIR + "/" + lang
    tar_file = tarfile.open(folder_path + ".tgz", "w:gz")
    # add timestamp to arch file
    tar_file.add(folder_path, arcname=NOW + "_" + lang)
    tar_file.close()
    # delete the original file
    shutil.rmtree(folder_path)
I gather data in a nested for loop and call the logger as shown below:
for something in somethings:
    for item in items:
        log = logging.getLogger()
        # Calling the logging configuration function.
        createLogger(log, lang)
Everything works fine, but after the original file is deleted, .nfs residual files are left behind, so the quota problem remains.
So I added the following code segment to close the logging file handler, but with this I now end up getting the following error.
Code to close the log file:
unclosed_logs = list(log.handlers)
for uFile in unclosed_logs:
    print uFile
    log.removeHandler(uFile)
    uFile.flush()
    uFile.close()
The above code ends up giving me this error:
Traceback (most recent call last):
  File "/somefilepath/SomePythonFile.py", line 529, in <module>
    main()
  File "/somefilepath/SomePythonFile.py", line 521, in main
    gzipLogs(lang)
  File "/somefilepath/SomePythonFile.py", line 61, in gzipLogs
    with gzip.open(gzip_filePath, 'wb') as zipped_file:
AttributeError: GzipFile instance has no attribute '__exit__'
This is what the main method looks like with the handler-closing code segment:
for something in somethings:
    for item in items:
        log = logging.getLogger()
        # Calling the logging configuration function.
        createLogger(log, lang)
        unclosed_logs = list(log.handlers)
        for uFile in unclosed_logs:
            print uFile
            log.removeHandler(uFile)
            uFile.flush()
            uFile.close()
What am I doing wrong? Am I handling the logger wrong? or am I closing the file too soon?
There are a number of things which could cause problems:
You should only configure logging (e.g. set levels, add handlers) in one place in your program, ideally from the if __name__ == '__main__' clause. You seem not to be doing this. Note that you can use WatchedFileHandler with an external rotator to rotate your log files - e.g. logrotate provides rotation and compression functionality (see the sketch after this list).
Your error relating to __exit__ has nothing to do with logging - it's probably a Python version problem. GzipFile only became usable with the with statement in Python 2.7 / 3.2; in older versions, you will get this error message if you try to use a GzipFile in a with statement.
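To illustrate the first point, a minimal sketch of one-time configuration with WatchedFileHandler; the path and format are placeholders, and an external tool such as logrotate is assumed to do the renaming and compression:

import logging
from logging.handlers import WatchedFileHandler

if __name__ == '__main__':
    # Configure logging once, at program entry, not inside loops.
    handler = WatchedFileHandler('/var/log/myapp.log')  # placeholder path
    handler.setFormatter(logging.Formatter(
        "%(asctime)s - %(name)s - %(levelname)s - %(message)s"))
    root = logging.getLogger()
    root.setLevel(logging.INFO)
    root.addHandler(handler)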
After some research I realized that the server I was executing the file on was running Python 2.6, and in 2.6 the gzip module did not support the with statement. The answer to this question is either to switch to Python 2.7 or to change the implementation to the old-fashioned approach of opening the files in a try/finally block.
try:
    inOriginalFile = open(original_filePath, 'rb')
    outGZipFile = gzip.open(gzip_filePath, 'wb')
    try:
        outGZipFile.writelines(inOriginalFile)
    finally:
        outGZipFile.close()
        inOriginalFile.close()
except IOError as e:
    logging.error("Unable to open gzip files, GZIP FAILURE")
This is how I fixed this problem.
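As an aside, the with form can be kept even on Python 2.6 by wrapping the GzipFile in contextlib.closing, which supplies the missing __exit__ (reusing the question's original_filePath and gzip_filePath):

import contextlib
import gzip

with open(original_filePath, 'rb') as original_file:
    # closing() provides the __exit__ that GzipFile lacked before Python 2.7.
    with contextlib.closing(gzip.open(gzip_filePath, 'wb')) as zipped_file:
        zipped_file.writelines(original_file)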

Finding the source of format errors when using python logging

When I have lots of different modules using the standard python logging module, the following stack trace does little to help me find out where, exactly, I had a badly formed log statement:
Traceback (most recent call last):
  File "/usr/lib/python2.6/logging/__init__.py", line 768, in emit
    msg = self.format(record)
  File "/usr/lib/python2.6/logging/__init__.py", line 648, in format
    return fmt.format(record)
  File "/usr/lib/python2.6/logging/__init__.py", line 436, in format
    record.message = record.getMessage()
  File "/usr/lib/python2.6/logging/__init__.py", line 306, in getMessage
    msg = msg % self.args
TypeError: not all arguments converted during string formatting
I'm only starting to use python's logging module, so maybe I am overlooking something obvious. I'm not sure if the stack-trace is useless because I am using greenlets, or if this is normal for the logging module, but any help would be appreciated. I'd be willing to modify the source, anything to make the logging library actually give a clue as to where the problem lies.
Rather than editing installed Python code, you can also find the errors like this:
def handleError(record):
    raise RuntimeError(record)
handler.handleError = handleError
where handler is one of the handlers that is giving the problem. Now when the format error occurs you'll see the location.
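In context, a minimal sketch; the handler and the deliberately malformed call are made up for illustration:

import logging

logger = logging.getLogger(__name__)
handler = logging.StreamHandler()
logger.addHandler(handler)

def handleError(record):
    raise RuntimeError(record)
handler.handleError = handleError

# One argument too many for the format string: instead of a traceback
# buried in the handler, this now raises RuntimeError at the call site.
logger.error('oops', 'extra-arg')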
The logging module is designed to stop bad log messages from killing the rest of the code, so the emit method catches errors and passes them to a method handleError. The easiest thing for you to do would be to temporarily edit /usr/lib/python2.6/logging/__init__.py, and find handleError. It looks something like this:
def handleError(self, record):
    """
    Handle errors which occur during an emit() call.

    This method should be called from handlers when an exception is
    encountered during an emit() call. If raiseExceptions is false,
    exceptions get silently ignored. This is what is mostly wanted
    for a logging system - most users will not care about errors in
    the logging system, they are more interested in application errors.
    You could, however, replace this with a custom handler if you wish.
    The record which was being processed is passed in to this method.
    """
    if raiseExceptions:
        ei = sys.exc_info()
        try:
            traceback.print_exception(ei[0], ei[1], ei[2],
                                      None, sys.stderr)
            sys.stderr.write('Logged from file %s, line %s\n' % (
                             record.filename, record.lineno))
        except IOError:
            pass  # see issue 5971
        finally:
            del ei
Now temporarily edit it. Inserting a simple raise at the start should ensure the error gets propagated up your code instead of being swallowed. Once you've fixed the problem, just restore the logging code to what it was.
It's not really an answer to the question, but hopefully it will help other beginners with the logging module, like me.
My problem was that I replaced all occurrences of print with logging.info,
so a valid line like print('a', a) became logging.info('a', a) (but it should be logging.info('a %s' % a) instead).
This was also hinted at in How to traceback logging errors?, but it doesn't come up readily in searches.
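For reference, the idiomatic form passes the argument to the logger and lets it do the formatting lazily:

logging.info('a %s', a)  # formatted only if the record is actually emitted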
Alternatively you can create a formatter of your own, but then you have to include it everywhere.
class DebugFormatter(logging.Formatter):
    def format(self, record):
        try:
            return super(DebugFormatter, self).format(record)
        except:
            print "Unable to format record"
            print "record.filename", record.filename
            print "record.lineno", record.lineno
            print "record.msg", record.msg
            print "record.args:", record.args
            raise

FORMAT = '%(levelname)s %(filename)s:%(lineno)d %(message)s'
formatter = DebugFormatter(FORMAT)
handler = logging.StreamHandler()
handler.setLevel(logging.DEBUG)
handler.setFormatter(formatter)
logger = logging.getLogger(__name__)
logger.setLevel(logging.DEBUG)
logger.addHandler(handler)
I had the same problem.
Such a traceback can also arise from a wrong attribute name in the format string. When creating a format for a log file, check the attribute names against the Python documentation: https://docs.python.org/3/library/logging.html#formatter-objects
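For instance, a misspelled attribute in the format string (levelnam instead of levelname, a deliberate typo here) only blows up when a record is formatted:

import logging
logging.basicConfig(format='%(levelnam)s %(message)s')  # typo: should be levelname
logging.error('boom')  # raises KeyError during formatting, reported on stderr instead of the record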
