In my application, I'm using logging.captureWarnings(True) to make sure any DeprecationWarning gets logged in the normal application log.
This works well, but results in logs like:
WARNING [py.warnings] c:\some\path...
It seems from the documentation that:
If capture is True, warnings issued by the warnings module will be
redirected to the logging system. Specifically, a warning will be
formatted using warnings.formatwarning() and the resulting string
logged to a logger named 'py.warnings' with a severity of WARNING.
So that is all to be expected. But I'd like to change the logger associated with such warnings (use the one my application provides, so that one can tell, when looking at the logs, where the DeprecationWarning comes from).
Is there a way to change the associated logger?
I just did some more investigation and found a perfect way to achieve that:
Looking at the source code for logging.captureWarnings():
def captureWarnings(capture):
    """
    If capture is true, redirect all warnings to the logging package.
    If capture is False, ensure that warnings are not redirected to logging
    but to their original destinations.
    """
    global _warnings_showwarning
    if capture:
        if _warnings_showwarning is None:
            _warnings_showwarning = warnings.showwarning
            warnings.showwarning = _showwarning
    else:
        if _warnings_showwarning is not None:
            warnings.showwarning = _warnings_showwarning
            _warnings_showwarning = None
It seems one can just change warnings.showwarning to point to another callable that will do whatever logging job you want (or anything else for that matter).
The expected prototype for warnings.showwarning seems to be:
def _show_warning(message, category, filename, lineno, file=None, line=None):
    """Hook to write a warning to a file; replace if you like."""
    if file is None:
        file = sys.stderr
    try:
        file.write(formatwarning(message, category, filename, lineno, line))
    except IOError:
        pass # the file (probably stderr) is invalid - this warning gets lost.
It seems logging.captureWarnings() actually sets the callable to logging._showwarning:
def _showwarning(message, category, filename, lineno, file=None, line=None):
    """
    Implementation of showwarnings which redirects to logging, which will first
    check to see if the file parameter is None. If a file is specified, it will
    delegate to the original warnings implementation of showwarning. Otherwise,
    it will call warnings.formatwarning and will log the resulting string to a
    warnings logger named "py.warnings" with level logging.WARNING.
    """
    if file is not None:
        if _warnings_showwarning is not None:
            _warnings_showwarning(message, category, filename, lineno, file, line)
    else:
        s = warnings.formatwarning(message, category, filename, lineno, line)
        logger = getLogger("py.warnings")
        if not logger.handlers:
            logger.addHandler(NullHandler())
        logger.warning("%s", s)
Related
I am utilizing try-except statements that utilize logging but, even though the log file is being generated, no logs are being created.
Initially my code worked, but it was incorrectly structured for production: try-except statements that tried running scripts and, upon failure, pushed log statements to a log.
I was told "Imports -> Functions -> Run functions" + "Functions should have try-except logging, not the other way around".
I've modified my code for this question to isolate the problem: In this code, we have a script that opens a json. The script opening the JSON works. The logging is the only issue.
Where am I going wrong?
After rearranging the code, the script still runs, but the logging portion does not.
import logging

LOG_FORMAT = "%(levelname)s %(asctime)s - %(message)s"
logging.basicConfig(filename='C:\\Users\\MWSF\\Documents\\log_test.log',
                    level=logging.INFO,
                    format=LOG_FORMAT)
logger = logging.getLogger()

def get_filepath():
    '''Find the file path, which is partly defined by the file name.'''
    try:
        return "C:\\Users\\MWSF\\Documents\\filename.json"
    except Exception:
        logger.error("Error occured while determining current JSON file path.")
    else:
        logger.info("Successfully determined current JSON file path.")

path = get_filepath()
Intended result: A function that opens a specified file and a log named log_test.log that has this information:
INFO 2019-04-26 14:52:02,260 - Imported current JSON file.
Actual result: A function that opens a specified file and a log named log_test.log that is empty - no entries are ever written.
Put the return on the "else" clause rather than under "try". Returning from inside the try block exits the function before the else clause runs, so the logging never happens.
def get_filepath():
    '''Find the file path, which is partly defined by the file name.'''
    try:
        #return "C:\\Users\\MWSF\\Documents\\filename.json"
        path = "C:\\Users\\MWSF\\Documents\\filename.json"
    except Exception:
        logger.error("Error occured while determining current JSON file path.")
    else:
        logger.info("Successfully determined current JSON file path.")
        return path
Sample log_test.log:
INFO 2019-04-29 12:58:53,329 - Successfully determined current JSON file path.
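The control flow can be seen in a tiny standalone sketch (toy functions, no real file handling; the list stands in for the log):

```python
def with_return_in_try(log):
    try:
        return "value"          # returns immediately; else below never runs
    except Exception:
        log.append("error")
    else:
        log.append("success")   # unreachable: the try block already returned

def with_return_in_else(log):
    try:
        value = "value"
    except Exception:
        log.append("error")
    else:
        log.append("success")   # runs because the try block completed
        return value

log1, log2 = [], []
with_return_in_try(log1)
with_return_in_else(log2)
print(log1)  # []
print(log2)  # ['success']
```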
Can someone explain the marked line of this method from /usr/lib/python2.7/logging/__init__.py?
def _showwarning(message, category, filename, lineno, file=None, line=None):
    """
    Implementation of showwarnings which redirects to logging, which will first
    check to see if the file parameter is None. If a file is specified, it will
    delegate to the original warnings implementation of showwarning. Otherwise,
    it will call warnings.formatwarning and will log the resulting string to a
    warnings logger named "py.warnings" with level logging.WARNING.
    """
    if file is not None:
        if _warnings_showwarning is not None:
            _warnings_showwarning(message, category, filename, lineno, file, line)
    else:
        s = warnings.formatwarning(message, category, filename, lineno, line)
        logger = getLogger("py.warnings")
        if not logger.handlers:
            logger.addHandler(NullHandler())
        logger.warning("%s", s) # <------ I don't understand this line
Why is the last line not this:
logger.warning(s)
Because that is the logging module's lazy-formatting convention: the format string and its arguments are passed separately, and the substitution only happens when the record is actually formatted. It also means an arbitrary string s is never itself treated as a format string.
See https://docs.python.org/2/library/logging.html
Logger.warning(msg, *args, **kwargs)
Logs a message with level WARNING on this logger. The arguments are interpreted as for debug().
The same convention is well known from the *printf() family of functions, for example.
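A quick way to see the convention at work is a hand-built LogRecord (built just for illustration, the way logger.warning("%s", s) would build one internally):

```python
import logging

s = "100% done"  # an arbitrary string that happens to contain a '%'

# The format string and its arguments are stored separately on the record;
# the % substitution only happens when getMessage() is called, i.e. when a
# handler actually formats the record.
rec = logging.LogRecord(name="demo", level=logging.WARNING,
                        pathname="demo.py", lineno=1,
                        msg="%s", args=(s,), exc_info=None)
print(rec.getMessage())  # 100% done
```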
I have the following method to handle logging in my Python program:
def createLogger(logger, logLang):
    """
    Setting up logger
    """
    log_format = logging.Formatter("%(asctime)s - %(name)s - %(levelname)s - %(message)s")
    file_handler = logging.FileHandler(filename=(os.path.join(OUT_DIR_LOGS, logLang + '-userdynamics.log')))
    file_handler.setFormatter(log_format)
    logger.setLevel(logging.INFO)
    logger.addHandler(file_handler)
This is a large data-collection code base, and to avoid a quota constraint on the remote server I have implemented the following gzip-and-tar procedure:
def gzipLogs(lang):
    """
    Compressing and tar
    """
    # Compressing logfiles and removing the old logfile
    original_filePath = OUT_DIR_LOGS + "/" + lang + "-userdynamics.log"
    gzip_filePath = OUT_DIR_LOGS + "/" + lang + "-userdynamics.gz"
    with open(original_filePath, 'rb') as original_file:
        with gzip.open(gzip_filePath, 'wb') as zipped_file:
            zipped_file.writelines(original_file)
    os.remove(original_filePath)
    # Compressing language folders that has data
    folder_path = OUT_DIR + "/" + lang
    tar_file = tarfile.open(folder_path + ".tgz", "w:gz")
    # add timestamp to arch file
    tar_file.add(folder_path, arcname=NOW + "_" + lang)
    tar_file.close()
    # delete the original file
    shutil.rmtree(folder_path)
I do my data gathering in a nested for loop and call the logger as shown below:
for something in somethings:
    for item in items:
        log = logging.getLogger()
        # Calling the logging configuration function.
        createLogger(log, lang)
Everything works fine, but when it executes, .nfs residual files get left behind after the log file is deleted, so the quota problem remains as it was.
So I added the following code segment to close the logging file handlers, but with this I now end up getting the following error.
Code to close the log file:
unclosed_logs = list(log.handlers)
for uFile in unclosed_logs:
    print uFile
    log.removeHandler(uFile)
    uFile.flush()
    uFile.close()
The above code ends up giving me this error:
Traceback (most recent call last):
  File "/somefilepath/SomePythonFile.py", line 529, in <module>
    main()
  File "/somefilepath/SomePythonFile.py", line 521, in main
    gzipLogs(lang)
  File "/somefilepath/SomePythonFile.py", line 61, in gzipLogs
    with gzip.open(gzip_filePath, 'wb') as zipped_file:
AttributeError: GzipFile instance has no attribute '__exit__'
This is how the main method looks with the handler-closing code segment:
for something in somethings:
    for item in items:
        log = logging.getLogger()
        # Calling the logging configuration function.
        createLogger(log, lang)
        unclosed_logs = list(log.handlers)
        for uFile in unclosed_logs:
            print uFile
            log.removeHandler(uFile)
            uFile.flush()
            uFile.close()
What am I doing wrong? Am I handling the logger wrong? or am I closing the file too soon?
There are a number of things which could cause problems:
You should only configure logging (e.g. set levels, add handlers) in one place in your program, ideally from the if __name__ == '__main__' clause. You seem not to be doing this. Note that you can use WatchedFileHandler and use an external rotator to rotate your log files - e.g. logrotate provides rotation and compression functionality.
Your error relating to __exit__ is nothing to do with logging - it's probably related to a Python version problem. GzipFile only became usable with with in Python 2.7 / 3.2 - in older versions, you will get that error message if you try to use a GzipFile in a with statement.
After some research I realized that the server I was executing the file on was running Python 2.6, and in 2.6 the gzip module did not support the with statement. The answer to this question is either to switch to Python 2.7 or to change the implementation to old-fashioned file opening in a try/finally block.
try:
    inOriginalFile = open(original_filePath, 'rb')
    outGZipFile = gzip.open(gzip_filePath, 'wb')
    try:
        outGZipFile.writelines(inOriginalFile)
    finally:
        outGZipFile.close()
        inOriginalFile.close()
except IOError as e:
    logging.error("Unable to open gzip files, GZIP FAILURE")
This is how I fixed this problem.
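An alternative sketch that keeps the with statement even on Python 2.6: contextlib.closing() wraps any object that has a close() method so it can be used as a context manager (the file names here are placeholders):

```python
import contextlib
import gzip

original_filePath = "example.log"   # placeholder paths for illustration
gzip_filePath = "example.gz"

# Create a small log file to compress.
with open(original_filePath, 'wb') as f:
    f.write(b"some log data\n")

# closing() supplies the __enter__/__exit__ that GzipFile lacked in 2.6,
# guaranteeing close() runs even if writelines() raises.
with open(original_filePath, 'rb') as original_file:
    with contextlib.closing(gzip.open(gzip_filePath, 'wb')) as zipped_file:
        zipped_file.writelines(original_file)
```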
When I have lots of different modules using the standard python logging module, the following stack trace does little to help me find out where, exactly, I had a badly formed log statement:
Traceback (most recent call last):
  File "/usr/lib/python2.6/logging/__init__.py", line 768, in emit
    msg = self.format(record)
  File "/usr/lib/python2.6/logging/__init__.py", line 648, in format
    return fmt.format(record)
  File "/usr/lib/python2.6/logging/__init__.py", line 436, in format
    record.message = record.getMessage()
  File "/usr/lib/python2.6/logging/__init__.py", line 306, in getMessage
    msg = msg % self.args
TypeError: not all arguments converted during string formatting
I'm only starting to use python's logging module, so maybe I am overlooking something obvious. I'm not sure if the stack-trace is useless because I am using greenlets, or if this is normal for the logging module, but any help would be appreciated. I'd be willing to modify the source, anything to make the logging library actually give a clue as to where the problem lies.
Rather than editing installed python code, you can also find the errors like this:
def handleError(record):
    raise RuntimeError(record)

handler.handleError = handleError
where handler is one of the handlers that is giving the problem. Now when the format error occurs you'll see the location.
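A self-contained sketch of that trick (the logger name and the deliberately broken %d call are just for demonstration):

```python
import logging

def handleError(record):
    raise RuntimeError(record)

handler = logging.StreamHandler()
handler.handleError = handleError  # instance attribute shadows the method

logger = logging.getLogger("handleerror-demo")
logger.addHandler(handler)

try:
    # One %d placeholder, two arguments: formatting fails inside emit(),
    # which then calls handler.handleError(record) -- our version raises.
    logger.warning("%d", 1, 2)
except RuntimeError as e:
    bad_record = e.args[0]
    print("bad log call at %s:%s" % (bad_record.filename, bad_record.lineno))
```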
The logging module is designed to stop bad log messages from killing the rest of the code, so the emit method catches errors and passes them to a method handleError. The easiest thing for you to do would be to temporarily edit /usr/lib/python2.6/logging/__init__.py, and find handleError. It looks something like this:
def handleError(self, record):
    """
    Handle errors which occur during an emit() call.

    This method should be called from handlers when an exception is
    encountered during an emit() call. If raiseExceptions is false,
    exceptions get silently ignored. This is what is mostly wanted
    for a logging system - most users will not care about errors in
    the logging system, they are more interested in application errors.
    You could, however, replace this with a custom handler if you wish.
    The record which was being processed is passed in to this method.
    """
    if raiseExceptions:
        ei = sys.exc_info()
        try:
            traceback.print_exception(ei[0], ei[1], ei[2],
                                      None, sys.stderr)
            sys.stderr.write('Logged from file %s, line %s\n' % (
                             record.filename, record.lineno))
        except IOError:
            pass  # see issue 5971
        finally:
            del ei
Now temporarily edit it. Inserting a simple raise at the start should ensure the error gets propagated up your code instead of being swallowed. Once you've fixed the problem, just restore the logging code to what it was.
It's not really an answer to the question, but hopefully it will help other beginners with the logging module, like me.
My problem was that I replaced all occurrences of print with logging.info, so a valid line like print('a', a) became logging.info('a', a), but it should have been logging.info('a %s' % a) instead.
This was also hinted at in How to traceback logging errors?, but it doesn't come up easily when searching.
Alternatively you can create a formatter of your own, but then you have to include it everywhere.
class DebugFormatter(logging.Formatter):
    def format(self, record):
        try:
            return super(DebugFormatter, self).format(record)
        except:
            print "Unable to format record"
            print "record.filename", record.filename
            print "record.lineno", record.lineno
            print "record.msg", record.msg
            print "record.args:", record.args
            raise
FORMAT = '%(levelname)s %(filename)s:%(lineno)d %(message)s'
formatter = DebugFormatter(FORMAT)
handler = logging.StreamHandler()
handler.setLevel(logging.DEBUG)
handler.setFormatter(formatter)
logger = logging.getLogger(__name__)
logger.setLevel(logging.DEBUG)
logger.addHandler(handler)
I had the same problem. Such a traceback arises from a wrong attribute name in the format string, so when creating a format for a log file, check the attribute names against the Python documentation: https://docs.python.org/3/library/logging.html#formatter-objects
I'm issuing lots of warnings in a validator, and I'd like to suppress everything in stdout except the message that is supplied to warnings.warn().
I.e., now I see this:
./file.py:123: UserWarning: My looong warning message
  some Python code
I'd like to see this:
My looong warning message
Edit 2: Overriding warnings.showwarning() turned out to work:
def _warning(
    message,
    category=UserWarning,
    filename='',
    lineno=-1):
    print(message)
...
warnings.showwarning = _warning
warnings.warn('foo')
There is always monkeypatching:
import warnings

def custom_formatwarning(msg, *args, **kwargs):
    # ignore everything except the message
    return str(msg) + '\n'

warnings.formatwarning = custom_formatwarning
warnings.warn("achtung")
Monkeypatch warnings.showwarning() with your own custom function.
Use the logging module instead of warnings.
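For that last suggestion, a sketch of how logging alone produces bare messages (the logger name "validator" and the message-only format are assumptions):

```python
import io
import logging

stream = io.StringIO()
handler = logging.StreamHandler(stream)
handler.setFormatter(logging.Formatter("%(message)s"))  # message only

logger = logging.getLogger("validator")
logger.addHandler(handler)
logger.warning("My looong warning message")

# stream.getvalue() is now "My looong warning message\n"
```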
Here's what I'm doing to omit just the source code line. This is by and large as suggested by the documentation, but it was a bit of a struggle to figure out what exactly to change. (In particular, I tried in various ways to keep the source line out of showwarning but couldn't get it to work the way I wanted.)
# Force warnings.warn() to omit the source code line in the message
formatwarning_orig = warnings.formatwarning
warnings.formatwarning = lambda message, category, filename, lineno, line=None: \
    formatwarning_orig(message, category, filename, lineno, line='')
Just passing line=None would cause Python to use filename and lineno to figure out a value for line automagically, but passing an empty string instead fixes that.