Logging with dictConfig and writing to the console and a file - Python

In my code I have the following for a verbose mode and a non-verbose mode; the configuration is read from a dict (LOGGING_DICT below).
I expect that in verbose mode I will get "DEBUG MODE: test debug" and "DEBUG MODE: test error" written to the console and "[uuid] [date] [etc] test error" only written to a file, and that in non-verbose mode nothing gets printed to the console but "test error" is still written to the file.
First, here is my dictConfig:
LOGGING_DICT = {
    'version': 1,
    'disable_existing_loggers': False,
    'formatters': {
        'simple': {
            # we have a uuid for the log so we know what process it came from
            'format': '[{0}][%(asctime)s][%(name)s][%(levelname)s] : %(message)s'.format(logger_id),
            'datefmt': '%Y-%m-%d %H:%M:%S',
        }
    },
    'loggers': {
        'root': {
            'handlers': ['console'],
            'level': 'DEBUG',
        },
        'script_A': {
            'handlers': ['timed_rotate_file'],
            'level': 'INFO',
        },
    },
    'handlers': {
        'timed_rotate_file': {
            'filename': 'logs/weekly_tool.log',
            'level': 'INFO',
            'formatter': 'simple',
            'class': 'logging.handlers.TimedRotatingFileHandler',
            'encoding': 'utf8',
            # 'when' configures when rollover happens: 'S', 'M', 'H', 'D', 'midnight', 'W0'-'W6' (Monday-Sunday)
            'when': 'midnight',  # daily rollover
            # 'backupCount' sets how many rolled-over files are kept (7 daily files = one week)
            'backupCount': 7,
        }
    },
}
And now the script that calls it
from logging.config import dictConfig
from helpers.logging_config import LOGGING_DICT
...

def main():
    logger.debug("test debug")
    logger.error("test error")

if __name__ == "__main__":
    if args.verbose:
        dictConfig(LOGGING_DICT)
        logger = logging.getLogger("script_A")
        stream_handler = logging.StreamHandler()
        formatter = logging.Formatter("DEBUG MODE: %(message)s")
        stream_handler.setFormatter(formatter)
        stream_handler.setLevel(logging.DEBUG)
        logger.addHandler(stream_handler)
    else:
        dictConfig(LOGGING_DICT)
        logger = logging.getLogger("script_A")
What I get instead is the following:
~$ python script_A.py
(no output, as expected)
~$ python script_A.py -v
DEBUG MODE: test error
Why is "test debug" not printing to the console? Clearly the stream handler is being called, but its level is either not being set correctly or is being ignored.
When I print logger.level in the middle of the script I get 20 (INFO), which is what I expect given the dictConfig, but the handler's level is set separately, so does that mean it is being ignored? (What is the point of setLevel in a python logging handler?) <-- I'm looking at this as well, but my issue is flipped: in my dictConfig the settings are stricter than what I actually want printed, which means that if I reset the level of the logger I get from dictConfig, things I don't want printed to my file will end up there. Can I circumvent this?

I figured this out on my own. Similar to the question I linked above, I have to reset the log level.
if __name__ == "__main__":
    if args.verbose:
        dictConfig(LOGGING_DICT)
        logger = logging.getLogger("script_A")
        stream_handler = logging.StreamHandler()
        formatter = logging.Formatter("DEBUG MODE: %(message)s")
        stream_handler.setFormatter(formatter)
        stream_handler.setLevel(logging.DEBUG)
        logger.addHandler(stream_handler)
        logger.setLevel(logging.DEBUG)
    else:
        dictConfig(LOGGING_DICT)
        logger = logging.getLogger("script_A")
I thought that doing this would mean the file handler's level also gets changed, but for some reason that doesn't happen. If anyone knows why, I would love to know how the internals work this out.
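For anyone wondering about the internals: Logger.setLevel only moves the first gate. Each handler keeps the level it was configured with and filters on its own after the logger has let a record through, which is why loosening the logger does not loosen the file handler. A small self-contained sketch (with an illustrative in-memory handler standing in for the real file handler):

```python
import logging

captured = []  # stand-in for the log file

class ListHandler(logging.Handler):
    # collects messages so we can see what this handler's level lets through
    def emit(self, record):
        captured.append(record.getMessage())

logger = logging.getLogger("demo_script_A")
strict = ListHandler(level=logging.ERROR)  # plays the role of timed_rotate_file
logger.addHandler(strict)
logger.setLevel(logging.INFO)

# Loosening the logger does not loosen the handler:
logger.setLevel(logging.DEBUG)
logger.debug("test debug")  # passes the logger gate, rejected by the handler
logger.error("test error")  # passes both gates

print(captured)  # ['test error']
```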

Related

Python module generates logs if run directly but not when run by `pytest`

I'm new-ish to python logging and would like to create a solid logging setup for a project that uses watchdog to log the creation of DICOM files in a given directory. My project structure is:
planqa\
    src\
        watcher.py
        qa_logger.py
        logs\
            qa_watch.log
    tests\
        test_watchdog.py
So far I've got a LOGGING_DICT in qa_logger.py. LOGGING_DICT is imported into watcher.py (and eventually other modules), and then near the top of watcher.py, I've put the following lines:
logging.config.dictConfig(LOGGING_DICT)
logger = logging.getLogger()
When I run python watcher.py, all logs are generated nicely. Logs are both printed to the console and written to the file src\logs\qa_watch.log.
The problem arises when I run pytest: no logs are generated at all. Yet, strangely, the file src\logs\qa_watch.log is created if it doesn't exist; it's just never written to (unlike when I run python watcher.py)!
I'd like to use pytest to verify that a log was generated in src\logs\qa_watch.log when a valid DICOM file is created in the watched folder.
Any help would be greatly appreciated, including any comments on how to better structure things! I'm still (and always will be) learning!
EDIT: Log level isn't the issue. I have the same problem if I use logger.warning() instead of logger.info() for the logs in watcher.py.
Files:
# qa_logger.py
import logging
from os.path import dirname, join as pjoin

LOG_DIR = pjoin(dirname(dirname(__file__)), "logs")

LOGGING_DICT = {
    'version': 1,
    'disable_existing_loggers': False,
    'formatters': {
        'f': {
            'format': '%(asctime)s %(name)-12s %(levelname)-8s: %(message)s'
        }
    },
    'handlers': {
        'console': {
            'class': 'logging.StreamHandler',
            'formatter': 'f',
            'level': logging.DEBUG
        },
        'file': {
            'class': 'logging.handlers.TimedRotatingFileHandler',
            'level': logging.DEBUG,
            'formatter': 'f',
            'filename': pjoin(LOG_DIR, "qa_watch.log"),
            'when': 'midnight',
            'interval': 1,
            'backupCount': 30
        }
    },
    'root': {
        'handlers': ['console', 'file'],
        'level': logging.DEBUG,
    },
}
# watcher.py
import logging
import logging.config
from time import sleep
from os.path import abspath, dirname, join as pjoin

from watchdog.observers import Observer
from watchdog.events import PatternMatchingEventHandler

from src.qa_logger import LOGGING_DICT

logging.config.dictConfig(LOGGING_DICT)
logger = logging.getLogger()

class NewFileHandler(PatternMatchingEventHandler):
    patterns = ['*.dcm']
    ignore_patterns = None
    case_sensitive = False
    ignore_directories = True

    def on_created(self, event):
        msg = "Caught creation of {}".format(event.src_path)
        logger.info(msg)

def watch_for_dicoms_created_in_folder(folder=None):
    if folder is None:
        folder = abspath(pjoin(dirname(dirname(__file__)), "tests", "data"))
    dcm_handler = NewFileHandler()
    observer = Observer()
    observer.schedule(dcm_handler, folder, recursive=True)
    observer.start()
    logger.info("----- Started watcher -----")
    try:
        while True:
            sleep(1)
    except KeyboardInterrupt:
        observer.stop()
    observer.join()

if __name__ == "__main__":
    watch_for_dicoms_created_in_folder()
# test_watchdog.py
import os
from os.path import dirname, join as pjoin, isfile
from multiprocessing import Process
from time import sleep

from src.watcher import watch_for_dicoms_created_in_folder
from src.qa_logger import LOG_DIR

DATA_DIR = pjoin(dirname(__file__), "data")
LOG_PATH = pjoin(LOG_DIR, "qa_watch.log")

def test_watching_for_file_creation():
    dcm_test_filepath = pjoin(DATA_DIR, "a_dcm_to_catch.dcm")
    nondcm_test_filepath = pjoin(DATA_DIR, "not_a_dcm_to_catch.txt")
    # Watcher is run in a separate process or subsequent code will not execute
    watch1 = Process(name='watch1', target=watch_for_dicoms_created_in_folder)
    watch1.start()
    dcm_test_file = open(dcm_test_filepath, 'w')
    nondcm_test_file = open(nondcm_test_filepath, 'w')
    dcm_test_file.close()
    nondcm_test_file.close()
    sleep(0.2)  # Give watcher time to see files and create logfile
    watch1.terminate()
    assert isfile(LOG_PATH)
    with open(LOG_PATH) as logfile:
        contents = logfile.read()  # read contents; `in logfile` would test lines, not substrings
    assert dcm_test_filepath in contents
    assert nondcm_test_filepath not in contents
    # Cleanup
    try:
        os.remove(dcm_test_filepath)
        os.remove(nondcm_test_filepath)
    except OSError:
        pass
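As an aside: if the goal is only to assert that a log record was produced, pytest's built-in `caplog` fixture can check records without reading the log file at all. A minimal sketch with a hypothetical stand-in function (note that `caplog` only sees records emitted in the test process, so it will not capture anything from a watcher running in a separate `Process`):

```python
# hypothetical sketch using pytest's built-in caplog fixture
import logging

def fake_on_created(src_path):
    # stand-in for NewFileHandler.on_created
    logging.getLogger().info("Caught creation of {}".format(src_path))

def test_creation_is_logged(caplog):
    with caplog.at_level(logging.INFO):
        fake_on_created("a_dcm_to_catch.dcm")
    assert "a_dcm_to_catch.dcm" in caplog.text
```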

PermissionError when using python 3.3.4 and RotatingFileHandler

I am trying to get a rotating log file for a GUI application I am writing with python 3.3.4 and PyQt4.
I have the following snippet of code in my main script:
import logging
from logging.handlers import RotatingFileHandler

import resources

logger = logging.getLogger('main.test')

def main():
    logger.setLevel(logging.DEBUG)
    fh = RotatingFileHandler(resources.LOG_FILE_PATH, maxBytes=500, backupCount=5)
    formatter = logging.Formatter('%(asctime)s - %(name)s - %(levelname)s - %(message)s')
    fh.setFormatter(formatter)
    logger.addHandler(fh)
    logger.info('main')
I set maxBytes low so that I can test that rotation works correctly, which it does not. I get the following error whenever the log should rotate:
Traceback (most recent call last):
File "C:\Python33\lib\logging\handlers.py", line 73, in emit
self.doRollover()
File "C:\Python33\lib\logging\handlers.py", line 176, in doRollover
self.rotate(self.baseFilename, dfn)
File "C:\Python33\lib\logging\handlers.py", line 116, in rotate
os.rename(source, dest)
PermissionError: [WinError 32] The process cannot access the file because it is being used by another process: 'C:\\Users\\myuser\\.logtest\\test.log.1'
And nothing is logged. Any help is much appreciated.
Thank you
I spent half a day on this, as no previous answer resolved my issue.
My working solution is to use https://pypi.org/project/concurrent-log-handler/ instead of RotatingFileHandler. In multi-process scenarios such as a Flask app, a PermissionError is raised when the log file that has reached its maximum size is rotated.
Install pypiwin32 to get rid of the "No module named win32con" error.
Thanks go to https://www.programmersought.com/article/43941158027/
Instead of adding the handler to the logger object, you can specify the handler directly in basicConfig(). If you add a RotatingFileHandler to the logger object, one object might have the log file open while another simultaneously tries to rename it, which throws the PermissionError.
The code below seems to work pretty well.
import logging
from logging.handlers import RotatingFileHandler

import resources

logging.basicConfig(
    format='%(asctime)s - %(name)s - %(levelname)s - %(message)s',
    handlers=[RotatingFileHandler(filename=resources.LOG_FILE_PATH, maxBytes=500, backupCount=5)]
)
logger = logging.getLogger('main.test')

def main():
    logger.setLevel(logging.DEBUG)
    logger.info('main')
In my case it happened only on Windows. To solve it, I changed the delay parameter to True for my TimedRotatingFileHandler log handler.
Docs -> https://docs.python.org/3/library/logging.handlers.html#logging.handlers.TimedRotatingFileHandler
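For context, delay=True is accepted by the FileHandler family and simply postpones opening (and therefore locking) the file until the first record is emitted. A minimal sketch, using an illustrative temp-file path:

```python
import logging
import os
import tempfile
from logging.handlers import TimedRotatingFileHandler

log_path = os.path.join(tempfile.gettempdir(), "delay_demo.log")  # illustrative path

# With delay=True the handler does not open the file at construction time.
handler = TimedRotatingFileHandler(log_path, when="midnight", delay=True)
print(handler.stream)  # None: no file handle (and no Windows lock) yet

logger = logging.getLogger("delay_demo")
logger.addHandler(handler)
logger.warning("first record")  # the file is only opened here
handler.close()
```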
You cannot specify the same filename in both basicConfig() and RotatingFileHandler(). I had this same issue and I removed the filename parameter from basicConfig() and it now works.
In my case (Windows Server 2016 + IIS + FastCGI + Flask) I finally fixed it by turning off file indexing for the folder (see this how-to).
Source: https://stackoverflow.com/a/22467917/9199668
By the way, it had been working correctly for months... I have no idea why it stopped.
Check that the file isn't being kept open by e.g. Windows file indexing, anti-virus or other software. Files that are open can't be renamed.
I changed the application to use dictConfig and created a separate file that holds the dictionary configuration. At the top of my main application I have:
from log.logger import LOGGING
logging.config.dictConfig(LOGGING)
logger = logging.getLogger('testlogging')
Then in log.logger I have:
import resources

LOGGING = {
    "version": 1,
    "handlers": {
        "fileHandler": {
            "class": "logging.handlers.RotatingFileHandler",
            "formatter": "myFormatter",
            "filename": resources.LOG_FILE_PATH,
            "maxBytes": 100000,
            "backupCount": 5
        },
        "console": {
            "class": "logging.StreamHandler",
            "formatter": "myFormatter"
        }
    },
    "loggers": {
        # must match the name passed to getLogger() in the main application
        "testlogging": {
            "handlers": ["fileHandler", "console"],
            "level": "DEBUG",
        }
    },
    "formatters": {
        "myFormatter": {
            "format": "%(asctime)s - %(name)s - %(levelname)s - %(message)s"
        }
    }
}
This all seems to work pretty well.
In my case the log file had filled up; after removing the server.log file it worked. My config:
LOGS_DIR = os.path.join(BASE_DIR, 'logs')

LOGGING = {
    'version': 1,
    'handlers': {
        'log_file': {
            'level': 'INFO',
            'class': 'logging.handlers.RotatingFileHandler',
            'filename': os.path.join(LOGS_DIR, 'server.log'),
            'backupCount': 10,  # keep at most 10 log files
            'maxBytes': 5 * 1024 * 1024,  # 5242880 bytes (5MB)
        },
    },
    'loggers': {
        'django': {
            'handlers': ['log_file'],
            'propagate': True,
            'level': 'INFO',
        },
    },
}

What is the point of setLevel in a python logging handler?

Let's say I have the following code:
import logging
import logging.handlers

a = logging.getLogger('myapp')
h = logging.handlers.RotatingFileHandler('foo.log')
h.setLevel(logging.DEBUG)
a.addHandler(h)

# The effective log level is still logging.WARNING
print(a.getEffectiveLevel())
a.debug('foo message')
a.warning('warning message')
I expect that setting logging.DEBUG on the handler would cause debug-level messages to be written to the log file. However, this prints 30 for the effective level (equal to logging.WARNING, the default), and only logs the warn message to the log file, not the debug message.
It appears that the handler's log level is being dropped on the floor, e.g. it's silently ignored. Which makes me wonder, why have setLevel on the handler at all?
It allows finer control. By default the logger has the WARNING level set (inherited from the root logger), which means that it won't process messages with a lower level, no matter how the handlers' levels are set. But if you set the logger's level to DEBUG, the messages do get sent to the log file:
import logging
import logging.handlers

a = logging.getLogger('myapp')
a.setLevel(logging.DEBUG)  # set the logger's level
h = logging.handlers.RotatingFileHandler('foo.log')
h.setLevel(logging.DEBUG)
a.addHandler(h)

print(a.getEffectiveLevel())
a.debug('foo message')
a.warning('warning message')
Now, imagine that you want to add a new handler that doesn't record debug information.
You can do this by simply setting the handler's logging level:
import logging
import logging.handlers

a = logging.getLogger('myapp')
a.setLevel(logging.DEBUG)  # set the logger's level
h = logging.handlers.RotatingFileHandler('foo.log')
h.setLevel(logging.DEBUG)
a.addHandler(h)
h2 = logging.handlers.RotatingFileHandler('foo2.log')
h2.setLevel(logging.WARNING)
a.addHandler(h2)

print(a.getEffectiveLevel())
a.debug('foo message')
a.warning('warning message')
Now the log file foo.log will contain both messages, while foo2.log will only contain the warning message. If you want a log file with only error-level messages, simply add another Handler and set its level to logging.ERROR, all using the same Logger.
You may think of the Logger logging level as a global restriction on which messages are "interesting" for a given logger and its handlers. The messages that are considered by the logger afterwards get sent to the handlers, which perform their own filtering and logging process.
In Python logging there are two different concepts: the level that the logger logs at and the level that the handler actually activates.
When a call to log is made, what is basically happening is:
if self.level <= loglevel:
    for handler in self.handlers:
        handler(loglevel, message)
While each of those handlers will then call:
if self.level <= loglevel:
    # do something spiffy with the log!
If you'd like a real-world demonstration of this, you can look at Django's config settings. I'll include the relevant code here.
LOGGING = {
    # snip
    'handlers': {
        'null': {
            'level': 'DEBUG',
            'class': 'logging.NullHandler',
        },
        'console': {
            'level': 'DEBUG',
            'class': 'logging.StreamHandler',
            'formatter': 'simple'
        },
        'mail_admins': {
            'level': 'ERROR',
            'class': 'django.utils.log.AdminEmailHandler',
            'filters': ['special']
        }
    },
    'loggers': {
        # snip
        'myproject.custom': {
            # notice how there are two handlers here!
            'handlers': ['console', 'mail_admins'],
            'level': 'INFO',
            'filters': ['special']
        }
    }
}
So, with the configuration above, only calls to getLogger('myproject.custom') at INFO level and above get processed. When that happens, the console handler outputs all of those records (its level is DEBUG), while the mail_admins handler fires only for ERROR, FATAL and CRITICAL records.
I suppose some code which isn't Django might help too:
import logging
import logging.handlers as hand

# to make things easier, we'll name all of the logs by the levels
fatal = logging.getLogger('fatal')
warning = logging.getLogger('warning')
info = logging.getLogger('info')

fatal.setLevel(logging.FATAL)
warning.setLevel(logging.WARNING)
info.setLevel(logging.INFO)

fileHandler = hand.RotatingFileHandler('rotating.log')

# notice all three are re-using the same handler.
fatal.addHandler(fileHandler)
warning.addHandler(fileHandler)
info.addHandler(fileHandler)

# the handler should log everything except logging.NOTSET
fileHandler.setLevel(logging.DEBUG)

for logger in [fatal, warning, info]:
    for level in ['debug', 'info', 'warning', 'error', 'fatal']:
        method = getattr(logger, level)
        method("Debug " + logger.name + " = " + level)

# now, the handler will only do anything for *fatal* messages...
fileHandler.setLevel(logging.FATAL)

for logger in [fatal, warning, info]:
    for level in ['debug', 'info', 'warning', 'error', 'fatal']:
        method = getattr(logger, level)
        method("Fatal " + logger.name + " = " + level)
That results in:
Debug fatal = fatal
Debug warning = warning
Debug warning = error
Debug warning = fatal
Debug info = info
Debug info = warning
Debug info = error
Debug info = fatal
Fatal fatal = fatal
Fatal warning = fatal
Fatal info = fatal
Again, notice how info logged something at info, warning, error, and fatal when the log handler was set to DEBUG, but when the handler was set to FATAL all of a sudden only FATAL messages made it to the file.
Handlers represent different audiences for logging events. Levels on handlers are used to control the verbosity of output seen by a particular audience, and act in addition to any levels set on loggers. Levels on loggers are used to control the overall verbosity of logging from different parts of an application or library.
See this diagram for more information about how logging events are handled:
the rule
if and only if
handler.level <= message.level
&&
logger.level <= message.level
then the message prints.
Reminder: lower values are more verbose
Level | Numeric value
---------|--------------
CRITICAL | 50
ERROR | 40
WARNING | 30
INFO | 20
DEBUG | 10
NOTSET | 0
ref: https://docs.python.org/3/library/logging.html#logging-levels
in other words
if the logger is set to WARNING, it won't matter if the handler has a more verbose setting. it'll already be filtered by the time it gets to the handler.
a full example
import logging

handler_info = logging.StreamHandler()
handler_info.setLevel("INFO")
handler_info.setFormatter(logging.Formatter(
    "%(levelname)s message for %(name)s handled by handler_info: %(message)s"))

handler_debug = logging.StreamHandler()
handler_debug.setLevel("DEBUG")
handler_debug.setFormatter(logging.Formatter(
    "%(levelname)s message for %(name)s handled by handler_debug: %(message)s"))

logger_info = logging.getLogger('logger_info')
logger_info.setLevel("INFO")
logger_info.addHandler(handler_info)
logger_info.addHandler(handler_debug)

logger_debug = logging.getLogger('logger_debug')
logger_debug.setLevel("DEBUG")
logger_debug.addHandler(handler_info)
logger_debug.addHandler(handler_debug)

print()
print("output for `logger_info.info('hello')`")
logger_info.info("hello")
print()
print("output for `logger_info.debug('bonjour')`")
logger_info.debug("bonjour")
print()
print("output for `logger_debug.info('hola')`")
logger_debug.info("hola")
print()
print("output for `logger_debug.debug('ciao')`")
logger_debug.debug("ciao")
print()
which gives
output for `logger_info.info('hello')`
INFO message for logger_info handled by handler_info: hello
INFO message for logger_info handled by handler_debug: hello
output for `logger_info.debug('bonjour')`
# nothing, because message.level < logger.level
output for `logger_debug.info('hola')`
INFO message for logger_debug handled by handler_info: hola
INFO message for logger_debug handled by handler_debug: hola
output for `logger_debug.debug('ciao')`
DEBUG message for logger_debug handled by handler_debug: ciao
# nothing from handler_info, because message.level < handler.level

Logging formatters in django

From the Django documentation, here is an example format for logging:
'formatters': {
    'verbose': {
        'format': '%(levelname)s %(asctime)s %(module)s: %(message)s'
    }
}
This prints something like:
ERROR 2012-05-22 14:33:07,261 views 42892 4398727168 hello
Is there a list of items you can include in the string formatting? For example, I'd like to be able to see the function and app where the message is being created, for example:
ERROR time myproject.myapp.views.login_function message
From Python logging module documentation:
asctime: %(asctime)s
Human-readable time when the LogRecord was created. By default this is of the form ‘2003-07-08 16:49:45,896’ (the numbers after the comma are millisecond portion of the time).
created: %(created)f
Time when the LogRecord was created (as returned by time.time()).
filename: %(filename)s
Filename portion of pathname.
funcName: %(funcName)s
Name of function containing the logging call.
levelname: %(levelname)s
Text logging level for the message ('DEBUG', 'INFO', 'WARNING', 'ERROR', 'CRITICAL').
levelno: %(levelno)s
Numeric logging level for the message (DEBUG, INFO, WARNING, ERROR, CRITICAL).
lineno: %(lineno)d
Source line number where the logging call was issued (if available).
module: %(module)s
Module (name portion of filename).
msecs: %(msecs)d
Millisecond portion of the time when the LogRecord was created.
message: %(message)s
The logged message, computed as msg % args. This is set when Formatter.format() is invoked.
name: %(name)s
Name of the logger used to log the call.
pathname: %(pathname)s
Full pathname of the source file where the logging call was issued (if available).
process: %(process)d
Process ID (if available).
processName: %(processName)s
Process name (if available).
relativeCreated: %(relativeCreated)d
Time in milliseconds when the LogRecord was created, relative to the time the logging module was loaded.
thread: %(thread)d
Thread ID (if available).
threadName: %(threadName)s
Thread name (if available).
The following arguments are also available to Formatter.format(), although they are not intended to be included in the format string:
args:
The tuple of arguments merged into msg to produce message.
exc_info:
Exception tuple (à la sys.exc_info) or, if no exception has occurred, None.
msg:
The format string passed in the original logging call. Merged with args to produce message, or an arbitrary object (see Using arbitrary objects as messages).
Step 1. Edit your settings.py file:
$ cd mysite
$ vim mysite/settings.py
'formatters': {
    'simple': {
        'format': '%(levelname)s %(asctime)s %(name)s.%(funcName)s:%(lineno)s- %(message)s'
    },
},
Step 2. Use the logger in your code like this:
import logging

logger = logging.getLogger(__name__)

def fn1():
    logger.info('great!')
    logger.info(__name__)
Hope that helps you!

How to change the 'tag' when logging to syslog from 'Unknown'?

I'm logging to syslog fine but can't work out how to specify the 'tag'. The logging currently posts this:
Mar 3 11:45:34 TheMacMini Unknown: INFO FooBar
but I want that 'Unknown' to be set to something. eg:
Mar 3 11:45:34 TheMacMini Foopybar: INFO FooBar
If I use logger from the command line it can be controlled via the -t option...
$ logger -t Foopybar FooBar && tail -1 /var/log/system.log
Mar 3 12:05:00 TheMacMini Foopybar[4566]: FooBar
But logging from python I don't seem to be able to specify the tag:
import logging
logging.info("FooBar")
Just gives me the 'Unknown' tag shown at the top. I've defined this spec:
LOGGING = {
    'version': 1,
    'formatters': {
        'simple': {
            'format': '%(levelname)s %(message)s'
        },
    },
    'handlers': {
        'syslog': {
            'address': '/var/run/syslog',
            'class': 'logging.handlers.SysLogHandler',
            'facility': 'local2',
            'formatter': 'simple'
        }
    },
    'loggers': {
        '': {
            'handlers': ['syslog'],
            'level': 'INFO',
        }
    }
}
How do I specify the tag so it's not always "Unknown"?
Simple Way of Tagging Log Messages
Do this:
logging.info("TagName: FooBar")
and your message will be tagged! You just need to start every message with "TagName: ". And this is of course not very elegant.
Better Solution
Set up your logger:
import logging
import logging.handlers

log = logging.getLogger('name')
address = ('log-server', logging.handlers.SYSLOG_UDP_PORT)
facility = logging.handlers.SysLogHandler.LOG_USER
h = logging.handlers.SysLogHandler(address, facility)
f = logging.Formatter('TagName: %(message)s')
h.setFormatter(f)
log.addHandler(h)
And use it:
log.info('FooBar')
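On Python 3.3+ there is also a dedicated hook for this: SysLogHandler's ident attribute is prepended verbatim to every formatted message, so the tag doesn't have to live in the format string. A sketch (using a UDP address for illustration; substitute your '/var/run/syslog' socket path as needed):

```python
import logging
import logging.handlers

# UDP address used for illustration; address='/var/run/syslog' also works on macOS
h = logging.handlers.SysLogHandler(address=('localhost', logging.handlers.SYSLOG_UDP_PORT))
h.ident = 'Foopybar: '  # prepended as-is; include the ': ' yourself

log = logging.getLogger('tagged')
log.addHandler(h)
log.setLevel(logging.INFO)
log.info('FooBar')  # arrives at syslog as: ... Foopybar: FooBar
```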
I'm adding this just for the sake of completeness, even though @sasha's answer is absolutely correct.
If you happen to log messages to syslog directly using syslog.syslog, you can set the tag using the syslog.openlog function:
import syslog
syslog.openlog('foo')
syslog.syslog('bar')
Going back to the linux shell:
$ tail -f /var/log/syslog
Sep 7 07:01:58 dev-balthazar foo: bar
