In order to improve readability I want to change the level names used by my logging setup. My current approach is to use logging.addLevelName(). I also want the logger to write to both stderr and syslog. I can achieve all of that with the following code:
import logging
import logging.handlers
logging.basicConfig(level=logging.DEBUG)
logging.addLevelName(logging.CRITICAL, "(CC)")
logging.addLevelName(logging.ERROR, "(EE)")
logging.addLevelName(logging.WARNING, "(WW)")
logging.addLevelName(logging.INFO, "(II)")
logging.addLevelName(logging.DEBUG, "(DD)")
logging.addLevelName(logging.NOTSET, "(--)")
logger = logging.getLogger('mylogger')
logger.addHandler(logging.handlers.SysLogHandler(address="/dev/log"))
handler = logging.StreamHandler()
handler.setFormatter(logging.Formatter(
    '[%(asctime)s %(levelname)s] %(message)s', datefmt="%y%m%d-%H%M%S"))
logger.addHandler(handler)
logger.propagate = False
logger.debug("debug")
logger.info("info")
logger.warning("warning")
logger.error("error")
logger.critical("critical")
The output on the terminal now looks like this:
[180303-224014 (DD)] debug
[180303-224014 (II)] info
[180303-224014 (WW)] warning
[180303-224014 (EE)] error
[180303-224014 (CC)] critical
But unfortunately the syslog handler now logs all messages with the WARNING priority:
Mar 3 22:40:14 user.warning debug
Mar 3 22:40:14 user.warning info
Mar 3 22:40:14 user.warning warning
Mar 3 22:40:14 user.warning error
Mar 3 22:40:14 user.warning critical
Is this a bug in Python's logging module? Is there a better way to just set the strings used for '%(levelname)s'?
It might qualify as a bug. SysLogHandler uses self.priority_map.get(levelName, "warning") to map a record's level name to a syslog priority, and addLevelName doesn't (or possibly cannot) update that map, so every renamed level falls back to "warning". As a workaround, you could add the new names to the map manually (the values are the lowercase syslog priority names):
logging.handlers.SysLogHandler.priority_map.update({
    '(CC)': 'critical',
    '(EE)': 'error',
    '(WW)': 'warning',
    '(II)': 'info',
    '(DD)': 'debug',
})
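An alternative, if the goal is only to change the strings rendered for '%(levelname)s' on the terminal (a sketch of a different approach, not part of the workaround above): leave the standard level names untouched, so SysLogHandler's priority mapping keeps working, and let a small Formatter subclass add a display-only attribute:
import logging

class TagFormatter(logging.Formatter):
    """Expose a short per-level tag as %(leveltag)s without renaming levels."""
    TAGS = {"CRITICAL": "(CC)", "ERROR": "(EE)", "WARNING": "(WW)",
            "INFO": "(II)", "DEBUG": "(DD)"}

    def format(self, record):
        # attach the tag as an extra attribute instead of touching levelname
        record.leveltag = self.TAGS.get(record.levelname, record.levelname)
        return super().format(record)

handler.setFormatter(TagFormatter(
    '[%(asctime)s %(leveltag)s] %(message)s', datefmt="%y%m%d-%H%M%S"))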
I have written a bonobo script to extract some data, and I would like to use Python's logging module to write some status messages to a file while my job runs. I've done the following:
import logging
from datetime import date

logging.basicConfig(filename=INFO["LOGFILE_PATH"] + r'\bonobo_job_' + date.today().isoformat(),
                    filemode='a',
                    format='%(name)s - %(levelname)s - %(message)s')
If I simply run the script in Pycharm, it logs to the file as I would expect. But if I run it from the command line with the bonobo run command, it ignores the filename and logs to stdout. How do I fix this? Is there a flag or environment variable I need to set somewhere?
Okay, I figured it out. For some reason basicConfig doesn't work here (presumably bonobo run sets up the root logger first, and basicConfig does nothing when the root logger already has handlers). I had to use getLogger and add a FileHandler. So in main I did this:
logger = logging.getLogger('bonobo_logger')
ch = logging.FileHandler(logfilename)
formatter = logging.Formatter('%(name)s - %(levelname)s - %(message)s')
ch.setFormatter(formatter)
logger.addHandler(ch)
Then in every node in my graph where I wanted to do logging, I called:
logger = logging.getLogger('bonobo_logger')
and used the logger object to write out all messages. If anyone knows a better way of doing it, please let me know.
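A slightly tidier variant of the same idea (just a sketch, untested against bonobo run): attach the FileHandler to the root logger instead of a named one. Every logger obtained with logging.getLogger(__name__) inside the graph nodes then propagates to it, so the nodes don't need to know a specific logger name:
import logging

# configure the root logger once, in main()
root = logging.getLogger()
root.setLevel(logging.INFO)

fh = logging.FileHandler(logfilename)  # same logfilename as above
fh.setFormatter(logging.Formatter('%(name)s - %(levelname)s - %(message)s'))
root.addHandler(fh)

# in any graph node, records propagate up to the root handler
logger = logging.getLogger(__name__)
logger.info("extract step finished")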
A little context on what I am doing: I am running some Python scripts through a different programming language on an industrial controller. Since I am not running the Python scripts directly, I can't watch any print or log statements from the terminal, so I need to send the detailed logs to a log file.
Since we are logging a lot of information when debugging, I wanted to find a way to color the log file, the way coloredlogs colors logs printed to the terminal. I looked at coloredlogs, but it appears it can only produce colored log files when they are viewed in Vim. Does anyone know a way to write colored logs to a file with Python such that they can be opened with a program like WordPad (maybe a .rtf file)?
One solution is to use the Windows PowerShell Get-Content cmdlet to print a file that contains ANSI escape sequences; the escape codes then color the log output in the console.
For example:
import coloredlogs
import logging
# Create a logger object.
logger = logging.getLogger(__name__)
# Create a filehandler object
fh = logging.FileHandler('spam.log')
fh.setLevel(logging.DEBUG)
# Create a ColoredFormatter to use as formatter for the FileHandler
formatter = coloredlogs.ColoredFormatter('%(asctime)s - %(name)s - %(levelname)s - %(message)s')
fh.setFormatter(formatter)
logger.addHandler(fh)
# Install a coloredlogs handler on the root logger (colored output to the terminal)
coloredlogs.install(level='DEBUG')
logger.debug("this is a debugging message")
logger.info("this is an informational message")
logger.warning("this is a warning message")
logger.error("this is an error message")
logger.critical("this is a critical message")
In a Windows PowerShell window you can then run Get-Content .\spam.log to print the logs in color.
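If the file has to be readable outside a terminal (the WordPad/.rtf part of the question), one option is to convert the ANSI-colored log into an HTML file that any browser can open. This is just a sketch assuming the third-party ansi2html package (pip install ansi2html):
from ansi2html import Ansi2HTMLConverter

conv = Ansi2HTMLConverter()
with open('spam.log') as f:
    html = conv.convert(f.read())  # wraps the ANSI escape codes in styled HTML
with open('spam.html', 'w') as f:
    f.write(html)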
I am facing a slightly strange issue where the logger is not picking up the configured timestamp format in the log messages the first time it is initialized. By default it prints the log timestamps in UTC, and I am not sure why.
The snippet below is from /proj/req_proc.py, the Python code that uwsgi starts and that initializes the logger. The log_config.yaml contains a formatter definition for the timestamp (asctime) format.
import logging
import logging.config
import os
import yaml

def setup_logging(default_path='log_config.yaml',
                  default_level=logging.INFO):
    path = default_path
    if os.path.exists(path):
        with open(path, 'rt') as f:
            config = yaml.safe_load(f.read())
        logging.config.dictConfig(config)
    else:
        logging.basicConfig(level=default_level)
Below is the snip from my launch script which starts the uwsgi process.
uwsgi -M --processes 1 --threads 2 -s /tmp/uwsgi.sock --wsgi-file=/proj/req_proc.py --daemonize /dev/null
Is there any specific behavior, either in the Python logger or in uwsgi, that picks up the UTC time format by default? When I restart my uwsgi process, it picks up the correct/expected timestamp format configured in log_config.yaml.
My assumption is that uWSGI is somehow hijacking Python's logging module. Setting the log level, the logger name and logging itself works, but trying to modify the format, even with something basic like:
logging.basicConfig(level=logging.NOTSET, format='[%(process)-5d:%(threadName)-10s] %(name)-25s: %(levelname)-8s %(message)s')
logger = logging.getLogger(__name__)
has no effect.
Update: Here's a way to overwrite uWSGI's default logger:
import logging

# remove uWSGI's default logging configuration; this can be dropped in
# more recent versions of uWSGI
root = logging.getLogger()
for handler in root.handlers[:]:
    root.removeHandler(handler)
for filt in root.filters[:]:
    root.removeFilter(filt)

logging.basicConfig(
    level=logging.INFO,
    format='%(levelname)-8s %(asctime)-15s %(process)4d:%(threadName)-11s %(name)s %(message)s'
)
logger = logging.getLogger(__name__)
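On Python 3.8 or newer the manual handler cleanup can be skipped (a stdlib-only sketch, assuming that Python version is available under uWSGI): basicConfig(force=True) removes any handlers already attached to the root logger before applying the new configuration.
import logging

# force=True (Python 3.8+) drops whatever handlers uWSGI already attached
# to the root logger before applying this configuration
logging.basicConfig(
    level=logging.INFO,
    format='%(levelname)-8s %(asctime)-15s %(process)4d:%(threadName)-11s %(name)s %(message)s',
    force=True,
)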
I am trying to switch from simple printing to proper logging.
I want to use two different loggers, the first one displaying information on screen and the other one in a file.
My problem is that even though I set my handler level to DEBUG, messages are only displayed from WARNING upwards.
Here is a sample of my code:
def setup_logger(self):
    """
    Configures our logger to save error messages
    """
    # create logger for 'facemovie'
    self.my_logger = logging.getLogger('FileLog')
    # create file handler which logs even debug messages
    fh = logging.FileHandler('log/fm.log')
    fh.setLevel(logging.DEBUG)

    # create console handler with a higher log level
    self.console_logger = logging.getLogger('ConsoleLog')
    ch = logging.StreamHandler()
    ch.setLevel(logging.DEBUG)

    # create formatter and add it to the handlers
    formatter = logging.Formatter('%(asctime)s - %(name)s - %(levelname)s - %(message)s')
    fh.setFormatter(formatter)
    #ch.setFormatter(formatter)

    ##Start logging in file
    self.my_logger.info("######")

    # add the handlers to the logger
    self.my_logger.addHandler(fh)
    self.console_logger.addHandler(ch)

    # DEBUG
    self.console_logger.info("MFCKR")
    self.console_logger.debug("MFCKR")
    self.console_logger.warning("MFCKR")
    self.console_logger.error("MFCKR")
    self.console_logger.critical("MFCKR")

    self.my_logger.info("MFCKR")
    self.my_logger.debug("MFCKR")
    self.my_logger.warning("MFCKR")
    self.my_logger.error("MFCKR")
    self.my_logger.critical("MFCKR")
And the output:
[jll#jll-VirtualBox:~/Documents/FaceMovie]$ python Facemoviefier.py -i data/inputs/samples -o data/
Selected profile is : frontal_face
MFCKR
MFCKR
MFCKR
Starting Application !
Output #0, avi, to 'data/output.avi':
Stream #0.0: Video: mpeg4, yuv420p, 652x498, q=2-31, 20780 kb/s, 90k tbn, 5 tbc
FaceMovie exited successfully!
[jll#jll-VirtualBox:~/Documents/FaceMovie]$ cat log/fm.log
2012-07-15 22:23:24,303 - FileLog - WARNING - MFCKR
2012-07-15 22:23:24,303 - FileLog - ERROR - MFCKR
2012-07-15 22:23:24,303 - FileLog - CRITICAL - MFCKR
I read the docs and searched for similar errors on the web, but couldn't find anything.
Do you have any idea why the logger doesn't display DEBUG and INFO messages?
Thx!
Just found the answer here
Don't know why I didn't find it before.
The logger's own level has to be set as well, not just the handler's.
I was not setting the logger's level, and in that case the effective level defaults to WARNING (inherited from the root logger).
Problem solved!
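For reference, a minimal sketch of the fix applied to setup_logger above (the exact placement is my assumption; anywhere before the first log call works):
# without these, both loggers inherit the root logger's default WARNING
# level and drop DEBUG/INFO records before the handlers ever see them
self.my_logger.setLevel(logging.DEBUG)
self.console_logger.setLevel(logging.DEBUG)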
I am experiencing an issue using the logging module in my app. I am working in Eclipse against the LDT Python (Py 2.7) interface (rather than PyDev) on my MacBook Pro. The logging module works through Eclipse; however, when I transfer my app over to a RHEL 5 box running Python 2.7, logging does not seem to work at all. It is not throwing any exceptions; it is just not logging anything to the console or the file (it does create the file, though).
Code:
import datetime
import logging

# Initialize logging
log = logging.getLogger('pepPrep')
# Log to stderr
console = logging.StreamHandler()
console.setLevel(logging.INFO)
# Log to file
logname = 'pepPrep.' + datetime.datetime.now().strftime("%Y%m%d_%H:%M") + '.log'
filelog = logging.FileHandler(logname)
filelog.setLevel(logging.DEBUG)
# set a format
formatter = logging.Formatter('%(name)-12s: %(levelname)-8s %(message)s')
# tell the handler to use this format
console.setFormatter(formatter)
filelog.setFormatter(formatter)
# add the handler to the root logger
log.addHandler(console)
log.addHandler(filelog)
log.info('This is a test')
log.debug('This is a test2')
Any pointers on how I can make this work?
The default threshold for logging is WARNING, so INFO and DEBUG messages are not output by default. To see them, add e.g.
logging.getLogger().setLevel(logging.DEBUG)
to get DEBUG and INFO messages.
You can confirm this is your problem by doing
log.warning('This is a test3')
before adding that setLevel, and confirming that the warning is actually output.
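Applied to the code in the question, a minimal sketch (assuming the rest of the setup stays as posted):
# let DEBUG and INFO records through to the handlers
log.setLevel(logging.DEBUG)

log.info('This is a test')    # reaches the console (INFO) and the file (DEBUG)
log.debug('This is a test2')  # reaches the file only; the console handler is set to INFO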