Python logger - multiple logger instances with multiple levels - best practice

I have the following requirements:
To have one global logger which you can configure (set up level, additional handlers, ...)
To have a per-module logger which you can configure (set up level, additional handlers, ...)
In other words, we need multiple loggers with different configurations.
Therefore I did the following.
I created a method to set up a logger:
import logging
import sys

def setup_logger(module_name=None, level=logging.INFO, add_stdout_logger=True):
    print("Clear all loggers")
    for _handler in logging.root.handlers[:]:  # iterate over a copy while removing
        logging.root.removeHandler(_handler)
    if add_stdout_logger:
        print("Add stdout logger")
        stdout_handler = logging.StreamHandler(sys.stdout)
        stdout_handler.setLevel(level)
        stdout_handler.setFormatter(logging.Formatter(fmt='%(asctime)-11s [%(levelname)s] [%(name)s] %(message)s'))
        logging.root.addHandler(stdout_handler)
    print("Set root level log")
    logging.root.setLevel(level)
    if module_name:
        return logging.getLogger(module_name)
    else:
        return logging.getLogger('global')
Then I create the loggers as follows:
logger_global = setup_logger(level=logging.DEBUG)
logger_module_1 = setup_logger(module_name='module1', level=logging.INFO)
logger_module_2 = setup_logger(module_name='module2', level=logging.DEBUG)
logger_global.debug("This is global log and will be visible because it is setup to DEBUG log")
logger_module_1.debug("This is logger_module_1 log and will NOT be visible because it is setup to INFO log")
logger_module_2.debug("This is logger_module_2 log and will be visible because it is setup to DEBUG log")
Before I test this more deeply to see what works and what doesn't, I want to ask: is this good practice, or do you have any other recommendation for achieving these requirements?
Thanks for the help.

Finally I found how to do it:
import logging
import sys

def setup_logger(module_name=None, level=logging.INFO, add_stdout_logger=True):
    custom_logger = logging.getLogger('global')
    if module_name:
        custom_logger = logging.getLogger(module_name)
    print("Clear all handlers in logger")  # prevent multiple handler creation
    custom_logger.handlers.clear()
    if add_stdout_logger:
        print("Add stdout logger")
        stdout_handler = logging.StreamHandler(sys.stdout)
        stdout_handler.setLevel(level)
        stdout_handler.setFormatter(logging.Formatter(fmt='%(asctime)-11s [%(levelname)s] [%(name)s] %(message)s'))
        custom_logger.addHandler(stdout_handler)
    # here you can add other handlers, ...
    # Because the handlers carry the per-call level, the logger itself
    # has to be set to the lowest level so every handler gets a chance to filter.
    custom_logger.setLevel(logging.DEBUG)
    return custom_logger
Then simply call the following
logger_module_1 = setup_logger(module_name='module1', level=logging.INFO)
logger_module_2 = setup_logger(module_name='module2', level=logging.DEBUG)
logger_module_1.debug("This is logger_module_1 log and will NOT be visible because it is setup to INFO log")
logger_module_2.debug("This is logger_module_2 log and will be visible because it is setup to DEBUG log")

Related

Logging levels don't work for debug and info

I use the following code in my class's __init__ method:
# Create a custom logger
self.logger = logging.getLogger(__name__)
# Create handlers
self.handler_cmdline = logging.StreamHandler()
self.handler_file = logging.FileHandler(self.logfile)
self.handler_cmdline.setLevel(logging.DEBUG)
self.handler_file.setLevel(logging.INFO)
# Create formatters and add it to handlers
log_format = logging.Formatter(fmt='%(asctime)s - %(name)s - %(levelname)s - %(message)s', datefmt='%Y-%m-%d %H:%M:%S')
self.handler_cmdline.setFormatter(log_format)
self.handler_file.setFormatter(log_format)
# Add handlers to the logger
self.logger.addHandler(self.handler_cmdline)
self.logger.addHandler(self.handler_file)
self.logger.debug('Initialisation Complete')
self.logger.info('Initialisation Complete')
self.logger.warning('Initialisation Complete')
self.logger.critical('Initialisation Complete')
The debug and the info messages don't work somehow. Everything from warning and above works.
What's wrong here?
import logging

class example():
    def __init__(self):
        self.logger = logging.getLogger(__name__)
        self.logfile = 'example.log'
        # Create handlers
        self.handler_cmdline = logging.StreamHandler()
        self.handler_file = logging.FileHandler(self.logfile)
        self.handler_cmdline.setLevel(logging.DEBUG)
        self.handler_file.setLevel(logging.INFO)
        # Create formatters and add them to the handlers
        log_format = logging.Formatter(fmt='%(asctime)s - %(name)s - %(levelname)s - %(message)s', datefmt='%Y-%m-%d %H:%M:%S')
        self.handler_cmdline.setFormatter(log_format)
        self.handler_file.setFormatter(log_format)
        # Add handlers to the logger
        self.logger.addHandler(self.handler_cmdline)
        self.logger.addHandler(self.handler_file)
        self.logger.setLevel(logging.DEBUG)
        # print(self.logger.level)
        self.logger.debug('Initialisation Complete')
        self.logger.info('Initialisation Complete')
        self.logger.warning('Initialisation Complete')
        self.logger.critical('Initialisation Complete')

example()
Above is a small snippet to test the fix with your code. The issue is that your logger does not have a level set, so the default level is used, which causes the INFO and DEBUG level logs to be ignored. You can print self.logger.level to see what level it is currently set to.
In your case, you did not set a logging level for self.logger.
Just adding self.logger.setLevel(logging.DEBUG) fixes the issue, and you can see this output in the console:
2022-12-12 17:44:23 - __main__ - DEBUG - Initialisation Complete
2022-12-12 17:44:23 - __main__ - INFO - Initialisation Complete
2022-12-12 17:44:23 - __main__ - WARNING - Initialisation Complete
2022-12-12 17:44:23 - __main__ - CRITICAL - Initialisation Complete
In the file, however, the debug log would not be there, because the logging level of the file handler is INFO.
The level of the logger should be at most the minimum of the levels of its handlers, otherwise those logs would not be logged (e.g. in this case, if you set the logger level to INFO, the debug log will not be printed to stdout either, even though that handler has its level set to DEBUG).
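A minimal sketch of that interaction, assuming a hypothetical logger name "demo" (standard library only):
import logging
import sys

logger = logging.getLogger("demo")           # hypothetical name, for illustration only
handler = logging.StreamHandler(sys.stdout)
handler.setLevel(logging.DEBUG)              # the handler would accept DEBUG...
logger.addHandler(handler)

print(logger.level)                          # 0 (NOTSET): effective level falls back to the root's WARNING
logger.debug("dropped")                      # rejected by the logger's effective level, never reaches the handler

logger.setLevel(logging.DEBUG)               # lower the logger level to (at most) the handler level
logger.debug("printed")                      # now passes both the logger check and the handler check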

Python Custom logging handler to create each log file per key in the dictionary

I have a dictionary with some sample data like below
{"screener": "media", "anchor": "media","reader": "media"}
and I wanted to create a log file for each key in the dictionary. I'm planning to use this logging in a streaming job, where it will get reused for every batch. I'm also planning to use a rotating file handler per key.
Here is my snippet:
import logging
from logging.handlers import RotatingFileHandler
import time

logger = logging.getLogger('my_logger')
logger.setLevel(logging.DEBUG)

dict_store = {"screener": "media", "anchor": "media", "reader": "media"}
dict_log_handler = {}

def createFileHandler(name):
    handler = RotatingFileHandler(name, maxBytes=2000000000, backupCount=10)
    formatter = logging.Formatter('%(asctime)s - %(name)s - %(levelname)s - %(message)s')
    handler.setFormatter(formatter)
    return handler

def runBatch():
    print("logging batch")
    for name in dict_store.keys():
        print(name)
        if name in dict_log_handler:
            print(f"getting logger from dict_handler {name}")
            handler = dict_log_handler[name]
        else:
            handler = createFileHandler(name)
            dict_log_handler[name] = handler
        logger.addHandler(handler)
        logger.info('Hello, world!')
        logger.info('Hello, world!')
        logger.info('Hello, world!')
        logger.info('Hello, world!')
        logger.info('Hello, world!')
        logger.info('Hello, world!')
        logger.removeHandler(handler)
        time.sleep(0.1)

for i in range(0, 3):
    runBatch()
It is working as expected currently. I'm thinking of implementing something similar by overriding or creating a custom handler (so that if we pass a name, it automatically does all of this), and the overall expectation is that it should not affect performance.
The question is: can I wrap this in a class and use it directly?
Your question is not entirely clear about what exactly you want to do, but if the idea is to use multiple loggers, as your code suggests, then you can do something like this:
logging.getLogger(name) is the method used to access a specific logger; in your code you are using a single logger and switching its output per key with addHandler and removeHandler.
You can create multiple loggers like this:
import logging
from logging.handlers import RotatingFileHandler

dict_store = {"screener": "media", "anchor": "media", "reader": "media"}

for name in dict_store:
    logger = logging.getLogger(name)
    logger.setLevel(logging.DEBUG)
    handler = RotatingFileHandler(name, maxBytes=2000000000, backupCount=10)
    formatter = logging.Formatter('%(asctime)s - %(name)s - %(levelname)s - %(message)s')
    handler.setFormatter(formatter)
    logger.addHandler(handler)
You can wrap the above code in your own logging class/method and use it as needed (a sketch of such a wrapper follows after the example below). Keep in mind this setup needs to be called only once.
After that, you can access a specific logger and use its logging methods:
logger = logging.getLogger(<logger_name>)
logger.debug("debug")
logger.info("info")
logger.warning("warning")
logger.error("error")
logger.critical("critical")
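As a sketch of the wrapping idea, here is a minimal hypothetical class (the name PerKeyLoggers and its get() method are illustrative, not from the original post) that performs the one-time setup and hands back per-key loggers:
import logging
from logging.handlers import RotatingFileHandler

class PerKeyLoggers:
    """Hypothetical wrapper: sets up one rotating-file logger per key, once."""

    def __init__(self, keys, max_bytes=2000000000, backup_count=10):
        self._formatter = logging.Formatter('%(asctime)s - %(name)s - %(levelname)s - %(message)s')
        for name in keys:
            logger = logging.getLogger(name)
            if not logger.handlers:  # guard against adding handlers twice on repeated setup
                handler = RotatingFileHandler(name, maxBytes=max_bytes, backupCount=backup_count)
                handler.setFormatter(self._formatter)
                logger.addHandler(handler)
                logger.setLevel(logging.DEBUG)

    def get(self, name):
        return logging.getLogger(name)

# Usage (illustrative):
loggers = PerKeyLoggers({"screener": "media", "anchor": "media", "reader": "media"})
loggers.get("screener").info("Hello, world!")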

Global Python Logger with file Rotator

I have created a global logger using the following:
def logini():
    logfile = '/var/log/cs_status.log'
    import logging
    import logging.handlers
    global logger
    logger = logging.getLogger()
    logging.basicConfig(filename=logfile, filemode='a', format='%(asctime)s %(name)s %(levelname)s %(message)s', datefmt='%y%m%d-%H:%M:%S', level=logging.DEBUG, propagate=0)
    handler = logging.handlers.RotatingFileHandler(logfile, maxBytes=2000000, backupCount=5)
    logger.addHandler(handler)
    __builtins__.logger = logger
It works, however I am getting two outputs for every log entry, one with the formatting and one without.
I realize that this is being caused by the file rotator, as I can comment out the two handler lines and then I get a single, correctly formatted log entry.
How can I prevent the log rotator from outputting a second entry?
Currently you're configuring two file handlers on the root logger that point to the same logfile: the one basicConfig creates and the RotatingFileHandler you add yourself. To use only the RotatingFileHandler, get rid of the basicConfig call:
logger = logging.getLogger()
handler = logging.handlers.RotatingFileHandler(logfile, maxBytes=2000000, backupCount=5)
formatter = logging.Formatter(fmt='%(asctime)s %(name)s %(levelname)s %(message)s', datefmt='%y%m%d-%H:%M:%S')
handler.setFormatter(formatter)
handler.setLevel(logging.DEBUG)
logger.addHandler(handler)
All basicConfig does for you is provide an easy way to instantiate either a StreamHandler (the default) or a FileHandler and set its log level and format (see the docs for more information). If you need a handler other than these two, you should instantiate and configure it yourself.
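Putting it together, a minimal sketch of logini() without basicConfig (same logfile path as in the question; note that the root logger level also has to be lowered explicitly, since it defaults to WARNING):
import logging
import logging.handlers

def logini():
    logfile = '/var/log/cs_status.log'
    global logger
    logger = logging.getLogger()
    logger.setLevel(logging.DEBUG)  # the root logger defaults to WARNING otherwise
    handler = logging.handlers.RotatingFileHandler(logfile, maxBytes=2000000, backupCount=5)
    handler.setFormatter(logging.Formatter(fmt='%(asctime)s %(name)s %(levelname)s %(message)s', datefmt='%y%m%d-%H:%M:%S'))
    handler.setLevel(logging.DEBUG)
    logger.addHandler(handler)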

How do I change the format of a Python log message on a per-logger basis?

After reading the documentation on logging, I know I can use code like this to perform simple logging:
import logging

def main():
    logging.basicConfig(filename="messages.log",
                        level=logging.WARNING,
                        format='%(filename)s: '
                               '%(levelname)s: '
                               '%(funcName)s(): '
                               '%(lineno)d:\t'
                               '%(message)s')
    logging.debug("Only for debug purposes\n")
    logging.shutdown()

main()
However, I realised I don't know how to change the format of log messages on a per-logger basis, since basicConfig is a module-level function. The code below works for creating different loggers with different levels, names, etc., but is there a way to change the format of those log messages on a per-logger basis as well, in a way similar to basicConfig?
import inspect
import logging

def function_logger(level=logging.DEBUG):
    function_name = inspect.stack()[1][3]
    logger = logging.getLogger(function_name)
    logger.setLevel(level)
    logger.addHandler(logging.FileHandler("{0}.log".format(function_name)))
    return logger

def f1():
    f1_logger = function_logger()
    f1_logger.debug("f1 Debug message")
    f1_logger.warning("f1 Warning message")
    f1_logger.critical("f1 Critical message")

def f2():
    f2_logger = function_logger(logging.WARNING)
    f2_logger.debug("f2 Debug message")
    f2_logger.warning("f2 Warning message")
    f2_logger.critical("f2 Critical message")

def main():
    f1()
    f2()
    logging.shutdown()

main()
Try this
import logging

logger = logging.getLogger('simple_example')
logger.setLevel(logging.DEBUG)

# create file handler that logs debug and higher level messages
fh = logging.FileHandler('spam.log')
fh.setLevel(logging.DEBUG)

# create console handler with a higher log level
ch = logging.StreamHandler()
ch.setLevel(logging.ERROR)

# create formatter and add it to the handlers
formatter = logging.Formatter(
    '%(asctime)s - %(name)s - %(levelname)s - %(message)s')
ch.setFormatter(formatter)
fh.setFormatter(formatter)

# add the handlers to the logger
logger.addHandler(ch)
logger.addHandler(fh)

# 'application' code
logger.debug('debug message')
logger.info('info message')
logger.warning('warn message')  # warning() instead of the deprecated warn()
logger.error('error message')
logger.critical('critical message')
See http://docs.python.org/howto/logging-cookbook.html#multiple-handlers-and-formatters for more information
You have to create (or use an existing) subclass of logging.Handler and call its setFormatter() method with an instance of logging.Formatter (or a custom subclass of it). If you set the formatter on a handler that is already attached to the logger whose output you want to modify, you are fine; otherwise you have to retrieve a logger object with logging.getLogger() and pass the handler you set the formatter on to its addHandler() method.
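A minimal sketch of that approach, using a hypothetical UpperFormatter subclass and the logger name 'per_logger_fmt' (both illustrative):
import logging
import sys

class UpperFormatter(logging.Formatter):
    """Illustrative custom formatter: upper-cases the fully rendered log line."""
    def format(self, record):
        return super().format(record).upper()

handler = logging.StreamHandler(sys.stdout)
handler.setFormatter(UpperFormatter('%(name)s - %(levelname)s - %(message)s'))

logger = logging.getLogger('per_logger_fmt')  # only this logger gets the custom format
logger.setLevel(logging.DEBUG)
logger.addHandler(handler)

logger.info('hello')  # -> PER_LOGGER_FMT - INFO - HELLO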

Logging to two files with different settings

I am already using a basic logging config where all messages across all modules are stored in a single file. However, I need a more complex solution now:
Two files: the first remains the same.
The second file should have some custom format.
I have been reading the docs for the module, but they are very complex for me at the moment. Loggers, handlers...
So, in short:
How do I log to two files in Python 3, i.e.:
import logging
# ...
logging.file1.info('Write this to file 1')
logging.file2.info('Write this to file 2')
You can do something like this:
import logging

formatter = logging.Formatter('%(asctime)s %(levelname)s %(message)s')

def setup_logger(name, log_file, level=logging.INFO):
    """To setup as many loggers as you want"""
    handler = logging.FileHandler(log_file)
    handler.setFormatter(formatter)
    logger = logging.getLogger(name)
    logger.setLevel(level)
    logger.addHandler(handler)
    return logger

# first file logger
logger = setup_logger('first_logger', 'first_logfile.log')
logger.info('This is just info message')

# second file logger
super_logger = setup_logger('second_logger', 'second_logfile.log')
super_logger.error('This is an error message')

def another_method():
    # using logger defined above also works here
    logger.info('Inside method')
def setup_logger(logger_name, log_file, level=logging.INFO):
    l = logging.getLogger(logger_name)
    formatter = logging.Formatter('%(message)s')
    fileHandler = logging.FileHandler(log_file, mode='w')
    fileHandler.setFormatter(formatter)
    streamHandler = logging.StreamHandler()
    streamHandler.setFormatter(formatter)
    l.setLevel(level)
    l.addHandler(fileHandler)
    l.addHandler(streamHandler)

setup_logger('log1', txtName + "txt")
setup_logger('log2', txtName + "small.txt")
logger_1 = logging.getLogger('log1')
logger_2 = logging.getLogger('log2')
logger_1.info('111messasage 1')
logger_2.info('222ersaror foo')
