Manage different packages' loggers in Python

I have read:
Using logging in multiple modules,
Managing loggers with Python logging,
https://docs.python.org/3/howto/logging-cookbook.html
and other posts, and still I don't get it (something is really wrong in the way the logging module works, IMO).
Let's see a straightforward example:
module1:
import logging

logger = logging.getLogger(__name__)

def test_log():
    logger.info("module1")
    logger.debug("module1")
module2:
import logging

logger = logging.getLogger(__name__)

def test_log2():
    logger.info("module2")
    logger.debug("module2")
main.py
import module1
import module2
import logging
# create logger
logger = logging.getLogger(__name__)
logger.setLevel(logging.DEBUG)
# create console handler with a higher log level
ch = logging.StreamHandler()
ch.setLevel(logging.DEBUG)
# create formatter and add it to the handlers
formatter = logging.Formatter('%(asctime)s - %(name)s - %(levelname)s - %(message)s')
ch.setFormatter(formatter)
# add the handlers to the logger
logger.addHandler(ch)
logger.info("main debug")
logger.info("main info")
module1.test_log()
module2.test_log2()
This outputs:
2022-10-28 11:18:09,101 - __main__ - INFO - main debug
2022-10-28 11:18:09,102 - __main__ - INFO - main info
The two other modules' loggers did not work at all...
Also, changing
logger.setLevel(logging.DEBUG) to logger.setLevel(logging.ERROR),
even though ch.setLevel(logging.DEBUG) is still there, removes the logging below ERROR level.
But the opposite is ALSO true!
Changing ch.setLevel(logging.DEBUG) to ch.setLevel(logging.ERROR),
even though logger.setLevel(logging.DEBUG) is still there, removes the logging below ERROR level.
What's the point of setting two different things if they must be equal to work anyway?
But! With:
logger.setLevel(logging.DEBUG)
ch.setLevel(logging.INFO)
or with
logger.setLevel(logging.INFO)
ch.setLevel(logging.DEBUG)
we get both messages. I can't make sense of how this is supposed to work.
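After more experimenting, the best reading I have is that the two settings act as successive filters: the logger's level decides whether a record is created at all, and each handler's level then decides whether that particular handler emits it. A minimal sketch of that reading (names here are illustrative, not from my real code):

import logging

demo = logging.getLogger("demo")
h = logging.StreamHandler()
h.setFormatter(logging.Formatter('%(levelname)s - %(message)s'))
demo.addHandler(h)

demo.setLevel(logging.DEBUG)   # gate 1: the logger creates DEBUG records
h.setLevel(logging.INFO)       # gate 2: this handler drops anything below INFO

demo.debug("dropped at the handler")  # passes gate 1, fails gate 2
demo.info("emitted")                  # passes both gates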
Anyway, what about my two modules?
logging.getLogger("module1").setLevel(logging.DEBUG)
logging.getLogger("module2").setLevel(logging.DEBUG)
does not fix anything.
Finally!
logging.getLogger("module1").setLevel(logging.DEBUG)
logging.getLogger("module2").setLevel(logging.INFO)
logging.basicConfig(level=logging.DEBUG)
logger.info("main debug")
logger.info("main info")
module1.test_log()
module2.test_log2()
outputs:
2022-10-28 11:27:47,843 - __main__ - INFO - main debug
2022-10-28 11:27:47,843 - __main__ - INFO - main debug
2022-10-28 11:27:47,844 - __main__ - INFO - main info
2022-10-28 11:27:47,844 - __main__ - INFO - main info
2022-10-28 11:27:47,844 - module1 - DEBUG - module1
2022-10-28 11:27:47,845 - module1 - INFO - module1
2022-10-28 11:27:47,845 - module2 - INFO - module2
which is a failure, because:
the main logging has been written twice;
it seems really inappropriate to me to have to go through basicConfig, especially since it forces the users of my package to deal with the formatting and other options of logging (which in my case seems necessary);
if you want debug level on one module, you have to pass DEBUG to basicConfig, which makes every other package print at DEBUG level too, forcing the user to set EVERY library's level manually.
So, how should we implement the logger when we have different modules and packages, each with its own logging level?
Thank you!
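Edit: one pattern that seems consistent with the cookbook linked above is to configure handlers once, on the root logger, in main.py, and to set only levels on the per-module loggers; module loggers keep no handlers of their own and propagate their records up to the root. A minimal sketch, assuming the module1/module2 above:

import logging
import module1
import module2

# Configure the handler once, on the root logger; module loggers
# propagate records up to it, so they need no handlers of their own.
handler = logging.StreamHandler()
handler.setFormatter(logging.Formatter('%(asctime)s - %(name)s - %(levelname)s - %(message)s'))
root = logging.getLogger()
root.addHandler(handler)
root.setLevel(logging.WARNING)  # default for everything that sets no level of its own

# per-module verbosity, no basicConfig involved
logging.getLogger("module1").setLevel(logging.DEBUG)
logging.getLogger("module2").setLevel(logging.INFO)

module1.test_log()   # emits both INFO and DEBUG
module2.test_log2()  # emits INFO only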

Related

Logging levels don't work for debug and info

I use the following code in my class's __init__:
# Create a custom logger
self.logger = logging.getLogger(__name__)
# Create handlers
self.handler_cmdline = logging.StreamHandler()
self.handler_file = logging.FileHandler(self.logfile)
self.handler_cmdline.setLevel(logging.DEBUG)
self.handler_file.setLevel(logging.INFO)
# Create formatters and add it to handlers
log_format = logging.Formatter(fmt='%(asctime)s - %(name)s - %(levelname)s - %(message)s', datefmt='%Y-%m-%d %H:%M:%S')
self.handler_cmdline.setFormatter(log_format)
self.handler_file.setFormatter(log_format)
# Add handlers to the logger
self.logger.addHandler(self.handler_cmdline)
self.logger.addHandler(self.handler_file)
self.logger.debug('Initialisation Complete')
self.logger.info('Initialisation Complete')
self.logger.warning('Initialisation Complete')
self.logger.critical('Initialisation Complete')
The debug and the info don't work somehow. Everything from warning and above works.
What's wrong here?
import logging

class example():
    def __init__(self):
        self.logger = logging.getLogger(__name__)
        self.logfile = 'example.log'
        # Create handlers
        self.handler_cmdline = logging.StreamHandler()
        self.handler_file = logging.FileHandler(self.logfile)
        self.handler_cmdline.setLevel(logging.DEBUG)
        self.handler_file.setLevel(logging.INFO)
        # Create formatters and add them to the handlers
        log_format = logging.Formatter(fmt='%(asctime)s - %(name)s - %(levelname)s - %(message)s', datefmt='%Y-%m-%d %H:%M:%S')
        self.handler_cmdline.setFormatter(log_format)
        self.handler_file.setFormatter(log_format)
        # Add handlers to the logger
        self.logger.addHandler(self.handler_cmdline)
        self.logger.addHandler(self.handler_file)
        # This line is the fix: without it the logger stays at the default WARNING level
        self.logger.setLevel(logging.DEBUG)
        # print(self.logger.level)
        self.logger.debug('Initialisation Complete')
        self.logger.info('Initialisation Complete')
        self.logger.warning('Initialisation Complete')
        self.logger.critical('Initialisation Complete')

example()
Here is a small snippet to test the fix with your code. The issue is that your logger does not have a level set, so the default level is used, which causes the INFO and DEBUG levelled logs to be ignored. You can try printing self.logger.level to see what level it is currently set to.
In your case, you did not have a logging level set for your self.logger.
Just adding self.logger.setLevel(logging.DEBUG) fixes the issue, and you can see this output in the console -
2022-12-12 17:44:23 - __main__ - DEBUG - Initialisation Complete
2022-12-12 17:44:23 - __main__ - INFO - Initialisation Complete
2022-12-12 17:44:23 - __main__ - WARNING - Initialisation Complete
2022-12-12 17:44:23 - __main__ - CRITICAL - Initialisation Complete
while in the file, the debug log will not be there, because the file handler's level is INFO.
The logger's level should be at most the minimum of the levels of its handlers, otherwise those logs will not be logged at all (e.g. in this case, if you set the logger level to INFO, the debug log will not be printed to stdout either, even though that stream handler has its level set to DEBUG).
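A minimal sketch of that interaction (the logger name is illustrative):

import logging

lg = logging.getLogger("levels-demo")
ch = logging.StreamHandler()
ch.setLevel(logging.DEBUG)   # the handler would accept DEBUG...
lg.addHandler(ch)
lg.setLevel(logging.INFO)    # ...but the logger drops it first

lg.debug("never reaches any handler")
lg.info("emitted")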

Multi module python logger using the name of the main module

I have multiple Python modules that I'd like to have use the same logger while preserving the call hierarchy in those logs. I'd also like to do this with a logger whose name is the name of the calling module (or the calling module stack). I haven't been able to work out how to get the name of the calling module except by messing with the stack trace, but that doesn't feel very pythonic.
Is this possible?
main.py
import logging
from sub_module import sub_log
logger = logging.getLogger(__name__)
logger.info("main_module")
sub_log()
sub_module.py
import logging

def sub_log():
    logger = logging.getLogger(???)
    logger.info("sub_module")
Desired Output
TIME main INFO main_module
TIME main.sub_module INFO sub_module
To solve your problem pythonically, use the logging Formatter.
For reference, check the Logging Docs.
main.py
import logging
from submodule import sub_log
from submodule2 import sub_log2
logger = logging.getLogger(__name__)
logger.setLevel(logging.DEBUG)
# create file handler which logs even debug messages
fh = logging.FileHandler('test.log')
fh.setLevel(logging.DEBUG)
# create formatter and add it to the handlers
formatter = logging.Formatter('%(asctime)s %(name)s.%(module)s - %(levelname)s - %(message)s')
fh.setFormatter(formatter)
# add the handlers to the logger
logger.addHandler(fh)
sub_log("test")
sub_log2("test")
submodule.py
import logging
import __main__

def sub_log(msg):
    logger = logging.getLogger(__main__.__name__)
    logger.info(msg)
I've created a second submodule (same code, different name).
My Results:
2018-10-16 20:41:23,860 __main__.submodule - INFO - test
2018-10-16 20:41:23,860 __main__.submodule2 - INFO - test
I hope this will help you :)
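If you want exactly the main.sub_module naming asked for, rather than __main__.submodule, another option (a sketch of my own, not part of the answer above) is to derive a child logger from the caller's logger with Logger.getChild:

sub_module.py
import logging

def sub_log(parent_logger):
    logger = parent_logger.getChild(__name__)  # e.g. "main.sub_module"
    logger.info("sub_module")

main.py would then pass its own logger in: sub_log(logging.getLogger("main")), and the record is emitted under the name "main.sub_module".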

python logging file is not working when using logging.basicConfig

I have the following lines of code that initialize logging.
I comment one out and leave the other to be used.
The problem I'm facing is that the one meant to log to the file is not logging to the file; it is instead logging to the console.
Please help.
For logging to the console:
logging.basicConfig(level=logging.INFO,
                    format='%(asctime)s [%(levelname)s] (%(threadName)-10s) %(message)s',)
For file logging:
logging.basicConfig(filename='server-soap.1.log', level=logging.INFO,
                    format='%(asctime)s [%(levelname)s] (%(threadName)-10s) %(message)s')
I found out what the problem was.
It was the ordering of the imports relative to the logging configuration.
The effect of the poor ordering was that the libraries I imported before calling logging.basicConfig() configured logging themselves. Their configuration therefore took precedence over the one I tried to define later using logging.basicConfig().
Below is how I needed to order it:
import logging
## for file logging
logging.basicConfig(filename='server-soap.1.log',
                    level=logging.INFO,
                    format='%(asctime)s %(levelname)s %(threadName)-10s %(message)s',)
from pysimplesoap.server import SoapDispatcher, SOAPHandler
from BaseHTTPServer import HTTPServer
import time,random,datetime,pytz,sys,threading
from datetime import timedelta
#DB
import psycopg2, psycopg2.extras
from psycopg2.pool import ThreadedConnectionPool
#ESB Call
from suds import WebFault
from suds.client import Client
But the faulty ordering that I initially had was:
from pysimplesoap.server import SoapDispatcher, SOAPHandler
from BaseHTTPServer import HTTPServer
import logging
import time,random,datetime,pytz,sys,threading
from datetime import timedelta
#DB
import psycopg2, psycopg2.extras
from psycopg2.pool import ThreadedConnectionPool
#ESB Call
from suds import WebFault
from suds.client import Client
## for file logging
logging.basicConfig(filename='server-soap.1.log',
                    level=logging.INFO,
                    format='%(asctime)s %(levelname)s %(threadName)-10s %(message)s',)
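A minimal reproduction of the effect, using a hypothetical noisy_lib.py that calls basicConfig at import time:

noisy_lib.py
import logging
logging.basicConfig(level=logging.WARNING)  # runs at import time and installs a root handler

main.py
import noisy_lib  # the root logger now has a handler
import logging
logging.basicConfig(filename='app.log', level=logging.INFO)  # pre-3.8: silently does nothing
logging.warning("this goes to the console, not to app.log")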
"Changed in version 3.8: The force argument was added." I think it's a better choice for new version.
For older Version(< 3.8):
From the source code of logging I found the flows:
This function does nothing if the root logger already has handlers
configured. It is a convenience method intended for use by simple scripts
to do one-shot configuration of the logging package.
So, if some module we import called the basicConfig() method before us, our call will do nothing.
A solution I found that works is to reload logging before your own call to basicConfig(), such as:
import logging

def init_logger(*, fn=None):
    # !!! here: reloading wipes the module-level state that basicConfig checks
    from importlib import reload  # Python 2.x doesn't need the import; reload() is a builtin there
    reload(logging)
    logging_params = {
        'level': logging.INFO,
        'format': '%(asctime)s__[%(levelname)s, %(module)s.%(funcName)s](%(name)s)__[L%(lineno)d] %(message)s',
    }
    if fn is not None:
        logging_params['filename'] = fn
    logging.basicConfig(**logging_params)
    logging.error('init basic configure of logging success')
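Usage would then look like this (the file name is illustrative):

init_logger()                 # console logging
init_logger(fn='server.log')  # file logging, replacing any earlier configuration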
In case basicConfig() does not work:
logger = logging.getLogger('Spam Logger')
logger.setLevel(logging.DEBUG)
# create file handler which logs even debug messages
fh = logging.FileHandler('spam.log')
fh.setLevel(logging.DEBUG)
# create console handler with a higher log level
ch = logging.StreamHandler()
ch.setLevel(logging.DEBUG)
# create formatter and add it to the handlers
formatter = logging.Formatter('%(asctime)s - %(name)s - %(levelname)s - %(message)s')
ch.setFormatter(formatter)
fh.setFormatter(formatter)
# add the handlers to logger
logger.addHandler(ch)
logger.addHandler(fh)
# 'application' code
logger.debug('debug Spam message')
logging.debug('debug Spam message')
logger.info('info Ham message')
logger.warning('warn Eggs message')
logger.error('error Spam and Ham message')
logger.critical('critical Ham and Eggs message')
which gives me the following output:
2019-06-20 11:33:48,967 - Spam Logger - DEBUG - debug Spam message
2019-06-20 11:33:48,968 - Spam Logger - INFO - info Ham message
2019-06-20 11:33:48,968 - Spam Logger - WARNING - warn Eggs message
2019-06-20 11:33:48,968 - Spam Logger - ERROR - error Spam and Ham message
2019-06-20 11:33:48,968 - Spam Logger - CRITICAL - critical Ham and Eggs message
For the sake of reference, the Python Logging Cookbook is worth reading.
I got the same error, and I fixed it by passing the following arguments to basicConfig:
logging.basicConfig(
    level="WARNING",
    format="%(asctime)s - %(name)s - [ %(message)s ]",
    datefmt='%d-%b-%y %H:%M:%S',
    force=True,
    handlers=[
        logging.FileHandler("debug.log"),
        logging.StreamHandler()
    ])
Here, as you can see, passing force=True overrides any earlier basicConfig() call.
Another solution that worked for me: instead of tracing down which module might be importing logging, or even calling basicConfig before me, just call setLevel again after basicConfig.
import os
import logging

# default avoids an AttributeError when the variable is unset
RUNTIME_DEBUG_LEVEL = os.environ.get('RUNTIME_DEBUG_LEVEL', 'INFO').upper()
LOGGING_KWARGS = {
    'level': getattr(logging, RUNTIME_DEBUG_LEVEL)
}
logging.basicConfig(**LOGGING_KWARGS)
# there is no module-level logging.setLevel(); set the level on the root logger
logging.getLogger().setLevel(getattr(logging, RUNTIME_DEBUG_LEVEL))
Sort of crude, seems hacky, fixed my problem, worth a share.
IF YOU JUST WANT TO SET THE LOG LEVEL OF ALL LOGGERS:
instead of ordering your imports after the logging config, just set the level on the root logger:
# Option 1:
logging.root.setLevel(logging.INFO)
# Option 2 - make it configurable:
# env variable + default value INFO
logging.root.setLevel(os.environ.get('LOG_LEVEL', logging.INFO))
Like vacing's answer mentioned, basicConfig has no effect if the root logger already has handlers configured.
I was using pytest, which seems to set handlers, meaning the default logging setup with log level WARNING stays active -- so it appeared my app failed to log, but this only happened when executing unit tests with pytest. In a normal app run, logs are produced as expected, which is enough for my use case.
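If you do want your own configuration during a pytest run, one workaround on Python 3.8+ is to force-reconfigure at the start of the test; a sketch, assuming nothing in your tests depends on pytest's own handlers:

import logging

logging.basicConfig(
    level=logging.INFO,
    format='%(asctime)s - %(name)s - %(levelname)s - %(message)s',
    force=True,  # Python 3.8+: removes whatever handlers are already installed
)

(pytest also exposes captured records through its caplog fixture, which is often the cleaner route inside tests.)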

Get Output From the logging Module in IPython Notebook

When I run the following inside an IPython Notebook I don't see any output:
import logging
logging.basicConfig(level=logging.DEBUG)
logging.debug("test")
Anyone know how to make it so I can see the "test" message inside the notebook?
Try the following:
import logging
logger = logging.getLogger()
logger.setLevel(logging.DEBUG)
logging.debug("test")
According to logging.basicConfig:
Does basic configuration for the logging system by creating a
StreamHandler with a default Formatter and adding it to the root
logger. The functions debug(), info(), warning(), error() and
critical() will call basicConfig() automatically if no handlers are
defined for the root logger.
This function does nothing if the root logger already has handlers
configured for it.
It seems like the IPython notebook calls basicConfig (or sets a handler) somewhere.
If you still want to use basicConfig, reload the logging module like this
from importlib import reload # Not needed in Python 2
import logging
reload(logging)
logging.basicConfig(format='%(asctime)s %(levelname)s:%(message)s', level=logging.DEBUG, datefmt='%I:%M:%S')
My understanding is that the IPython session starts up logging, so basicConfig doesn't work. Here is the setup that works for me (I wish it didn't look so gross, since I want to use it for almost all my notebooks):
import logging
logger = logging.getLogger()
fhandler = logging.FileHandler(filename='mylog.log', mode='a')
formatter = logging.Formatter('%(asctime)s - %(name)s - %(levelname)s - %(message)s')
fhandler.setFormatter(formatter)
logger.addHandler(fhandler)
logger.setLevel(logging.DEBUG)
Now when I run:
logging.error('hello!')
logging.debug('This is a debug message')
logging.info('this is an info message')
logging.warning('tbllalfhldfhd, warning.')
I get a "mylog.log" file in the same directory as my notebook that contains:
2015-01-28 09:49:25,026 - root - ERROR - hello!
2015-01-28 09:49:25,028 - root - DEBUG - This is a debug message
2015-01-28 09:49:25,029 - root - INFO - this is an info message
2015-01-28 09:49:25,032 - root - WARNING - tbllalfhldfhd, warning.
Note that if you rerun this without restarting the IPython session, it will write duplicate entries to the file, since there would then be two file handlers defined.
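One way to make such a cell safe to re-run is to guard against adding a second file handler, e.g.:

import logging

logger = logging.getLogger()
if not any(isinstance(h, logging.FileHandler) for h in logger.handlers):
    fhandler = logging.FileHandler(filename='mylog.log', mode='a')
    fhandler.setFormatter(logging.Formatter('%(asctime)s - %(name)s - %(levelname)s - %(message)s'))
    logger.addHandler(fhandler)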
Bear in mind that stderr is the default stream for the logging module, so in IPython and Jupyter notebooks you might not see anything unless you configure the stream to be stdout:
import logging
import sys
logging.basicConfig(format='%(asctime)s | %(levelname)s : %(message)s',
                    level=logging.INFO, stream=sys.stdout)
logging.info('Hello world!')
What worked for me now (Jupyter notebook server 5.4.1, IPython 7.0.1):
import logging
logging.basicConfig()
logger = logging.getLogger('Something')
logger.setLevel(logging.DEBUG)
Now I can use the logger to print info; otherwise I would only see messages from the default level (logging.WARNING) or above.
You can configure logging by running %config Application.log_level="INFO"
For more information, see IPython kernel options
I wanted a simple and straightforward answer to this, with nicely styled output, so here's my recommendation:
import sys
import logging
logging.basicConfig(
    format='%(asctime)s [%(levelname)s] %(name)s - %(message)s',
    level=logging.INFO,
    datefmt='%Y-%m-%d %H:%M:%S',
    stream=sys.stdout,
)
log = logging.getLogger('notebook')
Then you can use log.info() (or any of the other logging levels) anywhere in your notebook, with output that looks like this:
2020-10-28 17:07:08 [INFO] notebook - Hello world
2020-10-28 17:12:22 [INFO] notebook - More info here
2020-10-28 17:12:22 [INFO] notebook - And some more
As of Python 3.8, a force parameter has been added that removes any existing handlers, which allows basicConfig to work. This worked on IPython 7.29.0 and Jupyter Lab 3.2.1.
import logging
logging.basicConfig(level=logging.DEBUG,
                    force=True)
logging.debug("test")
I set up a logger for a file and wanted it to show up in the notebook as well. It turns out adding a file handler clears out the default stream handler.
logger = logging.getLogger()
formatter = logging.Formatter('%(asctime)s - %(name)s - %(levelname)s - %(message)s')
# Setup file handler
fhandler = logging.FileHandler('my.log')
fhandler.setLevel(logging.DEBUG)
fhandler.setFormatter(formatter)
# Configure stream handler for the cells
chandler = logging.StreamHandler()
chandler.setLevel(logging.DEBUG)
chandler.setFormatter(formatter)
# Add both handlers
logger.addHandler(fhandler)
logger.addHandler(chandler)
logger.setLevel(logging.DEBUG)
# Show the handlers
logger.handlers
# Log Something
logger.info("Test info")
logger.debug("Test debug")
logger.error("Test error")
setup
import logging
# make a handler
handler = logging.StreamHandler()
formatter = logging.Formatter('%(asctime)s - %(name)s - %(levelname)s - %(message)s')
handler.setFormatter(formatter)
# add it to the root logger
logging.getLogger().addHandler(handler)
log from your own logger
# make a logger for this notebook, set verbosity
logger = logging.getLogger(__name__)
logger.setLevel('DEBUG')
# send messages
logger.debug("debug message")
logger.info("so much info")
logger.warning("you've been warned!")
logger.error("bad news")
logger.critical("really bad news")
2021-09-02 18:18:27,397 - __main__ - DEBUG - debug message
2021-09-02 18:18:27,397 - __main__ - INFO - so much info
2021-09-02 18:18:27,398 - __main__ - WARNING - you've been warned!
2021-09-02 18:18:27,398 - __main__ - ERROR - bad news
2021-09-02 18:18:27,399 - __main__ - CRITICAL - really bad news
capture logging from other libraries
logging.getLogger('google').setLevel('DEBUG')
from google.cloud import storage
client = storage.Client()
2021-09-02 18:18:27,415 - google.auth._default - DEBUG - Checking None for explicit credentials as part of auth process...
2021-09-02 18:18:27,416 - google.auth._default - DEBUG - Checking Cloud SDK credentials as part of auth process...
2021-09-02 18:18:27,416 - google.auth._default - DEBUG - Cloud SDK credentials not found on disk; not using them
...

What could cause the logging module to log a record multiple times?

I have a multi-threaded Python application that makes use of the built-in logging module. To control the logging levels and make it easier to swap the StreamHandler for a FileHandler in the future, I have created a common helper function, called by each module, to create an identical logger (apart from its name).
How should I go about troubleshooting this issue?
Key Points
Each module in the project has its own logger instance.
The sample output is generated by a single call to a logger (self._logger.info("Logger Setup"))
I have tried including the current thread name (threading.Thread.getName()) and it confirms that the same thread is making the calls that cause the multiple logs.
Logger Creation - Now working
import logging
import sys

def createSystemLogHandler(logger):
    # This is now called once, at the logger's root
    ch = logging.StreamHandler(sys.stdout)  # Normal output is to stderr, which doesn't show up on Windows' CMD
    format = logging.Formatter('%(asctime)s - %(levelname)s - %(name)s - %(message)s')
    ch.setFormatter(format)
    logger.addHandler(ch)
    return logger

def configureSystemLogger(name='', level=logging.WARNING):
    logger = logging.getLogger(name)
    logger.setLevel(level)
    logger.info("Logger Setup")
    return logger
Sample Output
2012-04-25 21:59:40,720 - INFO - HW_MGR - Logger Setup
2012-04-25 21:59:40,720 - INFO - HW_MGR - Logger Setup
2012-04-25 21:59:40,720 - INFO - HW_MGR - Logger Setup
2012-04-25 21:59:40,720 - INFO - HW_MGR - Logger Setup
2012-04-25 21:59:40,720 - INFO - HW_MGR - Logger Setup
My guess is that you've got multiple handlers (i.e., one message is being emitted multiple times; but on re-reading your question, it looks like you already knew that). To debug that, try the following (a quick inspection snippet follows this list):
Looking at logging.getLogger("").handlers (ie, the handlers on the root logger)
Checking your calls to addHandler()
Brandon Rhodes's logging_tree module
Using pdb to trace a log message's lifecycle (i.e., put a breakpoint just before a call to self._logger.info(…), then step into that function).
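For the first two checks, a quick inspection snippet (the 'HW_MGR' name is taken from the sample output; adjust to your own logger names):

import logging

for name in ('', 'HW_MGR'):
    lg = logging.getLogger(name)
    print(name or 'root', '->', lg.handlers, '| propagate =', lg.propagate)

# With logging_tree installed (pip install logging_tree):
# import logging_tree; logging_tree.printout()

If the same kind of StreamHandler shows up on several loggers in the chain, or on both a named logger and the root while propagate is True, each record will be emitted once per handler it reaches.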
