I have a logging function with hardcoded logfile name (LOG_FILE):
setup_logger.py
import logging
import sys
FORMATTER = logging.Formatter("%(levelname)s - %(asctime)s - %(name)s - %(message)s")
LOG_FILE = "my_app.log"
def get_console_handler():
    console_handler = logging.StreamHandler(sys.stdout)
    console_handler.setFormatter(FORMATTER)
    return console_handler

def get_file_handler():
    file_handler = logging.FileHandler(LOG_FILE)
    file_handler.setFormatter(FORMATTER)
    return file_handler

def get_logger(logger_name):
    logger = logging.getLogger(logger_name)
    logger.setLevel(logging.DEBUG)  # better to have too much log than not enough
    logger.addHandler(get_console_handler())
    logger.addHandler(get_file_handler())
    # with this pattern, it's rarely necessary to propagate the error up to parent
    logger.propagate = False
    return logger
I use this in various modules this way:
main.py
from _Core import setup_logger as log
def main(incoming_feed_id: int, type: str) -> None:
    logger = log.get_logger(__name__)
    ...rest of my code
database.py
from _Core import setup_logger as log
logger = log.get_logger(__name__)
class Database:
    ...rest of my code
etl.py
import _Core.database as db
from _Core import setup_logger as log
logger = log.get_logger(__name__)
class ETL:
    ...rest of my code
What I want to achieve is to always change the logfile's path and name on each run based on arguments passed to the main() function in main.py.
Simplified example:
If main() receives the following arguments: incoming_feed_id = 1, type = simple_load, the logfile's name should be 1simple_load.log.
I am not sure what the best practice is here. What I came up with is probably the worst option: add a log_file parameter to get_logger() in setup_logger.py so that I can pass a filename from main() in main.py. But then I would have to pass the parameters from main on to the other modules as well, which does not seem right, since for example the database class is not even used in main.py.
I don't know enough about your application to be sure this will work for you, but you can configure just the root logger in main() by calling get_logger('', filename_based_on_cmdline_args); anything logged through the other loggers is then passed up to the root logger's handlers, as long as the configured logger levels allow it. The way you are doing it now also seems to open multiple handlers pointing at the same file, which is sub-optimal. The other modules can simply use logging.getLogger(__name__) instead of log.get_logger(__name__).
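For example, a minimal sketch of that idea (configure_root_logger is a hypothetical helper name; adjust the format and levels to your setup):
# main.py -- minimal sketch
import logging
import sys

FORMATTER = logging.Formatter("%(levelname)s - %(asctime)s - %(name)s - %(message)s")

def configure_root_logger(log_file):
    root = logging.getLogger()          # no name == the root logger
    root.setLevel(logging.DEBUG)
    console_handler = logging.StreamHandler(sys.stdout)
    console_handler.setFormatter(FORMATTER)
    root.addHandler(console_handler)
    file_handler = logging.FileHandler(log_file)
    file_handler.setFormatter(FORMATTER)
    root.addHandler(file_handler)

def main(incoming_feed_id: int, type: str) -> None:
    configure_root_logger(f"{incoming_feed_id}{type}.log")  # e.g. "1simple_load.log"
    logger = logging.getLogger(__name__)
    logger.info("run started")
database.py and etl.py then only need logger = logging.getLogger(__name__); their records propagate up to the root logger's handlers, so they end up in the per-run file without passing any parameters around.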
I have three modules:
- main module
- module A
- module B
The main module uses the functions of the modules. Those functions call logger.info, logger.warning and so on to report what went wrong in the code.
Objective:
- Log everything in main, A and B.
- A way to set the logging level of A and B from the main Jupyter notebook at any moment (e.g. when I need more information about a function of A, switch its level from INFO to DEBUG).
By the way, the main script has:
import logging, sys

# create logger
logger = logging.getLogger('logger')
logger.setLevel(logging.DEBUG)

# create file handler and console handler and set their levels to debug
fh = logging.FileHandler('process.log')
fh.setLevel(logging.DEBUG)
ch = logging.StreamHandler(sys.stdout)
ch.setLevel(logging.DEBUG)

# create formatter
formatter = logging.Formatter('%(asctime)s - %(name)s - %(levelname)s - %(message)s\n')

# add formatter to the handlers
fh.setFormatter(formatter)
ch.setFormatter(formatter)

# add the handlers to the logger
logger.addHandler(fh)
logger.addHandler(ch)
I want to use a Logger object and configure it, rather than using logging.basicConfig. But if that is not possible, other ways are fine too.
If you do:
logger = logging.getLogger('logger')
in module A and module B, then they will have access to the same logger as your main file. From there you can set whatever level you want at any time, e.g.
# ../module_a.py
import logging
logger = logging.getLogger('logger')
logger.setLevel(whatever) # Now all instances of "logger" will be set to that level.
Basically, loggers are globally registered by name and accessible through the logging module directly from any other modules.
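A minimal sketch of how that looks from the notebook (module_a and do_something are hypothetical names):
# module_a.py
import logging
logger = logging.getLogger('logger')   # the same Logger object as in the main script

def do_something():
    logger.debug('details you only want while debugging')
    logger.info('did something')

# in the notebook, after the handler setup shown in the question
import logging
import module_a

logging.getLogger('logger').setLevel(logging.INFO)    # normal runs: only INFO and above
module_a.do_something()

logging.getLogger('logger').setLevel(logging.DEBUG)   # need more detail from A
module_a.do_something()                               # now the DEBUG record shows up too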
I want to create 2 loggers which log to 2 different outputs in Python. The loggers are configured in a single module which is used by 2 other main modules. My problem is that, because I configured the root logger so that my setup can be shared by the 2 main modules, I cannot separate the logging output.
How can this be done?
Here is how I configure my logging:
# logger.py
import logging
def setup_logging():
    # If I give name to my getLogger, it will not be configuring root logger and my changes here cannot cascade to all other child loggers.
    logger = logging.getLogger()
    streamHandler = logging.StreamHandler()
    streamFormat = logging.Formatter('%(name)s - %(message)s')
    streamHandler.setFormatter(streamFormat)
    logger.addHandler(streamHandler)
# main1.py
import logging
from logger import setup_logging
from submodule import log_me
setup_logging()
logger = logging.getLogger('main1')
logger.info('I am from main1')
log_me()
# main2.py
import logging
from logger import setup_logging
from submodule import log_me
setup_logging()
logger = logging.getLogger('main2')
logger.info('I am from main2')
log_me()
# submodule.py
import logging
logger = logging.getLogger('submodule')
def log_me():
    logger.info('I am from submodule')
Result from main1:
main1 - I am from main1
submodule - I am from submodule
Result from main2:
main2 - I am from main2
submodule - I am from submodule
Here is what I am trying to achieve (which of course fails beautifully).
# logger.py
import logging
def setup_logging():
    logger = logging.getLogger()
    streamHandler = logging.StreamHandler()
    streamFormat = logging.Formatter('%(name)s - %(message)s')
    streamHandler.setFormatter(streamFormat)
    logger.addHandler(streamHandler)

def setup_second_logging():
    logger2 = logging.getLogger()
    fileHandler = logging.FileHandler('./out.log')
    fileFormat = logging.Formatter('%(name)s - %(message)s')
    fileHandler.setFormatter(fileFormat)
    logger2.addHandler(fileHandler)
--same main1.py--
--same main2.py--
# submodule.py
import logging
from logger import setup_second_logging
setup_second_logging()
logger = logging.getLogger('submodule')
def log_me():
    logger.info('I am from submodule')
Result from main1:
main1 - I am from main1
# no submodule since it is logged to file
Result from main2:
main2 - I am from main2
# no submodule since it is logged to file
You can try to use the logger hierarchy. For example:
logger = logging.getLogger(__name__)
In that case you can attach specific handlers to the submodule loggers in logger.py.
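A minimal sketch of that idea (the handler placement is an assumption; the point is that the 'submodule' branch gets its own handler and stops propagating to the root):
# logger.py
import logging

def setup_logging():
    root = logging.getLogger()
    root.setLevel(logging.INFO)
    streamHandler = logging.StreamHandler()
    streamHandler.setFormatter(logging.Formatter('%(name)s - %(message)s'))
    root.addHandler(streamHandler)

    # route everything under the 'submodule' logger to a file instead of the console
    subLogger = logging.getLogger('submodule')
    fileHandler = logging.FileHandler('./out.log')
    fileHandler.setFormatter(logging.Formatter('%(name)s - %(message)s'))
    subLogger.addHandler(fileHandler)
    subLogger.propagate = False
main1.py and main2.py keep calling setup_logging() once, submodule.py keeps using logging.getLogger('submodule') (or __name__), and its records land in out.log while the main modules keep writing to the console.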
I have the following lines of code that initialize logging.
I comment one out and leave the other to be used.
The problem I'm facing is that the one meant to log to a file is not logging to the file; it logs to the console instead.
Please help.
For logging to Console:
logging.basicConfig(level=logging.INFO,
                    format='%(asctime)s [%(levelname)s] (%(threadName)-10s) %(message)s',)
For file logging:
logging.basicConfig(filename='server-soap.1.log', level=logging.INFO,
                    format='%(asctime)s [%(levelname)s] (%(threadName)-10s) %(message)s')
I found out what the problem was.
It was the ordering of the imports relative to the logging configuration.
Because of the poor ordering, one of the libraries I imported before calling logging.basicConfig() had already configured logging itself, and that configuration took precedence over the logging.basicConfig() call I made later.
Below is how I needed to order it:
import logging
## for file logging
logging.basicConfig(filename='server-soap.1.log',
                    level=logging.INFO,
                    format='%(asctime)s %(levelname)s %(threadName)-10s %(message)s',)
from pysimplesoap.server import SoapDispatcher, SOAPHandler
from BaseHTTPServer import HTTPServer
import time,random,datetime,pytz,sys,threading
from datetime import timedelta
#DB
import psycopg2, psycopg2.extras
from psycopg2.pool import ThreadedConnectionPool
#ESB Call
from suds import WebFault
from suds.client import Client
But the faulty ordering that I initially had was:
from pysimplesoap.server import SoapDispatcher, SOAPHandler
from BaseHTTPServer import HTTPServer
import logging
import time,random,datetime,pytz,sys,threading
from datetime import timedelta
#DB
import psycopg2, psycopg2.extras
from psycopg2.pool import ThreadedConnectionPool
#ESB Call
from suds import WebFault
from suds.client import Client
## for file logging
logging.basicConfig(filename='server-soap.1.log',
                    level=logging.INFO,
                    format='%(asctime)s %(levelname)s %(threadName)-10s %(message)s',)
"Changed in version 3.8: The force argument was added." I think it's a better choice for new version.
For older versions (< 3.8):
From the source code of logging I found the following:
This function does nothing if the root logger already has handlers
configured. It is a convenience method intended for use by simple scripts
to do one-shot configuration of the logging package.
So, if some module we import called the basicConfig() method before us, our call will do nothing.
A solution I found to work is to reload logging before your own call to basicConfig(), for example:
def init_logger(*, fn=None):
    # !!! here
    from importlib import reload  # Python 2.x doesn't need this import; reload is a builtin there
    reload(logging)

    logging_params = {
        'level': logging.INFO,
        'format': '%(asctime)s__[%(levelname)s, %(module)s.%(funcName)s](%(name)s)__[L%(lineno)d] %(message)s',
    }
    if fn is not None:
        logging_params['filename'] = fn

    logging.basicConfig(**logging_params)
    logging.error('init basic configure of logging success')
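For example, call it at the top of your script (the filename here is just an illustration):
init_logger(fn='server-soap.1.log')   # log to a file
init_logger()                         # log to the console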
In case basicConfig() does not work:
logger = logging.getLogger('Spam Logger')
logger.setLevel(logging.DEBUG)
# create file handler which logs even debug messages
fh = logging.FileHandler('spam.log')
fh.setLevel(logging.DEBUG)
# create console handler with a higher log level
ch = logging.StreamHandler()
ch.setLevel(logging.DEBUG)
# create formatter and add it to the handlers
formatter = logging.Formatter('%(asctime)s - %(name)s - %(levelname)s - %(message)s')
ch.setFormatter(formatter)
fh.setFormatter(formatter)
# add the handlers to logger
logger.addHandler(ch)
logger.addHandler(fh)
# 'application' code
logger.debug('debug Spam message')
logging.debug('debug Spam message')
logger.info('info Ham message')
logger.warning('warn Eggs message')
logger.error('error Spam and Ham message')
logger.critical('critical Ham and Eggs message')
which gives me the following output:
2019-06-20 11:33:48,967 - Spam Logger - DEBUG - debug Spam message
2019-06-20 11:33:48,968 - Spam Logger - INFO - info Ham message
2019-06-20 11:33:48,968 - Spam Logger - WARNING - warn Eggs message
2019-06-20 11:33:48,968 - Spam Logger - ERROR - error Spam and Ham message
2019-06-20 11:33:48,968 - Spam Logger - CRITICAL - critical Ham and Eggs message
For reference, the Python Logging Cookbook is worth reading.
I ran into the same problem and fixed it by passing the following arguments to basicConfig:
logging.basicConfig(
    level="WARNING",
    format="%(asctime)s - %(name)s - [ %(message)s ]",
    datefmt='%d-%b-%y %H:%M:%S',
    force=True,
    handlers=[
        logging.FileHandler("debug.log"),
        logging.StreamHandler()
    ])
As you can see, passing force=True overrides any earlier basicConfig() calls.
Another solution that worked for me: instead of tracking down which module imports logging or even calls basicConfig before me, just call setLevel again after basicConfig.
import os
import logging

# default to INFO if the environment variable is not set
RUNTIME_DEBUG_LEVEL = os.environ.get('RUNTIME_DEBUG_LEVEL', 'INFO').upper()

LOGGING_KWARGS = {
    'level': getattr(logging, RUNTIME_DEBUG_LEVEL)
}

logging.basicConfig(**LOGGING_KWARGS)
# set the level again on the root logger, in case an earlier basicConfig() already won
logging.getLogger().setLevel(getattr(logging, RUNTIME_DEBUG_LEVEL))
Sort of crude, seems hacky, fixed my problem, worth a share.
If you just want to set the log level of all loggers, then instead of ordering your imports after the logging config, simply set the level on the root logger:
# Option 1:
logging.root.setLevel(logging.INFO)
# Option 2 - make it configurable:
# env variable + default value INFO
logging.root.setLevel(os.environ.get('LOG_LEVEL', logging.INFO))
Like vacing's answer mentioned, basicConfig has no effect if the root logger already has handlers configured.
I was using pytest, which seems to install its own handlers, so the default logging setup with log level WARNING stayed active and it looked as if my app failed to log; but this only happens when executing unit tests with pytest. In a normal application run the logs are produced as expected, which is enough for my use case.
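If you do need your own configuration while running under pytest, the force=True approach from above applies here as well (a minimal sketch, assuming Python 3.8+):
import logging

# discard the handlers pytest (or any other code) already attached to the root logger
logging.basicConfig(level=logging.DEBUG, force=True)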
I am really confused about this filter thing in logging. I have read the docs and logging cookbook.
I have an application written in several files. Each file has a class and its exceptions.
- main file: mcm
- in mcm I import configurator and instantiate its class
- in configurator I import rosApi and instantiate its class
What I want to achieve:
- in the main file, decide which modules I want to log from and at which levels.
- one handler for everything, configurable in the main file.
The idea is that I would like to turn debugging of given modules on and off in one place, customizable per run with an option passed to the main file.
For example:
If I pass -d, it will (additionally) print all debug information from configurator but not from rosApi.
If I pass -D, it will print all debug information from configurator AND rosApi.
What I do is create a logger module, something along these lines:
import os
import logging
import logging.handlers  # needed for RotatingFileHandler
logger = logging.getLogger()
fh = logging.handlers.RotatingFileHandler(logfile, maxBytes=10000000, backupCount=10)
fm = logging.Formatter('%(asctime)s %(module)s %(levelname)s %(message)s')
fh.setFormatter(fm)
logger.addHandler(fh)
level = os.environ['LOGGING'].upper()
level = getattr(logging, level)
logging.getLogger().setLevel(level)
then I do an import logger at the top of all my modules.
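One way to get the per-module switches described in the question (a sketch using argparse; the flag handling and the module names mcm, configurator and rosApi are assumptions, and it relies on each module using logging.getLogger(__name__)):
# mcm (main file) -- sketch
import argparse
import logging

parser = argparse.ArgumentParser()
parser.add_argument('-d', action='store_true', help='debug output from configurator')
parser.add_argument('-D', action='store_true', help='debug output from configurator and rosApi')
args = parser.parse_args()

# the single handler stays on the root logger (as in the logger module above);
# per-module verbosity is controlled through the named loggers
logging.getLogger('configurator').setLevel(logging.DEBUG if (args.d or args.D) else logging.INFO)
logging.getLogger('rosApi').setLevel(logging.DEBUG if args.D else logging.INFO)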