I am trying to set up a logging configuration to use across my different modules. I have followed different tutorials and Stack Overflow posts (here, here and here) to write logs into a project.log file.
While the information is displayed correctly in the console and the log.conf is read correctly, the project.log file is created but is not filled with the warning messages.
Here is how I proceeded:
The log.conf file used to write up the handlers and formatting:
[loggers]
keys=root,sLogger
[handlers]
keys=consoleHandler,fileHandler
[formatters]
keys=fileFormatter,consoleFormatter
[logger_root]
level=DEBUG
handlers=consoleHandler
[logger_sLogger]
level=DEBUG
handlers=consoleHandler,fileHandler
qualname=sLogger
propagate=0
[handler_consoleHandler]
class=StreamHandler
level=WARNING
formatter=consoleFormatter
args=(sys.stdout,)
[handler_fileHandler]
class=FileHandler
level=WARNING
formatter=fileFormatter
args=('%(logfilename)s', 'w')
[formatter_fileFormatter]
format=%(asctime)s - %(name)s - %(levelname)s - %(message)s
In the main.py file:
import logging
import logging.config
def main():
    logging.config.fileConfig(fname='./log.conf',
                              defaults={'logfilename': 'project.log'},
                              disable_existing_loggers=False)
    logger = logging.getLogger(__name__)
    logger.warning('This is a message')
In my module.py file:
import logging
logger = logging.getLogger(__name__)
logger.warning('Example of a warning message')
I found the problem! When defining formatters in the log.conf file, every name listed under [formatters] needs its own [formatter_<name>] section, and my original file declared consoleFormatter without a matching [formatter_consoleFormatter] section. I also added fileHandler to the root logger's handlers.
I changed log.conf to:
[loggers]
keys=root,sLogger
[handlers]
keys=consoleHandler,fileHandler
[formatters]
keys=consoleFormatter, fileFormatter
[logger_root]
level=DEBUG
handlers=consoleHandler, fileHandler
[logger_sLogger]
level=DEBUG
handlers=consoleHandler, fileHandler
qualname=sLogger
propagate=0
[handler_consoleHandler]
class=StreamHandler
level=WARNING
formatter=consoleFormatter
args=(sys.stdout,)
[handler_fileHandler]
class=FileHandler
level=WARNING
formatter=fileFormatter
args=('%(logfilename)s','a')
[formatter_fileFormatter]
format=%(asctime)s - %(name)s - %(levelname)s - %(message)s
[formatter_consoleFormatter]
format=%(asctime)s - %(name)s - %(levelname)s - %(message)s
NOTE the added [formatter_consoleFormatter] section at the end, the fileHandler added to the root logger's handlers, and the file handler now opening the log in append ('a') mode.
The rest is working perfectly fine.
To make it easier to play around with, I put it in a function (which is probably redundant, I suppose):
import logging.config
import os

def base_logger(path_conf, fname, savedir):
    '''
    Base logging function, to be called once in the
    main script and then used in every other module.

    Args:
        - path_conf: str, path to the configuration
          file (log.conf)
        - fname: str, name of the saved log file
        - savedir: str, path of the saving directory

    Returns:
        - None; configures the logging setup used by
          the other scripts

    Example:
        In main.py:
            def main():
                base_logger(path_conf='./log.conf', fname='file', savedir='project')
        In module.py:
            # AFTER all the imports
            logger = logging.getLogger(__name__)  # To get the module name reported in the log file
            ...
            logger.error(f'Error message regarding variable {var}')

    The logger initialization (logger = logging.getLogger(__name__))
    has to be done in every file that will use logging!
    '''
    logging.config.fileConfig(fname=path_conf,
                              defaults={'logfilename': os.path.join(savedir, f'{fname}.log')},
                              disable_existing_loggers=False)
I would add that the logger = logging.getLogger(__name__) line has to be placed after all the imports, to make sure the module name displayed in the log is that of the current module and not one picked up from an imported module.
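To make the %(logfilename)s substitution concrete, here is a minimal, self-contained sketch of the mechanism. The config is trimmed to the file handler only, and the temporary paths are made up for the example:

```python
import logging
import logging.config
import os
import tempfile
import textwrap

# A trimmed-down log.conf: file handler only, with the log file name
# left as a %(logfilename)s placeholder to be filled in via `defaults`.
CONF = textwrap.dedent("""\
    [loggers]
    keys=root

    [handlers]
    keys=fileHandler

    [formatters]
    keys=fileFormatter

    [logger_root]
    level=DEBUG
    handlers=fileHandler

    [handler_fileHandler]
    class=FileHandler
    level=WARNING
    formatter=fileFormatter
    args=('%(logfilename)s', 'a')

    [formatter_fileFormatter]
    format=%(asctime)s - %(name)s - %(levelname)s - %(message)s
    """)

tmpdir = tempfile.mkdtemp()
conf_path = os.path.join(tmpdir, 'log.conf')
with open(conf_path, 'w') as f:
    f.write(CONF)

log_path = os.path.join(tmpdir, 'project.log')
logging.config.fileConfig(conf_path,
                          defaults={'logfilename': log_path},
                          disable_existing_loggers=False)

# Any module can now do this and have its warnings land in project.log.
logger = logging.getLogger(__name__)
logger.warning('This is a message')

with open(log_path) as f:
    log_contents = f.read()
```

The `defaults` mapping feeds configparser interpolation, which is why `%(logfilename)s` in the args= line gets replaced before the handler is built, while the format= line is read raw and its `%(asctime)s`-style placeholders survive for the formatter.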
I have the following Python logging configuration file:
[loggers]
keys=root,paramiko
[handlers]
keys=consoleHandler,fileHandler
[formatters]
keys=consoleFormatter,fileFormatter
[logger_paramiko]
level=CRITICAL
handlers=consoleHandler,fileHandler
qualname=paramiko
[logger_root]
level=DEBUG
handlers=consoleHandler,fileHandler
[handler_consoleHandler]
class=StreamHandler
level=INFO
formatter=consoleFormatter
args=(sys.stdout,)
[handler_fileHandler]
class=FileHandler
level=DEBUG
formatter=fileFormatter
args=('run.log', 'w')
[formatter_consoleFormatter]
format=[%(asctime)s - %(levelname)s] %(message)s
[formatter_fileFormatter]
format=[%(asctime)s - %(pathname)s:%(lineno)s - %(levelname)s] %(message)s
As you can see, I'm logging to console and to file. The name of the file is run.log. I want to be able to prepend a timestamp to the file name, i.e. name my log file 2019-08-08__18:13:40-run.log. I searched online but couldn't find anything. How can I do it through a configuration file?
The way that Python reads the args key from the config is to run it through eval(), so you can put any valid expression there. Something that is very likely different on every run would be:
args=(str(time.time()) + '.log', 'w')  # time is already available: args is eval'd in the logging module's namespace
args=(str(hash(' ')) + '.log', 'w')    # builtins only; a str hash is randomized per run, so it is almost certainly unique
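To illustrate, here is a self-contained sketch using temporary paths made up for the example. The key point is that args= is eval()'d with the logging module's globals as the namespace, and logging itself imports time, so time can be used directly in the config file:

```python
import glob
import logging
import logging.config
import os
import tempfile
import textwrap

tmpdir = tempfile.mkdtemp()

# args= is eval()'d when the config is read, in the logging module's
# namespace, so time.time() works without any import in user code.
conf_text = textwrap.dedent(f"""\
    [loggers]
    keys=root

    [handlers]
    keys=fileHandler

    [formatters]
    keys=fileFormatter

    [logger_root]
    level=DEBUG
    handlers=fileHandler

    [handler_fileHandler]
    class=FileHandler
    level=DEBUG
    formatter=fileFormatter
    args=({tmpdir!r} + '/' + str(int(time.time())) + '-run.log', 'w')

    [formatter_fileFormatter]
    format=[%(asctime)s - %(levelname)s] %(message)s
    """)

conf_path = os.path.join(tmpdir, 'logging.conf')
with open(conf_path, 'w') as f:
    f.write(conf_text)

logging.config.fileConfig(conf_path, disable_existing_loggers=False)
logging.getLogger('demo').info('timestamped file')

# Exactly one file named <epoch>-run.log should now exist.
created = glob.glob(os.path.join(tmpdir, '*-run.log'))
contents = open(created[0]).read()
```

If you want a strftime-style stamp like 2019-08-08__18:13:40, remember that % is special to configparser interpolation, so it must be doubled inside args= (e.g. time.strftime('%%Y-%%m-%%d__%%H:%%M:%%S')). An alternative that sidesteps eval() entirely is to build the name in code and pass it through fileConfig's defaults= mapping, as in the %(logfilename)s pattern shown earlier on this page.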
I configure logging in all modules in a project like this:
import logging
import logging.config
logging.config.fileConfig('../logging.ini',defaults={'name':__name__})
logging.getLogger("cassandra").setLevel("WARNING")  # move this line into logging.ini
Now I would like to move that last line into the config file below. What is the best way of doing this? Right now I have copy/pasted the line into each module, even though I have a shared config file. :S
The configs I found only give examples for self-created loggers, not for overriding the properties of loggers that come from imported libraries.
logging.ini
[loggers]
keys=root
[handlers]
keys=consoleHandler,fileHandler
[formatters]
keys=simpleFormatter
[logger_root]
level=INFO
handlers=consoleHandler,fileHandler
[handler_consoleHandler]
class=StreamHandler
level=INFO
formatter=simpleFormatter
args=(sys.stdout,)
[handler_fileHandler]
class=FileHandler
level=DEBUG
formatter=simpleFormatter
args=('../logs/%(name)s.log',)
[formatter_simpleFormatter]
format=%(asctime)s - %(name)s - %(levelname)s - %(message)s
datefmt=
Try this in your logging.ini:
[loggers]
keys=root,cassandra
[logger_cassandra]
level=WARNING
handlers=consoleHandler,fileHandler
qualname=cassandra
With this config you can write your modules like
import logging.config
logging.config.fileConfig('logging.ini', defaults={'name': __name__})
logging.warning("this is my logging message")
Note: I recommend using logging.config.dictConfig. It seems more straightforward to me and offers many options for your configuration. You may want to check out this example dict config, or even this one with colored console logging.
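For reference, here is a dictConfig sketch roughly equivalent to the ini above; the file handler is omitted for brevity, and names like 'simple' and 'console' are just labels chosen for this example:

```python
import logging
import logging.config

LOGGING = {
    'version': 1,
    'disable_existing_loggers': False,
    'formatters': {
        'simple': {
            'format': '%(asctime)s - %(name)s - %(levelname)s - %(message)s',
        },
    },
    'handlers': {
        'console': {
            'class': 'logging.StreamHandler',
            'level': 'INFO',
            'formatter': 'simple',
            'stream': 'ext://sys.stdout',
        },
    },
    'loggers': {
        # Quiet a chatty third-party library without touching its code.
        'cassandra': {'level': 'WARNING'},
    },
    'root': {'level': 'INFO', 'handlers': ['console']},
}

logging.config.dictConfig(LOGGING)
```

The 'cassandra' entry here plays the same role as the [logger_cassandra] / qualname=cassandra pair in the ini format: it configures the logger that the library itself creates.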
I'm new to Python and I'm trying to log to file and to console. I saw this question:
logger configuration to log to file and print to stdout
which was very helpful, but I saw that there is a way to configure an ini file and then use it in every module.
The problem is that it doesn't seem to take the properties from the ini file,
and if I don't explicitly define the formatter in the code it uses the default format instead of the one I gave it in the ini file:
import logging
import os
from logging.config import fileConfig

configFolder = os.getcwd() + os.sep + 'Configuration'
fileConfig(configFolder + os.sep + 'logging_config.ini')
logger = logging.getLogger(__name__)
# create a file handler
handler = logging.FileHandler('logger.log')
handler.setLevel(logging.INFO)
# add the handlers to the logger
logger.addHandler(handler)
logger.info('hello')
output would be just:
hello
instead of :
2016-08-08 15:16:42,954 - __main__ - INFO - hello
This is my ini file:
[loggers]
keys=root
[handlers]
keys=stream_handler
[formatters]
keys=formatter
[logger_root]
level=INFO
handlers=stream_handler
[handler_stream_handler]
class=StreamHandler
level=INFO
formatter=formatter
args=(sys.stderr,)
[formatter_formatter]
format=%(asctime)s %(name)-12s %(levelname)-8s %(message)s
The formatter=formatter reference is actually fine: it names the key listed under [formatters], and fileConfig derives the [formatter_formatter] section name from it. The unformatted output comes from the FileHandler you add in code: a handler created manually does not inherit any formatter from the ini file, so it writes the bare message to logger.log. Either set a formatter on that handler with handler.setFormatter(...), or define the file handler in the ini file alongside the stream handler.
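The plain "hello" in logger.log comes from the FileHandler added in code without a formatter; attaching one fixes the file output. A self-contained sketch (the logger name and temp path are made up for the example):

```python
import logging
import os
import tempfile

log_path = os.path.join(tempfile.mkdtemp(), 'logger.log')

logger = logging.getLogger('ini_demo')  # stands in for getLogger(__name__)
logger.setLevel(logging.INFO)

# A handler created in code knows nothing about the ini file's formatter,
# so attach the same format string explicitly.
handler = logging.FileHandler(log_path)
handler.setLevel(logging.INFO)
handler.setFormatter(logging.Formatter('%(asctime)s %(name)-12s %(levelname)-8s %(message)s'))
logger.addHandler(handler)

logger.info('hello')

with open(log_path) as f:
    first_line = f.readline()
```

With the formatter attached, the line in the file carries the timestamp, logger name, and level, matching the ini's format string.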
I want to write into two log files by using two loggers with following .ini config file:
[loggers]
keys=root,teja
[handlers]
keys=fileHandler,tejaFileHandler
[formatters]
keys=simpleFormatter
[logger_teja]
level=DEBUG
handlers=tejaFileHandler
qualname='tejaLogger'
[logger_root]
level=DEBUG
handlers=fileHandler
[handler_fileHandler]
class=logging.FileHandler
level=DEBUG
formatter=simpleFormatter
args=("error.log", "a")
[handler_tejaFileHandler]
class=logging.FileHandler
level=DEBUG
formatter=simpleFormatter
args=("teja.log", "a")
[formatter_simpleFormatter]
format=%(asctime)s - %(name)s - %(levelname)s - %(message)s
And I am using this configuration in my Python code as follows:
import logging
import logging.config
# load my module
import my_module
# load the logging configuration
logging.config.fileConfig('logging.ini')
logger1 = logging.getLogger('root')
logger1.info('Hi how are you?')
logger2 = logging.getLogger('teja')
logger2.debug('checking teja logger?')
I see that logs are written to error.log file whereas no logs are written to teja.log file. Please correct me if I am doing something silly...
You named your logger object 'tejaLogger':
[logger_teja]
level=DEBUG
handlers=tejaFileHandler
qualname='tejaLogger'
# ^^^^^^^^^^^^
Note that the quotes are part of the name.
but your test code picks up teja instead:
logger2=logging.getLogger('teja')
Rename one or the other. Although you could use logging.getLogger("'tejaLogger'"),
you probably want to drop the quotes and / or rename the logger to what you expected it to be:
[logger_teja]
level=DEBUG
handlers=tejaFileHandler
qualname=teja
It turns out that the problem is on this line (in the [logger_teja] section):
qualname='tejaLogger'
If you add this to your code (it prints all the current loggers):
print(logging.Logger.manager.loggerDict)
You get:
{"'tejaLogger'": <logging.Logger object at 0x7f89631170b8>}
Which means that your logger is called literally 'tejaLogger'. Using:
logger2 = logging.getLogger("'tejaLogger'")
Actually works fine. Either do that, or change qualname='tejaLogger' to qualname=teja.
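With qualname=teja, the two-file setup works end to end. Here is a self-contained sketch with temporary paths made up for the example; propagate=0 is an addition of this sketch, so teja's messages don't also reach the root logger's error.log:

```python
import logging
import logging.config
import os
import tempfile
import textwrap

tmpdir = tempfile.mkdtemp()
error_log = os.path.join(tmpdir, 'error.log')
teja_log = os.path.join(tmpdir, 'teja.log')

conf = textwrap.dedent(f"""\
    [loggers]
    keys=root,teja

    [handlers]
    keys=fileHandler,tejaFileHandler

    [formatters]
    keys=simpleFormatter

    [logger_root]
    level=DEBUG
    handlers=fileHandler

    [logger_teja]
    level=DEBUG
    handlers=tejaFileHandler
    qualname=teja
    propagate=0

    [handler_fileHandler]
    class=logging.FileHandler
    level=DEBUG
    formatter=simpleFormatter
    args=({error_log!r}, 'a')

    [handler_tejaFileHandler]
    class=logging.FileHandler
    level=DEBUG
    formatter=simpleFormatter
    args=({teja_log!r}, 'a')

    [formatter_simpleFormatter]
    format=%(asctime)s - %(name)s - %(levelname)s - %(message)s
    """)

conf_path = os.path.join(tmpdir, 'logging.ini')
with open(conf_path, 'w') as f:
    f.write(conf)

logging.config.fileConfig(conf_path, disable_existing_loggers=False)

logging.getLogger('root').info('Hi how are you?')        # reaches error.log via the root logger
logging.getLogger('teja').debug('checking teja logger?')  # goes to teja.log only

error_contents = open(error_log).read()
teja_contents = open(teja_log).read()
```

Note that getLogger('root') creates a logger literally named "root" that propagates to the real root logger; that is why the original code appeared to work for error.log even before the qualname fix.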
I am implementing Python logging in my application, and I want to be able to leverage the "default" root settings. I want to use root settings because I don't want to have to define a logger per module in a config file.
When I turn on DEBUG level logging for the root logger, I am running into an issue with the QPID Python Client API. My log files get flooded with qpid debug statements:
2011-03-16 09:16:18,664 - qpid.messaging.io.ops - DEBUG - SENT[8de6b2c]: ..
2011-03-16 09:16:18,667 - qpid.messaging.io.raw - DEBUG - ..
2011-03-16 09:16:18,668 - qpid.messaging.io.raw - DEBUG - READ[8de6b2c]: ..
2011-03-16 09:16:18,668 - qpid.messaging.io.ops - DEBUG - ..
Etc..
So two main questions:
1) Is there a way to enable logging for just my modules without defining a logger per module? In other words, is there a way to share "logger settings", so that instead of defining a logger_ section per logger I can default the settings?
Something like:
[logger_shared_settings]
loggers = logger_A,logger_B,logger_C,logger_D
level=DEBUG
2) Or, how can I filter out the qpid package logging via a config file?
Here is the log.conf file:
[loggers]
keys=root
[handlers]
keys=consoleHandler,fileHandler,nullHandler
[formatters]
keys=simpleFormatter
[logger_root]
level=DEBUG
handlers=consoleHandler,fileHandler
[handler_consoleHandler]
class=StreamHandler
level=DEBUG
formatter=simpleFormatter
args=(sys.stdout,)
[handler_fileHandler]
class=logging.handlers.RotatingFileHandler
level=DEBUG
formatter=simpleFormatter
args=('out.log',)
Here is what I was trying to avoid:
[loggers]
keys=root, a, b, c, d
[handlers]
keys=consoleHandler,fileHandler,nullHandler
[formatters]
keys=simpleFormatter
[logger_root]
level=ERROR
handlers=nullHandler
[logger_a]
level=DEBUG
handlers=consoleHandler,fileHandler
[logger_b]
level=DEBUG
handlers=consoleHandler,fileHandler
[logger_c]
level=DEBUG
handlers=consoleHandler,fileHandler
With Python 2.7+ you can attach a NullHandler to the qpid logger:
[logger_qpid]
level=NOTSET
handlers=nullHandler
qualname=qpid
propagate=0
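For this to load, the nullHandler listed under [handlers] also needs its own section in the file. A sketch of the missing piece (NullHandler here resolves to logging.NullHandler, available since Python 2.7):

```ini
[handler_nullHandler]
class=NullHandler
level=DEBUG
args=()
```

With propagate=0 on the qpid logger, its records stop at the NullHandler and never reach the root logger's console and file handlers.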