Python logging - merging multiple configuration files

I am working on a project where we have a core application that loads multiple plugins.
Each plugin has its own configuration file, and the core application has one as well.
We are using the excellent logging module from Python's standard library.
The logging module includes the ability to load the logging configuration from an .ini file.
However, if you load a second configuration file, the previous configuration is discarded and only the new one is used.
What I would like to do is split my logging configuration into multiple files, so that the application can load its own configuration file first and then load each plugin's file, merging its logging configuration into the main one.
Note: fileConfig has an option called disable_existing_loggers that can be set to False. However, this only keeps existing loggers alive, but it still clears the internal map of handlers (which means that a plugin's configuration cannot use a handler defined in the application's config file).
I could merge the files manually to produce my own config, but I'd rather avoid that.
Thanks.
To make it clearer, I'd like to do something like this:
# application.ini
[loggers]
keys=root,app
[handlers]
keys=rootHandler,appHandler
[formatters]
keys=myformatter
[logger_root]
# stuff
[handler_rootHandler]
# stuff
[formatter_myformatter]
# stuff
...
# plugin.ini
[loggers]
keys=pluginLogger # no root logger
[handlers]
keys=pluginHandler # no root handler
# no formatters section
[logger_pluginLogger]
# stuff
formatter=myformatter # using the formatter from application.ini

I usually do this with logging.config.dictConfig and the PyYAML package, which lets you load the content of a configuration file as a dict object.
The only additional thing needed is a small helper class to handle configuration overwrites/add-ons:
import yaml

class Configuration(dict):
    def __init__(self, config_file_path=None, overwrites=None):
        # Load the base configuration from a YAML file.
        with open(config_file_path) as config_file:
            config = yaml.safe_load(config_file)  # safe_load avoids constructing arbitrary objects
        super(Configuration, self).__init__(config)
        if overwrites is not None:
            for overwrite_key, value in overwrites.items():
                self.apply_overwrite(self, overwrite_key, value)

    def apply_overwrite(self, node, key, value):
        # Recurse into nested dicts so that only leaf values are replaced;
        # everything else from the base configuration is preserved.
        if isinstance(value, dict):
            for item in value:
                self.apply_overwrite(node[key], item, value[item])
        else:
            node[key] = value
For example, if your main configuration is:
logger:
  version: 1
  disable_existing_loggers: False
  formatters:
    simple:
      format: '%(levelname)s: Module: %(name)s Msg: %(message)s'
  handlers:
    file:
      level: DEBUG
      class: logging.handlers.RotatingFileHandler
      maxBytes: 10000000
      backupCount: 50
      formatter: simple
      filename: '/tmp/log1.log'
  root:
    handlers: [file]
    level: DEBUG
and your overwrite is:
logger:
  handlers:
    file:
      filename: '/tmp/log2.log'
you can get your overwritten logger like this:
from configuration import Configuration
from logging.config import dictConfig
import logging

if __name__ == '__main__':
    config = Configuration('standard.yml', overwrites=Configuration('overwrite.yml'))
    dictConfig(config['logger'])
    logger = logging.getLogger(__name__)
    logger.info('I logged it')
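Applied to the question above, the application's YAML file would play the role of standard.yml and each plugin's file would be loaded as an overwrite, so handlers and formatters defined by the application stay available to the plugins' logger sections.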

I couldn't find a way to do what I wanted, so I ended up rolling my own class to do it.
Here it is as a convenient GitHub gist.

Related

Disable logger of specific python module in logging config file

I am using the logging config file below, loaded via fileConfig. I would like to configure the logging behavior of imported modules. For example, I want to disable (or raise to some higher log level) the logging from urllib3. How would I do that?
[loggers]
keys = root
[logger_root]
level = DEBUG
handlers = root
[handlers]
keys = root
[handler_root]
class = StreamHandler
level = DEBUG
formatter = json
[formatters]
keys = json
[formatter_json]
format = %(name)s%(message)s%(asctime)s%(levelname)s%(module)s
class = pythonjsonlogger.jsonlogger.JsonFormatter
Change your configuration to something like this (these are just additions/changes to what you posted):
[loggers]
keys = root,urllib3
[logger_urllib3]
level=CRITICAL
handlers=root
qualname=urllib3
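If you'd rather not touch the config file, the same effect can be achieved programmatically after fileConfig runs. A minimal sketch (the config file name here is just an assumption):
import logging
import logging.config

logging.config.fileConfig('logging.ini', disable_existing_loggers=False)  # assumed file name

# Raising the level on the 'urllib3' logger also silences its children,
# e.g. urllib3.connectionpool, since they inherit its effective level.
logging.getLogger('urllib3').setLevel(logging.CRITICAL)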

Using program variables in python logging configuration file

I am trying to enable my python logging using the following:
#!/usr/bin/env python
# -*- coding: utf-8 -*-
import logging
import logging.config
import os

test_filename = 'my_log_file.txt'

try:
    logging.config.fileConfig('loggingpy.conf', disable_existing_loggers=False)
except Exception as e:
    # try to set up a default logger
    logging.error("No loggingpy.conf to parse", exc_info=e)
    logging.basicConfig(level=logging.WARNING, format="%(asctime)-15s %(message)s")

test1_log = logging.getLogger("test1")
test1_log.critical("test1_log crit")
test1_log.error("test1_log error")
test1_log.warning("test1_log warning")
test1_log.info("test1_log info")
test1_log.debug("test1_log debug")
I would like to use a loggingpy.conf file to control the logging like the following:
[loggers]
keys=root
[handlers]
keys=handRoot
[formatters]
keys=formRoot
[logger_root]
level=INFO
handlers=handRoot
[handler_handRoot]
class=FileHandler
level=INFO
formatter=formRoot
args=(test_filename,)
[formatter_formRoot]
format=%(asctime)s:%(name)s:%(process)d:%(lineno)d %(levelname)s %(message)s
datefmt=
class=logging.Formatter
Here I am trying to route the logging to the file named by the local "test_filename". When I run this, I get:
ERROR:root:No loggingpy.conf to parse
Traceback (most recent call last):
  File "logging_test.py", line 8, in <module>
    logging.config.fileConfig('loggingpy.conf', disable_existing_loggers=False)
  File "/usr/lib/python2.7/logging/config.py", line 85, in fileConfig
    handlers = _install_handlers(cp, formatters)
  File "/usr/lib/python2.7/logging/config.py", line 162, in _install_handlers
    args = eval(args, vars(logging))
  File "<string>", line 1, in <module>
NameError: name 'test_filename' is not defined
CRITICAL:test1:test1_log crit
ERROR:test1:test1_log error
WARNING:test1:test1_log warning
Reading the docs, it seems that the "args" value in the config is eval'd in the context of the logging package's namespace rather than the context in which fileConfig is called. Is there any decent way to get logging to behave this way through a configuration file, so I can configure a dynamic log filename (usually something like "InputFile.log") but still have the flexibility of the logging config file to change it?
Even though it's an old question, I think this still has relevance. An alternative to the above-mentioned solutions would be to use logging.config.dictConfig(...) and manipulate the dictionary.
MWE:
log_config.yml
version: 1
disable_existing_loggers: false
formatters:
  default:
    format: "%(asctime)s:%(name)s:%(process)d:%(lineno)d %(levelname)s %(message)s"
handlers:
  console:
    class: logging.StreamHandler
    formatter: default
    stream: ext://sys.stdout
    level: DEBUG
  file:
    class: logging.FileHandler
    formatter: default
    filename: "{path}/service.log"
    level: DEBUG
root:
  level: DEBUG
  handlers:
    - file
    - console
example.py
import logging.config
import sys
import yaml

log_output_path = sys.argv[1]

with open("log_config.yml") as f:
    log_config = yaml.safe_load(f)  # safe_load: plain yaml.load requires a Loader in newer PyYAML
log_config["handlers"]["file"]["filename"] = log_config["handlers"]["file"]["filename"].format(path=log_output_path)

logging.config.dictConfig(log_config)
logging.debug("test")
Run it as follows:
python example.py .
Result:
The service.log file in the current working directory contains one line of log output.
The console prints one line of log output.
Both read something like this:
2016-06-06 20:56:56,450:root:12232:11 DEBUG test
You could place the filename in the logging namespace with:
logging.test_filename = 'my_log_file.txt'
Then your existing loggingpy.conf file should work.
You should be able to pollute the logging namespace with anything you like (within reason - I wouldn't try logging.config = 'something') in your module, and that should make it referenceable by the config file.
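Put together, a minimal sketch of this trick (assuming the loggingpy.conf from the question, whose handler args reference test_filename):
import logging
import logging.config

# fileConfig evaluates the handler 'args' in the logging package's namespace
# (args = eval(args, vars(logging)), per the traceback above), so attributes
# attached to the module are visible there.
logging.test_filename = 'my_log_file.txt'

logging.config.fileConfig('loggingpy.conf', disable_existing_loggers=False)
logging.getLogger('test1').warning('this goes to my_log_file.txt')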
The args value is parsed using eval in _install_handlers in logging/config.py, so you can put code into args.
[handler_handRoot]
class=FileHandler
level=INFO
formatter=formRoot
args=(os.getenv("LOG_FILE","default_value"),)
Now you only need to populate the environment variable.
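For example, the variable can be set in the shell before launching the script (LOG_FILE=/tmp/my_log.log python logging_test.py), or from Python before fileConfig runs; a sketch, with the path being an arbitrary choice:
import logging.config
import os

# Provide a default if the shell didn't set one; os.getenv in the config's
# args works because the logging module itself imports os.
os.environ.setdefault('LOG_FILE', '/tmp/my_log.log')

logging.config.fileConfig('loggingpy.conf', disable_existing_loggers=False)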
This is very hacky, so I wouldn't recommend it. But if for some reason you did not want to add to the logging namespace, you could pass the log file name through a command-line argument and then use sys.argv[1] to access it (sys.argv[0] is the script name).
[handler_handRoot]
class=FileHandler
level=INFO
formatter=formRoot
args=(sys.argv[1],)
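Invoked as, for example, python logging_test.py my_log_file.txt. This works for the same reason the os.getenv variant does: the logging module imports sys at top level, so sys is visible when the args value is eval'd.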

Python / How to get logger with absolute path?

I am trying to use Python's logging module:
import logging
log = logging.getLogger(__name__)
logInstall = logging.getLogger('/var/log/install.log')
log.info("Everything works great")
logInstall.info("Why is not printed to the log file?")
The first logger works; it prints "Everything works great".
But in /var/log/install.log, I don't see anything.
What am I doing wrong?
Where did you see that the argument to logging.getLogger() is a file path? It's only a name for your logger object:
logging.getLogger([name])
Return a logger with the specified name or, if no name is specified,
return a logger which is the root logger of the hierarchy.
If specified, the name is typically a dot-separated hierarchical name
like “a”, “a.b” or “a.b.c.d”. Choice of these names is entirely up to
the developer who is using logging.
https://docs.python.org/2/library/logging.html#logging.getLogger
Where / how this logger logs to (file, stderr, syslog, mail, network call, whatever) depends on how you configure the logging for your own app - the point here is that you decouple the logger's use (in the libs) from the logger config (in the app).
Note that the usual practice is to use __name__ so you get a logger hierarchy that mimics the packages / modules hierarchy.
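For instance, to actually get messages into /var/log/install.log, the path belongs in a handler, not in the logger's name. A minimal sketch (logger name and format are arbitrary choices; writing to /var/log usually needs the right permissions):
import logging

logInstall = logging.getLogger('install')  # a name, not a path

# Route this logger to the file via a handler.
handler = logging.FileHandler('/var/log/install.log')
handler.setFormatter(logging.Formatter('%(asctime)s %(levelname)s %(message)s'))
logInstall.addHandler(handler)
logInstall.setLevel(logging.INFO)

logInstall.info('now this ends up in /var/log/install.log')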

Multiple instances and multiple destinations

I have multiple Python modules, each of which has a separate log config file. I am using YAML, so I just do:
log_config_dict = yaml.safe_load(open(config_file1, 'r'))
logging.config.dictConfig(log_config_dict)
self.main_logger = logging.getLogger('Main_Logger')
In another module, I have something like
log_config_dict = yaml.safe_load(open(config_file2, 'r'))
logging.config.dictConfig(log_config_dict)
self.main_logger = logging.getLogger('Poller_Main_Logger')
The two loggers write to separate log files. Then, in the code for each module, I log like this:
self.main_logger.info(log_str)
However, this is not working as expected. Say I log from module1, then from module2, then from module1 again: the log messages are either written to module2's destination or not written at all.
Any idea what is happening? Is the problem that each dictConfig call disables previously configured loggers? Is there any way around this?
Below is one of the log config files:
version: 1
formatters:
  default_formatter:
    format: '%(asctime)s : %(levelname)s : %(message)s'
    datefmt: '%d-%b-%Y %H:%M:%S'
  plain_formatter:
    format: '%(message)s'
handlers:
  console_default_handler:
    class: logging.StreamHandler
    level: INFO
    formatter: default_formatter
    stream: ext://sys.stdout
  console_error_handler:
    class: logging.StreamHandler
    level: WARNING
    formatter: default_formatter
    stream: ext://sys.stderr
  logfile_handler:
    class: logging.FileHandler
    filename: logger.txt
    mode: a
    formatter: default_formatter
    level: DEBUG
  errfile_handler:
    class: logging.FileHandler
    filename: error.txt
    mode: a
    formatter: default_formatter
    level: WARNING
  plain_handler:
    class: logging.StreamHandler
    level: DEBUG
    formatter: plain_formatter
    stream: ext://sys.stdout
loggers:
  Poller_Main_Logger:
    level: DEBUG
    handlers: [console_default_handler,logfile_handler]
    propagate: no
  Plain_Logger:
    level: DEBUG
    handlers: [plain_handler]
    propagate: no
  Error_Logger:
    level: WARNING
    handlers: [errfile_handler,console_error_handler,logfile_handler]
    propagate: no
root:
  level: INFO
  handlers: [console_default_handler]
logging doesn't support the usage pattern you want here. By default, rerunning logging.config.dictConfig() blows away all existing loggers, handlers, and formatters. There are the incremental and disable_existing_loggers options available for use in the config dict, but with incremental you can't load new handlers or formatters. You either need to combine your configuration into a single file for the whole program, or use the mechanisms provided by the module to manually construct and add formatters and handlers to loggers.
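One way to follow the first suggestion without literally maintaining one file is to merge the per-module dicts before a single dictConfig() call. A rough sketch, assuming the section keys don't collide across files (file paths are hypothetical):
import logging.config
import yaml

config_file1, config_file2 = 'module1.yml', 'module2.yml'  # hypothetical paths

merged = {'version': 1, 'formatters': {}, 'handlers': {}, 'loggers': {}}
for path in (config_file1, config_file2):
    with open(path) as f:
        part = yaml.safe_load(f)
    for section in ('formatters', 'handlers', 'loggers'):
        merged[section].update(part.get(section, {}))
    if 'root' in part:
        merged['root'] = part['root']  # last file wins for the root logger

logging.config.dictConfig(merged)  # configure everything in one shot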

How do I configure the Python logging module in Django?

I'm trying to configure logging for a Django app using the Python logging module. I have placed the following bit of configuration code in my Django project's settings.py file:
import logging
import logging.handlers
import os
date_fmt = '%m/%d/%Y %H:%M:%S'
log_formatter = logging.Formatter(u'[%(asctime)s] %(levelname)-7s: %(message)s (%(filename)s:%(lineno)d)', datefmt=date_fmt)
log_dir = os.path.join(PROJECT_DIR, "var", "log", "my_app")
log_name = os.path.join(log_dir, "nyrb.log")
bytes = 1024 * 1024 # 1 MB
if not os.path.exists(log_dir):
    os.makedirs(log_dir)
handler = logging.handlers.RotatingFileHandler(log_name, maxBytes=bytes, backupCount=7)
handler.setFormatter(log_formatter)
handler.setLevel(logging.DEBUG)
logging.getLogger().setLevel(logging.DEBUG)
logging.getLogger().addHandler(handler)
logging.getLogger(__name__).info("Initialized logging subsystem")
At startup, I get a couple of Django-related messages, as well as the "Initialized logging subsystem" message, in the log files, but then all the log messages end up going to the web server logs (/var/log/apache2/error.log, since I'm using Apache) and use the standard log format (not the formatter I designated). Am I configuring logging incorrectly?
Kind of anti-climactic, but it turns out there was a third-party app installed in the project that had its own logging configuration that was overriding the one I set up (it modified the root logger, for some reason -- not very kosher for a Django app!). Removed that code and everything works as expected.
See this other answer. Note that settings.py is usually imported twice, so you should avoid creating multiple handlers. Better logging support is coming to Django in 1.3 (hopefully), but for now you should ensure that if your setup code is called more than once, there are no adverse effects.
I'm not sure why your logged messages are going to the Apache logs, unless you've (somewhere else in your code) added a StreamHandler to your root logger with sys.stdout or sys.stderr as the stream. You might want to print out logging.getLogger().handlers just to see it's what you'd expect to see.
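Since settings.py may be imported twice, a simple guard helps keep the setup idempotent; a sketch along the lines of the code in the question:
import logging
import logging.handlers

root = logging.getLogger()

# Only attach the handler on first import; a second import of settings.py
# would otherwise duplicate every log line.
if not any(isinstance(h, logging.handlers.RotatingFileHandler) for h in root.handlers):
    handler = logging.handlers.RotatingFileHandler('nyrb.log', maxBytes=1024 * 1024, backupCount=7)
    root.addHandler(handler)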
I used this with success (although it does not rotate):
# in settings.py
import logging
logging.basicConfig(
    level = logging.DEBUG,
    format = '%(asctime)s %(levelname)s %(funcName)s %(lineno)d \
\033[35m%(message)s\033[0m',
    datefmt = '[%d/%b/%Y %H:%M:%S]',
    filename = '/tmp/my_django_app.log',
    filemode = 'a'
)
I'd suggest trying an absolute path, too.
I guess logging stops when Apache forks the process. After that happens, because all file descriptors were closed during daemonization, the logging system tries to reopen the log file and, as far as I understand, uses a relative file path:
log_dir = os.path.join(PROJECT_DIR, "var", "log", "my_app")
log_name = os.path.join(log_dir, "nyrb.log")
But there is no “current directory” once the process has been daemonized. Try using an absolute log_dir path. Hope that helps.
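A minimal adjustment along those lines (how PROJECT_DIR is set is an assumption; the question doesn't show it):
import os

# Anchor the project directory to this file so the path stays valid
# after daemonization, when the working directory is no longer meaningful.
PROJECT_DIR = os.path.dirname(os.path.abspath(__file__))
log_dir = os.path.join(PROJECT_DIR, "var", "log", "my_app")
log_name = os.path.join(log_dir, "nyrb.log")  # now an absolute path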
