I am using the logging config file below and loading it via fileConfig. I would like to configure the logging behavior of modules that are imported. For example, I want to disable (or raise to some higher log level) logging from urllib3. How would I do that?
[loggers]
keys = root
[logger_root]
level = DEBUG
handlers = root
[handlers]
keys = root
[handler_root]
class = StreamHandler
level = DEBUG
formatter = json
[formatters]
keys = json
[formatter_json]
format = %(name)s%(message)s%(asctime)s%(levelname)s%(module)s
class = pythonjsonlogger.jsonlogger.JsonFormatter
Change your configuration to something like this (these are just additions/changes to what you posted):
[loggers]
keys = root,urllib3
[logger_urllib3]
level=CRITICAL
handlers=root
qualname=urllib3
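If you would rather not touch the config file, the same effect can be had in code once fileConfig has run. A minimal sketch, assuming the config above is saved as logging.conf (the file name is just an assumption):

import logging
import logging.config

logging.config.fileConfig("logging.conf")

# "urllib3" is the library's top-level logger; raising its level silences
# everything beneath it (e.g. urllib3.connectionpool)
logging.getLogger("urllib3").setLevel(logging.CRITICAL)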
I am working on a project where we have a core application that loads multiple plugins.
Each plugin has its own configuration file, and the core application has one as well.
We are using the excellent logging module from Python's standard library.
The logging module includes the ability to load the logging configuration from an .ini file.
However, if you load another configuration file, the previous configuration is discarded and only the new one is used.
What I would like to do is split my logging configuration into multiple files, so that the application can load its own configuration file and then load each plugin's file, merging its logging configuration into the main one.
Note: fileConfig has an option called disable_existing_loggers that can be set to False. However, that only keeps existing loggers alive; it still clears the internal map of handlers (which means that a plugin's configuration cannot use a handler defined in the application's config file).
I could merge the files manually to produce my own config, but I'd rather avoid that.
Thanks.
To make it clearer, I'd like to do something like this:
# application.ini
[loggers]
keys=root,app
[handlers]
keys=rootHandler,appHandler
[formatters]
keys=myformatter
[logger_root]
# stuff
[handler_rootHandler]
# stuff
[formatter_myformatter]
# stuff
...
# plugin.ini
[loggers]
keys=pluginLogger # no root logger
[handlers]
keys=pluginHandler # no root handler
# no formatters section
[logger_pluginLogger]
# stuff
formatter=myformatter # using the formatter from application.ini
I usually do this with logging.config.dictConfig and the PyYAML package, which lets you load the content of a configuration file as a dict object.
The only additional thing needed is a small helper class to handle configuration overwrites/add-ons:
import yaml

class Configuration(dict):
    def __init__(self, config_file_path=None, overwrites=None):
        # load the base configuration file as a plain dict
        with open(config_file_path) as config_file:
            config = yaml.safe_load(config_file)
        super(Configuration, self).__init__(config)
        # recursively apply any overwrites on top of the base configuration
        if overwrites is not None:
            for overwrite_key, value in overwrites.items():
                self.apply_overwrite(self, overwrite_key, value)

    def apply_overwrite(self, node, key, value):
        if isinstance(value, dict):
            for item in value:
                self.apply_overwrite(node[key], item, value[item])
        else:
            node[key] = value
For example, if your main configuration is:
logger:
  version: 1
  disable_existing_loggers: False
  formatters:
    simple:
      format: '%(levelname)s: Module: %(name)s Msg: %(message)s'
  handlers:
    file:
      level: DEBUG
      class: logging.handlers.RotatingFileHandler
      maxBytes: 10000000
      backupCount: 50
      formatter: simple
      filename: '/tmp/log1.log'
  root:
    handlers: [file]
    level: DEBUG
and your overwrite is:
logger:
  handlers:
    file:
      filename: '/tmp/log2.log'
you can get your overwritten logger like this:
from configuration import Configuration
from logging.config import dictConfig
import logging

if __name__ == '__main__':
    config = Configuration('standard.yml', overwrites=Configuration('overwrite.yml'))
    dictConfig(config['logger'])
    logger = logging.getLogger(__name__)
    logger.info('I logged it')
I couldn't find a way to do what I wanted, so I ended up rolling a class to do it.
Here it is as a convenient github gist.
I am trying to set up my Python logging using the following:
#!/usr/bin/env python
# -*- coding: utf-8 -*-
import logging
import logging.config
import os

test_filename = 'my_log_file.txt'

try:
    logging.config.fileConfig('loggingpy.conf', disable_existing_loggers=False)
except Exception as e:
    # try to set up a default logger
    logging.error("No loggingpy.conf to parse", exc_info=e)
    logging.basicConfig(level=logging.WARNING, format="%(asctime)-15s %(message)s")

test1_log = logging.getLogger("test1")
test1_log.critical("test1_log crit")
test1_log.error("test1_log error")
test1_log.warning("test1_log warning")
test1_log.info("test1_log info")
test1_log.debug("test1_log debug")
I would like to use a loggingpy.conf file to control the logging like the following:
[loggers]
keys=root
[handlers]
keys=handRoot
[formatters]
keys=formRoot
[logger_root]
level=INFO
handlers=handRoot
[handler_handRoot]
class=FileHandler
level=INFO
formatter=formRoot
args=(test_filename,)
[formatter_formRoot]
format=%(asctime)s:%(name)s:%(process)d:%(lineno)d %(levelname)s %(message)s
datefmt=
class=logging.Formatter
Here I am trying to route the logging to the file named by the local "test_filename". When I run this, I get:
ERROR:root:No loggingpy.conf to parse
Traceback (most recent call last):
File "logging_test.py", line 8, in <module>
logging.config.fileConfig('loggingpy.conf', disable_existing_loggers=False)
File "/usr/lib/python2.7/logging/config.py", line 85, in fileConfig
handlers = _install_handlers(cp, formatters)
File "/usr/lib/python2.7/logging/config.py", line 162, in _install_handlers
args = eval(args, vars(logging))
File "<string>", line 1, in <module>
NameError: name 'test_filename' is not defined
CRITICAL:test1:test1_log crit
ERROR:test1:test1_log error
WARNING:test1:test1_log warning
Reading the docs, it seems that the "args" value in the config is eval'd in the context of the logging package's namespace rather than the namespace in effect when fileConfig is called. Is there any decent way to get logging to behave like this through a configuration file, so that I can configure a dynamic log filename (usually something like "InputFile.log") while still having the flexibility of changing it via the config file?
Even though it's an old question, I think this still has relevance. An alternative to the above-mentioned solutions would be to use logging.config.dictConfig(...) and manipulate the dictionary before passing it in.
MWE:
log_config.yml
version: 1
disable_existing_loggers: false
formatters:
  default:
    format: "%(asctime)s:%(name)s:%(process)d:%(lineno)d %(levelname)s %(message)s"
handlers:
  console:
    class: logging.StreamHandler
    formatter: default
    stream: ext://sys.stdout
    level: DEBUG
  file:
    class: logging.FileHandler
    formatter: default
    filename: "{path}/service.log"
    level: DEBUG
root:
  level: DEBUG
  handlers:
    - file
    - console
example.py
import logging.config
import sys

import yaml

log_output_path = sys.argv[1]

# load the YAML config and substitute the path placeholder in the file handler's filename
with open("log_config.yml") as config_file:
    log_config = yaml.safe_load(config_file)
log_config["handlers"]["file"]["filename"] = log_config["handlers"]["file"]["filename"].format(path=log_output_path)

logging.config.dictConfig(log_config)
logging.debug("test")
Executable as follows:
python example.py .
Result:
A service.log file in the current working directory contains one line of log output.
The console shows one line of log output.
Both state something like this:
2016-06-06 20:56:56,450:root:12232:11 DEBUG test
You could place the filename in the logging namespace with:
logging.test_filename = 'my_log_file.txt'
Then your existing loggingpy.conf file should work.
You should be able to pollute the logging namespace with anything you like (within reason; I wouldn't try logging.config = 'something') in your module, and that should make it referenceable from the config file.
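A minimal sketch of that approach, assuming loggingpy.conf still contains args=(test_filename,) in its [handler_handRoot] section:

import logging
import logging.config

# make test_filename visible to the eval() that fileConfig runs on the args entry
logging.test_filename = 'my_log_file.txt'
logging.config.fileConfig('loggingpy.conf', disable_existing_loggers=False)

logging.getLogger("test1").info("this ends up in my_log_file.txt")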
The args entry is evaluated with eval() in _install_handlers in logging/config.py, so you can put code into args.
[handler_handRoot]
class=FileHandler
level=INFO
formatter=formRoot
args=(os.getenv("LOG_FILE","default_value"),)
Now you only need to populate the environment variable.
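The variable can be exported in the shell before starting the script, or set from Python before the config is loaded. A minimal sketch, assuming the handler section above (LOG_FILE is just the illustrative name used there):

import logging.config
import os

# set the environment variable before fileConfig() so os.getenv("LOG_FILE", ...) in args can see it
os.environ["LOG_FILE"] = "my_log_file.txt"
logging.config.fileConfig('loggingpy.conf', disable_existing_loggers=False)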
This is very hacky so I wouldn't recommend it. But if you for some reason did not want to add to the logging namespace you could pass the log file name through a command line argument and then use sys.argv[1] to access it (sys.argv[0] is the script name).
[handler_handRoot]
class=FileHandler
level=INFO
formatter=formRoot
args=(sys.argv[1],)
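The script would then be invoked with the log file name as its first argument, for example:

python logging_test.py my_log_file.txt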
I'd like to have all timestamps in my log file to be UTC timestamp. When specified through code, this is done as follows:
import logging
import time
myHandler = logging.FileHandler('mylogfile.log', 'a')
formatter = logging.Formatter('%(asctime)s %(levelname)-8s %(name)-15s:%(lineno)4s: %(message)-80s')
formatter.converter = time.gmtime
myHandler.setFormatter(formatter)
myLogger = logging.getLogger('MyApp')
myLogger.addHandler(myHandler)
myLogger.setLevel(logging.DEBUG)
myLogger.info('here we are')
I'd like to move away from the above 'in-code' configuration to a config file based mechanism.
Here's the config file section for the formatter:
[handler_MyLogHandler]
args=("mylogfile.log", "a",)
class=FileHandler
level=DEBUG
formatter=simpleFormatter
Now, how do I specify the converter attribute (time.gmtime) in the above section?
Edit: The above config file is loaded thus:
logging.config.fileConfig('myLogConfig.conf')
Sadly, there is no way of doing this using the configuration file, other than having e.g. a
class UTCFormatter(logging.Formatter):
    converter = time.gmtime
and then using a UTCFormatter in the configuration.
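For example, assuming the UTCFormatter class is importable from a module of your own (the module path myapp.logutil below is just an assumption), the formatter section of the config file can point to it through its class option:

[formatter_simpleFormatter]
format=%(asctime)s %(levelname)-8s %(name)-15s:%(lineno)4s: %(message)-80s
class=myapp.logutil.UTCFormatter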
Here is Vinay's solution applied to logging.basicConfig:
import logging
import time
logging.basicConfig(filename='junk.log', level=logging.DEBUG, format='%(asctime)s: %(levelname)s:%(message)s')
logging.Formatter.converter = time.gmtime
logging.info('A message.')
I have a standard logging configuration like:
[loggers]
keys = root, quoting, sqlalchemy
[handlers]
keys = console
[formatters]
keys = generic
[logger_root]
level = INFO
handlers = console
[logger_quoting]
level = INFO
handlers =
qualname = quoting
[logger_sqlalchemy]
level = WARN
handlers =
qualname = sqlalchemy.engine
# "level = INFO" logs SQL queries.
# "level = DEBUG" logs SQL queries and results.
# "level = WARN" logs neither. (Recommended for production systems.)
[handler_console]
class = StreamHandler
args = (sys.stderr,)
level = NOTSET
formatter = generic
[formatter_generic]
format = %(asctime)s %(levelname)-5.5s [%(name)s][%(threadName)s] %(message)s
And Pyramid seems to ignore it, giving me everything on stdout when running with pserve --reload development.ini.
Sample log output at http://pastebin.com/1Q3Vt9xM
The log represents one page load. I'm specifically trying to filter out the SQLAlchemy output, but I would like to know where I went wrong.
I think that echo=True on a SQLAlchemy engine configuration will dump to stdout and ignore the logging configuration. This may be what you're seeing.
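One way to check, assuming the engine is created in code rather than through the ini settings (the SQLite URL below is only a placeholder): create it with echo disabled and let the sqlalchemy.engine logger level from the config file control verbosity instead.

from sqlalchemy import create_engine

# echo=False leaves output control to the logging configuration (placeholder URL)
engine = create_engine("sqlite:///example.db", echo=False)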
I am planning to add a logging mechanism to my Python & GeoDjango web service.
Is there a log4j look-alike logging mechanism in Python/GeoDjango?
I am looking for the equivalent of log4j's DailyRollingFileAppender, which would then automatically delete all log files older than one month.
Any guidance is appreciated.
UPDATE 1
I am thinking of the below format.
datetime(ms)|log level|current thread name|client ip address|logged username|source file|source file line number|log message
Yes - Python 2.5 includes the 'logging' module, and one of the handlers it supports, handlers.TimedRotatingFileHandler, is what you're looking for. 'logging' is very easy to use:
example:
import logging
import logging.config

logging.config.fileConfig('mylog.conf')
logger = logging.getLogger()  # the root logger configured in mylog.conf
The following is your config file for logging
#======================
# mylog.conf
[loggers]
keys=root
[handlers]
keys=default
[formatters]
keys=default
[logger_root]
level=INFO
handlers=default
qualname=(root) # note - this is used in non-root loggers
propagate=1 # note - this is used in non-root loggers
channel=
parent=
[handler_default]
class=handlers.TimedRotatingFileHandler
level=INFO
formatter=default
args=('try.log', 'd', 1)
[formatter_default]
format=%(asctime)s %(pathname)s(%(lineno)d): %(levelname)s %(message)s
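Regarding the one-month retention: TimedRotatingFileHandler also takes a backupCount argument, and rollover files beyond that count are deleted automatically. A sketch of the handler section with daily rotation keeping roughly a month of files (the value 30 is an assumption based on the one-month requirement):

[handler_default]
class=handlers.TimedRotatingFileHandler
level=INFO
formatter=default
# rotate once a day and keep only the 30 most recent files
args=('try.log', 'd', 1, 30)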