I am trying to enable my python logging using the following:
#!/usr/bin/env python
# -*- coding: utf-8 -*-
import logging
import logging.config
import os
test_filename = 'my_log_file.txt'
try:
    logging.config.fileConfig('loggingpy.conf', disable_existing_loggers=False)
except Exception as e:
    # try to set up a default logger
    logging.error("No loggingpy.conf to parse", exc_info=e)
    logging.basicConfig(level=logging.WARNING, format="%(asctime)-15s %(message)s")
test1_log = logging.getLogger("test1")
test1_log.critical("test1_log crit")
test1_log.error("test1_log error")
test1_log.warning("test1_log warning")
test1_log.info("test1_log info")
test1_log.debug("test1_log debug")
I would like to use a loggingpy.conf file to control the logging like the following:
[loggers]
keys=root
[handlers]
keys=handRoot
[formatters]
keys=formRoot
[logger_root]
level=INFO
handlers=handRoot
[handler_handRoot]
class=FileHandler
level=INFO
formatter=formRoot
args=(test_filename,)
[formatter_formRoot]
format=%(asctime)s:%(name)s:%(process)d:%(lineno)d %(levelname)s %(message)s
datefmt=
class=logging.Formatter
Here I am trying to route the logging to the file named by the local "test_filename". When I run this, I get:
ERROR:root:No loggingpy.conf to parse
Traceback (most recent call last):
  File "logging_test.py", line 8, in <module>
    logging.config.fileConfig('loggingpy.conf', disable_existing_loggers=False)
  File "/usr/lib/python2.7/logging/config.py", line 85, in fileConfig
    handlers = _install_handlers(cp, formatters)
  File "/usr/lib/python2.7/logging/config.py", line 162, in _install_handlers
    args = eval(args, vars(logging))
  File "<string>", line 1, in <module>
NameError: name 'test_filename' is not defined
CRITICAL:test1:test1_log crit
ERROR:test1:test1_log error
WARNING:test1:test1_log warning
Reading the docs, it seems that the "args" value in the config is eval'd in the context of the logging package namespace rather than the context when fileConfig is called. Is there any decent way to try to get the logging to behave this way through a configuration file so I can configure a dynamic log filename (usually like "InputFile.log"), but still have the flexibility to use the logging config file to change it?
Even though it's an old question, I think this still has relevance. An alternative to the solutions mentioned above is to use logging.config.dictConfig(...) and manipulate the dictionary before applying it.
MWE:
log_config.yml
version: 1
disable_existing_loggers: false
formatters:
  default:
    format: "%(asctime)s:%(name)s:%(process)d:%(lineno)d %(levelname)s %(message)s"
handlers:
  console:
    class: logging.StreamHandler
    formatter: default
    stream: ext://sys.stdout
    level: DEBUG
  file:
    class: logging.FileHandler
    formatter: default
    filename: "{path}/service.log"
    level: DEBUG
root:
  level: DEBUG
  handlers:
    - file
    - console
example.py
import logging.config
import sys
import yaml

log_output_path = sys.argv[1]

# Load the YAML config and fill in the dynamic path before handing it to dictConfig.
with open("log_config.yml") as config_file:
    log_config = yaml.safe_load(config_file)

log_config["handlers"]["file"]["filename"] = log_config["handlers"]["file"]["filename"].format(path=log_output_path)

logging.config.dictConfig(log_config)
logging.debug("test")
Executable as follows:
python example.py .
Result:
A service.log file in the current working directory contains one line of log output.
The console prints the same line of log output.
Both look something like this:
2016-06-06 20:56:56,450:root:12232:11 DEBUG test
You could place the filename in the logging namespace with:
logging.test_filename = 'my_log_file.txt'
Then your existing loggingpy.conf file should work.
You should be able to pollute the logging namespace with anything you like (within reason; I wouldn't try logging.config = 'something') in your module, and that should make it referenceable by the config file.
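A minimal sketch of that suggestion, assuming the same loggingpy.conf as above: fileConfig() evaluates the handler args inside the logging module's namespace, so an attribute set on the module beforehand is visible to args=(test_filename,).
import logging
import logging.config

# Make test_filename resolvable by the eval inside fileConfig().
logging.test_filename = 'my_log_file.txt'
logging.config.fileConfig('loggingpy.conf', disable_existing_loggers=False)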
The args entry is parsed using eval in logging/config.py's _install_handlers, so you can put code into args.
[handler_handRoot]
class=FileHandler
level=INFO
formatter=formRoot
args=(os.getenv("LOG_FILE","default_value"),)
Now you only need to populate the environment variable.
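For example, the variable can be exported from the shell (export LOG_FILE=my_log_file.txt) or set from the script before the config is loaded; a minimal sketch:
import os
import logging.config

# Set LOG_FILE before fileConfig() so os.getenv("LOG_FILE", "default_value")
# in the args line picks it up; os is available in the logging namespace
# that fileConfig uses for the eval.
os.environ["LOG_FILE"] = "my_log_file.txt"
logging.config.fileConfig('loggingpy.conf', disable_existing_loggers=False)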
This is very hacky so I wouldn't recommend it. But if you for some reason did not want to add to the logging namespace you could pass the log file name through a command line argument and then use sys.argv[1] to access it (sys.argv[0] is the script name).
[handler_handRoot]
class=FileHandler
level=INFO
formatter=formRoot
args=(sys.argv[1],)
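A sketch of how that plays out, assuming the same loggingpy.conf layout: the script is invoked with the log file name as its first argument, and fileConfig evaluates sys.argv[1] in the logging namespace (the logging module imports sys).
import logging.config

# Run as: python logging_test.py my_log_file.txt
# sys.argv[1] in the config's args line then resolves to "my_log_file.txt".
logging.config.fileConfig('loggingpy.conf', disable_existing_loggers=False)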
Related
I have a config file config.ini as follows,
[loggers]
keys=root
[handlers]
keys=consoleHandler
[formatters]
keys=sampleFormatter
[logger_root]
level=INFO
handlers=consoleHandler
[handler_consoleHandler]
class=StreamHandler
level=INFO
formatter=sampleFormatter
args=(sys.stderr,)
[formatter_sampleFormatter]
format=%(asctime)s - %(name)s - %(levelname)s - %(message)s
which I'm reading from my Python script as follows:
import logging.config
logging.config.fileConfig(fname="config.ini", disable_existing_loggers=False)
Now, I want to pass the configuration read from the above file as a dictionary instead. So I use configparser to read the file and convert it to a dictionary as follows:
import configparser
parser = configparser.RawConfigParser()
parser.read(filenames="config.ini")
config_dict = {section: dict(parser.items(section)) for section in parser.sections()}
Then, I learned that the logging module's dictConfig requires the key "version", so I add it as follows:
config_dict["version"] = 1
Then, I try to pass this dictionary to dictConfig as follows:
logging.config.dictConfig(config_dict)
However, this results in the following error,
File "run.py", line 41, in load
logging.config.dictConfig(config_dict)
File "/python3.7/logging/config.py", line 799, in dictConfig
File "/python3.7/logging/config.py", line 545, in configure
ValueError: Unable to configure formatter 'keys'
This seems to indicate that the two utilities of the logging module, fileConfig and dictConfig, accept two different formats. Is there a way to translate between these two formats? If so, how?
No, there is no parser in the standard library that will take an INI-style fileConfig file and spit out a dictConfig-compatible dictionary. You will have to create your own JSON (or dict) format to populate the dictionary.
You can find a good JSON file example here.
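For illustration, a dictConfig-style dictionary roughly equivalent to the config.ini above could be written by hand like this (a sketch, not the output of any standard converter):
import logging.config

# Hand-written dictConfig equivalent of the config.ini shown above;
# dictConfig uses nested dicts rather than the flat "keys=" sections.
conf_dict = {
    "version": 1,
    "disable_existing_loggers": False,
    "formatters": {
        "sampleFormatter": {
            "format": "%(asctime)s - %(name)s - %(levelname)s - %(message)s",
        },
    },
    "handlers": {
        "consoleHandler": {
            "class": "logging.StreamHandler",
            "level": "INFO",
            "formatter": "sampleFormatter",
            "stream": "ext://sys.stderr",
        },
    },
    "root": {
        "level": "INFO",
        "handlers": ["consoleHandler"],
    },
}

logging.config.dictConfig(conf_dict)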
As of Python 3.7, the two formats (fileConfig and dictConfig) cannot be expressed equivalently. For example, the "args" key is only honored by fileConfig; it is ignored when provided to dictConfig.
As a side note, I came across this when trying to specify "args" for a dictConfig so that I could specify a file path that uses environment variables for a FileHandler.
Reference:
config.py
_install_handlers - looks for the args key to evaluate it
configure_handler - does not look for the args key in the dict
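As a sketch of what that means in practice (hypothetical names): since dictConfig has no evaluated "args", a file path built from an environment variable has to be resolved in Python before the dictionary is passed in:
import logging.config
import os

# Hypothetical: resolve the environment variable ourselves, then pass the
# result as a plain "filename" key on the handler.
log_path = os.path.join(os.environ.get("LOG_DIR", "."), "app.log")

logging.config.dictConfig({
    "version": 1,
    "handlers": {
        "file": {"class": "logging.FileHandler", "filename": log_path, "level": "INFO"},
    },
    "root": {"level": "INFO", "handlers": ["file"]},
})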
I am trying to implement a logger in Django. I already had it working with Django 1.5 and Python 2.7.
But when I try to implement it on my current versions (Django 2.0.8 and Python 3.6.5),
I get an error on the following code in manage.py:
import logging.config
import os
import sys

PROJECT_PATH = os.path.dirname(os.path.dirname(os.path.abspath(__file__)))
sys.path.append(PROJECT_PATH)
os.environ["DJANGO_SETTINGS_MODULE"] = "ProjectServer.settings"

logging.config.fileConfig('ProjectServer/logging.ini')

try:
    import settings
except ImportError:
    import sys
    sys.stderr.write("Error: Can't find the file 'settings.py' in the directory containing. ")
    sys.exit(1)

if __name__ == "__main__":
    import django
    django.setup()
    from django.core.management import execute_from_command_line
    execute_from_command_line(sys.argv)
The exception I get:
Exception has occurred: TypeError
'>' not supported between instances of 'str' and 'int'
  File "C:\path\ProjectServer\manage.py", line 10, in <module>
    logging.config.fileConfig('ProjectServer/logging.ini')
My logging.ini:
[loggers]
keys=root
[handlers]
keys=consoleHandler, rotatingFileHandler
[formatters]
keys=simpleFormatter
[logger_root]
level=DEBUG
handlers=consoleHandler,rotatingFileHandler
[handler_consoleHandler]
class=logging.StreamHandler
level=INFO
formatter=simpleFormatter
args=(sys.stdout,)
[handler_rotatingFileHandler]
class=logging.handlers.RotatingFileHandler
args=(r'c:\log\debug.log','maxBytes=1000000','backupCount=3')
level=INFO
formatter=simpleFormatter
[formatter_simpleFormatter]
format=%(asctime)s - %(name)s - %(levelname)s - %(message)s
datefmt=
I don't really understand where this error comes from. When I checked the Python version changes, I didn't notice anything regarding logging.
Afterwards I import logging and use it like this in my models and views:
import logging
logger = logging.getLogger(__name__)
logger.info('error creating calendar file')
In Python 2.7 the > operator can be used between two different types, such as a string and an integer. Example:
s = "xxx"
n = 123
s > n  # evaluates to True
In Python 3 this operation is not allowed:
TypeError: unorderable types: str() > int()
The comparison operator > is described in detail in the Python docs.
It seems that somewhere in your code you are trying to compare a string with an integer.
EDIT (after OP posted the code)
The problem with the unsupported > operands occurs because your RotatingFileHandler setting maxBytes is passed as the string 'maxBytes=1000000' instead of an integer. You can fix this by providing the RotatingFileHandler constructor arguments positionally, without keyword notation, like this:
[handler_rotatingFileHandler]
class=logging.handlers.RotatingFileHandler
args=(r'c:\log\debug.log', 'a', 1000000, 3)
level=INFO
formatter=simpleFormatter
The second argument is mode, which defaults to 'a' (see the RotatingFileHandler reference).
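For comparison, the positional tuple above corresponds to this constructor call (just a sketch to show the types): maxBytes and backupCount arrive as integers, so the size comparison inside the handler no longer mixes str and int.
import logging.handlers

# Equivalent of args=(r'c:\log\debug.log', 'a', 1000000, 3):
# filename, mode, maxBytes, backupCount -- the two numbers stay integers here.
handler = logging.handlers.RotatingFileHandler(r'c:\log\debug.log', 'a', 1000000, 3)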
I have configured logging in a logging.config file. I created a class that reads this configuration file, enables/disables the logger, and logs some INFO messages, and I import this class in every module where I need to do some logging. When I try to log to a file, I get the error message below, and I am not able to understand what it means.
File "/usr/local/lib/python3.6/configparser.py", line 959, in
getitem raise KeyError(key) KeyError: 'formatters'
logging.config
[loggers]
keys=root
[handlers]
keys=fileHandler
[formatters]
keys=simpleFormatter
[logger_root]
level=INFO
handlers=fileHandler
[handler_fileHandler]
class=FileHandler
level=INFO
formatter=simpleFormatter
args=('example.log','a')
[formatter_simpleFormatter]
class=logging.Formatter
format=%(asctime)s - %(name)s - %(levelname)s - %(message)s
datefmt=
#Log.py
import logging.config
from os import path

class Monitor(object):
    fileName = path.join(path.split(path.dirname(path.abspath(__file__)))[0], "logging.config")
    print(fileName)  # prints /usr/local/lib/python3.6/site-packages/myproject-0.0.1-py3.6.egg/MyPackageName/logging.config
    logging.config.fileConfig(fileName)
    logger = logging.getLogger('root')
    logger.disabled = False

    @staticmethod
    def Log(logMessage):
        Monitor.logger.info(logMessage)
#sub.py
from Log import Monitor

class Example:
    def simplelog(self, message):
        Monitor.Log("Logging some message here")
        # call some function here
        Monitor.Log("Logging some other messages here for example")
I had similar issues when I tried to load the config from a Python script that was not in the project root directory. What I figured out was that logging.config.fileConfig depends on configparser and has issues initializing with an absolute path. Try a relative path.
Replace
fileName = path.join(path.split(path.dirname(path.abspath(__file__)))[0], "logging.config")
with something like:
## get path tree from project root and replace children from root with ".."
path_rslv = path.split(path.dirname(path.abspath(__file__)))[1:]
fileName = path.join(*[".." for dotdot in range(len(path_rslv))], "logging.config")
I am working on a project where we have a core application that loads multiple plugins.
Each plugin has its own configuration file, and the core application has one as well.
We are using the excellent logging module from python's standard library.
The logging module includes the ability to load the logging configuration from an .ini file.
However, if you load another configuration file, the previous configuration is discarded and only the new one is used.
What I would like to do is split my logging configuration into multiple files, so that the application can load its own configuration file and then load each plugin's file, merging its logging configuration into the main one.
Note: fileConfig has an option called disable_existing_loggers that can be set to False. However, this only keeps existing loggers alive, but it still clears the internal map of handlers (which means that a plugin's configuration cannot use a handler defined in the application's config file).
I could merge the files manually to produce my own config, but I'd rather avoid that.
Thanks.
To make it clearer, I'd like to do something like this:
# application.ini
[loggers]
keys=root,app
[handlers]
keys=rootHandler,appHandler
[formatters]
keys=myformatter
[logger_root]
# stuff
[handler_rootHandler]
# stuff
[formatter_myformatter]
# stuff
...
# plugin.ini
[loggers]
keys=pluginLogger # no root logger
[handlers]
keys=pluginHandler # no root handler
# no formatters section
[logger_pluginLogger]
# stuff
formatter=myformatter # using the formatter from application.ini
I usually do this with logging.config.dictConfig and the PyYAML package. The package allows you to load the content of a configuration file as a dict object.
The only additional thing needed is a small helper class to handle configuration overwrites/add-ons:
import yaml

class Configuration(dict):
    def __init__(self, config_file_path=None, overwrites=None):
        with open(config_file_path) as config_file:
            config = yaml.safe_load(config_file)
        super(Configuration, self).__init__(config)
        if overwrites is not None:
            for overwrite_key, value in overwrites.items():
                self.apply_overwrite(self, overwrite_key, value)

    def apply_overwrite(self, node, key, value):
        if isinstance(value, dict):
            for item in value:
                self.apply_overwrite(node[key], item, value[item])
        else:
            node[key] = value
For example, if your main configuration is:
logger:
  version: 1
  disable_existing_loggers: False
  formatters:
    simple:
      format: '%(levelname)s: Module: %(name)s Msg: %(message)s'
  handlers:
    file:
      level: DEBUG
      class: logging.handlers.RotatingFileHandler
      maxBytes: 10000000
      backupCount: 50
      formatter: simple
      filename: '/tmp/log1.log'
  root:
    handlers: [file]
    level: DEBUG
and your overwrite is:
logger:
  handlers:
    file:
      filename: '/tmp/log2.log'
you can get your overwritten logger like this:
from configuration import Configuration
from logging.config import dictConfig
import logging
if __name__ == '__main__':
    config = Configuration('standard.yml', overwrites=Configuration('overwrite.yml'))
    dictConfig(config['logger'])
    logger = logging.getLogger(__name__)
    logger.info('I logged it')
I couldn't find a way to do what I wanted, so I ended up rolling a class to do it.
Here it is as a convenient github gist.
I'd like to have all timestamps in my log file to be UTC timestamp. When specified through code, this is done as follows:
import logging
import time
myHandler = logging.FileHandler('mylogfile.log', 'a')
formatter = logging.Formatter('%(asctime)s %(levelname)-8s %(name)-15s:%(lineno)4s: %(message)-80s')
formatter.converter = time.gmtime
myHandler.setFormatter(formatter)
myLogger = logging.getLogger('MyApp')
myLogger.addHandler(myHandler)
myLogger.setLevel(logging.DEBUG)
myLogger.info('here we are')
I'd like to move away from the above 'in-code' configuration to a config file based mechanism.
Here's the config file section for the formatter:
[handler_MyLogHandler]
args=("mylogfile.log", "a",)
class=FileHandler
level=DEBUG
formatter=simpleFormatter
Now, how do I specify the converter attribute (time.gmtime) in the above section?
Edit: The above config file is loaded thus:
logging.config.fileConfig('myLogConfig.conf')
Sadly, there is no way of doing this using the configuration file, other than having e.g. a
class UTCFormatter(logging.Formatter):
    converter = time.gmtime
and then using a UTCFormatter in the configuration.
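For instance, assuming the UTCFormatter above is importable as myapp.UTCFormatter (a hypothetical module path), the formatter section of the config file (not shown in the question) can point at it through its class= entry, just as the other configs in this thread do with logging.Formatter:
[formatter_simpleFormatter]
class=myapp.UTCFormatter
format=%(asctime)s %(levelname)-8s %(name)-15s:%(lineno)4s: %(message)-80s
datefmt=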
Here is Vinay's solution applied to logging.basicConfig:
import logging
import time
logging.basicConfig(filename='junk.log', level=logging.DEBUG, format='%(asctime)s: %(levelname)s:%(message)s')
logging.Formatter.converter = time.gmtime
logging.info('A message.')