Defaults in Python dictConfig

If I define my logging config in an .ini file, I can pass default arguments to the config like this:
In the .ini file:
[handler_fileHandler]
class = logging.FileHandler
level = ERROR
formatter = simpleFormatter
# use args to pass arguments to the handler
args = ('%(logfilename)s', 'a')  # filename, mode ('a' = append)
Loading the config from the file:
# load config and pass default arguments
config.fileConfig(
    fname="./logging.ini",
    # pass the filename argument for the file handler
    defaults={'logfilename': getSomeName()},
    disable_existing_loggers=False,
)
Is there any way to do the same when I use a YAML file? According to the docs, I would say no.

You can set it like this, using configparser-style interpolation so that fileConfig substitutes the value from defaults:
args=('%(logfilename)s',)
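For dictConfig there is no defaults= parameter, but you can do the substitution yourself before handing the dictionary over, for example with string.Template placeholders in the YAML text. A minimal sketch, assuming PyYAML is installed; the placeholder name logfilename and the YAML content are illustrative:

```python
import logging.config
import string

import yaml  # PyYAML

# Hypothetical YAML config with a $-style placeholder for the filename
RAW_CONFIG = """
version: 1
formatters:
  simple:
    format: '%(levelname)s %(message)s'
handlers:
  file:
    class: logging.FileHandler
    formatter: simple
    filename: $logfilename
    mode: a
    delay: true
root:
  handlers: [file]
  level: ERROR
"""

# Substitute the "defaults" before parsing, then configure as usual
text = string.Template(RAW_CONFIG).substitute(logfilename="myapp.log")
logging.config.dictConfig(yaml.safe_load(text))
```

The same idea works for a YAML file on disk: read the file into a string, substitute, then parse and pass the dict to dictConfig.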

Related

Python: Reading the logging level from a config file without losing earlier DEBUG information

I have the following Python code, where the logging level of a global logger is set through a command-line argument:
logging.basicConfig(
    level=getattr(logging, clArgs.logLevel),
    ...
)
If no logging level has been specified through CL argument, then by default INFO log level is used for global logging:
# Define '--loglevel' argument
clArgsParser.add_argument(
    "-l", "--loglevel",
    help="Set the logging level of the program. The default log level is 'INFO'.",
    default="INFO",          # default log level is 'INFO'
    dest="logLevel",         # store the argument value in the variable 'logLevel'
    choices=["DEBUG", "INFO", "WARNING", "ERROR", "CRITICAL"],
)
Now I would like to give the user a second option, so that the logging level can also be specified in a configuration file. However, before the configuration file is read, the script must first parse the CL arguments to figure out whether the user has set the log-level argument. Additionally, the configuration file path itself can be specified through a CL argument.
After the script parses the CL arguments and sets the logging level, it logs some information (e.g. log file location, the directory the script is executed from, etc.), and it also logs DEBUG information while reading the config file in the function readProgramConfig. The config file is read at the very end (function readProgramConfig), as you can see in the code snippet below:
# Parse the command-line arguments
clArgs, progName = parseClArgs(__program__)

# Initialize global logging with command-line arguments
defaultLogFilePath = configLogging(clArgs)

# Log the program version used
logging.info(f"Program version: {__program__.swVersion}")

# Log the script start time
logging.info(f"Program start time: {programStartDate}")

# Log the program's current directory
logging.debug(f"Directory where the program is being executed: {PROGRAM_DIR}")

# Output the log file location
logging.debug(f"Log file location: {defaultLogFilePath}")

# Read the program configuration
programConfig = readProgramConfig(clArgs.programConfigPath)
This leads to a problem. If no log level is specified through the CL argument but a log level is specified in the config file (e.g. DEBUG), then the following happens:
1. No CL argument for the log level is given -> the default INFO log level is used.
2. Logging happens (program version, program start time) before the config file is read -> no DEBUG-level information is logged, since INFO is in effect.
3. Likewise, no DEBUG information is logged while the config file is being read (function readProgramConfig).
4. Once the config file has been read, the script figures out that the config file sets the log level to DEBUG, and then changes the global logging level from INFO to DEBUG.
5. From then on, all DEBUG information is logged, but the earlier DEBUG information is lost, i.e. never logged.
So it's a chicken-and-egg problem.
I do have one solution in mind, but it is rather complicated, and I would like to find out whether any of you has a simpler one.
One possible solution would be:
Start the script with DEBUG log level by default to capture all log messages
Read out the config file:
2.1 If the log level in the config file is set to DEBUG, then continue logging into the log file with the DEBUG log level.
2.2 If the log level in the config file is set to lower log level than DEBUG (e.g. INFO), then delete all DEBUG entries in the log file, and continue logging only using the INFO log level.
As you can see, this solution is rather complicated: it involves editing the log file and rewriting entries back and forth. Not to mention that this approach cannot work for logging to the console.
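A simpler alternative to rewriting the log file is to buffer the early records in a logging.handlers.MemoryHandler and replay only the ones that pass the level you eventually read from the config file. A sketch, where the logger name "app" and the configured_level assignment stand in for your own code:

```python
import logging
from logging.handlers import MemoryHandler

root = logging.getLogger("app")  # hypothetical logger name
root.setLevel(logging.DEBUG)     # capture everything until the real level is known

# Buffer records in memory; a high flushLevel prevents premature auto-flushing
buffered = MemoryHandler(capacity=10000, flushLevel=logging.CRITICAL + 1)
root.addHandler(buffered)

root.debug("early debug message")  # buffered, not written anywhere yet
root.info("early info message")

# ... later, once the config file has been read:
configured_level = logging.INFO    # stand-in for the value from the config file

target = logging.StreamHandler()
target.setFormatter(logging.Formatter("%(levelname)s: %(message)s"))

# Replay only the buffered records that pass the configured level, then switch over
for record in buffered.buffer:
    if record.levelno >= configured_level:
        target.handle(record)
root.removeHandler(buffered)
root.setLevel(configured_level)
root.addHandler(target)
```

The manual levelno check is needed because MemoryHandler's own flush() hands every buffered record to the target without filtering by level.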
One solution is to use the logging.config module. You can read a config file (e.g. in JSON format) and store it as a dictionary. The module provides the function logging.config.dictConfig, which configures logging from that dictionary. The code for configuring the root logger would look something like this:
import json
import os
import logging                            # logging
from logging import config as LogConfig   # logging configuration

# Build the path of the log config file
logFileConfigName = "LogConfig.json"
logFileConfigPath = os.path.join(PROGRAM_DIR, logFileConfigName)

# Open and decode the JSON config file
with open(logFileConfigPath) as jsonLogConfigFile:
    jsonLogConfig = json.load(jsonLogConfigFile)

# Take the logging configuration from the dictionary
LogConfig.dictConfig(jsonLogConfig)
The logger configuration would be stored in the LogConfig.json file. A separate handler is defined in the config file for console and for file logging:
{
    "version": 1,
    "root": {
        "handlers": ["console", "file"],
        "level": "DEBUG"
    },
    "handlers": {
        "console": {
            "formatter": "std_out",
            "class": "logging.StreamHandler"
        },
        "file": {
            "filename": ".\\Program.log",
            "formatter": "std_out",
            "class": "logging.FileHandler",
            "mode": "w"
        }
    },
    "formatters": {
        "std_out": {
            "format": "%(asctime)s - %(levelname)s: %(message)s",
            "datefmt": "%Y-%m-%d - %H:%M:%S"
        }
    }
}

Python Click - Supply arguments and options from a configuration file

Given the following program:
#!/usr/bin/env python
import click
@click.command()
@click.argument("arg")
@click.option("--opt")
@click.option("--config_file", type=click.Path())
def main(arg, opt, config_file):
    print("arg: {}".format(arg))
    print("opt: {}".format(opt))
    print("config_file: {}".format(config_file))
    return

if __name__ == "__main__":
    main()
I can run it with the arguments and options provided through command line.
$ ./click_test.py my_arg --config_file my_config_file
arg: my_arg
opt: None
config_file: my_config_file
How do I provide a configuration file (in ini? yaml? py? json?) to --config_file and accept the content as the value for the arguments and options?
For instance, I want my_config_file to contain
opt: my_opt
and have the output of the program show:
$ ./click_test.py my_arg --config_file my_config_file
arg: my_arg
opt: my_opt
config_file: my_config_file
I've found the callback function, which looked useful, but I couldn't find a way to modify the sibling arguments/options of the same function.
This can be done by overriding the click.Command.invoke() method, like:
Custom Class:
import click
import yaml

def CommandWithConfigFile(config_file_param_name):

    class CustomCommandClass(click.Command):

        def invoke(self, ctx):
            config_file = ctx.params[config_file_param_name]
            if config_file is not None:
                with open(config_file) as f:
                    config_data = yaml.safe_load(f)
                    for param, value in ctx.params.items():
                        if value is None and param in config_data:
                            ctx.params[param] = config_data[param]

            return super(CustomCommandClass, self).invoke(ctx)

    return CustomCommandClass
Using Custom Class:
Then to use the custom class, pass it as the cls argument to the command decorator like:
@click.command(cls=CommandWithConfigFile('config_file'))
@click.argument("arg")
@click.option("--opt")
@click.option("--config_file", type=click.Path())
def main(arg, opt, config_file):
Test Code:
#!/usr/bin/env python
import click
import yaml

@click.command(cls=CommandWithConfigFile('config_file'))
@click.argument("arg")
@click.option("--opt")
@click.option("--config_file", type=click.Path())
def main(arg, opt, config_file):
    print("arg: {}".format(arg))
    print("opt: {}".format(opt))
    print("config_file: {}".format(config_file))

main('my_arg --config_file config_file'.split())
Test Results:
arg: my_arg
opt: my_opt
config_file: config_file
I realize that this is way old, but since Click 2.0 there's a simpler solution. The following is a slight modification of the example from the docs.
This example accepts an explicit --port argument, an environment variable, or a config file (in that order of precedence).
Command Groups
Our code:
import os
import click
from yaml import load
try:
    from yaml import CLoader as Loader
except ImportError:
    from yaml import Loader

@click.group(context_settings={'auto_envvar_prefix': 'FOOP'})  # this allows for environment variables
@click.option('--config', default='~/config.yml', type=click.Path())  # this allows us to change the config path
@click.pass_context
def foop(ctx, config):
    if os.path.exists(config):
        with open(config, 'r') as f:
            config = load(f.read(), Loader=Loader)
        ctx.default_map = config

@foop.command()
@click.option('--port', default=8000)
def runserver(port):
    click.echo(f"Serving on http://127.0.0.1:{port}/")

if __name__ == '__main__':
    foop()
Assuming our config file (~/config.yml) looks like:
runserver:
  port: 5000
and we have a second config file (at ~/config2.yml) that looks like:
runserver:
  port: 9000
Then if we call it from bash:
$ foop runserver
# ==> Serving on http://127.0.0.1:5000/
$ FOOP_RUNSERVER_PORT=23 foop runserver
# ==> Serving on http://127.0.0.1:23/
$ FOOP_RUNSERVER_PORT=23 foop runserver --port 34
# ==> Serving on http://127.0.0.1:34/
$ foop --config ~/config2.yml runserver
# ==> Serving on http://127.0.0.1:9000/
Single Commands
If you don't want to use command groups and want to have configs for a single command:
import os
import click
from yaml import load
try:
    from yaml import CLoader as Loader
except ImportError:
    from yaml import Loader

def set_default(ctx, param, value):
    if os.path.exists(value):
        with open(value, 'r') as f:
            config = load(f.read(), Loader=Loader)
        ctx.default_map = config
    return value

@click.command(context_settings={'auto_envvar_prefix': 'FOOP'})
@click.option('--config', default='config.yml', type=click.Path(),
              callback=set_default, is_eager=True, expose_value=False)
@click.option('--port')
def foop(port):
    click.echo(f"Serving on http://127.0.0.1:{port}/")
will give similar behavior.

logging - merging multiple configuration files

I am working on a project where we have a core application that loads multiple plugins.
Each plugin has its own configuration file, and the core application has one as well.
We are using the excellent logging module from python's standard library.
The logging module includes the ability to load the logging configuration from an .ini file.
However, if you load a second configuration file, the previous configuration is discarded and only the new one is used.
What I would like to do is to split my logging configuration into multiple files, so that the application can load its own configuration file and then load each plugin's file, merging its logging configuration into the main one.
Note: fileConfig has an option called disable_existing_loggers that can be set to False. However, this only keeps existing loggers alive, but it still clears the internal map of handlers (which means that a plugin's configuration cannot use a handler defined in the application's config file).
I could merge the files manually to produce my own config, but I'd rather avoid that.
Thanks.
To make it clearer, I'd like to do something like this:
# application.ini
[loggers]
keys=root,app
[handlers]
keys=rootHandler,appHandler
[formatters]
keys=myformatter
[logger_root]
# stuff
[handler_rootHandler]
# stuff
[formatter_myformatter]
# stuff
...
# plugin.ini
[loggers]
keys=pluginLogger # no root logger
[handlers]
keys=pluginHandler # no root handler
# no formatters section
[logger_pluginLogger]
# stuff
formatter=myformatter # using the formatter from application.ini
I usually do this with logging.config.dictConfig and the PyYAML package. The package allows you to load the content of a configuration file as a dict object.
The only additional thing needed is a small helper class to handle configuration overwrites/add-ons:
import yaml

class Configuration(dict):

    def __init__(self, config_file_path=None, overwrites=None):
        with open(config_file_path) as config_file:
            config = yaml.safe_load(config_file)
        super(Configuration, self).__init__(config)

        if overwrites is not None:
            for overwrite_key, value in overwrites.items():
                self.apply_overwrite(self, overwrite_key, value)

    def apply_overwrite(self, node, key, value):
        if isinstance(value, dict):
            for item in value:
                self.apply_overwrite(node[key], item, value[item])
        else:
            node[key] = value
For example, if your main configuration is:
logger:
  version: 1
  disable_existing_loggers: False
  formatters:
    simple:
      format: '%(levelname)s: Module: %(name)s Msg: %(message)s'
  handlers:
    file:
      level: DEBUG
      class: logging.handlers.RotatingFileHandler
      maxBytes: 10000000
      backupCount: 50
      formatter: simple
      filename: '/tmp/log1.log'
  root:
    handlers: [file]
    level: DEBUG
and your overwrite is:
logger:
  handlers:
    file:
      filename: '/tmp/log2.log'
you can get your overwritten logger like this:
from configuration import Configuration
from logging.config import dictConfig
import logging

if __name__ == '__main__':
    config = Configuration('standard.yml', overwrites=Configuration('overwrite.yml'))
    dictConfig(config['logger'])
    logger = logging.getLogger(__name__)
    logger.info('I logged it')
I couldn't find a way to do what I wanted, so I ended up rolling a class to do it.
Here it is as a convenient github gist.

Can I add ini style configuration to pytest suites?

I am using pytest to run tests in multiple environments and I wanted to include that information (ideally) in an ini style config file. I would also like to override parts or all of the configuration at the command line as well. I tried using the hook pytest_addoption in my conftest.py like so:
def pytest_addoption(parser):
    parser.addoption("--hostname", action="store", help="The host")
    parser.addoption("--port", action="store", help="The port")

@pytest.fixture
def hostname(request):
    return request.config.getoption("--hostname")

@pytest.fixture
def port(request):
    return request.config.getoption("--port")
Using this I can add the configuration info at the command line, but not in a config file. I also tried adding
[pytest]
addopts = --hostname host --port 311
to my pytest.ini file, but that didn't work. Is there a way to do this without building my own plugin? Thanks for your time.
The parser object does have an addini method as well that you can use to specify configuration options through an ini file.
Here is the documentation for it: https://pytest.org/latest/writing_plugins.html?highlight=addini#_pytest.config.Parser.addini
addini(name, help, type=None, default=None)
    Registers an ini-file option.

    name: name of the ini-variable
    type: type of the variable; can be pathlist, args, linelist or bool.
    default: default value if no ini-file option exists but is queried.

The value of ini-variables can be retrieved via a call to config.getini(name).
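For the hostname example from the question, conftest.py could register both the command-line flag and an ini option, with the command line taking precedence. A sketch; the fallback logic in the fixture is an illustration, not something pytest does automatically:

```python
# conftest.py
import pytest

def pytest_addoption(parser):
    # command-line flag (no default, so we can detect when it's absent)
    parser.addoption("--hostname", action="store", default=None, help="The host")
    # ini-file option with its own fallback value
    parser.addini("hostname", help="The host", default="localhost")

@pytest.fixture
def hostname(request):
    # command line wins, then the `hostname =` value from pytest.ini,
    # then the ini default registered above
    return request.config.getoption("--hostname") or request.config.getini("hostname")
```

With `hostname = staging-host` under `[pytest]` in pytest.ini, tests receive that value unless `--hostname` is passed on the command line.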

Python Logging: Specifying converter attribute of a log formatter in config file

I'd like to have all timestamps in my log file to be UTC timestamp. When specified through code, this is done as follows:
import logging
import time
myHandler = logging.FileHandler('mylogfile.log', 'a')
formatter = logging.Formatter('%(asctime)s %(levelname)-8s %(name)-15s:%(lineno)4s: %(message)-80s')
formatter.converter = time.gmtime
myHandler.setFormatter(formatter)
myLogger = logging.getLogger('MyApp')
myLogger.addHandler(myHandler)
myLogger.setLevel(logging.DEBUG)
myLogger.info('here we are')
I'd like to move away from the above 'in-code' configuration to a config file based mechanism.
Here's the config file section for the formatter:
[handler_MyLogHandler]
args=("mylogfile.log", "a",)
class=FileHandler
level=DEBUG
formatter=simpleFormatter
Now, how do I specify the converter attribute (time.gmtime) in the above section?
Edit: The above config file is loaded thus:
logging.config.fileConfig('myLogConfig.conf')
Sadly, there is no way of doing this using the configuration file, other than having e.g. a
class UTCFormatter(logging.Formatter):
    converter = time.gmtime
and then using a UTCFormatter in the configuration.
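Concretely, if the UTCFormatter class above lives in an importable module (here a hypothetical mymodule.py), the formatter section of the config file can point at it through the class option, which fileConfig resolves as a dotted class name:

```ini
[formatter_simpleFormatter]
format=%(asctime)s %(levelname)-8s %(name)-15s:%(lineno)4s: %(message)-80s
class=mymodule.UTCFormatter
```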
Here is Vinay's solution applied to logging.basicConfig:
import logging
import time
logging.basicConfig(filename='junk.log', level=logging.DEBUG, format='%(asctime)s: %(levelname)s:%(message)s')
logging.Formatter.converter = time.gmtime
logging.info('A message.')
