I'd like to dynamically import various settings and configurations into my python program - what you'd typically do with a .ini file or something similar.
I started with JSON for the config file syntax, then moved to YAML, but really I'd like to use Python. It'll minimize the number of formats and allow me to use code in the config file, which can be convenient.
I hacked up an __import__ based system to allow this using code that looks like:
account_config = __import__(settings.CONFIG_DIR + '.account_name', fromlist=[settings.CONFIG_DIR])
It basically works, but I'm running into all kinds of esoteric problems - e.g. if I try to import "test" it picks up an internal Python library that's on the Python path instead of my test config.
So I'm wondering: is using python as the configuration language for a python program viable or am I asking for trouble? Are there examples I can steal from?
One easy way is this.
Your config file assigns a bunch of global variables.
CONFIG = "some string"
ANOTHER = [ "some", "list" ]
MORE = "another value"
You import all these settings like this:
settings = {}
execfile('the_config.py', settings)
settings['CONFIG'] == "some string"
Now your settings dictionary has all of the global variables set.
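Note that execfile only exists in Python 2; on Python 3, a minimal equivalent (using the same the_config.py as above) is:
settings = {}
# exec replaces execfile: read the file and execute it with the dict as globals
with open('the_config.py') as f:
    exec(f.read(), settings)
settings['CONFIG'] == "some string"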
is using python as the configuration language for a python program viable or am I asking for trouble?
Depends; realize that you're getting a Turing-complete configuration file format with an OS interface. That might raise security issues, so don't do this with config files from an untrusted source.
OTOH, this setup can be very convenient.
Are there examples I can steal from?
Django.
I have TRACE32 installed on the C drive and have hard-coded that directory in my code. If some other user runs this code on their system, the code does not work because they installed the application in a different location. How can I make this directory generic and dynamic so it works for all users?
You have multiple possibilities. Before explaining them, some generic tips:
Make the TRACE32 system path configurable, not a path inside the installation. In your case this would be r"C:\T32". This path is called t32sys or T32SYS.
Make sure you use os.path.join to concatenate your strings, so the result works on the user's operating system: os.path.join(r"C:\T32", "bin", "windows64")
Command line arguments using argparse. This is the simplest solution which requires the user to start the Python script like this: python script.py --t32sys="C:\t32".
import argparse
parser = argparse.ArgumentParser()
parser.add_argument("--t32sys", help="TRACE32 system directory.")
args = parser.parse_args()
t32sys = args.t32sys  # parse_args() returns a namespace, not a dict
Instead of command line parameters you could also use a configuration file. For this you can use the built-in configparser module. This has the advantage that the user doesn't need to specify the directory as a command line argument, but the disadvantage that the user needs to be aware of the configuration file.
Configuration file (example.ini):
[DEFAULT]
t32sys = C:\T32
import configparser
parser = configparser.ConfigParser()
parser.read("example.ini")
args = parser["DEFAULT"]
t32sys = args["t32sys"]
Environment variables using os.environ. T32SYS is an environment variable often used for this, but it is not guaranteed to be set, so you have to tell users to set it before using your tool. This approach has the advantage of working in the background, but in my opinion it is also a little obscure. I'd only use it in combination with argparse or configparser, as an override.
import os
t32sys = os.environ.get('T32SYS')
You can of course combine multiple ways with fallbacks / overrides.
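For instance, here is a sketch of one such precedence chain (config file < environment variable < command line; the file name example.ini and the default path are just the examples from above):
import argparse
import configparser
import os

# lowest priority: value from the config file, if present
ini = configparser.ConfigParser()
ini.read("example.ini")
t32sys = ini.get("DEFAULT", "t32sys", fallback=r"C:\T32")

# middle priority: environment variable overrides the file
t32sys = os.environ.get("T32SYS", t32sys)

# highest priority: explicit command line argument overrides everything
parser = argparse.ArgumentParser()
parser.add_argument("--t32sys", default=t32sys, help="TRACE32 system directory.")
args = parser.parse_args()
t32sys = args.t32sys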
I've served a directory using
python -m http.server
It works well, but it only shows file names. Is it possible to show created/modified dates and file sizes, like you see on FTP servers?
I looked through the documentation for the module but couldn't find anything related to it.
Thanks!
http.server is meant for dead-simple use cases, and to serve as sample code.[1] That's why the docs link right to the source.
That means that, by design, it doesn't have a lot of configuration settings; instead, you configure it by reading the source and choosing what methods you want to override, then building a subclass that does that.
In this case, what you want to override is list_directory. You can see how the base-class version works, and write your own version that does other stuff—either use scandir instead of listdir, or just call stat on each file, and then work out how you want to cram the results into the custom-built HTML.
Since there's little point in doing this except as a learning exercise, I won't give you complete code, but here's a skeleton:
import http.server
import os

class StattyHandler(http.server.SimpleHTTPRequestHandler):
    # list_directory lives on the request handler, not on the server class
    def list_directory(self, path):
        try:
            dirents = os.scandir(path)
        except OSError:
            # blah blah blah (send a 404, as the base-class version does)
            ...
        # etc. up to the end of the header-creating bit
        for dirent in dirents:
            fullname = dirent.path
            displayname = linkname = dirent.name
            st = dirent.stat()
            # pull stuff out of st
            # build a table row to append to r
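Once you've filled in the skeleton, you can serve it the same way the module itself does (a minimal sketch; port and directory are up to you):
from http.server import HTTPServer

# serve the current directory on port 8000 with the custom handler
HTTPServer(('', 8000), StattyHandler).serve_forever()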
[1] Although really, it's sample code for an obsolete and clunky way of building servers, so maybe that should be "to serve as sample code to understand legacy code that you probably won't ever need to look at but just in case…".
I realise this question has been asked before (What's the best practice using a settings file in Python?) but seeing as this was asked 7 years ago, I feel it is valid to discuss again, seeing how technologies have evolved.
I have a python project that requires different configurations to be used based on the value of an environment variable. Since making use of the environment variable to choose a config file is simple enough, my question is as follows:
What format is seen as the best practice in the software industry for setting up a configuration file in python, when multiple configurations are needed based on the environment?
I realise that Python comes with a ConfigParser module, but I was wondering if it might be better to use a format such as YAML or JSON, given their rise in popularity due to their ease of use across languages. Which format is seen as easier to maintain when you have multiple configurations?
If you really want to use an environment-based YAML configuration, you could do so like this:
config.py
import os
import yaml

config = None
# assumes config files are named after the environment, e.g. default.yaml, production.yaml
filename = os.getenv('env', 'default').lower() + '.yaml'
script_dir = os.path.dirname(__file__)
abs_file_path = os.path.join(script_dir, filename)
with open(abs_file_path, 'r') as stream:
    try:
        config = yaml.safe_load(stream)  # safe_load avoids executing arbitrary tags
    except yaml.YAMLError as exc:
        print(exc)
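A matching config file and call site might look like this (file name and keys invented for the example):
# default.yaml contains, for instance:
#   database:
#     host: localhost
#     port: 5432

from config import config

db_host = config['database']['host']  # -> 'localhost'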
I think the standard configuration for a Django settings module is a good example of this, since the Django web framework is extremely popular for commercial projects and is therefore representative of the software industry.
It doesn't get too fancy with JSON or YAML config files - it simply uses a Python module called settings.py that can be imported into any other module that needs to access the settings. Environment variable based settings are also defined there. Here is a link to an example settings.py file for Django on GitHub:
https://github.com/deis/example-python-django/blob/master/helloworld/settings.py
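A minimal sketch of that pattern (the environment variable names here are illustrative, not Django conventions):
# settings.py
import os

DEBUG = os.environ.get('DJANGO_DEBUG', '') == 'True'

DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.postgresql',
        'NAME': os.environ.get('DB_NAME', 'myapp'),
        'HOST': os.environ.get('DB_HOST', 'localhost'),
    }
}
Any other module then reads these via from django.conf import settings, e.g. settings.DEBUG.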
This is really late to the party, but this is what I use and I'm pleased with it (if you're open to a pure Python solution). I like it because my configurations can be set automatically based on where this is deployed using environment variables. I haven't been using this that long so if someone sees an issue, I'm all ears.
Structure:
|-- settings
    |-- __init__.py
    |-- config.py
config.py
class Common(object):
    XYZ_API_KEY = 'AJSKDF234328942FJKDJ32'
    XYZ_API_SECRET = 'KDJFKJ234df234fFW3424##ewrFEWF'

class Local(Common):
    DB_URI = 'local/db/uri'
    DEBUG = True

class Production(Common):
    DB_URI = 'remote/db/uri'
    DEBUG = False

class Staging(Production):
    DEBUG = True
__init__.py
from settings.config import Local, Production, Staging
import os

config_space = os.getenv('CONFIG_SPACE', None)
if config_space:
    if config_space == 'LOCAL':
        auto_config = Local
    elif config_space == 'STAGING':
        auto_config = Staging
    elif config_space == 'PRODUCTION':
        auto_config = Production
    else:
        raise EnvironmentError(f'CONFIG_SPACE is unexpected value: {config_space}')
else:
    raise EnvironmentError('CONFIG_SPACE environment variable is not set!')
If my environment variable is set in each place where my app exists, I can bring this into my modules as needed:
from settings import auto_config as cfg
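and then use the attributes defined above, for example (connect_to_db is a hypothetical helper, shown only for illustration):
from settings import auto_config as cfg

connect_to_db(cfg.DB_URI)  # hypothetical helper
if cfg.DEBUG:
    print('running in debug mode')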
That really depends on your requirements, rather than on the format's popularity. For instance, if you just need simple key-value pairs, an INI file would be more than enough. As soon as you need complex structures (e.g., arrays or dictionaries), I'd go for JSON or YAML. JSON simply stores data (it's more intended for automated data flow between systems), while YAML is better for human-generated (or maintained, or read) files, as it has comments and you can reference values elsewhere in the file. And on top of that, if you want robustness, flexibility, and a means to check the correct structure of the file (but don't care much about editing the data by hand), I'd go for XML.
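As a small sketch of that difference (contents invented), the nesting and list below are natural in YAML but have no standard INI equivalent:
import yaml

doc = """
database:
  host: localhost
  replicas:      # a list nested inside a mapping
    - 10.0.0.1
    - 10.0.0.2
"""
config = yaml.safe_load(doc)
assert config["database"]["replicas"] == ["10.0.0.1", "10.0.0.2"]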
I recommend giving trapdoor a try for turn-key configuration (disclaimer: I'm the author of trapdoor).
I also like to take advantage of the fact that Python source doesn't need a compile step, and use plain Python files for configuration. But in the real world you may have multiple environments, each of which requires a different configuration, and you may also want to read some (mostly sensitive) information from environment variables or from files that are not in source control (to prevent committing those by mistake).
That's why I wrote this library: https://github.com/davidohana/kofiko, which lets you use plain Python files for configuration, but is also able to override those config settings from .ini files or environment variables, and also supports customization for different environments.
Blog post about it: https://medium.com/swlh/code-first-configuration-approach-for-python-f975469433b9
I'm using the QUuid class in my project and for testing and debugging purposes it would be very nice to see the QUuid objects in human readable form instead of their low-level form.
For some reason, the people at Qt have not included a dump method for this type so I attempted to create one on my own, following this documentation and this guide.
I'm not familiar with Python so unfortunately, I could not get something running. Could someone help me create such a function that does nothing more than display the output of QUuid::toString() in the value column of Qt Creator?
Edit:
Mitko's solution worked perfectly. I expanded it a bit so the details can still be read if so desired:
from dumper import *
import gdb

def qdump__QUuid(d, value):
    this_ = d.makeExpression(value)
    finalValue = gdb.parse_and_eval("%s.toString()" % (this_))
    d.putStringValue(finalValue)
    d.putNumChild(4)
    if d.isExpanded():
        with Children(d):
            d.putSubItem("data1", value["data1"])
            d.putSubItem("data2", value["data2"])
            d.putSubItem("data3", value["data3"])
            d.putSubItem("data4", value["data4"])
The following Python script should do the job:
from dumper import *
import gdb

def qdump__QUuid(d, value):
    this = d.makeExpression(value)
    stringValue = gdb.parse_and_eval("%s.toString()" % this)
    d.putStringValue(stringValue)
    d.putNumChild(0)
The easiest way to use it with Qt Creator is to just paste these lines at the end of your <Qt-Creator-Install-Dir>/share/qtcreator/debugger/personaltypes.py file. In this case you can skip the first line, as it's already in the file.
As the personaltypes.py file is overwritten when you update Qt Creator you might want to put the script above in its own file. In that case you'll need to configure Qt Creator to use your file. You can do this by going to Tools > Options... > Debugger > GDB > Extra Debugging Helpers > Browse and selecting your file.
Note:
This script will only work inside Qt Creator, since we use its specific dumper (e.g. putStringValue).
We call QUuid::toString() which creates a QString object. I'm not sure exactly how gdb and python handle this, and if there is a need to clean this up in order to avoid leaking memory. It's probably not a big deal for debugging, but something to be aware of.
I have a script that uses a config file called config.py. Actually, this is rather a configuration module. Anyway: the configuration module contains a lot of parameters, dictionaries, lists of dictionaries, and so on.
In the script it is currently used like this:
import config

def main():
    myParameter = config.myParameter
Now I have another application scenario for this script that uses a related config (config_advanced.py), where the parameters and dictionaries have different values.
My goal is now to choose the name of the used config module via a passed command-line argument:
myScript.py -configuration config_advanced.py
Since the configuration module is in the same folder as the main script, I guess I have to rename the passed configuration file to config.py first. Afterwards I can perform import config. Otherwise, if I used import config_advanced, I wouldn't be able to use a call like
config.myParameter
in the main script.
Another possibility could be to put the configuration modules in subfolders and keep the name config.py. The passed command-line argument would then have to contain the subfolder.
Either way I won't be able to perform the import at the top of the main file, since I have to do the argument parsing first. This isn't a technical problem, but someone said that it is at least bad practice.
What do you think?
What is a better way to do the trick with not much effort?
Thanks a lot
Edit:
One working solution has been:
import sys

fullpath = "d:\\python\\scripts\\projectA\\configurationFiles\\"
sys.path.append(fullpath)
config = __import__('config_advanced')
Without the sys.path entry it does NOT work, so the following attempts fail:
config = __import__('d:\\python\\scripts\\projectA\\configurationFiles\\config_advanced')
config = __import__('d:\\python\\scripts\\projectA\\configurationFiles\\config_advanced.py')
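As an aside, on current Python the documented way to do such a dynamic import is importlib.import_module, which behaves the same here (a sketch using the same path as above):
import importlib
import sys

sys.path.append("d:\\python\\scripts\\projectA\\configurationFiles\\")
config = importlib.import_module('config_advanced')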
Another possibility that's similar to what you suggest in the question, but which doesn't need you to hide things in subfolders, is to put config_advanced.py and config_plain.py in the same folder as the main script and then dynamically make config.py a link to the actual config file you want to use.
However, martineau's suggestion is much simpler.
OTOH, georg brings up a very valid point, especially if this script isn't just for your own personal use. While using Python itself for the config data is flexible and powerful, it's perhaps a little too powerful. Config data should just be data, not live executable code. If you make a minor mistake when modifying config data you could cause havoc if it's in an executable file. And if a malicious user gets to it, there's no limit to the damage they could cause.
Bad data in a plain old data file will at worst cause a ValueError if it does something weird that your config parsing code isn't expecting. But bad data in a live Python file could throw all sorts of nasty errors. Or even worse, it could do something evil in complete silence...
In reply to your comments, here's some code to illustrate the first point:
#! /usr/bin/env python

import os

config_file = "config.py"

def link_config(mode):
    if os.path.exists(config_file):
        os.remove(config_file)
    config_name = "config_%s.py" % mode
    os.symlink(config_name, config_file)

# .... parse command line to determine config_mode string, then do
link_config(config_mode)

# Now import the newly-linked config file
import config
If config_mode == "plain" the above code will cause config_plain.py to be imported as config, and if config_mode == "advanced" it will cause config_advanced.py to be imported as config.
But as I said before, martineau's method is much simpler. And IIRC, os.symlink may not work on non-Unix systems.
...
As for your second point, check out the docs for the json module
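For instance, a minimal sketch of plain-data config via json (file name and keys invented for the example):
import json

# settings.json might contain: {"myParameter": 42, "servers": ["a", "b"]}
with open('settings.json') as f:
    config = json.load(f)

myParameter = config['myParameter']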