I would like to define several groups of values where the values of a particular group are used if that group is selected.
Here's an example to make that clearer:
[environment]
type=prod
[prod]
folder=data/
debug=False
[dev]
folder=dev_data/
debug=True
Then to use it:
print config['folder'] # prints 'data/' because config['environment']=='prod'
Is there a natural or idiomatic way to do this in configobj or otherwise?
Additional Info
My current thoughts are overwriting or adding to the resulting config object using some logic post parsing the config file. However, this feels contrary to the nature of a config file, and feels like it would require somewhat complex logic to validate.
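For concreteness, the post-parse idea can be sketched with the standard library's configparser (configobj's dict-like interface would look very similar); the section and key names below mirror the example above:

```python
import configparser

raw = """
[environment]
type = prod
[prod]
folder = data/
debug = False
[dev]
folder = dev_data/
debug = True
"""

parser = configparser.ConfigParser()
parser.read_string(raw)

# Look up which group is selected, then promote that group's
# values into a flat config dict.
active = parser["environment"]["type"]   # 'prod'
config = dict(parser[active])

print(config["folder"])  # data/
```

The "complex logic" reduces to one dictionary lookup; validation only needs to check that the selected section actually exists.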
This is maybe not exactly what you're searching for, but have you considered using JSON for easy nested access?
For example, if your config file looks like
{
"environment": {
"type": "prod"
},
"[dev]": {
"debug": "True",
"folder": "dev_data/"
},
"[prod]": {
"debug": "False",
"folder": "data/"
}
}
you can access it with the [dev] or [prod] key to get your folder:
>>> config = json.loads(config_data)
>>> config['[dev]']['folder']
'dev_data/'
>>> config['[prod]']['folder']
'data/'
Related
I am trying to reduce the headache of copying the same configuration data (stored in a YAML file) by using anchor tags in YAML. The example YAML looks like:
profiles:
home: &home
key1: value1
object1:
subKey1: subVal1
subKey2: subVal2
complexObject:
something: value
someOtherThing: value
work:
<<: *home
object1:
subKey2: completelyDifferentValue # something like this ?!
complexObject.something: notValue # or something like this ?
The equivalent JSON for the above YAML is
{
"profiles": {
"home": {
"key1": "value1",
"object1": {
"subKey1": "subVal1",
"subKey2": "subVal2",
"complexObject": {
"something": "value",
"someOtherThing": "value"
}
}
},
"work": {
"key1": "value1",
"object1": {
"subKey2": "completelyDifferentValue",
"complexObject.something": "notValue"
}
}
}
}
Whereas what I wanted was:
{
"profiles": {
"home": {
"key1": "value1",
"object1": {
"subKey1": "subVal1",
"subKey2": "subVal2",
"complexObject": {
"something": "value",
"someOtherThing": "value"
}
}
},
"work": {
"key1": "value1",
"object1": {
"subKey1": "subVal1",
"subKey2": "completelyDifferentValue",
"complexObject": {
"something": "notValue",
"someOtherThing": "value"
}
}
}
}
}
(note the additional subKey1, which was dropped in the actual output)
The YAML config file will have objects inside objects, and the idea is to have one parent object and then just copy it and modify a few keys (inside child objects).
I understand that the YAML spec might not be very helpful directly in this case and would appreciate any workarounds in Python via PyYAML (or some other library) as well!
Due to the bad influence of Java, it is a common misconception that these two YAML structures are equivalent:
a.b: c
a:
b:
c
They are not. A period in YAML is a content character just like a, making the first YAML have a key named a.b which does not imply a nested mapping.
Now about merging: Anchors and aliases exist to be able to serialize arbitrary, possibly cyclic, graphs. Recursive descent (as needed for a deep merge) needs to be wary of such cycles, which is why I assume << is specified not to do this.
What << actually does is that this specific sequence of characters is assigned the tag !!merge. The YAML processor then implements merging as "for every mapping that has a key with tag !!merge, pull the unknown key-value pairs from that key's value(s) into the current mapping".
The problem for you is that while libraries like PyYAML allow you to register custom constructors for user-defined tags, these can only produce a value for the tagged item. However, !!merge influences the mapping around the tagged value, so its semantics cannot easily be reproduced and extended via custom constructors.
You can, however, simply override PyYAML's merge implementation. For this, inherit from SafeConstructor, FullConstructor or UnsafeConstructor depending on your needs, reimplement flatten_mapping, then define a loader (see here) that uses your constructor. Theoretically, besides deep merging, you could also implement periods-as-nested-mappings here, but I advise against it: it would then only work at places where you do merging, and not elsewhere, which is counter-intuitive.
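If overriding flatten_mapping feels too invasive, the same result can be had with a post-load deep merge over plain dicts. This is a sketch, not PyYAML API: deep_merge is a hypothetical helper, and the data below mirrors the home/work profiles from the question after loading:

```python
def deep_merge(base, override):
    """Recursively merge override into base, returning a new dict.
    Values from override win; nested dicts are merged key by key."""
    merged = dict(base)
    for key, value in override.items():
        if key in merged and isinstance(merged[key], dict) and isinstance(value, dict):
            merged[key] = deep_merge(merged[key], value)
        else:
            merged[key] = value
    return merged

home = {
    "key1": "value1",
    "object1": {
        "subKey1": "subVal1",
        "subKey2": "subVal2",
        "complexObject": {"something": "value", "someOtherThing": "value"},
    },
}
work_overrides = {
    "object1": {
        "subKey2": "completelyDifferentValue",
        "complexObject": {"something": "notValue"},
    }
}

work = deep_merge(home, work_overrides)
# work["object1"] keeps subKey1 and someOtherThing while
# overriding subKey2 and complexObject.something.
```

Note that this sidesteps cycles only because config data loaded from YAML without anchors back-referencing themselves is acyclic; for arbitrary graphs the caveat from the answer above still applies.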
I have a json called thefile.json which looks like this:
{
"domain": "Something",
"domain": "Thingie",
"name": "Another",
"description": "Thing"
}
I am trying to write a Python script which would make a set of the values in domain. In this example it would return
{'Something', 'Thingie'}
Here is what I tried:
import json
with open("thefile.json") as my_file:
    data = json.load(my_file)

ids = set(item["domain"] for item in data.values())
print(ids)
I get the error message
unique_ids.add(item["domain"])
TypeError: string indices must be integers
Having looked up answers on Stack Exchange, I'm stumped. Why can't I have a string as an index, seeing as I am using a JSON whose data type is a dictionary (I think!)? How do I get the values for "domain"?
So, to start, you can read more about JSON formats here: https://www.w3schools.com/python/python_json.asp
Second, dictionaries must have unique keys. Therefore, having two keys named domain is incorrect. You can read more about python dictionaries here: https://www.w3schools.com/python/python_dictionaries.asp
Now, I recommend the following two designs that should do what you need:
Multiple Names, Multiple Domains: In this design, you can access websites and check the domain of each of its values like ids = set(item["domain"] for item in data["websites"])
{
"websites": [
{
"domain": "Something.com",
"name": "Something",
"description": "A thing!"
},
{
"domain": "Thingie.com",
"name": "Thingie",
"description": "A thingie!"
}
]
}
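With that layout, collecting the unique domains is a one-liner over the "websites" list; a quick sketch:

```python
import json

data = json.loads("""
{
  "websites": [
    {"domain": "Something.com", "name": "Something", "description": "A thing!"},
    {"domain": "Thingie.com", "name": "Thingie", "description": "A thingie!"}
  ]
}
""")

# Each item in data["websites"] is a dict, so item["domain"] works,
# unlike iterating over the values of a flat dict of strings.
ids = set(item["domain"] for item in data["websites"])
print(ids)
```

This also explains the original TypeError: iterating data.values() on the flat JSON yields strings, and indexing a string with "domain" fails.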
One Name, Multiple Domains: In this design, each website has multiple domains that can be accessed using JVM_Domains = set(data["domains"])
{
"domains": ["Something.com", "Thingie.com", "Stuff.com"],
"name": "Me Domains",
"description": "A list of domains belonging to Me"
}
I hope this helps. Let me know if I missed any details.
You have a problem in your JSON: duplicate keys. I am not sure if it is forbidden, but I am sure it is badly formatted.
Besides that, it is of course going to cause you a lot of problems.
A dictionary cannot have duplicate keys; what would a lookup of a duplicated key return?
So fix your JSON to something like this:
{
"domain": ["Something", "Thingie"],
"name": "Another",
"description": "Thing"
}
Guess what: good formatting almost solves your problem by itself (and you can have duplicates in the list) :)
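For completeness, a quick sketch of reading the fixed file and building the set:

```python
import json

config = json.loads("""
{
  "domain": ["Something", "Thingie"],
  "name": "Another",
  "description": "Thing"
}
""")

# Any duplicates in the list would collapse here.
domains = set(config["domain"])
print(domains)
```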
I have a python script which contains dictionaries and is used as input from another python script which performs calculations. I want to use the first script which is used as input, to create more scripts with the exact same structure in the dictionaries but different values for the keys.
Original Script: Car1.py
Owner = {
"Name": "Jim",
"Surname": "Johnson",
}
Car_Type = {
"Make": "Ford",
"Model": "Focus",
"Year": "2008"
}
Car_Info = {
"Fuel": "Gas",
"Consumption": 5,
"Max Speed": 190
}
I want to be able to create more input files with identical format but for different cases, e.g.
New Script: Car2.py
Owner = {
"Name": "Nick",
"Surname": "Perry",
}
Car_Type = {
"Make": "BMW",
"Model": "528",
"Year": "2015"
}
Car_Info = {
"Fuel": "Gas",
"Consumption": 10,
"Max Speed": 280
}
So far, I have only seen answers that print just the keys and the values in a new file, but not the actual name of the dictionary as well. Can someone provide some help? Thanks in advance!
If you really want to do it that way (not recommended, because of the reasons stated in the comment by spectras and the good alternatives) and import your input Python file:
This question has answers on how to read out the dictionaries names from the imported module. (using the dict() on the module while filtering for variables that do not start with "__")
Then get the new values for the dictionary entries and construct the new dicts.
Finally you need to write an exporter that takes care of storing the data in a Python-readable form, just as you would construct a normal text file.
I do not see any advantage over just storing it in a storage format.
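A minimal exporter along those lines might look like this; export_dicts is a hypothetical helper (not a library function), and pprint.pformat produces valid Python literals for plain dicts:

```python
import pprint

def export_dicts(filename, **dicts):
    """Write each named dict as a top-level assignment, e.g. Owner = {...},
    so the output file has the same structure as Car1.py."""
    with open(filename, "w") as f:
        for name, d in dicts.items():
            f.write(name + " = " + pprint.pformat(d) + "\n\n")

# Usage: produces a Car2.py with identical structure but new values.
export_dicts(
    "Car2.py",
    Owner={"Name": "Nick", "Surname": "Perry"},
    Car_Type={"Make": "BMW", "Model": "528", "Year": "2015"},
    Car_Info={"Fuel": "Gas", "Consumption": 10, "Max Speed": 280},
)
```

The generated file can then be imported by the calculation script exactly like the original, which is the one advantage over a plain storage format here.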
read the file with something like
text = open('yourfile.py', 'r').read().split('\n')
and then interpret the list of strings you get... after that you can save it with something like
new_text = open('newfile.py', 'w')
for line in text:
    new_text.write(line + '\n')
new_text.close()
As spectras said earlier, not ideal... but if that's what you want to do, go for it.
I want to keep some large, static dictionaries in config to keep my main application code clean. Another reason for doing that is so the dicts can be occasionally edited without having to touch the application.
I thought a good solution was using a json config a la:
http://www.ilovetux.com/Using-JSON-Configs-In-Python/
JSON is a natural, readable format for this type of data. Example:
{
"search_dsl_full": {
"function_score": {
"boost_mode": "avg",
"functions": [
{
"filter": {
"range": {
"sort_priority_inverse": {
"gte": 200
}
}
},
"weight": 2.4
}
],
"query": {
"multi_match": {
"fields": [
"name^10",
"search_words^5",
"description",
"skuid",
"backend_skuid"
],
"operator": "and",
"type": "cross_fields"
}
},
"score_mode": "multiply"
}
}
The big problem is, when I import it into my python app and set a dict equal to it like this:
with open("config.json", "r") as fin:
    config = json.load(fin)
...
def create_query():
    query_dsl = config['search_dsl_full']
    return query_dsl
and then later, only when a certain condition is met, I need to update that dict like this:
if (special condition is met):
    query_dsl['function_score']['query']['multi_match']['operator'] = 'or'
Since query_dsl is a reference, it updates the config dictionary too. So when I call the function again, it reflects the updated-for-special-condition version ("or") rather than the desired config default ("and").
I realize this is a newb issue (yes, I'm a python newb), but I can't seem to figure out a 'pythonic' solution. I'm trying to not be a hack.
Possible options:
When I set query_dsl equal to the config dict, use copy.deepcopy()
Figure out how to make all nested slices of the config dictionary immutable
Maybe find a better way to accomplish what I'm trying to do? I'm totally open to this whole approach being a preposterous newbie mistake.
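The first option is the usual fix and costs one line; a sketch with a stand-in config dict (trimmed to the one key that matters here):

```python
import copy

config = {"search_dsl_full": {"query": {"multi_match": {"operator": "and"}}}}

def create_query():
    # deepcopy hands the caller an independent copy, so later
    # mutations never leak back into the config default.
    return copy.deepcopy(config["search_dsl_full"])

query_dsl = create_query()
query_dsl["query"]["multi_match"]["operator"] = "or"

# The config default is untouched.
print(config["search_dsl_full"]["query"]["multi_match"]["operator"])  # and
```

A plain dict(...) copy would not be enough, because the nested dicts would still be shared; deepcopy is needed precisely because the structure is nested.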
Any help appreciated. Thanks!
This is a simplistic example of a dictionary created by json.load that I have to deal with:
{
"name": "USGS REST Services Query",
"queryInfo": {
"timePeriod": "PT4H",
"format": "json",
"data": {
"sites": [{
"id": "03198000",
"params": "[00060, 00065]"
},
{
"id": "03195000",
"params": "[00060, 00065]"
}]
}
}
}
Sometimes there may be 15-100 sites with unknown sets of parameters at each site. My goal is to either create two lists (one storing "site" IDs and the other storing "params") or a much simplified dictionary from this original dictionary. Is there a way to do this using nested for loops with key, value pairs using the iteritems() method?
What I have tried so far is this:
queryDict = {}
for key, value in WS_Req_dict.iteritems():
    if key == "queryInfo":
        if value == "data":
            for key, value in WS_Req_dict[key][value].iteritems():
                if key == "sites":
                    siteVal = key
                if value == "params":
                    paramList = [value]
queryDict["sites"] = siteVal
queryDict["sites"]["params"] = paramList
I run into trouble getting the second FOR loop to work. I haven't looked into pulling out lists yet.
I think this may be an overall stupid way of doing it, but I can't see around it yet.
I think you can make your code much simpler by just indexing, when feasible, rather than looping over iteritems.
for site in WS_Req_dict['queryInfo']['data']['sites']:
    queryDict[site['id']] = site['params']
If some of the keys might be missing, dict's get method is your friend:
for site in WS_Req_dict.get('queryInfo',{}).get('data',{}).get('sites',[]):
would let you quietly ignore missing keys. But, this is much less readable, so, if I needed it, I'd encapsulate it into a function -- and often you may not need this level of precaution! (Another good alternative is a try/except KeyError encapsulation to ignore missing keys, if they are indeed possible in your specific use case).
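The try/except variant could be sketched like this, using the sample dictionary from the question:

```python
WS_Req_dict = {
    "queryInfo": {
        "data": {
            "sites": [
                {"id": "03198000", "params": "[00060, 00065]"},
                {"id": "03195000", "params": "[00060, 00065]"},
            ]
        }
    }
}

try:
    sites = WS_Req_dict["queryInfo"]["data"]["sites"]
except KeyError:
    sites = []  # any missing level means there is nothing to collect

queryDict = {site["id"]: site["params"] for site in sites}
print(queryDict)
```

One try/except covers all three levels at once, which is why it often reads better than chained .get calls.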