pyyaml 3.11 pass dictionary to iterator? - python

I use the following YAML data:
Document:
  InPath: /home/me
  OutPath: /home/me
  XLOutFile: TestFile1.xlsx
Sheets:
  - Sheet: Test123
    InFile: Test123.MQSC
    Server: Testsystem1
  - Sheet: Test345
    InFile: Test345.MQSC
    Server: Testsystem2
Title:
  A: "Server Name"
  B: "MQ Version"
  C: "Broker Version"
Fields:
  A: ServerName
  B: MQVersion
  C: BrokerVersion
and the following code:
import yaml

class cfgReader():
    def __init__(self):
        self.stream = ""
        self.ymldata = ""
        self.ymlkey = ""
        self.ymld = ""

    def read(self, infilename):
        self.stream = self.stream = file(infilename, 'r')  # Read the yamlfile
        self.ymldata = yaml.load(self.stream)  # Instantiate yaml object and parse the input "stream".

    def docu(self):
        print self.ymldata
        print self.ymldata['Sheets']
        for self.ymlkey in self.ymldata['Document']:  # passes String to iterator
            print self.ymlkey
        for sheets in self.ymldata['Sheets']:  # passes Dictionary to iterator
            print sheets['Sheet']
        for title in self.ymldata['Title']:
            print title
        for fields in self.ymldata['Fields']:
            print fields
The print output is:
{'Fields': {'A': 'ServerName', 'C': 'BrokerVersion', 'B': 'MQVersion'}, 'Document': {'XLOutFile': 'TestFile1.xlsx', 'InPath': '/home/me', 'OutPath': '/home/me'}, 'Sheets': [{'Sheet': 'Test123', 'InFile': 'Test123.MQSC', 'Server': 'Testsystem1'}, {'Sheet': 'Test345', 'InFile': 'Test345.MQSC', 'Server': 'Testsystem2'}], 'Title': {'A': 'Server Name', 'C': 'Broker Version', 'B': 'MQ Version'}}
[{'Sheet': 'Test123', 'InFile': 'Test123.MQSC', 'Server': 'Testsystem1'}, {'Sheet': 'Test345', 'InFile': 'Test345.MQSC', 'Server': 'Testsystem2'}]
X
I
O
Test123
Test345
A
C
B
A
C
B
I could not find out how to control the way data is passed to the iterator. What I want is to pass it as dictionaries so that I can access the values through their keys. This works for "Sheets", but I don't understand why. The documentation does not describe it clearly: http://pyyaml.org/wiki/PyYAMLDocumentation

In your code, self.ymldata['Sheets'] is a list of dictionaries because your YAML source for it:
- Sheet: Test123
  InFile: Test123.MQSC
  Server: Testsystem1
- Sheet: Test345
  InFile: Test345.MQSC
  Server: Testsystem2
is a sequence of mappings (and this is the value for the key Sheets of the top-level mapping in your YAML file).
The values for the other top-level keys are all mappings (and not sequences of mappings), which get loaded as Python dicts. And if you iterate over a dict as you do, you get the keys.
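The difference can be illustrated with a minimal inline sketch (hypothetical data, not your file):

```python
data = {
    'Title': {'A': 'Server Name', 'B': 'MQ Version'},
    'Sheets': [{'Sheet': 'Test123'}, {'Sheet': 'Test345'}],
}

# Iterating a dict yields only its keys:
for key in data['Title']:
    print(key)              # A, then B

# Use .items() to get key/value pairs instead:
for key, value in data['Title'].items():
    print(key, value)

# Iterating a list of dicts yields the dicts themselves:
for sheet in data['Sheets']:
    print(sheet['Sheet'])   # Test123, then Test345
```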
If you don't want to iterate over these dictionaries, then you should not start a for loop. You might want to test what the value for a top-level key is and then act accordingly, e.g. to print out all dictionaries loaded from the YAML file except for the top-level mapping, do:
import ruamel.yaml as yaml

class CfgReader():
    def __init__(self):
        self.stream = ""
        self.ymldata = ""
        self.ymlkey = ""
        self.ymld = ""

    def read(self, infilename):
        self.stream = open(infilename, 'r')  # Read the yamlfile
        self.ymldata = yaml.load(self.stream)  # Instantiate yaml object and parse the input "stream".

    def docu(self):
        for k in self.ymldata:
            v = self.ymldata[k]
            if isinstance(v, list):
                for elem in v:
                    print(elem)
            else:
                print(v)

cfg_reader = CfgReader()
cfg_reader.read('in.yaml')
cfg_reader.docu()
which prints:
{'InFile': 'Test123.MQSC', 'Sheet': 'Test123', 'Server': 'Testsystem1'}
{'InFile': 'Test345.MQSC', 'Sheet': 'Test345', 'Server': 'Testsystem2'}
{'B': 'MQVersion', 'A': 'ServerName', 'C': 'BrokerVersion'}
{'B': 'MQ Version', 'A': 'Server Name', 'C': 'Broker Version'}
{'XLOutFile': 'TestFile1.xlsx', 'InPath': '/home/me', 'OutPath': '/home/me'}
Please also note some general things you should be aware of:
- I use ruamel.yaml (disclaimer: I am the author of that package), which supports YAML 1.2 (PyYAML supports the 1.1 standard from 2005). For your purposes they act the same.
- Don't use file(); it is not available in Python 3. Use open() instead.
- Assigning the same value twice to the same attribute makes no sense (self.stream = self.stream = ...).
- Your opened file/stream never gets closed; you might want to consider using:
  with open(infilename) as self.stream:
      self.ymldata = yaml.load(self.stream)
- Class names, by convention, should start with an upper-case character.

Related

Python new section in config class

I am trying to write a dynamic config .ini file where I can add new sections with keys and values, and also add key-less values.
I have written code in Python 3 that creates the .ini file, but the section comes out as 'default'.
Also, it just overwrites the file every time instead of adding a new section.
import configparser

"""Generates the configuration file with the config class.
The file is a .ini file"""

class Config:
    """Class for data in uuids.ini file management"""
    def __init__(self):
        self.config = configparser.ConfigParser()
        self.config_file = "conf.ini"
        # self.config.read(self.config_file)

    def wrt(self, config_name={}):
        condict = {
            "test": "testval",
            'test1': 'testval1',
            'test2': 'testval2'
        }
        for name, val in condict.items():
            self.config.set(config_name, name, val)
        # self.config.read(self.config_file)
        with open(self.config_file, 'w+') as out:
            self.config.write(out)

if __name__ == "__main__":
    Config().wrt()
I should be able to add new sections with or without keys, and to append keys or values. Each section should have a proper name.
Some problems with your code:
- The usage of mutable objects as default parameters can be a little tricky, and you may see unexpected behavior.
- You are using config.set(), which is legacy.
- You are defaulting config_name to a dictionary; why?
- Too much white space :p
- You don't need to iterate through the dictionary items to write them using the newer (non-legacy) API, as shown below.
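The mutable-default pitfall from the first point can be seen in isolation (a quick sketch with hypothetical function names):

```python
def bad(items=[]):       # the default list is created once, at function definition time
    items.append(1)
    return items

print(bad())  # [1]
print(bad())  # [1, 1] -- the "empty" default has accumulated state

def good(items=None):    # use None as a sentinel and build a fresh list per call
    if items is None:
        items = []
    items.append(1)
    return items

print(good())  # [1]
print(good())  # [1]
```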
This should work:
"""Generates the configuration file with the config class.
The file is a .ini file
"""
import configparser
import re
class Config:
"""Class for data in uuids.ini file management."""
def __init__(self):
self.config = configparser.ConfigParser()
self.config_file = "conf.ini"
# self.config.read(self.config_file)
def wrt(self, config_name='DEFAULT', condict=None):
if condict is None:
self.config.add_section(config_name)
return
self.config[config_name] = condict
with open(self.config_file, 'w') as out:
self.config.write(out)
# after writing to file check if keys have no value in the ini file (e.g: key0 = )
# the last character is '=', let us strip it off to only have the key
with open(self.config_file) as out:
ini_data = out.read()
with open(self.config_file, 'w') as out:
new_data = re.sub(r'^(.*?)=\s+$', r'\1', ini_data, 0, re.M)
out.write(new_data)
out.write('\n')
condict = {"test": "testval", 'test1': 'testval1', 'test2': 'testval2'}
c = Config()
c.wrt('my section', condict)
c.wrt('EMPTY')
c.wrt(condict={'key': 'val'})
c.wrt(config_name='NO_VALUE_SECTION', condict={'key0': '', 'key1': ''})
This outputs:
[DEFAULT]
key = val
[my section]
test = testval
test1 = testval1
test2 = testval2
[EMPTY]
[NO_VALUE_SECTION]
key1
key0

How to limit the number of float digits JSONEncoder produces?

I am trying to set up the python json library in order to save to file a dictionary whose elements are other dictionaries. There are many float numbers, and I would like to limit the number of digits to, for example, 7.
According to other posts on SO, encoder.FLOAT_REPR should be used. However, it is not working.
For example, the code below, run in Python 3.7.1, prints all the digits:
import json

json.encoder.FLOAT_REPR = lambda o: format(o, '.7f')

d = dict()
d['val'] = 5.78686876876089075543
d['name'] = 'kjbkjbkj'

f = open('test.json', 'w')
json.dump(d, f, indent=4)
f.close()
How can I solve that?
It might be irrelevant but I am on macOS.
EDIT
This question was marked as a duplicate. However, the accepted answer (and until now the only one) to the original post clearly states:
Note: This solution doesn't work on python 3.6+
So that solution is not the proper one. Plus, it uses the simplejson library, not json.
It is still possible to monkey-patch json in Python 3, but instead of FLOAT_REPR, you need to modify float. Make sure to disable c_make_encoder just like in Python 2.
import json

class RoundingFloat(float):
    __repr__ = staticmethod(lambda x: format(x, '.2f'))

json.encoder.c_make_encoder = None
if hasattr(json.encoder, 'FLOAT_REPR'):
    # Python 2
    json.encoder.FLOAT_REPR = RoundingFloat.__repr__
else:
    # Python 3
    json.encoder.float = RoundingFloat

print(json.dumps({'number': 1.0 / 81}))
Upsides: simplicity, can do other formatting (e.g. scientific notation, strip trailing zeroes etc). Downside: it looks more dangerous than it is.
Option 1: Use regular expression matching to round.
You can dump your object to a string using json.dumps and then use the technique shown in this post to find and round your floating-point numbers.
To test it out, I added some more complicated nested structures on top of the example you provided:
import json
import re

d = dict()
d['val'] = 5.78686876876089075543
d['name'] = 'kjbkjbkj'
d["mylist"] = [1.23456789, 12, 1.23, {"foo": "a", "bar": 9.87654321}]
d["mydict"] = {"bar": "b", "foo": 1.92837465}

# dump the object to a string
d_string = json.dumps(d, indent=4)

# find numbers with 8 or more digits after the decimal point
pat = re.compile(r"\d+\.\d{8,}")

def mround(match):
    return "{:.7f}".format(float(match.group()))

# write the modified string to a file
with open('test.json', 'w') as f:
    f.write(re.sub(pat, mround, d_string))
The output test.json looks like:
{
    "val": 5.7868688,
    "name": "kjbkjbkj",
    "mylist": [
        1.2345679,
        12,
        1.23,
        {
            "foo": "a",
            "bar": 9.8765432
        }
    ],
    "mydict": {
        "bar": "b",
        "foo": 1.9283747
    }
}
One limitation of this method is that it will also match numbers that are within double quotes (floats represented as strings). You could come up with a more restrictive regex to handle this, depending on your needs.
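For example, a slightly more restrictive pattern could use lookarounds to skip numbers that sit directly inside double quotes (a heuristic sketch, not a full JSON-aware solution):

```python
import re

# Only match a float if it is not immediately preceded or followed by a quote,
# so floats stored as JSON strings are left alone.
pat = re.compile(r'(?<!")(\d+\.\d{8,})(?!")')

def mround(match):
    return "{:.7f}".format(float(match.group()))

s = '{"a": 1.234567891234, "b": "9.876543210000"}'
print(re.sub(pat, mround, s))  # only "a" is rounded
```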
Option 2: subclass json.JSONEncoder
Here is something that will work on your example and handle most of the edge cases you will encounter:
import json

class MyCustomEncoder(json.JSONEncoder):
    def iterencode(self, obj):
        if isinstance(obj, float):
            yield format(obj, '.7f')
        elif isinstance(obj, dict):
            last_index = len(obj) - 1
            yield '{'
            i = 0
            for key, value in obj.items():
                yield '"' + key + '": '
                for chunk in MyCustomEncoder.iterencode(self, value):
                    yield chunk
                if i != last_index:
                    yield ", "
                i += 1
            yield '}'
        elif isinstance(obj, list):
            last_index = len(obj) - 1
            yield "["
            for i, o in enumerate(obj):
                for chunk in MyCustomEncoder.iterencode(self, o):
                    yield chunk
                if i != last_index:
                    yield ", "
            yield "]"
        else:
            for chunk in json.JSONEncoder.iterencode(self, obj):
                yield chunk
Now write the file using the custom encoder.
with open('test.json', 'w') as f:
    json.dump(d, f, cls=MyCustomEncoder)
The output file test.json:
{"val": 5.7868688, "name": "kjbkjbkj", "mylist": [1.2345679, 12, 1.2300000, {"foo": "a", "bar": 9.8765432}], "mydict": {"bar": "b", "foo": 1.9283747}}
In order to get other keyword arguments like indent to work, the easiest way would be to read in the file that was just written and write it back out using the default encoder:
# write d using custom encoder
with open('test.json', 'w') as f:
    json.dump(d, f, cls=MyCustomEncoder)

# load output into new_d
with open('test.json', 'r') as f:
    new_d = json.load(f)

# write new_d out using default encoder
with open('test.json', 'w') as f:
    json.dump(new_d, f, indent=4)
Now the output file is the same as shown in option 1.
Here's something that you may be able to use that's based on my answer to the question:
Write two-dimensional list to JSON file.
I say may because it requires "wrapping" all the float values in the Python dictionary (or list) before JSON encoding it with dump().
(Tested with Python 3.7.2.)
from _ctypes import PyObj_FromPtr
import json
import re

class FloatWrapper(object):
    """ Float value wrapper. """
    def __init__(self, value):
        self.value = value

class MyEncoder(json.JSONEncoder):
    FORMAT_SPEC = '##{}##'
    regex = re.compile(FORMAT_SPEC.format(r'(\d+)'))  # regex: r'##(\d+)##'

    def default(self, obj):
        return (self.FORMAT_SPEC.format(id(obj)) if isinstance(obj, FloatWrapper)
                else super(MyEncoder, self).default(obj))

    def iterencode(self, obj, **kwargs):
        for encoded in super(MyEncoder, self).iterencode(obj, **kwargs):
            # Check for marked-up float values (FloatWrapper instances).
            match = self.regex.search(encoded)
            if match:  # Get FloatWrapper instance.
                id = int(match.group(1))
                float_wrapper = PyObj_FromPtr(id)
                json_obj_repr = '%.7f' % float_wrapper.value  # Create alt repr.
                encoded = encoded.replace(
                    '"{}"'.format(self.FORMAT_SPEC.format(id)), json_obj_repr)
            yield encoded

d = dict()
d['val'] = FloatWrapper(5.78686876876089075543)  # Must wrap float values.
d['name'] = 'kjbkjbkj'

with open('float_test.json', 'w') as file:
    json.dump(d, file, cls=MyEncoder, indent=4)
Contents of file created:
{
    "val": 5.7868688,
    "name": "kjbkjbkj"
}
Update:
As I mentioned, the above requires all the float values to be wrapped before calling json.dump(). Fortunately doing that could be automated by adding and using the following (minimally tested) utility:
def wrap_type(obj, kind, wrapper):
    """ Recursively wrap instances of type kind in dictionary and list
        objects.
    """
    if isinstance(obj, dict):
        new_dict = {}
        for key, value in obj.items():
            if not isinstance(value, (dict, list)):
                new_dict[key] = wrapper(value) if isinstance(value, kind) else value
            else:
                new_dict[key] = wrap_type(value, kind, wrapper)
        return new_dict
    elif isinstance(obj, list):
        new_list = []
        for value in obj:
            if not isinstance(value, (dict, list)):
                new_list.append(wrapper(value) if isinstance(value, kind) else value)
            else:
                new_list.append(wrap_type(value, kind, wrapper))
        return new_list
    else:
        return obj

d = dict()
d['val'] = 5.78686876876089075543
d['name'] = 'kjbkjbkj'

with open('float_test.json', 'w') as file:
    json.dump(wrap_type(d, float, FloatWrapper), file, cls=MyEncoder, indent=4)
Here is a python code snippet that shows how to quantize json output to the specified number of digits:
# python example code, error handling not shown

# open files
fin = open(input_file_name)
fout = open(output_file_name, "w+")

# read the file input (this could be done in one step, but breaking it up allows more flexibility)
indata = fin.read()

# example quantization function
def quant(n):
    return round((float(n) * (10 ** args.prec))) / (
        10 ** args.prec
    )  # could use decimal.quantize

# process the data stream by parsing it, using a callback to quantize each float as it is parsed
outdata = json.dumps(json.loads(indata, parse_float=quant), separators=(",", ":"))

# write the output
fout.write(outdata)
The above is what the jsonvice command-line tool uses to quantize the floating-point json numbers to whatever precision is desired to save space.
https://pypi.org/project/jsonvice/
This can be installed with pip or pipx (see docs).
pip3 install jsonvice
Disclaimer: I wrote this when needing to test quantized machine learning model weights.
I found the above options within the python standard library to be very limiting and cumbersome. If you're not strictly limited to the standard library, pandas has a json module that includes a dumps method with a double_precision parameter to control the number of digits in a float (default 10):
import json
import pandas.io.json

d = {
    'val': 5.78686876876089075543,
    'name': 'kjbkjbkj',
}

print(json.dumps(d))
print(pandas.io.json.dumps(d))
print(pandas.io.json.dumps(d, double_precision=5))
gives:
{"val": 5.786868768760891, "name": "kjbkjbkj"}
{"val":5.7868687688,"name":"kjbkjbkj"}
{"val":5.78687,"name":"kjbkjbkj"}
This doesn't answer the question, but for the decoding side you could do something like this, or override the hook method.
Solving this problem that way, though, would require encoding, decoding, then encoding again, which is overly convoluted and no longer the best choice. I assumed Encode had all the bells and whistles Decode did; my mistake.
# d = dict()

class Round7FloatEncoder(json.JSONEncoder):
    def iterencode(self, obj):
        if isinstance(obj, float):
            yield format(obj, '.7f')

with open('test.json', 'w') as f:
    json.dump(d, f, cls=Round7FloatEncoder)
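For completeness, the decode-side hook mentioned above can be sketched like this (a minimal round-trip through json.loads with parse_float):

```python
import json

s = '{"val": 5.78686876876089075543}'

# parse_float receives the raw numeric string while decoding
rounded = json.loads(s, parse_float=lambda x: round(float(x), 7))
print(json.dumps(rounded))  # {"val": 5.7868688}
```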
Inspired by this answer, here is a solution that works for Python >= 3.6 (tested with 3.9) and that allows customization of the format on a case by case basis. It works for both json and simplejson (tested with json=2.0.9 and simplejson=3.17.6).
Note however that this is not thread-safe.
import json
from contextlib import contextmanager

class FormattedFloat(float):
    def __new__(self, value, fmt=None):
        return float.__new__(self, value)

    def __init__(self, value, fmt=None):
        float.__init__(value)
        if fmt:
            self.fmt = fmt

    def __repr__(self):
        if hasattr(self, 'fmt'):
            return f'{self:{self.fmt}}'
        return float.__repr__(self)

@contextmanager
def formatted_floats():
    c_make_encoder = json.encoder.c_make_encoder
    json_float = json.encoder.float
    json.encoder.c_make_encoder = None
    json.encoder.float = FormattedFloat
    try:
        yield
    finally:
        json.encoder.c_make_encoder = c_make_encoder
        json.encoder.float = json_float
Example
x = 12345.6789

d = dict(
    a=x,
    b=FormattedFloat(x),
    c=FormattedFloat(x, '.4g'),
    d=FormattedFloat(x, '.08f'),
)
>>> d
{'a': 12345.6789, 'b': 12345.6789, 'c': 1.235e+04, 'd': 12345.67890000}
Now,
with formatted_floats():
out = json.dumps(d)
>>> out
'{"a": 12345.6789, "b": 12345.6789, "c": 1.235e+04, "d": 12345.67890000}'
>>> json.loads(out)
{'a': 12345.6789, 'b': 12345.6789, 'c': 12350.0, 'd': 12345.6789}
Note that the original json.encoder attributes are restored by the context manager, so:
>>> json.dumps(d)
'{"a": 12345.6789, "b": 12345.6789, "c": 12345.6789, "d": 12345.6789}'

Best way to add dictionary entry and append to JSON file in Python

I have a need to add entries to a dictionary with the following keys:
name
element
type
I want each entry to append to a JSON file, where I will access them for another piece of the project.
What I have below technically works, but there are a couple of things (at least) wrong with it.
First, it doesn't prevent duplicates from being entered. For example, I can have 'xyz', '4444' and 'test2' appear as JSON entries multiple times. Is there a way to correct this?
Second, is there a cleaner way to write the actual data-entry piece, so that when I enter these values into the dictionary they are not hard-coded directly in the parentheses?
Finally, is there a better place to put the JSON piece? Should it be inside the function?
Just trying to clean this up a bit. Thanks
import json

element_dict = {}

def add_entry(name, element, type):
    element_dict["name"] = name
    element_dict["element"] = element
    element_dict["type"] = type
    return element_dict

# add entry
entry = add_entry('xyz', '4444', 'test2')

# export to JSON
with open('elements.json', 'a', encoding="utf-8") as file:
    x = json.dumps(element_dict, indent=4)
    file.write(x + '\n')
There are several questions here. The main points worth mentioning:
- You can use a list to hold your arguments and use *args to unpack them when you supply them to add_entry.
- To check for / avoid duplicates, you can use a set to track items already added.
- For writing to JSON: now that you have a list, you can simply iterate over it and write everything in one function at the end.
Putting these aspects together:
import json

res = []
seen = set()

def add_entry(res, name, element, type):
    # check if already in the seen set
    if (name, element, type) in seen:
        return res
    # add to the seen set
    seen.add((name, element, type))
    # append to the results list
    res.append({'name': name, 'element': element, 'type': type})
    return res

args = ['xyz', '4444', 'test2']
res = add_entry(res, *args)   # add entry - SUCCESS
res = add_entry(res, *args)   # try to add again - FAIL

args2 = ['wxy', '3241', 'test3']
res = add_entry(res, *args2)  # add another - SUCCESS
Result:
print(res)
[{'name': 'xyz', 'element': '4444', 'type': 'test2'},
 {'name': 'wxy', 'element': '3241', 'type': 'test3'}]
Writing to JSON via a function:
def write_to_json(lst, fn):
    with open(fn, 'a', encoding='utf-8') as file:
        for item in lst:
            x = json.dumps(item, indent=4)
            file.write(x + '\n')

# export to JSON
write_to_json(res, 'elements.json')
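One caveat with appending pretty-printed objects as above: the file as a whole is not a single valid JSON document, so reading it back gets awkward. A common alternative (a sketch, using a hypothetical elements.jsonl file name) is to write one compact object per line, i.e. JSON Lines:

```python
import json

records = [{'name': 'xyz', 'element': '4444', 'type': 'test2'},
           {'name': 'wxy', 'element': '3241', 'type': 'test3'}]

# one compact JSON object per line
with open('elements.jsonl', 'w', encoding='utf-8') as f:
    for r in records:
        f.write(json.dumps(r) + '\n')

# reading back is then a trivial line-by-line loads()
with open('elements.jsonl', encoding='utf-8') as f:
    loaded = [json.loads(line) for line in f]

print(loaded == records)  # True
```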
You can try it this way:
import json
import hashlib

def add_entry(name, element, type):
    return {hashlib.md5(name + element + type).hexdigest(): {"name": name, "element": element, "type": type}}

# add entry
entry = add_entry('xyz', '4444', 'test2')

# update the JSON
with open('my_file.json', 'r') as f:
    json_data = json.load(f)
    print json_data.values()  # view previous entries
    json_data.update(entry)

with open('elements.json', 'w') as f:
    f.write(json.dumps(json_data))

Create named variables in the local scope from JSON keys

Is there a way I can create named variables in the local scope from a JSON file?
This is my json file (document.json). I would like to create variables in the local scope named after the paths of my json dictionary.
This is how I manually create them; I would like to do it automatically for the whole json file. Is it possible?
import json
from jsonpath_rw import parse  # assumption: a jsonpath library providing parse()/find()

class board(object):
    def __init__(self, json, image):
        self.json = json
        self.image = image

    def extract_json(self, *args):
        with open(self.json) as data_file:
            data = json.load(data_file)
        jsonpath_expr = parse(".".join(args))
        return jsonpath_expr.find(data)[0].value

MyAgonism = board('document.json', './tabellone.jpg')

boxes_time_minutes_coord = MyAgonism.extract_json("boxes", "time_minutes", "coord")
boxes_time_seconds_coord = MyAgonism.extract_json("boxes", "time_seconds", "coord")
boxes_score_home_coord = MyAgonism.extract_json("boxes", "score_home", "coord")
I think you're making this much more complicated than it needs to be.
import json

with open('document.json') as f:
    d = json.load(f)

time_minutes_coords = d['boxes']['time_minutes']['coord']
time_seconds_coords = d['boxes']['time_seconds']['coord']
score_home_coords = d['boxes']['score_home']['coord']
If you actually want to create named variables in the local scope from the keys in your json file, you can use the locals() dictionary (but this is a terrible idea, it's far better just to reference them from the json dictionary).
# Flatten the dictionary keys.
# This turns ['boxes']['time_minutes']['coord']
# into "boxes_time_minutes_coord"
def flatten_dict(d, k_pre=None, delim='_', fd=None):
    if fd is None:
        fd = {}
    for k, v in d.iteritems():
        if k_pre is not None:
            k = '{0}{1}{2}'.format(k_pre, delim, k)
        if isinstance(v, dict):
            flatten_dict(v, k, delim, fd)
        else:
            fd[k] = v
    return fd

fd = flatten_dict(d)
locals().update(fd)
print boxes_time_minutes_coord
Lots of caveats, like the possibility of overwriting some other variable in your local scope, or the possibility that two dictionary keys could be identical after flattening unless you choose a delimiter that doesn't appear in any of the dictionary keys. Or that this won't work if your keys contain invalid characters for variable names (like spaces for example).
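If the goal is really just dotted access rather than true local variables, a safer route (a standard-library sketch, not from the original answer) is an object_hook that returns namespaces:

```python
import json
from types import SimpleNamespace

s = '{"boxes": {"time_minutes": {"coord": [1, 2]}}}'

# Every decoded JSON object becomes a SimpleNamespace, giving attribute access
d = json.loads(s, object_hook=lambda obj: SimpleNamespace(**obj))
print(d.boxes.time_minutes.coord)  # [1, 2]
```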

using ConfigParser and dictionary in Python

I am trying some basic python scripts using ConfigParser and converting the result to a dictionary. I am reading a file named "file.cfg" which contains three sections: root, first, second. Currently the code reads the file and converts everything in it to a dictionary.
My requirement is to convert only the sections named "first", "second", and so on, with their key-value pairs, to a dictionary. What would be the best way of excluding the section "root" and its key-value pairs?
import urllib
import urllib2
import base64
import json
import sys
from ConfigParser import SafeConfigParser

parser = SafeConfigParser()
parser.read('file.cfg')
print parser.get('root', 'auth')

config_dict = {}
for sect in parser.sections():
    config_dict[sect] = {}
    for name, value in parser.items(sect):
        config_dict[sect][name] = value
print config_dict
Contents of file.cfg -
~]# cat file.cfg
[root]
username = admin
password = admin
auth = http://192.168.1.1/login
[first]
username = pete
password = sEcReT
url = http://192.168.1.1/list
[second]
username = ron
password = SeCrET
url = http://192.168.1.1/status
Output of the script -
~]# python test4.py
http://192.168.1.1/login
{'second': {'username': 'ron', 'url': 'http://192.168.1.1/status', 'password': 'SeCrEt'}, 'root': {'username': 'admin', 'password': 'admin', 'auth': 'http://192.168.1.1/login'}, 'first': {'username': 'pete', 'url': 'http://192.168.1.1/list', 'password': 'sEcReT'}}
You can remove root section from parser.sections() as follows:
parser.remove_section('root')
Also you don't have to iterate over each pair in each section. You can just convert them to dict:
config_dict = {}
for sect in parser.sections():
    config_dict[sect] = dict(parser.items(sect))
Here is a one-liner:
config_dict = {sect: dict(parser.items(sect)) for sect in parser.sections()}
Bypass the root section by comparison.
for sect in parser.sections():
    if sect == 'root':
        continue
    config_dict[sect] = {}
    for name, value in parser.items(sect):
        config_dict[sect][name] = value
Edit after acceptance:
ozgur's one liner is a much more concise solution. Upvote from me. If you don't feel like removing sections from the parser directly, the entry can be deleted afterwards.
config_dict = {sect: dict(parser.items(sect)) for sect in parser.sections()} # ozgur's one-liner
del config_dict['root']
Maybe a bit off topic, but ConfigParser is a real pain when it comes to storing ints, floats and booleans. I prefer using dicts, which I dump into ConfigParser.
I also use functions to convert between ConfigParser objects and dicts; those deal with the variable type changes, so ConfigParser is happy since it requires strings, and my program is happy since 'False' is not False.
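The string-typing problem described above can also be seen (and often handled) with configparser's own typed getters; a minimal sketch:

```python
import configparser

config = configparser.ConfigParser()
config.read_string("""
[first]
enabled = False
retries = 3
ratio = 0.5
""")

raw = config['first']['enabled']
print(raw, bool(raw))  # 'False' True -- a non-empty string is always truthy

# The typed getters convert explicitly:
print(config['first'].getboolean('enabled'))  # False
print(config['first'].getint('retries'))      # 3
print(config['first'].getfloat('ratio'))      # 0.5
```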
def configparser_to_dict(config: configparser.ConfigParser) -> dict:
    config_dict = {}
    for section in config.sections():
        config_dict[section] = {}
        for key, value in config.items(section):
            # Try to convert back to the original types if possible.
            # (Note: map the literal strings explicitly; bool('False') would be True.)
            if value in ('True', 'False', 'None'):
                value = {'True': True, 'False': False, 'None': None}[value]
            # Try to convert to float or int
            try:
                if isinstance(value, str):
                    if '.' in value:
                        value = float(value)
                    else:
                        value = int(value)
            except ValueError:
                pass
            config_dict[section][key] = value

    # Drop the root section if present
    config_dict.pop('root', None)
    return config_dict


def dict_to_configparser(config_dict: dict) -> configparser.ConfigParser:
    config = configparser.ConfigParser()
    for section in config_dict.keys():
        config.add_section(section)
        # Convert all values to strings so configparser is happy
        for key, value in config_dict[section].items():
            config[section][key] = str(value)
    return config
