Python: Using map/lambda instead of a for-loop

I am working on converting a CSV file to a structured JSON file.
CSV file:
address,type,floor,door
"1","is","an","example"
"2","is","an","example"
"3","is","an","example"
"4","is","an","example"
"5","is","an","example"
"6","is","an","example"
"7","is","an","example"
First, I read the CSV file and collect each column's items into lists.
import pandas as pd

df = pd.read_csv('data.csv')
listAddress = df.address
listType = df.type
listFloor = df.floor
listDoor = df.door
In order to get lists like this one:
listAddress=["1","2","3","4","5","6","7"]
Now, I want to automate the process and make a list of dictionaries:
import json

Output = []
tempJson = {"1": 1, "2": "is"}
for i in range(len(listAddress)):
    tempJson["Address"] = listAddress[i]
    tempJson["Type"] = listType[i]
    Output.append(tempJson)
json.dumps(Output)
Here is the problem: I'd like to make a list of dictionaries. I can do this in JS using map, but I am not proficient in Python.
Can I simplify this last piece of code using map?
Output:
[
  {"1": 1, "2": "is"},
  {"1": 2, "2": "is"},
  {"1": 3, "2": "is"},
  {"1": 4, "2": "is"},
  {"1": 5, "2": "is"},
  {"1": 6, "2": "is"},
  {"1": 7, "2": "is"}
]

You can use a list comprehension here:
>>> from pprint import pprint
>>> import csv
>>> with open('data.csv') as f:
...     next(f)  # skip the header
...     reader = csv.reader(f, delimiter=',', quotechar='"')
...     d = [{'1': int(row[0]), '2': row[1]} for row in reader]
...
>>> pprint(d)
[{'1': 1, '2': 'is'},
 {'1': 2, '2': 'is'},
 {'1': 3, '2': 'is'},
 {'1': 4, '2': 'is'},
 {'1': 5, '2': 'is'},
 {'1': 6, '2': 'is'},
 {'1': 7, '2': 'is'}]
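If you specifically want the JavaScript-style map, the closest Python equivalent is map() with a lambda. A minimal sketch, assuming the same data.csv layout as above:
import csv

with open('data.csv') as f:
    next(f)  # skip the header
    reader = csv.reader(f, delimiter=',', quotechar='"')
    # map() is lazy in Python 3, so materialize it with list()
    # while the file is still open
    d = list(map(lambda row: {'1': int(row[0]), '2': row[1]}, reader))
The list comprehension is usually considered more readable than map() with a lambda, which is why it is the standard recommendation in Python.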

A simple change would be:
Output = []
tempJson = {"1": 1, "2": "is"}
for i in range(len(listAddress)):
    tempJson["Address"] = listAddress[i]
    tempJson["Type"] = listType[i]
    Output.append(tempJson.copy())
Something probably considered more Pythonic would be:
output = [{"1": 1, "2": "is", "Address": a, "Type": t}
          for a, t in zip(listAddress, listType)]
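Since the question already loads the data with pandas, it may be worth noting that a DataFrame can produce a list of row dictionaries directly. A sketch (not from the original answers; the column names follow the question's CSV header, and the output keys would be the column names rather than "1" and "2"):
import pandas as pd

df = pd.read_csv('data.csv')
# one dict per row, keyed by column name,
# e.g. [{'address': 1, 'type': 'is'}, {'address': 2, 'type': 'is'}, ...]
records = df[['address', 'type']].to_dict(orient='records')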

Related

Change in tab indentation upon opening the file for writing [duplicate]

So I'm using Python 2.7, using the json module to encode the following data structure:
'layer1': {
    'layer2': {
        'layer3_1': [ long_list_of_stuff ],
        'layer3_2': 'string'
    }
}
My problem is that I'm printing everything out using pretty printing, as follows:
json.dumps(data_structure, indent=2)
Which is great, except that I want to indent everything except the content of "layer3_1": it's a massive list of coordinate dictionaries, and having a single value per line makes pretty printing create a file with thousands of lines, as in the following example:
{
  "layer1": {
    "layer2": {
      "layer3_1": [
        {
          "x": 1,
          "y": 7
        },
        {
          "x": 0,
          "y": 4
        },
        {
          "x": 5,
          "y": 3
        },
        {
          "x": 6,
          "y": 9
        }
      ],
      "layer3_2": "string"
    }
  }
}
What I really want is something similar to the following:
{
  "layer1": {
    "layer2": {
      "layer3_1": [{"x":1,"y":7},{"x":0,"y":4},{"x":5,"y":3},{"x":6,"y":9}],
      "layer3_2": "string"
    }
  }
}
I hear it's possible to extend the json module: Is it possible to set it to only turn off indenting when inside the "layer3_1" object? If so, would somebody please tell me how?
(Note:
The code in this answer only works with json.dumps() which returns a JSON formatted string, but not with json.dump() which writes directly to file-like objects. There's a modified version of it that works with both in my answer to the question Write two-dimensional list to JSON file.)
Updated
Below is a version of my original answer that has been revised several times. Unlike the original, which I posted only to show how to get the first idea in J.F.Sebastian's answer to work, and which, like his, returned a non-indented string representation of the object, the latest updated version returns the Python object JSON formatted in isolation.
The keys of each coordinate dict will appear in sorted order, as per one of the OP's comments, but only if a sort_keys=True keyword argument is specified in the initial json.dumps() call driving the process, and it no longer changes the object's type to a string along the way. In other words, the actual type of the "wrapped" object is now maintained.
I think not understanding the original intent of my post resulted in a number of folks downvoting it, so, primarily for that reason, I have "fixed" and improved my answer several times. The current version is a hybrid of my original answer coupled with some of the ideas @Erik Allik used in his answer, plus useful feedback from other users shown in the comments below this answer.
The following code appears to work unchanged in both Python 2.7.16 and 3.7.4.
from _ctypes import PyObj_FromPtr
import json
import re

class NoIndent(object):
    """ Value wrapper. """
    def __init__(self, value):
        self.value = value

class MyEncoder(json.JSONEncoder):
    FORMAT_SPEC = '##{}##'
    regex = re.compile(FORMAT_SPEC.format(r'(\d+)'))

    def __init__(self, **kwargs):
        # Save copy of any keyword argument values needed for use here.
        self.__sort_keys = kwargs.get('sort_keys', None)
        super(MyEncoder, self).__init__(**kwargs)

    def default(self, obj):
        return (self.FORMAT_SPEC.format(id(obj)) if isinstance(obj, NoIndent)
                else super(MyEncoder, self).default(obj))

    def encode(self, obj):
        format_spec = self.FORMAT_SPEC  # Local var to expedite access.
        json_repr = super(MyEncoder, self).encode(obj)  # Default JSON.

        # Replace any marked-up object ids in the JSON repr with the
        # value returned from the json.dumps() of the corresponding
        # wrapped Python object.
        for match in self.regex.finditer(json_repr):
            # see https://stackoverflow.com/a/15012814/355230
            id = int(match.group(1))
            no_indent = PyObj_FromPtr(id)
            json_obj_repr = json.dumps(no_indent.value, sort_keys=self.__sort_keys)

            # Replace the matched id string with json formatted representation
            # of the corresponding Python object.
            json_repr = json_repr.replace(
                '"{}"'.format(format_spec.format(id)), json_obj_repr)

        return json_repr

if __name__ == '__main__':
    from string import ascii_lowercase as letters

    data_structure = {
        'layer1': {
            'layer2': {
                'layer3_1': NoIndent([{"x":1,"y":7}, {"x":0,"y":4}, {"x":5,"y":3},
                                      {"x":6,"y":9},
                                      {k: v for v, k in enumerate(letters)}]),
                'layer3_2': 'string',
                'layer3_3': NoIndent([{"x":2,"y":8,"z":3}, {"x":1,"y":5,"z":4},
                                      {"x":6,"y":9,"z":8}]),
                'layer3_4': NoIndent(list(range(20))),
            }
        }
    }

    print(json.dumps(data_structure, cls=MyEncoder, sort_keys=True, indent=2))
Output:
{
  "layer1": {
    "layer2": {
      "layer3_1": [{"x": 1, "y": 7}, {"x": 0, "y": 4}, {"x": 5, "y": 3}, {"x": 6, "y": 9}, {"a": 0, "b": 1, "c": 2, "d": 3, "e": 4, "f": 5, "g": 6, "h": 7, "i": 8, "j": 9, "k": 10, "l": 11, "m": 12, "n": 13, "o": 14, "p": 15, "q": 16, "r": 17, "s": 18, "t": 19, "u": 20, "v": 21, "w": 22, "x": 23, "y": 24, "z": 25}],
      "layer3_2": "string",
      "layer3_3": [{"x": 2, "y": 8, "z": 3}, {"x": 1, "y": 5, "z": 4}, {"x": 6, "y": 9, "z": 8}],
      "layer3_4": [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19]
    }
  }
}
A bodge, but once you have the string from dumps(), you can perform a regular expression substitution on it, if you're sure of the format of its contents. Something along the lines of:
s = json.dumps(data_structure, indent=2)
s = re.sub(r'\s*{\s*"(.)": (\d+),\s*"(.)": (\d+)\s*}(,?)\s*', r'{"\1":\2,"\3":\4}\5', s)
The following solution seems to work correctly on Python 2.7.x. It uses a workaround taken from Custom JSON encoder in Python 2.7 to insert plain JavaScript code, and avoids custom-encoded objects ending up as JSON strings in the output by using a UUID-based replacement scheme.
import json
import uuid

class NoIndent(object):
    def __init__(self, value):
        self.value = value

class NoIndentEncoder(json.JSONEncoder):
    def __init__(self, *args, **kwargs):
        super(NoIndentEncoder, self).__init__(*args, **kwargs)
        self.kwargs = dict(kwargs)
        del self.kwargs['indent']
        self._replacement_map = {}

    def default(self, o):
        if isinstance(o, NoIndent):
            key = uuid.uuid4().hex
            self._replacement_map[key] = json.dumps(o.value, **self.kwargs)
            return "##%s##" % (key,)
        else:
            return super(NoIndentEncoder, self).default(o)

    def encode(self, o):
        result = super(NoIndentEncoder, self).encode(o)
        for k, v in self._replacement_map.iteritems():
            result = result.replace('"##%s##"' % (k,), v)
        return result
Then this
obj = {
    "layer1": {
        "layer2": {
            "layer3_2": "string",
            "layer3_1": NoIndent([{"y": 7, "x": 1}, {"y": 4, "x": 0}, {"y": 3, "x": 5}, {"y": 9, "x": 6}])
        }
    }
}
print json.dumps(obj, indent=2, cls=NoIndentEncoder)
produces the following output:
{
  "layer1": {
    "layer2": {
      "layer3_2": "string",
      "layer3_1": [{"y": 7, "x": 1}, {"y": 4, "x": 0}, {"y": 3, "x": 5}, {"y": 9, "x": 6}]
    }
  }
}
It also correctly passes all options (except indent) e.g. sort_keys=True down to the nested json.dumps call.
obj = {
    "layer1": {
        "layer2": {
            "layer3_1": NoIndent([{"y": 7, "x": 1, }, {"y": 4, "x": 0}, {"y": 3, "x": 5, }, {"y": 9, "x": 6}]),
            "layer3_2": "string",
        }
    }
}
print json.dumps(obj, indent=2, sort_keys=True, cls=NoIndentEncoder)
correctly outputs:
{
  "layer1": {
    "layer2": {
      "layer3_1": [{"x": 1, "y": 7}, {"x": 0, "y": 4}, {"x": 5, "y": 3}, {"x": 6, "y": 9}],
      "layer3_2": "string"
    }
  }
}
It can also be combined with e.g. collections.OrderedDict:
obj = {
    "layer1": {
        "layer2": {
            "layer3_2": "string",
            "layer3_3": NoIndent(OrderedDict([("b", 1), ("a", 2)]))
        }
    }
}
print json.dumps(obj, indent=2, cls=NoIndentEncoder)
outputs:
{
  "layer1": {
    "layer2": {
      "layer3_3": {"b": 1, "a": 2},
      "layer3_2": "string"
    }
  }
}
UPDATE: In Python 3, there is no iteritems. You can replace encode with this:
def encode(self, o):
    result = super(NoIndentEncoder, self).encode(o)
    for k, v in iter(self._replacement_map.items()):
        result = result.replace('"##%s##"' % (k,), v)
    return result
This yields the OP's expected result:
import json

class MyJSONEncoder(json.JSONEncoder):
    def iterencode(self, o, _one_shot=False):
        list_lvl = 0
        for s in super(MyJSONEncoder, self).iterencode(o, _one_shot=_one_shot):
            if s.startswith('['):
                list_lvl += 1
                s = s.replace('\n', '').rstrip()
            elif 0 < list_lvl:
                s = s.replace('\n', '').rstrip()
                if s and s[-1] == ',':
                    s = s[:-1] + self.item_separator
                elif s and s[-1] == ':':
                    s = s[:-1] + self.key_separator
            if s.endswith(']'):
                list_lvl -= 1
            yield s

o = {
    "layer1": {
        "layer2": {
            "layer3_1": [{"y":7,"x":1},{"y":4,"x":0},{"y":3,"x":5},{"y":9,"x":6}],
            "layer3_2": "string",
            "layer3_3": ["aaa\nbbb","ccc\nddd",{"aaa\nbbb":"ccc\nddd"}],
            "layer3_4": "aaa\nbbb",
        }
    }
}

jsonstr = json.dumps(o, indent=2, separators=(',', ':'), sort_keys=True,
                     cls=MyJSONEncoder)
print(jsonstr)
o2 = json.loads(jsonstr)
print('identical objects: {}'.format(o == o2))
You could try:
- mark lists that shouldn't be indented by replacing them with NoIndentList:
  class NoIndentList(list):
      pass
- override the json.JSONEncoder.default method to produce a non-indented string representation for NoIndentList. You could just cast it back to list and call json.dumps() without indent to get a single line.
It seems the above approach doesn't work for the json module:
import json
import sys

class NoIndent(object):
    def __init__(self, value):
        self.value = value

def default(o, encoder=json.JSONEncoder()):
    if isinstance(o, NoIndent):
        return json.dumps(o.value)
    return encoder.default(o)

L = [dict(x=x, y=y) for x in range(1) for y in range(2)]
obj = [NoIndent(L), L]
json.dump(obj, sys.stdout, default=default, indent=4)
It produces invalid output (the list is serialized as a string):
[
    "[{\"y\": 0, \"x\": 0}, {\"y\": 1, \"x\": 0}]",
    [
        {
            "y": 0,
            "x": 0
        },
        {
            "y": 1,
            "x": 0
        }
    ]
]
If you can use yaml then the method works:
import sys
import yaml

class NoIndentList(list):
    pass

def noindent_list_presenter(dumper, data):
    return dumper.represent_sequence(u'tag:yaml.org,2002:seq', data,
                                     flow_style=True)
yaml.add_representer(NoIndentList, noindent_list_presenter)

obj = [
    [dict(x=x, y=y) for x in range(2) for y in range(1)],
    [dict(x=x, y=y) for x in range(1) for y in range(2)],
]
obj[0] = NoIndentList(obj[0])
yaml.dump(obj, stream=sys.stdout, indent=4)
It produces:
- [{x: 0, y: 0}, {x: 1, y: 0}]
- - {x: 0, y: 0}
  - {x: 0, y: 1}
i.e., the first list is serialized using [] and all items are on one line, the second list uses one line per item.
Here's a post-processing solution if you have too many different types of objects contributing to the JSON to attempt the JSONEncoder method and too many varying types to use a regex. This function collapses whitespace after a specified level, without needing to know the specifics of the data itself.
def collapse_json(text, indent=12):
    """Compacts a string of json data by collapsing whitespace after the
    specified indent level

    NOTE: will not produce correct results when indent level is not a multiple
    of the json indent level
    """
    initial = " " * indent
    out = []  # final json output
    sublevel = []  # accumulation list for sublevel entries
    pending = None  # holder for consecutive entries at exact indent level
    for line in text.splitlines():
        if line.startswith(initial):
            if line[indent] == " ":
                # found a line indented further than the indent level, so add
                # it to the sublevel list
                if pending:
                    # the first item in the sublevel will be the pending item
                    # that was the previous line in the json
                    sublevel.append(pending)
                    pending = None
                item = line.strip()
                sublevel.append(item)
                if item.endswith(","):
                    sublevel.append(" ")
            elif sublevel:
                # found a line at the exact indent level *and* we have sublevel
                # items. This means the sublevel items have come to an end
                sublevel.append(line.strip())
                out.append("".join(sublevel))
                sublevel = []
            else:
                # found a line at the exact indent level but no items indented
                # further, so possibly start a new sub-level
                if pending:
                    # if there is already a pending item, it means that
                    # consecutive entries in the json had the exact same
                    # indentation and that last pending item was not the start
                    # of a new sublevel.
                    out.append(pending)
                pending = line.rstrip()
        else:
            if pending:
                # it's possible that an item will be pending but not added to
                # the output yet, so make sure it's not forgotten.
                out.append(pending)
                pending = None
            if sublevel:
                out.append("".join(sublevel))
            out.append(line)
    return "\n".join(out)
For example, using this structure as input to json.dumps with an indent level of 4:
text = json.dumps({"zero": ["first", {"second": 2, "third": 3, "fourth": 4, "items": [[1,2,3,4], [5,6,7,8], 9, 10, [11, [12, [13, [14, 15]]]]]}]}, indent=4)
here's the output of the function at various indent levels:
>>> print collapse_json(text, indent=0)
{"zero": ["first", {"items": [[1, 2, 3, 4], [5, 6, 7, 8], 9, 10, [11, [12, [13, [14, 15]]]]], "second": 2, "fourth": 4, "third": 3}]}
>>> print collapse_json(text, indent=4)
{
    "zero": ["first", {"items": [[1, 2, 3, 4], [5, 6, 7, 8], 9, 10, [11, [12, [13, [14, 15]]]]], "second": 2, "fourth": 4, "third": 3}]
}
>>> print collapse_json(text, indent=8)
{
    "zero": [
        "first",
        {"items": [[1, 2, 3, 4], [5, 6, 7, 8], 9, 10, [11, [12, [13, [14, 15]]]]], "second": 2, "fourth": 4, "third": 3}
    ]
}
>>> print collapse_json(text, indent=12)
{
    "zero": [
        "first",
        {
            "items": [[1, 2, 3, 4], [5, 6, 7, 8], 9, 10, [11, [12, [13, [14, 15]]]]],
            "second": 2,
            "fourth": 4,
            "third": 3
        }
    ]
}
>>> print collapse_json(text, indent=16)
{
    "zero": [
        "first",
        {
            "items": [
                [1, 2, 3, 4],
                [5, 6, 7, 8],
                9,
                10,
                [11, [12, [13, [14, 15]]]]
            ],
            "second": 2,
            "fourth": 4,
            "third": 3
        }
    ]
}
Best-performing code (processing 10 MB of text takes about 1 s):
import json

def dumps_json(data, indent=2, depth=2):
    assert depth > 0
    space = ' ' * indent
    s = json.dumps(data, indent=indent)
    lines = s.splitlines()
    N = len(lines)
    # determine which lines are to be shortened
    is_over_depth_line = lambda i: i in range(N) and lines[i].startswith(space * (depth + 1))
    is_open_bracket_line = lambda i: not is_over_depth_line(i) and is_over_depth_line(i + 1)
    is_close_bracket_line = lambda i: not is_over_depth_line(i) and is_over_depth_line(i - 1)

    def shorten_line(line_index):
        if not is_open_bracket_line(line_index):
            return lines[line_index]
        # shorten over-depth lines
        start = line_index
        end = start
        while not is_close_bracket_line(end):
            end += 1
        has_trailing_comma = lines[end][-1] == ','
        _lines = [lines[start][-1], *lines[start + 1:end], lines[end].replace(',', '')]
        d = json.dumps(json.loads(' '.join(_lines)))
        return lines[line_index][:-1] + d + (',' if has_trailing_comma else '')

    s = '\n'.join([
        shorten_line(i)
        for i in range(N) if not is_over_depth_line(i) and not is_close_bracket_line(i)
    ])

    return s
UPDATE:
Here's my explanation:
First, we use json.dumps to get an indented JSON string.
Example:
>>> print(json.dumps({'0':{'1a':{'2a':None,'2b':None},'1b':{'2':None}}}, indent=2))
[0] {
[1]   "0": {
[2]     "1a": {
[3]       "2a": null,
[4]       "2b": null
[5]     },
[6]     "1b": {
[7]       "2": null
[8]     }
[9]   }
[10] }
If we set indent=2 and depth=2, then over-depth lines start with 6 spaces.
We have 4 types of lines:
Normal line
Open bracket line (2, 6)
Exceed-depth line (3, 4, 7)
Close bracket line (5, 8)
We will try to merge each sequence of lines (type 2 + 3 + 4) into one single line.
Example:
[2]     "1a": {
[3]       "2a": null,
[4]       "2b": null
[5]     },
will be merged into:
[2]     "1a": {"2a": null, "2b": null},
NOTE: a close bracket line may have a trailing comma.
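A quick usage sketch of the dumps_json() helper above, with my own example data; with indent=2 and depth=3, everything nested more than three levels deep should be collapsed onto a single line:
data = {
    "layer1": {
        "layer2": {
            "layer3_1": [{"x": 1, "y": 7}, {"x": 0, "y": 4}],
            "layer3_2": "string"
        }
    }
}
# Expected to print something like:
# {
#   "layer1": {
#     "layer2": {
#       "layer3_1": [{"x": 1, "y": 7}, {"x": 0, "y": 4}],
#       "layer3_2": "string"
#     }
#   }
# }
print(dumps_json(data, indent=2, depth=3))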
Answer for me and Python 3 users
import re

def jsonIndentLimit(jsonString, indent, limit):
    regexPattern = re.compile(f'\n({indent}){{{limit}}}(({indent})+|(?=(}}|])))')
    return regexPattern.sub('', jsonString)
if __name__ == '__main__':
    jsonString = '''{
  "layer1": {
    "layer2": {
      "layer3_1": [
        {
          "x": 1,
          "y": 7
        },
        {
          "x": 0,
          "y": 4
        },
        {
          "x": 5,
          "y": 3
        },
        {
          "x": 6,
          "y": 9
        }
      ],
      "layer3_2": "string"
    }
  }
}'''
    print(jsonIndentLimit(jsonString, '  ', 3))
'''print
{
  "layer1": {
    "layer2": {
      "layer3_1": [{"x": 1,"y": 7},{"x": 0,"y": 4},{"x": 5,"y": 3},{"x": 6,"y": 9}],
      "layer3_2": "string"
    }
  }
}'''
This solution is not as elegant and generic as the others, and you will not learn much from it, but it's quick and simple.
def custom_print(data_structure, indent):
    for key, value in data_structure.items():
        print "\n%s%s:" % (' '*indent, str(key)),
        if isinstance(value, dict):
            custom_print(value, indent+1)
        else:
            print "%s" % (str(value)),
Usage and output:
>>> custom_print(data_structure,1)
 layer1:
  layer2:
   layer3_2: string
   layer3_1: [{'y': 7, 'x': 1}, {'y': 4, 'x': 0}, {'y': 3, 'x': 5}, {'y': 9, 'x': 6}]
As a side note, this website has a built-in JavaScript that will avoid line feeds in JSON strings when lines are shorter than 70 chars:
http://www.csvjson.com/json_beautifier
(was implemented using a modified version of JSON-js)
Select "Inline short arrays"
Great for quickly viewing data that you have in the copy buffer.
Indeed, this is one of the things YAML does better than JSON.
I can't get NoIndentEncoder to work, but I can use a regex on the JSON string:
import re

def collapse_json(text, list_length=5):
    for length in range(list_length):
        re_pattern = r'\[' + (r'\s*(.+)\s*,' * length)[:-1] + r'\]'
        re_repl = r'[' + ''.join(r'\{}, '.format(i+1) for i in range(length))[:-2] + r']'
        text = re.sub(re_pattern, re_repl, text)
    return text
The question is, how do I perform this on a nested list?
Before:
[
    0,
    "any",
    [
        2,
        3
    ]
]
After:
[0, "any", [2, 3]]

Generate a JSON file with Python

I'm trying to generate a JSON file with Python, but I can't figure out how to append each object correctly and write all of them at once to the JSON file. Could you please help me solve this? a, b, and the values for x, y, z are calculated in the script.
Thank you so much
This is how the generated JSON file should look:
{
    "a": {
        "x": 2,
        "y": 3,
        "z": 4
    },
    "b": {
        "x": 5,
        "y": 4,
        "z": 4
    }
}
This is the Python script:
import json

for i in range(1, 5):
    a = geta(i)
    x = getx(i)
    y = gety(i)
    z = getz(i)
    data = {
        a: {
            "x": x,
            "y": y,
            "z": z
        }
    }
    with open('data.json', 'a') as f:
        f.write(json.dumps(data, ensure_ascii=False, indent=4))
Just use normal dictionaries in Python when constructing the JSON, then use the json package to export to JSON files.
You can construct them like this (long way):
a_dict = {}
a_dict['id'] = {}
a_dict['id']['a'] = {'properties' : {}}
a_dict['id']['a']['properties']['x'] = '9'
a_dict['id']['a']['properties']['y'] = '3'
a_dict['id']['a']['properties']['z'] = '17'
a_dict['id']['b'] = {'properties' : {}}
a_dict['id']['b']['properties']['x'] = '3'
a_dict['id']['b']['properties']['y'] = '2'
a_dict['id']['b']['properties']['z'] = '1'
or you can use a function:
def dict_construct(id, x, y, z):
    new_dic = {id: {'properties': {}}}
    values = [{'x': x}, {'y': y}, {'z': z}]
    for val in values:
        new_dic[id]['properties'].update(val)
    return new_dic

return_values = [('a', '9', '3', '17'), ('b', '3', '2', '1')]
a_dict = {'id': {}}
for xx in return_values:
    add_dict = dict_construct(*xx)
    a_dict['id'].update(add_dict)
print(a_dict)
both give you as a dictionary:
{'id': {'a': {'properties': {'x': '9', 'y': '3', 'z': '17'}}, 'b': {'properties': {'x': '3', 'y': '2', 'z': '1'}}}}
using json.dump:
with open('data.json', 'w') as outfile:
    json.dump(a_dict, outfile)
you get as a file:
{
  "id": {
    "a": {
      "properties": {
        "x": "9",
        "y": "3",
        "z": "17"
      }
    },
    "b": {
      "properties": {
        "x": "3",
        "y": "2",
        "z": "1"
      }
    }
  }
}
Make sure you have a valid Python dictionary (it seems like you already do).
I see you are trying to write your JSON to a file with:
with open('data.json', 'a') as f:
    f.write(json.dumps(data, ensure_ascii=False, indent=4))
You are opening data.json in "a" (append) mode, so you are adding your JSON to the end of the file, which will result in bad JSON if data.json already contains any data. Do this instead:
with open('data.json', 'w') as f:
    # where data is your valid python dictionary
    json.dump(data, f)
One way would be to create the whole dict at once:
data = {}
for i in range(1, 5):
    name = getname(i)
    x = getx(i)
    y = gety(i)
    z = getz(i)
    data[name] = {
        "x": x,
        "y": y,
        "z": z
    }
And then save
with open('data.json', 'w') as f:
    json.dump(data, f, indent=4)

How to convert all values of a nested dictionary into strings?

I am writing a Python application where I have a dictionary that can be nested up to any level.
The keys at any level can be either int or string, but I want to convert all keys and values at all levels into strings. How deeply the dictionary is nested is variable, which makes it a bit complicated.
{
    "col1": {
        "0": 0,
        "1": 8,
        "2": {
            0: 2,
        },
        "3": 4,
        "4": 5
    },
    "col2": {
        "0": "na",
        "1": 1,
        "2": "na",
        "3": "na",
        "4": "na"
    },
    "col3": {
        "0": 1,
        "1": 3,
        "2": 3,
        "3": 6,
        "4": 3
    },
    "col4": {
        "0": 5,
        "1": "na",
        "2": "9",
        "3": 9,
        "4": "na"
    }
}
I am looking for the shortest and quickest function to achieve that. There are other questions like Converting dictionary values in python from str to int in nested dictionary that suggest ways of doing it but none of them deals with the "variable nesting" nature of the dictionary.
Any ideas will be appreciated.
This is the most straightforward way I can think of doing it:
import json
data = {'col4': {'1': 'na', '0': 5, '3': 9, '2': '9', '4': 'na'}, 'col2': {'1': 1, '0': 'na', '3': 'na', '2': 'na', '4': 'na'}, 'col3': {'1': 3, '0': 1, '3': 6, '2': 3, '4': 3}, 'col1': {'1': 8, '0': 0, '3': 4, '2': {0: 2}, '4': 5}}
stringified_dict = json.loads(json.dumps(data), parse_int=str, parse_float=str)
Here are some links to the documentation for json loads and parse_int: Python3, Python2
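As a small worked example of what that round trip does (my own sample data; note that the keys come back as strings automatically, because JSON object keys are always strings):
import json

data = {'col1': {'0': 0, '2': {0: 2}}}
stringified = json.loads(json.dumps(data), parse_int=str, parse_float=str)
print(stringified)
# {'col1': {'0': '0', '2': {'0': '2'}}}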
You could check the dictionary recursively:
def iterdict(d):
    for k, v in d.items():
        if isinstance(v, dict):
            iterdict(v)
        else:
            if type(v) == int:
                v = str(v)
            d.update({k: v})
    return d
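A quick usage sketch of iterdict() with my own sample data; note that this version converts int values only and leaves the keys untouched:
nested = {'col1': {'0': 0, '1': 8, '2': {0: 2}}}
print(iterdict(nested))
# {'col1': {'0': '0', '1': '8', '2': {0: '2'}}}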
In case it's helpful to anyone, I modified Jose's answer to handle lists as well. The output converts all values to strings. I haven't thoroughly tested it, but I just ran it against around 40k records, and it passed schema validation:
def iterdict(d):
    """Recursively iterate over dict converting values to strings."""
    for k, v in d.items():
        if isinstance(v, dict):
            iterdict(v)
        elif isinstance(v, list):
            for x, i in enumerate(v):
                if isinstance(i, (dict, list)):
                    iterdict(i)
                else:
                    if type(i) != str:
                        i = str(i)
                    v[x] = i
        else:
            if type(v) != str:
                v = str(v)
            d.update({k: v})
    return d

create JSON with multiple dictionaries, Python

I have this code:
>>> import simplejson as json
>>> keys = dict([(x, x**3) for x in xrange(1, 3)])
>>> nums = json.dumps(keys, indent=4)
>>> print nums
{
    "1": 1,
    "2": 8
}
But I want to create a loop to make my output look like this:
[
    {
        "1": 1,
        "2": 8
    },
    {
        "1": 1,
        "2": 8
    },
    {
        "1": 1,
        "2": 8
    }
]
You'd need to create a list and append all the mappings to it before converting to JSON:
output = []
for something in somethingelse:
    output.append(dict([(x, x**3) for x in xrange(1, 3)]))
json.dumps(output)
Your desired output is a JSON array of objects. I think what you probably meant to do was to append multiple dictionaries to a list, like this:
>>> import json
>>> multikeys = []
>>> for i in range(3):
...     multikeys.append(dict([(x, x**3) for x in xrange(1, 3)]))
...
>>> print json.dumps(multikeys, indent=4)
[
    {
        "1": 1,
        "2": 8
    },
    {
        "1": 1,
        "2": 8
    },
    {
        "1": 1,
        "2": 8
    }
]
