I have a list of dicts containing UTF-8 text, and I want to save it to a text file:
ls_dict = [
    {'a': 'میلاد'},
    {'b': 'علی'},
    {'c': 'رضا'}
]
I want to save it as CSV or TXT with UTF-8 encoding.
You just need to make sure you specify the relevant encoding when you create/open the output file.
import json

ls_dict = [
    {'a': 'میلاد'},
    {'b': 'علی'},
    {'c': 'رضا'}
]

# ensure_ascii=False keeps the UTF-8 characters readable in the file
# instead of \uXXXX escapes
with open('j.json', 'w', encoding='utf-8') as j:
    json.dump(ls_dict, j, ensure_ascii=False)
Subsequently...
with open('j.json', encoding='utf-8') as j:
    data = json.load(j)

print(data)
Output:
[{'a': 'میلاد'}, {'b': 'علی'}, {'c': 'رضا'}]
You can save it as a CSV using pandas:
import pandas as pd

ls_dict = [
    {'a': 'میلاد'},
    {'b': 'علی'},
    {'c': 'رضا'}
]

# flatten the list of dicts into a single dict
result = {}
for k in ls_dict:
    result.update(k)
# result is now {'a': 'میلاد', 'b': 'علی', 'c': 'رضا'}

# create a one-row DataFrame (wrap the dict in a list, since all
# values are scalars and pandas would otherwise require an index)
df = pd.DataFrame([result])
#        a    b    c
# 0  میلاد  علی  رضا

# the default encoding for to_csv is utf-8
# https://pandas.pydata.org/docs/reference/api/pandas.DataFrame.to_csv.html
df.to_csv('filename.csv')
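If you'd rather avoid the pandas dependency, the same one-row CSV could also be written with the standard-library csv module. This is a sketch: the field order here is simply the sorted keys, and the filename is reused from above.

```python
import csv

ls_dict = [{'a': 'میلاد'}, {'b': 'علی'}, {'c': 'رضا'}]

# flatten into one row, as above
row = {}
for d in ls_dict:
    row.update(d)

# newline='' is the documented way to open files for the csv module
with open('filename.csv', 'w', encoding='utf-8', newline='') as f:
    writer = csv.DictWriter(f, fieldnames=sorted(row))
    writer.writeheader()   # writes the a,b,c header row
    writer.writerow(row)   # writes the values in fieldname order
```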
Saving ls_dict as a text file:
import json

ls_dict = [
    {'a': 'میلاد'},
    {'b': 'علی'},
    {'c': 'رضا'}
]

with open('ls_dict.txt', 'w', encoding='utf-8') as f:
    json.dump(ls_dict, f, indent=2, ensure_ascii=False)
Related
I have file1.txt with the following contents:
[
{
"SERIAL": "124584",
"X": "30024.1",
},
{
"SERIAL": "114025",
"X": "14006.2",
}
]
I have file2.txt with the following contents:
[
{
"SERIAL": "344588",
"X": "48024.1",
},
{
"SERIAL": "255488",
"X": "56006.2",
}
]
I want to combine the two files into a single file output.txt that looks like this:
[
{
"SERIAL": "124584",
"X": "30024.1",
},
{
"SERIAL": "114025",
"X": "14006.2",
},
{
"SERIAL": "344588",
"X": "48024.1",
},
{
"SERIAL": "255488",
"X": "56006.2",
},
]
The tricky part is the [] at the end of each individual file.
I am using Python 3.7.
First, to be JSON-compliant, remove all the trailing commas (ref: https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Trailing_commas).
Then you can use the following code:
import json

with open("file1.txt") as f1:
    d1 = json.load(f1)
with open("file2.txt") as f2:
    d2 = json.load(f2)

# list concatenation joins the two JSON arrays
d3 = d1 + d2

with open("output.txt", "w") as out:
    json.dump(d3, out)
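If you'd rather not edit the files by hand, a small helper could strip the trailing commas before parsing. This is a sketch (load_lenient_json is a hypothetical name): the naive regex would corrupt any string value that itself contains ",]" or ",}", which the serial-number data above does not.

```python
import json
import re

def load_lenient_json(text):
    """Strip trailing commas before ] or }, then parse as JSON.

    Caution: this regex is naive and would also touch commas inside
    string values, so only use it on data like the above.
    """
    cleaned = re.sub(r',\s*([\]}])', r'\1', text)
    return json.loads(cleaned)

raw = '[\n  {\n    "SERIAL": "124584",\n    "X": "30024.1",\n  },\n]'
print(load_lenient_json(raw))  # [{'SERIAL': '124584', 'X': '30024.1'}]
```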
Here is a solution that reads the content of each file and then appends the items:
from ast import literal_eval

with open("/home/umesh/Documents/text1.txt", "r") as data:
    first_file_data = data.read()
with open("/home/umesh/Documents/text2.txt", "r") as data:
    second_file_data = data.read()

# literal_eval tolerates the trailing commas that json.loads rejects
first_file_data = literal_eval(first_file_data)
second_file_data = literal_eval(second_file_data)

for item in second_file_data:
    first_file_data.append(item)

print(first_file_data)
OUTPUT
[{'SERIAL': '124584', 'X': '30024.1'}, {'SERIAL': '114025', 'X': '14006.2'}, {'SERIAL': '344588', 'X': '48024.1'}, {'SERIAL': '255488', 'X': '56006.2'}]
This solves your problem:
import ast
import json

with open('file1.txt') as f:
    data = ast.literal_eval(f.read())
with open('file2.txt') as f:
    data2 = ast.literal_eval(f.read())

data.extend(data2)
print(data)

# write the combined list to a file
with open('outputfile', 'w') as fout:
    json.dump(data, fout)
OUTPUT:
[{'SERIAL': '124584', 'X': '30024.1'}, {'SERIAL': '114025', 'X': '14006.2'}, {'SERIAL': '344588', 'X': '48024.1'}, {'SERIAL': '255488', 'X': '56006.2'}]
Since the contents of both files are lists, you can concatenate them as follows:
file1 = [{'SERIAL': '124584', 'X': '30024.1'}, {'SERIAL': '114025', 'X': '14006.2'}]
file2 = [{'SERIAL': '344588', 'X': '48024.1'}, {'SERIAL': '255488', 'X': '56006.2'}]
totals = file1 + file2
Result
[{'SERIAL': '124584', 'X': '30024.1'},
{'SERIAL': '114025', 'X': '14006.2'},
{'SERIAL': '344588', 'X': '48024.1'},
{'SERIAL': '255488', 'X': '56006.2'}]
I have a dictionary like this
d = {
    'Benefits': {
        1: {
            'BEN1': {
                'D': [{'description': 'D1'}],
                'C': [{'description': 'C1'}]
            }
        },
        2: {
            'BEN2': {
                'D': [{'description': 'D2'}],
                'C': [{'description': 'C2'}]
            }
        }
    }
}
I am trying to sort the dictionary based on the keys of the innermost dictionaries.

For example, I want the value for 'C' to come first and 'D' second. I'm trying to get the correct order. Here is my code:
d1 = collections.OrderedDict(sorted(d.items()))
Unfortunately, this didn't give the correct result.
This is my expected output:
{'Benefits':
    {1:
        {'BEN1':
            {'C': [{'description': 'C1'}], 'D': [{'description': 'D1'}]}
        },
     2:
        {'BEN2':
            {'C': [{'description': 'C2'}], 'D': [{'description': 'D2'}]}
        }
    }
}
I am using Python 3.5. I am trying to get an order like this:
{'C':[{'description': 'C1'}], 'D': [{'description': 'D1'}]}
The following code will sort any dictionary by its key, and will recursively sort any dictionary value that is also a dictionary; it makes no assumptions about the content of the dictionary being sorted. It uses an OrderedDict, but if you can be sure it will always run on Python 3.6 or greater, a simple change lets it use a plain dict.
from collections import OrderedDict

d = {
    'Benefits': {
        1: {
            'BEN1': {
                'D': [{'description': 'D1'}],
                'C': [{'description': 'C1'}]
            }
        },
        2: {
            'BEN2': {
                'D': [{'description': 'D2'}],
                'C': [{'description': 'C2'}]
            }
        }
    }
}
def sort_dict(d):
    items = [[k, v] for k, v in sorted(d.items(), key=lambda x: x[0])]
    for item in items:
        if isinstance(item[1], dict):
            item[1] = sort_dict(item[1])
    return OrderedDict(items)
    # return dict(items)  # on Python 3.6+ a plain dict keeps insertion order

print(sort_dict(d))
d1 = collections.OrderedDict(sorted(d.items()))
This is not working because it sorts only the top-level Benefits item. Here you want to sort the inner items, so we have to reach into them and sort there:
import collections

d1 = {'Benefits': {}}
for a_benefit in d['Benefits']:
    d1['Benefits'][a_benefit] = {}
    for a_ben in d['Benefits'][a_benefit]:
        d1['Benefits'][a_benefit][a_ben] = dict(
            collections.OrderedDict(sorted(d['Benefits'][a_benefit][a_ben].items())))
I'm trying to generate a JSON file with Python, but I can't figure out how to append each object correctly and write all of them at once to a JSON file. Could you please help me solve this? a, b, and the values for x, y, z are calculated in the script.
Thank you so much
This is how the generated JSON file should look:
{
    "a": {
        "x": 2,
        "y": 3,
        "z": 4
    },
    "b": {
        "x": 5,
        "y": 4,
        "z": 4
    }
}
This is my Python script:
import json

for i in range(1, 5):
    a = geta(i)
    x = getx(i)
    y = gety(i)
    z = getz(i)
    data = {
        a: {
            "x": x,
            "y": y,
            "z": z
        }
    }
    with open('data.json', 'a') as f:
        f.write(json.dumps(data, ensure_ascii=False, indent=4))
Just use normal Python dictionaries when constructing the JSON, then use the json package to export them to a JSON file.
You can construct them like this (the long way):
a_dict = {}
a_dict['id'] = {}
a_dict['id']['a'] = {'properties' : {}}
a_dict['id']['a']['properties']['x'] = '9'
a_dict['id']['a']['properties']['y'] = '3'
a_dict['id']['a']['properties']['z'] = '17'
a_dict['id']['b'] = {'properties' : {}}
a_dict['id']['b']['properties']['x'] = '3'
a_dict['id']['b']['properties']['y'] = '2'
a_dict['id']['b']['properties']['z'] = '1'
or you can use a function:
def dict_construct(id, x, y, z):
    new_dic = {id: {'properties': {}}}
    values = [{'x': x}, {'y': y}, {'z': z}]
    for val in values:
        new_dic[id]['properties'].update(val)
    return new_dic

return_values = [('a', '9', '3', '17'), ('b', '3', '2', '1')]
a_dict = {'id': {}}
for xx in return_values:
    add_dict = dict_construct(*xx)
    a_dict['id'].update(add_dict)

print(a_dict)
Both give you this dictionary:
{'id': {'a': {'properties': {'x': '9', 'y': '3', 'z': '17'}}, 'b': {'properties': {'x': '3', 'y': '2', 'z': '1'}}}}
Using json.dump (indent=4 gives the pretty-printed file shown below):
with open('data.json', 'w') as outfile:
    json.dump(a_dict, outfile, indent=4)
you get this file:
{
    "id": {
        "a": {
            "properties": {
                "x": "9",
                "y": "3",
                "z": "17"
            }
        },
        "b": {
            "properties": {
                "x": "3",
                "y": "2",
                "z": "1"
            }
        }
    }
}
Make sure you have a valid Python dictionary (it seems like you already do). I see you are trying to write your JSON to a file with:
with open('data.json', 'a') as f:
    f.write(json.dumps(data, ensure_ascii=False, indent=4))
You are opening data.json in "a" (append) mode, so you are adding your JSON to the end of the file, which will result in invalid JSON if data.json already contains any data. Do this instead:
with open('data.json', 'w') as f:
    # where data is your valid python dictionary
    json.dump(data, f)
One way is to create the whole dict at once:
data = {}

for i in range(1, 5):
    name = getname(i)
    x = getx(i)
    y = gety(i)
    z = getz(i)
    data[name] = {
        "x": x,
        "y": y,
        "z": z
    }
And then save it:
with open('data.json', 'w') as f:
    json.dump(data, f, indent=4)
I have JSON data with a structure like this:
{
    "a": "1",
    "b": [{
        "a": "4",
        "b": [{}],
        "c": "6"
    }],
    "c": "3"
}
Here the key a is always unique even if nested.
I want to separate my JSON data so that it looks like this:
{
    "a": "1",
    "b": [],
    "c": "3"
},
{
    "a": "4",
    "b": [],
    "c": "6"
}
The JSON data can be nested many levels deep. How can I do that?
I'd use an input and output stack:
x = {
    "a": 1,
    "b": [
        {
            "a": 2,
            "b": [{"a": 3}, {"a": 4}]
        }
    ]
}
input_stack = [x]
output_stack = []

while input_stack:
    # take the first element from the input stack
    front = input_stack.pop(0)
    b = front.get('b')
    # put all nested elements onto the input stack:
    if b:
        input_stack.extend(b)
    # then put the element onto the output stack:
    output_stack.append(front)
output_stack is now:
[{'a': 1, 'b': [{'a': 2, 'b': [{'a': 3}, {'a': 4}]}]},
{'a': 2, 'b': [{'a': 3}, {'a': 4}]},
{'a': 3},
{'a': 4}]
output_stack can of course be a dict instead. Then replace
output_stack.append(front)
with
output_dict[front['a']] = front
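Collecting into a dict keyed by each element's unique 'a' value might look like this (the same stack walk as above):

```python
x = {
    "a": 1,
    "b": [{"a": 2, "b": [{"a": 3}, {"a": 4}]}]
}

input_stack = [x]
output_dict = {}
while input_stack:
    # take the first element, queue its nested children, then store it
    front = input_stack.pop(0)
    b = front.get('b')
    if b:
        input_stack.extend(b)
    output_dict[front['a']] = front

print(sorted(output_dict))  # [1, 2, 3, 4]
```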
Not sure about a Python implementation, but in JavaScript this could be done using recursion:
function flatten(objIn) {
    var out = [];
    function unwrap(obj) {
        var arrayItem = {};
        for (var idx in obj) {
            if (!obj.hasOwnProperty(idx)) { continue; }
            if (typeof obj[idx] === 'object') {
                // numeric keys are array indices; only named keys get an empty array
                if (isNaN(parseInt(idx)) === true) {
                    arrayItem[idx] = [];
                }
                unwrap(obj[idx]);
                continue;
            }
            arrayItem[idx] = obj[idx];
        }
        if (JSON.stringify(arrayItem) !== '{}') {
            out.unshift(arrayItem);
        }
    }
    unwrap(objIn);
    return out;
}
This will only work as expected if the object key names are not numbers.
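A Python version of the same idea could also use recursion. This sketch is specialized to the question's structure, where nested objects live under the 'b' key, and it emits elements top-down rather than in the JS version's reversed order:

```python
def flatten(obj):
    """Collect every nested dict, replacing each 'b' list with an empty one."""
    out = []

    def unwrap(d):
        # copy the element, emptying its nested "b" list
        out.append({k: ([] if k == 'b' else v) for k, v in d.items()})
        # then recurse into the children that were under "b"
        for child in d.get('b') or []:
            unwrap(child)

    unwrap(obj)
    return out

x = {"a": 1, "b": [{"a": 2, "b": [{"a": 3}, {"a": 4}]}]}
print(flatten(x))
# [{'a': 1, 'b': []}, {'a': 2, 'b': []}, {'a': 3}, {'a': 4}]
```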
I am looking to efficiently merge two (fairly arbitrary) data structures: one representing a set of default values and one representing overrides. Example data below. (Naively iterating over the structures works, but is very slow.) Thoughts on the best approach for handling this case?
_DEFAULT = { 'A': 1122, 'B': 1133, 'C': [ 9988, { 'E': [ { 'F': 6666, }, ], }, ], }
_OVERRIDE1 = { 'B': 1234, 'C': [ 9876, { 'D': 2345, 'E': [ { 'F': 6789, 'G': 9876, }, 1357, ], }, ], }
_ANSWER1 = { 'A': 1122, 'B': 1234, 'C': [ 9876, { 'D': 2345, 'E': [ { 'F': 6789, 'G': 9876, }, 1357, ], }, ], }
_OVERRIDE2 = { 'C': [ 6543, { 'E': [ { 'G': 9876, }, ], }, ], }
_ANSWER2 = { 'A': 1122, 'B': 1133, 'C': [ 6543, { 'E': [ { 'F': 6666, 'G': 9876, }, ], }, ], }
_OVERRIDE3 = { 'B': 3456, 'C': [ 1357, { 'D': 4567, 'E': [ { 'F': 6677, 'G': 9876, }, 2468, ], }, ], }
_ANSWER3 = { 'A': 1122, 'B': 3456, 'C': [ 1357, { 'D': 4567, 'E': [ { 'F': 6677, 'G': 9876, }, 2468, ], }, ], }
This is an example of how to run the tests (the dictionary update doesn't work; mergeStuff is just a stub):
def mergeStuff(default, override):
    # This doesn't work: update() replaces nested values wholesale
    result = dict(default)
    result.update(override)
    return result

def main():
    # _OVERRIDES and _ANSWERS are lists of the structures defined above
    for override, answer in zip(_OVERRIDES, _ANSWERS):
        result = mergeStuff(_DEFAULT, override)
        print('ANSWER: %s' % (answer))
        print('RESULT: %s\n' % (result))
You cannot do that by "iterating"; you'll need a recursive routine like this:
import itertools

def merge(a, b):
    if isinstance(a, dict) and isinstance(b, dict):
        d = dict(a)
        d.update({k: merge(a.get(k, None), b[k]) for k in b})
        return d
    if isinstance(a, list) and isinstance(b, list):
        # zip_longest pads the shorter list with None (izip_longest on Python 2)
        return [merge(x, y) for x, y in itertools.zip_longest(a, b)]
    return a if b is None else b
If you want your code to be fast, don't copy like crazy
You don't really need to merge two dicts. You can just chain them.
A ChainMap class is provided for quickly linking a number of mappings so they can be treated as a single unit. It is often much faster than creating a new dictionary and running multiple update() calls.
class ChainMap(UserDict.DictMixin):
    """Combine multiple mappings for sequential lookup"""

    def __init__(self, *maps):
        self._maps = maps

    def __getitem__(self, key):
        for mapping in self._maps:
            try:
                return mapping[key]
            except KeyError:
                pass
        raise KeyError(key)
def main():
    for override, answer in zip(_OVERRIDES, _ANSWERS):
        result = ChainMap(override, _DEFAULT)
http://docs.python.org/dev/library/collections#chainmap-objects
http://code.activestate.com/recipes/305268/
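On Python 3.3+ the recipe above ships as collections.ChainMap, so you can use it directly. Note that lookup is shallow: a nested value comes wholesale from the first mapping that has the key; nothing is merged recursively. A minimal sketch with simplified data:

```python
from collections import ChainMap

_DEFAULT = {'A': 1122, 'B': 1133}
_OVERRIDE1 = {'B': 1234}

# the first mapping wins on lookup, so overrides go first
result = ChainMap(_OVERRIDE1, _DEFAULT)
print(result['A'], result['B'])  # 1122 1234

# materialize a plain dict if one is needed downstream
print(dict(result))
```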
If you know one structure is always a subset of the other, then just iterate over the superset: in O(n) time you can check, element by element, whether each item exists in the subset, and if it doesn't, put it there. As far as I know there's no magical way of doing this other than checking manually, element by element, which is not bad since it can be done with O(n) complexity.
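A sketch of that fill-in, using dict.setdefault so only missing keys are copied (flat data, for illustration):

```python
superset = {'A': 1122, 'B': 1133, 'C': 9988}  # the defaults
subset = {'B': 1234}                          # the overrides

# setdefault only inserts a key if it is missing,
# so existing override values are left untouched
for key, value in superset.items():
    subset.setdefault(key, value)

print(subset)  # {'B': 1234, 'A': 1122, 'C': 9988}
```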
dict.update() is what you need. But it overrides the original dict, so make a copy of the original one if you want to keep it.
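For example, copying first keeps the original defaults intact (note this is a shallow merge: nested lists and dicts are replaced, not merged):

```python
defaults = {'A': 1122, 'B': 1133}
override = {'B': 1234}

merged = dict(defaults)   # shallow copy, so `defaults` survives
merged.update(override)   # override values win

print(merged)    # {'A': 1122, 'B': 1234}
print(defaults)  # {'A': 1122, 'B': 1133}  -- unchanged
```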