How to save a Python dictionary to a JSON file? - python

I have a dictionary, for example:
a = {'a':1,'b':2,'c':3}
And I want it to be saved in a JSON file.
How can I do this with Python's own json library?
Please note that I am running Python 3.5.2, which has a built-in json module.

You can dump to a JSON file directly using json.dump instead of json.dumps.
import json
a = {'a':1,'b':2,'c':3}
with open("your_json_file", "w") as fp:
    json.dump(a, fp)
json.dumps is mainly used to produce a dictionary's JSON representation as a string, while json.dump writes it straight to a file. Calling dumps and then writing the string yourself also works, but dump is the more direct way to save to a file.
The previous example saves the dictionary as JSON, but the output is not very pretty. So instead you could do this:
json.dump(a, fp, indent=4)  # you can also pass sort_keys=True
# this works the same for json.dumps
This makes the JSON file much friendlier to read. The pydoc for the json module has a good description of how to use it.
To get your data back, use the load function on a file opened for reading:
a = json.load(fp)  # load the original dictionary back (fp must be opened in read mode)
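Putting the two halves together, here is a minimal round-trip sketch (the file name my_dict.json is just an illustrative choice):
import json

a = {'a': 1, 'b': 2, 'c': 3}

# write the dictionary out as JSON
with open("my_dict.json", "w") as fp:
    json.dump(a, fp, indent=4)

# read it back into a new dictionary
with open("my_dict.json") as fp:
    b = json.load(fp)

print(a == b)  # True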

This may help you...
import json
a = {'name': 'John Doe', 'age': 24}
js = json.dumps(a)
# open the json file ('a' appends and creates the file if it does not exist;
# note that running this twice leaves two JSON objects in one file)
fp = open('test.json', 'a')
# write to the json file
fp.write(js)
# close the file
fp.close()

Here is the code:
#!/usr/bin/env python
# coding:utf-8
'''黄哥Python'''
import json
a = {'a': 1, 'b': 2, 'c': 3}
with open("json.txt", "w") as f:
    f.write(json.dumps(a))

Check this:
import json
a = {'a':1,'b':2,'c':3}
json_str = json.dumps(a)  # the JSON as a string, if you need it
with open('data.json', 'w') as f:
    json.dump(a, f)

Most of the other answers are correct, but this will also print the data in a prettified and sorted manner.
import json
a = {'name': 'John Doe', 'age': 24}
js = json.dumps(a, sort_keys=True, indent=4, separators=(',', ': '))
with open('test.json', 'w+') as f:
    f.write(js)
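With sort_keys=True and indent=4, the resulting test.json is formatted and alphabetically ordered:
{
    "age": 24,
    "name": "John Doe"
}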

Related

Python code to create JSON with Marathi language giving unreadable JSON

I am trying to create a JSON file using Python code. The file is created successfully with English text but does not work properly with Marathi text.
Please check out the code:
import os
import json
jsonFilePath = "E:/file/"
captchaImgLocation = "E:/file/captchaimg/"
path_to_tesseract = r"C:/Program Files/Tesseract-OCR/tesseract.exe"
image_path = r"E:/file/captchaimg/captcha.png"
x = {
"FName": "प्रवीण",
}
# convert into JSON:
y = json.dumps(x, ensure_ascii=False).encode('utf8')
# the result is a JSON string:
print(y.decode())
completeName = os.path.join(jsonFilePath, "searchResult_Unicode.json")
print(str(completeName))
file1 = open(completeName, "w")
file1.write(str(y))
file1.close()
Output on the console:
{"FName": "प्रवीण"}
The file created inside the folder looks like this:
b'{"FName": "\xe0\xa4\xaa\xe0\xa5\x8d\xe0\xa4\xb0\xe0\xa4\xb5\xe0\xa5\x80\xe0\xa4\xa3"}'
There is no runtime or compile-time error, but the JSON file is created in the format above.
Please suggest a solution.
Open the file in the encoding you need and then json.dump to it:
import os
import json
data = { "FName": "प्रवीण" }
# Writing human-readable. Note some text viewers on Windows required UTF-8 w/ BOM
# to *display* correctly. It's not a problem with writing, but you can use
# encoding='utf-8-sig' to hint to those programs that the file is UTF-8 if
# you see that issue. MUST use encoding='utf8' to read it back correctly.
with open('out.json', 'w', encoding='utf8') as f:
    json.dump(data, f, ensure_ascii=False)
# Writing non-human-readable for non-ASCII, but others will have few
# problems reading it back into Python because all common encodings are ASCII-compatible.
# Using the default encoding this will work. I'm being explicit about encoding
# because it is good practice.
with open('out2.json', 'w', encoding='ascii') as f:
    json.dump(data, f, ensure_ascii=True)  # True is the default anyway
# reading either one is the same
with open('out.json', encoding='utf8') as f:
    data2 = json.load(f)
with open('out2.json', encoding='utf8') as f:  # UTF-8 is ASCII-compatible
    data3 = json.load(f)
# Round-tripping test
print(data == data2, data2)
print(data == data3, data3)
Output:
True {'FName': 'प्रवीण'}
True {'FName': 'प्रवीण'}
out.json (UTF-8-encoded):
{"FName": "प्रवीण"}
out2.json (ASCII-encoded):
{"FName": "\u092a\u094d\u0930\u0935\u0940\u0923"}
You have encoded the JSON string, so you must either open the file in binary mode or decode the JSON before writing to the file, so:
file1 = open(completeName, "wb")
file1.write(y)
or
file1 = open(completeName, "w")
file1.write(y.decode('utf-8'))
Doing
file1 = open(completeName, "w")
file1.write(str(y))
writes the string representation of the bytes object to the file, which is always the wrong thing to do.
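Alternatively, here is a short, self-contained sketch that avoids creating the intermediate bytes object altogether by letting the file handle the encoding (variable names reused from the question):
import json
import os

jsonFilePath = "E:/file/"
completeName = os.path.join(jsonFilePath, "searchResult_Unicode.json")
x = {"FName": "प्रवीण"}

# write text directly; the file object does the UTF-8 encoding
with open(completeName, "w", encoding="utf-8") as file1:
    json.dump(x, file1, ensure_ascii=False)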
Do you want your JSON to be human readable? Writing raw non-ASCII text is usually bad practice, because a reader can never be sure which encoding was used. You can write/read your JSON files with the json module without worrying about encoding, since by default it escapes everything to ASCII:
import json
json_path = "test.json"
x = {"FName": "प्रवीण"}
with open(json_path, "w") as outfile:
json.dump(x, outfile, indent=4)
with open(json_path, "r") as infile:
print(json.load(infile))
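For reference, with the default ensure_ascii=True the file on disk contains only ASCII escape sequences, along these lines:
{
    "FName": "\u092a\u094d\u0930\u0935\u0940\u0923"
}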

Uploading CSV-type data using Python requests.put without reading from a saved CSV file?

I have an API endpoint where I am uploading data to using Python. The endpoint accepts:
putHeaders = {
    'Authorization': user,
    'Content-Type': 'application/octet-stream'
}
My current code does the following:
1. Save a dictionary as a CSV file
2. Encode the CSV to UTF-8
dataFile = open(fileData['name'], 'r').read().encode('utf-8')
3. Upload the file to the API endpoint
fileUpload = requests.put(url,
                          headers=putHeaders,
                          data=dataFile)
What I am trying to achieve is loading the data without saving it first.
So far I have tried converting my dictionary to bytes using
data = json.dumps(payload).encode('utf-8')
and uploading that to the API endpoint. It uploads, but the data at the endpoint is not correct.
Question
Does anyone know how to upload CSV-type data without actually saving a file?
EDIT: use io.StringIO() as your file-like object when you're writing your dict to CSV. Then call getvalue() and pass that as your data param to requests.put().
See this question for more details: How do I write data into CSV format as string (not file)?
Old answer:
If your dict is this:
my_dict = {'col1': 1, 'col2': 2}
then you could convert it to a csv format like so:
csv_data = ','.join(my_dict.keys())
csv_data += '\n' + ','.join(str(v) for v in my_dict.values())  # values must be strings to join
csv_data = csv_data.encode('utf8')
And then do your requests.put() call with data=csv_data.
Updated answer
I hadn't realized your input was a dictionary; you had mentioned the dictionary was being saved as a file, so I assumed the dictionary lookup in your code was referencing a file. More work needs to be done if you want to go from a dict to a CSV file-like object.
Based on the I/O from your question, it appears that your input dictionary has this structure:
file_data = {"name": {"Col1": 1, "Col2": 2}}
Given that, I'd suggest trying the following using csv and io:
import csv
import io
import requests
session = requests.Session()
session.headers.update(
    {"Authorization": user, "Content-Type": "application/octet-stream"}
)
file_data = {"name": {"Col1": 1, "Col2": 2}}
with io.StringIO() as f:
    name = file_data["name"]
    writer = csv.DictWriter(f, fieldnames=name)
    writer.writeheader()
    writer.writerows([name])  # `name` is a dict, but DictWriter expects a list of dicts
    response = session.put(url, data=f.getvalue())  # getvalue() returns the written CSV text
You may want to test using the correct MIME type passed in the request header. While the endpoint may not care, it's best practice to use the correct type for the data. CSV should be text/csv. Python also provides a MIME types module:
>>> import mimetypes
>>>
>>> mimetypes.types_map[".csv"]
'text/csv'
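If the endpoint accepts it, switching the session header to that MIME type is a one-liner; whether the server actually requires or honors text/csv is an assumption to verify against its documentation:
# assumption: the endpoint accepts text/csv for CSV payloads
session.headers.update({"Content-Type": "text/csv"})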
Original answer
Just open the file in bytes mode rather than worrying about encoding or reading it into memory.
Additionally, use a context manager to handle the file rather than assigning it to a variable, and set your headers on a Session object so you don't have to repeatedly pass header data in your request calls.
Documentation on the PUT method:
https://requests.readthedocs.io/en/master/api/#requests.put
data – (optional) Dictionary, list of tuples, bytes, or file-like object to send in the body of the Request.
import requests
session = requests.Session()
session.headers.update(
    {"Authorization": user, "Content-Type": "application/octet-stream"}
)
with open(file_data["name"], "rb") as f:
    response = session.put(url, data=f)
Note: I modified your code to follow Python style guides more closely.

How to clean a JSON file and store it in another file in Python

I am trying to read a JSON file with Python. This file is described by the authors as not strict JSON. In order to convert it to strict JSON, they suggest this approach:
import gzip
import json

def parse(path):
    g = gzip.open(path, 'r')
    for l in g:
        yield json.dumps(eval(l))
However, not being familiar with Python, I am able to execute the script but I am not able to produce any output file with the new clean JSON. How should I modify the script in order to produce a new JSON file? I have tried this:
import json

class Amazon():
    def parse(self, inpath, outpath):
        g = open(inpath, 'r')
        out = open(outpath, 'w')
        for l in g:
            yield json.dumps(eval(l), out)

amazon = Amazon()
amazon.parse("original.json", "cleaned.json")
but the output is an empty file. Any help is more than welcome.
import json

class Amazon():
    def parse(self, inpath, outpath):
        g = open(inpath, 'r')
        with open(outpath, 'w') as fout:
            for l in g:
                fout.write(json.dumps(eval(l)) + '\n')  # newline keeps one JSON object per line

amazon = Amazon()
amazon.parse("original.json", "cleaned.json")
Another, shorter way of doing this:
import json

class Amazon():
    def parse(self, readpath, writepath):
        with open(readpath) as g, open(writepath, 'w') as fout:
            for l in g:
                json.dump(eval(l), fout)
                fout.write('\n')  # keep one JSON object per line

amazon = Amazon()
amazon.parse("original.json", "cleaned.json")
When handling JSON data it is better to use the json module: json.dump(obj, output_file) for writing JSON to a file and json.load(file_object) for loading it back. That way the data stays valid JSON while saving and reading.
For very large amounts of data (say 1k+ records), the pandas module can help, as sketched below.
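A minimal sketch of the pandas route, assuming the cleaned data is stored one JSON object per line (the lines=True flag relies on that assumption, and the file names are just placeholders):
import pandas as pd

# read a JSON Lines file into a DataFrame
df = pd.read_json("cleaned.json", lines=True)

# ... process the DataFrame as needed ...

# write it back out, again one record per line
df.to_json("cleaned_out.json", orient="records", lines=True)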

Writing JSON in Python

I want to append to an employee's history every time they clock in. I have appended within Python but can't get it to write back to the JSON file.
import json

json_data = open("app.json")
data = json.load(json_data)
for d in data['employees']:
    d['history'].append({'day': '01.01.15', 'historyId': 44, 'time': 12.00})
    json.dump(d['history'])
json.dump() takes two arguments, the Python object to dump and the file to write it to.
Make your changes first, then after the loop, re-open the file for writing and write out the whole data object:
with open("app.json") as json_data:
data = json.load(json_data)
for d in data['employees']:
d['history'].append({'day': 01.01.15, 'historyId': 44, 'time': 12.00})
with open("app.json", 'w') as json_data:
json.dump(data, json_data)
This essentially replaces the file contents with the JSON-serialised new data structure.

How to append to a JSON file in Python?

I have a JSON file which contains {"67790": {"1": {"kwh": 319.4}}}. Now I create a dictionary a_dict which I need to append to the JSON file.
I tried this code:
with open(DATA_FILENAME, 'a') as f:
    json_obj = json.dump(a_dict, json.load(f)
    f.write(json_obj)
    f.close()
What is wrong with the code? How can I fix the problem?
Assuming you have a test.json file with the following content:
{"67790": {"1": {"kwh": 319.4}}}
Then, the code below will load the json file, update the data inside using dict.update() and dump into the test.json file:
import json
a_dict = {'new_key': 'new_value'}
with open('test.json') as f:
    data = json.load(f)

data.update(a_dict)

with open('test.json', 'w') as f:
    json.dump(data, f)
Then, in test.json, you'll have:
{"new_key": "new_value", "67790": {"1": {"kwh": 319.4}}}
Hope this is what you wanted.
You need to update the output of json.load with a_dict and then dump the result. And you cannot append to the file; you need to overwrite it:
json_obj = json.dumps(a_dict, ensure_ascii=False)
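Put together, a minimal sketch of that load, update and overwrite cycle (reusing the test.json file and a_dict from the example above):
import json

a_dict = {'new_key': 'new_value'}

with open('test.json') as f:
    data = json.load(f)

data.update(a_dict)  # merge the new keys into the loaded dict

with open('test.json', 'w') as f:  # overwrite, do not append
    json.dump(data, f, ensure_ascii=False)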
