I am trying to make a script that will delete everything within abc2, but right now it just deletes all the JSON code.
The JSON is located in a file named "demo".
There are multiple JSON objects in the file, one per line.
Python:
import json

with open('demo.json', 'w') as dest_file:
    with open('demo.json', 'r') as source_file:
        for parameters in source_file:
            element = json.loads(parameters.strip())
            if 'abc1' in element:
                del element['abc1']
            dest_file.write(json.dumps(element))
Snippet of the JSON:
{
    "parameters": [{
        "abc1": {
            "type": "string",
            "defaultValue": "HELLO1"
        },
        "abc2": {
            "type": "string",
            "defaultValue": "HELLO2"
        }
    }]
}
When opening a file with 'w' it clears (truncates) it, so do it in two steps:
1. Read the content, keep what you need, delete what you need.
2. Write the new content.
import json

to_keep = []
with open('demo.json') as file:
    content = json.load(file)
    for parameter in content['parameters']:
        print(parameter)
        if 'abc1' in parameter:
            del parameter['abc1']
        to_keep.append(parameter)

with open('demo.json', 'w') as file:
    json.dump({'parameters': to_keep}, file, indent=4)
Opening the file for writing is truncating the file before you can read it.
You should read the entire file into memory, then you can overwrite the file.
You also need to loop through the parameters list and delete the abc2 properties in its elements. When you write the JSON back to the file, you need to separate the objects with newlines (but it's generally a bad idea to put multiple JSON strings in a single file -- it would be better to collect them all in a list and load and dump it all at once; a sketch of that alternative follows the snippet below).
import json

with open('demo.json', 'r+') as source_file:
    lines = source_file.readlines()
    source_file.seek(0)  # rewind so the rewritten content overwrites the file
    for parameters in lines:
        element = json.loads(parameters.strip())
        for param in element['parameters']:
            if 'abc2' in param:
                del param['abc2']
        source_file.write(json.dumps(element) + '\n')
    source_file.truncate()
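For reference, here is a minimal sketch of the suggested alternative -- collecting every object into a list and dumping it once as a single JSON array (file name as in the question; note this changes the file layout from one object per line to one array):

import json

objects = []
with open('demo.json') as source_file:
    for line in source_file:
        line = line.strip()
        if not line:
            continue  # skip blank lines
        element = json.loads(line)
        for param in element['parameters']:
            # Drop the unwanted key from each parameter object, if present.
            param.pop('abc2', None)
        objects.append(element)

# Write everything back as one JSON array instead of one object per line.
with open('demo.json', 'w') as dest_file:
    json.dump(objects, dest_file, indent=4)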
I'm trying to read a text file that contains dictionaries separated by commas and convert it to a list of dictionaries.
How can I do this with Python?
I tried reading it as a JSON file and using the split method.
{
    "id": "b1",
    "name": "Some Name"
},
{
    "id": "b2",
    "name": "Another Name"
},
....
The result should be:
[ {"id" : "b1", "name" : "Some Name"} , {"id" : "b2", "name" : "Another Name"}, .... ]
If your file is not too big, you can do the following:
import json
with open('filename.txt', 'r') as file:
    result = json.loads('[' + file.read() + ']')
You can use the json module in Python.
JSON (JavaScript Object Notation), specified by RFC 7159 (which obsoletes RFC 4627) and by ECMA-404, is a lightweight data interchange format inspired by JavaScript object literal syntax (although it is not a strict subset of JavaScript).
json exposes an API familiar to users of the standard library marshal and pickle modules.
https://docs.python.org/2/library/json.html
In your case, if your file is a valid JSON file, you can use the json.loads() method directly:
import json
with open('test.txt', 'r') as file:
    result = json.loads(file.read())
Welcome to Stack Overflow..!
If you are using a JSON file to store data, then I highly recommend the Python json library. How to use it, you ask..? Read this
If you plan on using a text file to store data, I recommend that you store the data a bit differently from the JSON format:
b1,Some Name
b2,Another Name
.
.
This would make reading the text file way easier. Just use the split() command to separate the lines from one another and then use split(",") on each of the lines to separate the ID from the Name.
Here is the code:
lines = open("filename.txt").read().split("\n")
dictionaryList = []
for item in lines:
    if item:  # skip empty lines
        id, name = item.split(",")
        dictionaryList.append({id: name})
Hope this works..! SDA
I have a JSON file that I am using as a dictionary in Python.
The JSON file is really long, with 10k+ records. I need to replace the $home part in "iscategorical" with the value of "id". After making the changes, I want to save the file so that I can use it again as a dictionary. Thank you for the help. Here is a sample:
{
    "maps": [
        {
            "id": "xyzp",
            "iscategorical": "/u/$home/app/home"
        },
        {
            "id": "trtn",
            "iscategorical": "/u/app/$home/user"
        }
    ]
}
I understand that you are able to load the file successfully, and all you want to do is replace the strings and save the structure to the file again.
For this, we can traverse the list of dictionaries in the data, and modify the value of item['iscategorical'] by replacing $home with the value of item['id'].
We can then dump the modified structure back to (a new) json file.
import json
with open('data.json') as f:
    data = json.load(f)

for item in data['maps']:
    item['iscategorical'] = item['iscategorical'].replace('$home', item['id'])

with open('new_data.json', 'w') as f:
    json.dump(data, f)
Your question seems similar to Parsing values from a JSON file?.
However, for your case, the snippet below should work.
import json
with open('idata.json') as infile:
    data = json.load(infile)

for elem in data["maps"]:
    elem['iscategorical'] = elem['iscategorical'].replace('$home', elem['id'])

with open('odata.json', 'w') as outfile:
    json.dump(data, outfile)
If it's a file, one thing you can do is load the file and read it line by line.
For every line, you can use regex to find and replace. Then you can either overwrite the file or write to a new file.
For example,
line.replace('$home', 'id')
Alternatively, you can load the JSON in Python and convert it into a string. Then replace the text using regex. Finally, convert it back to a Python dictionary using json.loads().
However, 10k+ lines is quite long, so I think reading the file line by line would be a better solution.
EDIT:
Here is the code sample.
from tempfile import mkstemp
from shutil import move
from os import fdopen, remove

def replace(file_path, pattern, subst):
    # Create a temp file
    fh, abs_path = mkstemp()
    with fdopen(fh, 'w') as new_file:
        with open(file_path) as old_file:
            for line in old_file:
                new_file.write(line.replace(pattern, subst))
    # Remove the original file
    remove(file_path)
    # Move the new file into place
    move(abs_path, file_path)

replace('./text.txt', '$home', 'id')
"The JSON file is really long with 10k+ records" -Try this way it should help for large files.
input.json
{"maps":[{"id":"xyzp","iscategorical":"/u/$home/app/home"},{"id":"trtn","iscategorical":"/u/app/$home/user"}]}
import json
with open('input.json') as f:
    data = json.load(f)

my_list = []

def get_some_data():
    for item in data['maps']:
        yield (item['id'], item['iscategorical'])

for id, iscat in get_some_data():
    temp_dict = {}
    temp_dict['id'] = id
    temp_dict['iscategorical'] = iscat.replace('$home', id)
    my_list.append(temp_dict)

maps_dict = {}
maps_dict['maps'] = my_list

with open('output.json', 'w') as f:
    json.dump(maps_dict, f)
output.json:
{"maps": [{"id": "xyzp", "iscategorical": "/u/**xyzp**/app/home"}, {"id": "trtn", "iscategorical": "/u/app/**trtn**/user"}]}
I currently have a JSON file called "newsuser.json". I am trying to get a user's name to be saved to the JSON file each time a new user is added. I can currently get a user to be saved, but when another user is added the previous user is overwritten, resulting in only one user being stored.
I am trying to get my JSON file to look like this:
{
    "users": [
        {
            "name": "test1"
        },
        {
            "name": "test2"
        }
    ]
}
Instead my output looks like:
{"name": "test1"}
Here is my code for creating a JSON object and saving it to "newsuser.json":
import json
userName = form.getvalue('userName')
newDict = {"name": "test1"}

with open("cgi-bin/newsuser.json") as f:
    data = json.load(f)
    data.update(newDict)

with open("cgi-bin/newsuser.json", "w") as f:
    json.dump(data, f)
Does anyone have any ideas on how I can get the JSON file to write my objects under "users" and not overwrite each entry?
Just replace 'w' with 'a', like below:
with open("cgi-bin/newsuser.json", "a") as f:
    json.dump(data, f)
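If the goal is the nested "users" structure shown in the question, a minimal read-modify-write sketch could also work (this is an assumption-laden sketch: it reuses userName from the question's code and expects the file to already contain a JSON object, adding the "users" list if it is missing):

import json

new_user = {"name": userName}  # userName as read from the form in the question

# Read the existing structure, append the new user, then write everything back.
with open("cgi-bin/newsuser.json") as f:
    data = json.load(f)

data.setdefault("users", []).append(new_user)

with open("cgi-bin/newsuser.json", "w") as f:
    json.dump(data, f, indent=4)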
I'm experiencing a problem when I dump JSON data into a CSV file. There is typically a block of JSON data that is missing from my CSV file, but it can be seen if I print the JSON in the console or to a file.
Essentially I am calling a service twice and receiving back two json responses that I parse and dump into a CSV file. The service can only be called for 7 day increments (unix time), so I have implemented logic to call the service for this increment over a period of time.
I'm using the python vanilla json and csv libraries.
First the CSV is created with headers:
import csv
import datetime

with open('history_' + datetime.datetime.now().strftime("%Y-%m-%d_%H-%M-%S") + '.csv', 'wb') as outcsv:
    writer = csv.writer(outcsv)
    writer.writerow(["Column1", "Column2", "Column3", "Column4", "Column5",
                     "Column6"])
Then, I have a counter that calls the service twice, fifty times (following the open of the CSV file):
while y < 50:
    jsonResponseOne = getJsonOne(7)
    jsonResponseTwo = getJsonTwo(7)
Example json response:
{"Value":
[
{"ExampleName": "Test",
"ExampleNameTwo": "Test2",
"ExampleDate": "1436103790",
"ExampleCode": 00000001,
"ExampleofExample": "abcd",
"AnotherExample": "hello"},
{"ExampleName": "Test2",
"ExampleNameTwo": "Test3",
"ExampleDate": "1436103790",
"ExampleCode": 00000011,
"ExampleofExample": "abcd",
"AnotherExample": "hello2"},
]
}
The CSV output columns would look like:
ExampleName ExampleNameTwo ExampleDate ExampleCode ExampleofExample AnotherExample
Finally, the CSV is written as follows:
for item in jsonResponseOne['Value']:
    row = []
    row.append(str(item['ExampleName'].encode('utf-8')))
    if item.get("ExampleNameTwo"):
        row.append(str(item["ExampleNameTwo"]))
    else:
        row.append("None")
    row.append(str(item['ExampleDate']))
    row.append(str(item['ExampleCode'].encode('utf-8')))
    row.append(str(item['ExampleofExample'].encode('utf-8')))
    row.append(str(item['AnotherExample'].encode('utf-8')))
    writer.writerow(row)

for item in jsonResponseTwo['Value']:
    anotherRow = []
    anotherRow.append(str(item['ExampleName'].encode('utf-8')))
    if item.get("ExampleNameTwo"):
        anotherRow.append(str(item["ExampleNameTwo"]))
    else:
        anotherRow.append("None")
    anotherRow.append(str(item['ExampleDate']))
    anotherRow.append(str(item['ExampleCode'].encode('utf-8')))
    anotherRow.append(str(item['ExampleofExample'].encode('utf-8')))
    anotherRow.append(str(item['AnotherExample'].encode('utf-8')))
    writer.writerow(anotherRow)
Why could my CSV output be missing an entire row of data (a block of data from the JSON response)?
Resolved.
The Python script had an indentation issue in one of the while loops, causing some data to be skipped over and not written to the CSV file.
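For illustration, here is a sketch of the corrected structure, using the names from the snippets above and a hypothetical build_row helper in place of the repeated per-column appends (getJsonOne and getJsonTwo are the service calls from the question):

import csv

def build_row(item):
    # Hypothetical helper that collapses the per-column appends shown above.
    return [
        item.get('ExampleName', 'None'),
        item.get('ExampleNameTwo', 'None'),
        item.get('ExampleDate', 'None'),
        item.get('ExampleCode', 'None'),
        item.get('ExampleofExample', 'None'),
        item.get('AnotherExample', 'None'),
    ]

with open('history.csv', 'wb') as outcsv:
    writer = csv.writer(outcsv)
    writer.writerow(["Column1", "Column2", "Column3", "Column4", "Column5", "Column6"])

    y = 0
    while y < 50:
        jsonResponseOne = getJsonOne(7)
        jsonResponseTwo = getJsonTwo(7)

        # Both write loops must sit inside the while loop; otherwise only
        # the final pair of responses ever reaches the CSV file.
        for item in jsonResponseOne['Value']:
            writer.writerow(build_row(item))
        for item in jsonResponseTwo['Value']:
            writer.writerow(build_row(item))

        y += 1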
I am using Python and I have a JSON file in which I would like to update a value related to a given key. That is, I have my_file.json containing the following data:
{"a": "1", "b": "2", "c": "3"}
and I would like to just change the value related to the b key from 2 to 9, so that the updated file looks like:
{"a": "1", "b": "9", "c": "3"}
How can I do that?
I tried the following but without success (the changes are not saved to the file):
with open('my_file.json', 'r+') as f:
    json_data = json.load(f)
    json_data['b'] = "9"
    f.close()
You did not save the changed data at all. You have to first load, then modify, and only then save. It is not possible to modify JSON files in-place.
with open('my_file.json', 'r') as f:
    json_data = json.load(f)
    json_data['b'] = "9"

with open('my_file.json', 'w') as f:
    f.write(json.dumps(json_data))
You may also do this:
with open('my_file.json', 'r+') as f:
    json_data = json.load(f)
    json_data['b'] = "9"
    f.seek(0)
    f.write(json.dumps(json_data))
    f.truncate()
If you want to make it safe, you first write the new data into a temporary file in the same folder, and then rename the temporary file onto the original file. That way you will not lose any data even if something happens in between.
If you come to think of it, JSON data is very difficult to change in place, as the data length is not fixed and the changes may be quite significant.
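For example, here is a minimal sketch of that safe-write approach (file name taken from the question; it writes a temporary file in the same folder and then atomically replaces the original, assuming Python 3 for os.replace):

import json
import os
import tempfile

target = 'my_file.json'

with open(target) as f:
    json_data = json.load(f)
json_data['b'] = "9"

# Write the new data to a temporary file in the same folder, then rename it
# onto the original file so a crash cannot leave a half-written file behind.
dir_name = os.path.dirname(os.path.abspath(target))
with tempfile.NamedTemporaryFile('w', dir=dir_name, delete=False) as tmp:
    json.dump(json_data, tmp)
    tmp_name = tmp.name
os.replace(tmp_name, target)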
You are almost there, you only have to write the updated json_data back to the file. Get rid of f.close(), as the with statement will ensure that the file is closed. Then, issue
with open('my_file.json', 'w') as f:
    f.write(json.dumps(json_data))
This is the simplest way to read and write a JSON file,
where 'f' is the file object and 'data' is what gets written to the JSON file:
# Write the JSON file
with open('data.json', 'w') as f:
    json.dump(data, f)
# Read the JSON file
with open('data.json', 'r') as f:
    data = json.load(f)
Open the file and store all its content in one variable using the json.load function.
Update the key stored in that variable.
Open the file again and overwrite its content with the updated variable.
def updateJsonFile():
    jsonFile = open("my_file.json", "r")  # Open the JSON file for reading
    data = json.load(jsonFile)            # Read the JSON into the buffer
    jsonFile.close()                      # Close the JSON file

    ## Working with buffered content
    data["b"] = "9"

    ## Save our changes to the JSON file
    jsonFile = open("my_file.json", "w+")
    jsonFile.write(json.dumps(data))
    jsonFile.close()