I have 1000 json files, and I need to change the value of a specific line to a numeric sequence across all files.
An example: the specific line is
"name": "carl 00",
and I need it to become the following:
File 1
"name": "carl 1",
File 2
"name": "carl 2",
File 3
"name": "carl 3",
What is the right script to achieve this using Python?
This should do the trick, but you're not very clear about how the data is stored in the actual json file, so I listed two different approaches. The first parses the json file into a Python dict, manipulates the data, turns it back into a string, and saves it. The second is what I think you mean by "line": split the file's text into a list of lines, change the line you want, join the lines back into a single string, and save it.
This also assumes your json files are in the same folder as the Python script.
import os
import json

my_files = ['file_name1.json', 'file_name2.json', ...]  # list your json file names here
folder_path = os.path.dirname(__file__)

for i, name in enumerate(my_files, start=1):  # start=1 so the sequence begins at "carl 1"
    path = f'{folder_path}/{name}'
    with open(path, 'r') as f:
        json_text = f.read()

    # Approach 1: if you know the key(s) in the json file...
    json_dict = json.loads(json_text)
    json_dict['name'] = json_dict['name'].replace('00', str(i))
    new_json_str = json.dumps(json_dict)

    # Approach 2 (alternative): if you only know the line number in the file...
    # line_list = json_text.split('\n')
    # line_list[line_number - 1] = line_list[line_number - 1].replace('00', str(i))
    # new_json_str = '\n'.join(line_list)

    with open(path, 'w') as f:
        f.write(new_json_str)
Based on your edit, this is what you want:
import os
import json

my_files = [f'{i}.json' for i in range(1, 1001)]
folder_path = os.path.dirname(__file__)  # put this .py file in the same folder as the json files

for i, name in enumerate(my_files, start=1):  # start=1 so 1.json gets "carl 1"
    path = f'{folder_path}/{name}'
    with open(path, 'r') as f:
        json_text = f.read()
    json_dict = json.loads(json_text)
    json_dict['name'] = f'carl {i}'
    # include these lines if you want "symbol" and "subtitle" changed as well
    json_dict['symbol'] = f'carl {i}'
    json_dict['subtitle'] = f'carl {i}'
    new_json_str = json.dumps(json_dict)
    with open(path, 'w') as f:
        f.write(new_json_str)
Without knowing more, the loop below will produce the sequence the post asks for.
name = 'carl'
for i in range(1, 1001):
    print(f'name: {name} {i}')
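If the files really do all contain the literal text "carl 00", a minimal sketch that renumbers them on disk could look like this; the use of glob and alphabetical ordering are assumptions, not something stated in the question:
import glob

# A sketch: renumber the "carl 00" placeholder across every .json file in the
# current folder. Assumes each file contains the literal text "carl 00" and
# that alphabetical order matches the intended numbering.
for i, path in enumerate(sorted(glob.glob('*.json')), start=1):
    with open(path) as f:
        text = f.read()
    with open(path, 'w') as f:
        f.write(text.replace('carl 00', f'carl {i}'))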
I need to change some keywords in multiple .txt files, using a dictionary structure for this, and then save the changed files to a new location. I wrote the code attached below, but when I run it, it keeps running forever, and when I break it there is only one empty file created.
import os
import os.path
from pathlib import Path

dir_path = Path("C:\\Users\\myuser\\Documents\\scripts_new")

# loading pairs of words from a txt file into a dictionary
myfile = open("C:\\Users\\myuser\\Desktop\\Python\\dictionary.txt")
data_dict = {}
for line in myfile:
    k, v = line.strip().split(':')
    data_dict[k.strip()] = v.strip()
myfile.close()

# Get the list of all files and directories
path_dir = "C:\\Users\\myuser\\Documents\\scripts"

# iterate over files in that directory
for filename in os.listdir(path_dir):
    f = os.path.join(path_dir, filename)
    name = os.path.join(filename)
    text_file = open(f)
    # read the whole file into a string
    sample_string = text_file.read()
    # Iterate over all key-value pairs in the dictionary
    for key, value in data_dict.items():
        # Replace key character with value character in string
        sample_string = sample_string.replace(key, value)
    with open(os.path.join(dir_path, name), "w") as file1:
        toFile = input(sample_string)
        file1.write(toFile)
I have found a solution with a slightly different approach. Maybe this code will be useful for someone:
import os

# loading pairs of words from a txt file into a dictionary
myfile = open("C:\\Users\\user\\Desktop\\Python\\dictionary.txt")
data_dict = {}
for line in myfile:
    k, v = line.strip().split(':')
    data_dict[k.strip()] = v.strip()
myfile.close()

sourcepath = os.listdir("C:\\Users\\user\\Documents\\scripts\\")
for file in sourcepath:
    input_file = "C:\\Users\\user\\Documents\\scripts\\" + file
    print('Conversion is ongoing for: ' + input_file)
    with open(input_file, 'r') as input_file:
        filedata = input_file.read()
    destination_path = "C:\\Users\\user\\Documents\\scripts_new\\" + file
    # Iterate over all key-value pairs in the dictionary
    for key, value in data_dict.items():
        filedata = filedata.replace(key, value)
    with open(destination_path, 'w') as file:
        file.write(filedata)
Hmmm... I think your problem might actually be your use of the line
toFile = input(sample_string)
as that will halt the program, waiting for user input.
Anyway, it could probably do with a little organisation into functions. Even this below is a bit... meh.
import os
import os.path
from pathlib import Path

dir_path = Path("C:\\Users\\myuser\\Documents\\scripts_new")

# -----------------------------------------------------------
def load_file(fileIn):
    # loading pairs of words from a txt file into a dictionary
    with open(fileIn) as myfile:
        data_dict = {}
        for line in myfile:
            k, v = line.strip().split(':')
            data_dict[k.strip()] = v.strip()
    return data_dict

# -----------------------------------------------------------
def work_all_files(starting_dir, moved_dir, data_dict):
    # Iterate over files within the dir - note non recursive
    for filename in os.listdir(starting_dir):
        f = os.path.join(starting_dir, filename)
        with open(f, 'r') as f1:
            # read the whole file into a string
            sample_string = f1.read()
        new_string = replace_strings(sample_string, data_dict)
        with open(os.path.join(moved_dir, filename), "w") as file1:
            file1.write(new_string)

# -----------------------------------------------------------
def replace_strings(sample_string, data_dict):
    # Iterate over all key-value pairs in dictionary
    # and if they exist in sample_string, replace them
    for key, value in data_dict.items():
        # Replace key character with value character in string
        sample_string = sample_string.replace(key, value)
    return sample_string

# -----------------------------------------------------------
if __name__ == "__main__":
    # Get the dict-val pairings first
    data_dict = load_file("C:\\Users\\myuser\\Desktop\\Python\\dictionary.txt")
    # Then run over all the files within dir
    work_all_files("C:\\Users\\myuser\\Documents\\scripts", "C:\\Users\\myuser\\Documents\\new_scripts", data_dict)
We could have housed all this in a class and then transported a few variables around using the instance (i.e. "self") - would have been cleaner. But first step is learning to break things into functions.
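For illustration only, a minimal sketch of what that class-based arrangement might look like (the class and method names here are hypothetical):
import os

class WordReplacer:
    # A hypothetical class wrapping the same three steps as the functions above.
    def __init__(self, dict_path):
        self.data_dict = self.load_file(dict_path)

    def load_file(self, dict_path):
        # load key:value pairs from the dictionary txt file
        data_dict = {}
        with open(dict_path) as myfile:
            for line in myfile:
                k, v = line.strip().split(':')
                data_dict[k.strip()] = v.strip()
        return data_dict

    def replace_strings(self, text):
        # apply every replacement pair to the text
        for key, value in self.data_dict.items():
            text = text.replace(key, value)
        return text

    def work_all_files(self, starting_dir, moved_dir):
        # non-recursive: rewrite each file from starting_dir into moved_dir
        for filename in os.listdir(starting_dir):
            with open(os.path.join(starting_dir, filename)) as fin:
                new_string = self.replace_strings(fin.read())
            with open(os.path.join(moved_dir, filename), 'w') as fout:
                fout.write(new_string)

# usage (same paths as assumed above):
# replacer = WordReplacer("C:\\Users\\myuser\\Desktop\\Python\\dictionary.txt")
# replacer.work_all_files("C:\\Users\\myuser\\Documents\\scripts", "C:\\Users\\myuser\\Documents\\scripts_new")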
I have this ".txt" file and I want to convert it to a JSON file using Python.
I've tried a lot of solutions, but they didn't work because of the format of the file.
Can anyone help me, please? Can I convert it so it will be easy to manipulate?
This is my file:
Teste: 89
IGUAL
{
"3C:67:8C:E7:F5:C8": ["b''", "-83"],
"64:23:15:3D:25:FC": ["b'HUAWEI-B311-25FC'", "-83"],
"98:00:6A:1D:6F:CA": ["b'WE'", "-83"],
"64:23:15:3D:25:FF": ["b''", "-83"],
"D4:6B:A6:C7:36:24": ["b'Wudi'", "-51"],
"00:1E:2A:1B:A5:74": ["b'NETGEAR'", "-54"],
"3C:67:8C:63:70:54": ["b'Vodafone_ADSL_2018'", "-33"],
"90:F6:52:67:EA:EE": ["b'Akram'", "-80"],
"04:C0:6F:1F:07:40": ["b'memo'", "-60"],
"80:7D:14:5F:A7:FC": ["b'WIFI 1'", "-49"]
}
and this is the code I tried
import json

filename = 'data_strength/dbm-2021-11-21_12-11-47.963190.txt'
dict1 = {}
with open(filename) as fh:
    for line in fh:
        command, description = line.strip().split(None, 10)
        dict1[command] = description.strip()

out_file = open('test1.json', "w")
json.dump(dict1, out_file, indent=4, sort_key=False)
out_file.close()
The JSON structure in your file starts at the first occurrence of a left brace. Therefore, you can just do this:
import json

INPUT = 'igual.txt'
OUTPUT = 'igual.json'

with open(INPUT) as igual:
    contents = igual.read()

if (idx := contents.find('{')) >= 0:
    d = json.loads(contents[idx:])
    with open(OUTPUT, 'w') as jout:
        json.dump(d, jout, indent=4)
I have 25 json files in a folder, named 0.json through 24.json, and I am trying to batch open each one and update a parameter "image" inside it, which currently has a placeholder of "https://" in the "image" field.
The .json currently appears as follows for each json file:
{"image": "https://", "attributes": [{"trait_type": "box color", "value": "blue"}, {"trait_type": "box shape", "value": "square"}]}
but should be
{"image": "https://weburlofnewimage/0", "attributes": [{"trait_type": "box color", "value": "blue"}, {"trait_type": "box shape", "value": "square"}]}
I have a central folder on a site like Dropbox, with a url structure of https://weburlofnewimage/0, /1, /2, etc. So I would like to open each file and replace the value of the "image" key with "https://weburlofnewimage/" + current file number + ".png".
So far I am able to iterate through the files and change the image parameter successfully within the json files; however, the files seem to iterate in a random order, so on loop 1 I am getting file 20, and as a result file 20 is given file 0's image url.
Code as follows:
import json
import os

folderPath = r'/path/FolderWithJson/'
fileNumber = 0
for filename in os.listdir(folderPath):
    print('currently on file ' + str(fileNumber))
    if not filename.endswith(".json"): continue
    filePath = os.path.join(folderPath, filename)
    with open(filePath, 'r+') as f:
        data = json.load(f)
        data['image'] = str('https://weburlofnewimage/' + str(fileNumber) + '.png')
        print('opening file ' + str(filePath))
        os.remove(filePath)
        with open(filePath, 'w') as f:
            json.dump(data, f, indent=4)
            print('removing file ' + str(filePath))
    fileNumber += 1
Which results in me getting the following printouts:
currently on file 10 (on loop 10)
currently preparing file 2.json (it's working on file #2...)
opening file /path/FolderWithJson/2.json
removing file /path/FolderWithJson/2.json
And then when I look in 2.json I see the image is changed to "https://weburlofnewimage/10.png" instead of "https://weburlofnewimage/2.png"
Just pull the number from the file name; don't use your own count. And please remember that you never need to call str on something that is already a string — many people seem to be picking up that bad habit.
import json
import os

folderPath = '/path/FolderWithJson/'
for filename in os.listdir(folderPath):
    if not filename.endswith(".json"):
        continue
    fileNumber = os.path.splitext(filename)[0]
    print('currently on file', fileNumber)
    filePath = os.path.join(folderPath, filename)
    print('opening file', filePath)
    with open(filePath, 'r') as f:
        data = json.load(f)
    data['image'] = 'https://weburlofnewimage/' + fileNumber + '.png'
    print('rewriting file', filePath)
    with open(filePath, 'w') as f:
        json.dump(data, f, indent=4)
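Alternatively, if you want to keep a running counter, you could sort the directory listing numerically first so the files are visited in order — a sketch, assuming every file is named <number>.json:
import json
import os

folderPath = '/path/FolderWithJson/'
# sort by the numeric stem so 2.json is processed before 10.json
names = sorted((n for n in os.listdir(folderPath) if n.endswith('.json')),
               key=lambda n: int(os.path.splitext(n)[0]))
for fileNumber, filename in enumerate(names):
    filePath = os.path.join(folderPath, filename)
    with open(filePath) as f:
        data = json.load(f)
    data['image'] = f'https://weburlofnewimage/{fileNumber}.png'
    with open(filePath, 'w') as f:
        json.dump(data, f, indent=4)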
You can open each file with a direct path instead of iterating through the directory listing. I would use a for loop to insert the numbers into the path; that way the files are processed in order.
for fileNumber in range(25):  # 0.json through 24.json
    with open(f'my_file/{fileNumber}.json') as f:
        ...  # doMyCode
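Filled in with the JSON edit from this question, that might look like the following; the my_file folder name comes from the snippet above and is otherwise an assumption:
import json

for fileNumber in range(25):  # 0.json through 24.json
    path = f'my_file/{fileNumber}.json'
    with open(path) as f:
        data = json.load(f)
    data['image'] = f'https://weburlofnewimage/{fileNumber}.png'
    with open(path, 'w') as f:
        json.dump(data, f, indent=4)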
Very simple question! I want to merge multiple JSON files into one file.
Check this out:
f1data = f2data = f3data = f4data = f5data = f6data = ""
with open('1.json') as f1:
    f1data = f1.read()
with open('2.json') as f2:
    f2data = f2.read()
with open('3.json') as f3:
    f3data = f3.read()
with open('4.json') as f4:
    f4data = f4.read()
with open('5.json') as f5:
    f5data = f5.read()
with open('6.json') as f6:
    f6data = f6.read()
f1data += "\n"
f1data += f2data += f3data += f4data += f5data += f6data
with open('merged.json', 'a') as f3:
    f3.write(f1data)
And the output should be like this:
[
    {
        "id": "1",
        "name": "John",
    },
    {
        "id": "2",
        "name": "Tom",
    }
]
The problem is that Visual Studio Code puts a red line under:
f1data += f2data += f3data += f4data += f5data += f6data
I have no idea why! And the code can't run; there is no error message I can use to troubleshoot. Any advice?
There are several points to improve in this code.
You should consider doing it in a more "programmatic" way. If you declare a list with the names of the json files you want to access, like this:
files_names = ["1", "2", "3", "4", "5", "6"]
you can then do:
files_names = ["1", "2", "3", "4", "5", "6"]
data = ""
for file_name in files_names:
    with open(file_name + ".json", "r") as file_handle:
        temp_data = file_handle.read()
    data = data + temp_data
with open('merged.json', 'a') as file_handle:
    file_handle.write(data)
which is more concise, more pythonic, and can easily be adapted if you ever need, say, 7 json input files.
If your files are always numbered 1, 2, 3, ..., you can also build the names in the for loop itself, knowing the highest json file number you want:
max_file_name = 6
for file_name in range(1, max_file_name + 1):  # 1 is the first arg of range,
    # assuming your file naming starts at 1 and not at 0
    file_name_str = str(file_name) + ".json"
To be sure your JSON is valid, you could use the json standard library. It takes a little more time, as each file is parsed instead of just dumped into another one, but unless you have 100000 files to merge you shouldn't notice the difference, and it protects you if you don't know for sure that the code creating your json files produces valid JSON in the first place.
To use it, just do:
import json

max_file_name = 6
data = {}
for file_name in range(1, max_file_name + 1):
    with open(str(file_name) + ".json", "r") as file_handle:
        temp_data = json.load(file_handle)
    data = {**data, **temp_data}
    # ** is used to "unpack" every key-value pair in a dict at runtime,
    # as if you provided them one by one separated by commas:
    # data["key1"], data["key2"], ...
    # doing so for both json objects and putting them
    # into a dictionary is effectively just merging them.
with open('merged.json', 'w') as file_handle:
    json.dump(data, file_handle)
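One caveat: merging into a single dict only works if every file holds one JSON object and the keys never collide. If you want the list-style output shown in the question, appending each parsed file to a list is closer — a sketch under the same file-naming assumption:
import json

max_file_name = 6
merged = []
for file_name in range(1, max_file_name + 1):
    with open(f'{file_name}.json') as file_handle:
        merged.append(json.load(file_handle))
with open('merged.json', 'w') as file_handle:
    json.dump(merged, file_handle, indent=4)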
You have several ways to open multiple files:
Open them all in a single with statement:
with open('a', 'w') as a, open('b', 'w') as b, ...:
    do_something()
Loop over a list of names:
files_list = ['a', 'b', ...]
for file in files_list:
    with open(file, 'w'):
        ...
Use contextlib.ExitStack:
from contextlib import ExitStack
with ExitStack() as stack:
    files = [stack.enter_context(open(fname)) for fname in filenames]
    # Do something with "files"
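Applied to this question, an ExitStack version could look like this (a sketch, again assuming the six numbered input files):
import json
from contextlib import ExitStack

filenames = [f'{i}.json' for i in range(1, 7)]
with ExitStack() as stack:
    files = [stack.enter_context(open(fname)) for fname in filenames]
    merged = [json.load(f) for f in files]  # all six files stay open inside this block
with open('merged.json', 'w') as out:
    json.dump(merged, out, indent=4)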
The output to the merged file won't be formatted exactly as you've specified, but this shows the approach that I would personally use:
import json

alist = []
with open('/Users/andy/merged.json', 'w') as outfile:
    for k in range(6):
        with open(f'/Users/andy/{k+1}.json') as infile:
            alist.append(json.load(infile))
    outfile.write(str(alist))
Hi, I am trying to take the data from a json file, insert an id, and then perform a REST POST.
my file data.json has:
{
'name':'myname'
}
and I would like to add an id so that the json data looks like:
{
'id': 134,
'name': 'myname'
}
So I tried:
import json
f = open("data.json","r")
data = f.read()
jsonObj = json.loads(data)
I can't get the json file to load.
What should I do to convert the json file into a json object and add another id value?
Set item using data['id'] = ....
import json

with open('data.json', 'r+') as f:
    data = json.load(f)
    data['id'] = 134    # <--- add `id` value.
    f.seek(0)           # <--- should reset file position to the beginning.
    json.dump(data, f, indent=4)
    f.truncate()        # remove remaining part
falsetru's solution is nice, but has a little bug:
Suppose the original 'id' was longer than 5 characters. When we then dump with the new 'id' (134, with only 3 characters), the string written from position 0 in the file is shorter than the original content, and extra characters (such as '}') from the original content are left in the file.
I solved that by replacing the original file.
import json
import os

filename = 'data.json'
with open(filename, 'r') as f:
    data = json.load(f)
    data['id'] = 134    # <--- add `id` value.

os.remove(filename)
with open(filename, 'w') as f:
    json.dump(data, f, indent=4)
I would like to present a modified version of Vadim's solution. It helps to deal with asynchronous requests that write to or modify the json file. I know it wasn't part of the original question, but it might be helpful for others.
In the case of asynchronous file modification, os.remove(filename) will raise FileNotFoundError if requests come in frequently. To overcome this problem you can create a temporary file with the modified content and then rename it, atomically replacing the old version. This solution works fine for both synchronous and asynchronous cases.
import os, json, uuid

filename = 'data.json'
with open(filename, 'r') as f:
    data = json.load(f)
    data['id'] = 134    # <--- add `id` value.
    # add, remove, modify content

# create a randomly named temporary file to avoid
# interference with other threads/asynchronous requests
tempfile = os.path.join(os.path.dirname(filename), str(uuid.uuid4()))
with open(tempfile, 'w') as f:
    json.dump(data, f, indent=4)

# rename the temporary file, replacing the old file
os.rename(tempfile, filename)
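A small portability note: on Windows, os.rename raises FileExistsError when the destination already exists, whereas os.replace overwrites it atomically on both POSIX and Windows. A sketch of the same tail end using os.replace:
import os, json, uuid

filename = 'data.json'
data = {'name': 'myname', 'id': 134}  # whatever content is being written out

tempfile = os.path.join(os.path.dirname(filename), str(uuid.uuid4()))
with open(tempfile, 'w') as f:
    json.dump(data, f, indent=4)
os.replace(tempfile, filename)  # atomic replace, even if data.json already exists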
There are really quite a number of ways to do this, and all of the above are in one way or another valid approaches... Let me add a straightforward proposition. So assuming your current json file looks like this:
{
"name":"myname"
}
And you want to bring in this new json content (adding key "id")
{
"id": "134",
"name": "myname"
}
My approach has always been to keep the code extremely readable with easily traceable logic. So first, we read the entire existing json file into memory, assuming you are very well aware of your json's existing key(s).
import json

# first, get the absolute path to the json file
PATH_TO_JSON = 'data.json'  # assuming same directory (but you can work your magic here with os.)

# read the existing json into memory. you do this to preserve whatever existing data.
with open(PATH_TO_JSON, 'r') as jsonfile:
    json_content = json.load(jsonfile)  # this is now in memory! you can use it outside 'open'
Next, we use the 'with open()' syntax again, with the 'w' option. 'w' is a write mode which lets us edit and write new information to the file. Here's the catch that works for us: any existing file with the same target name will be overwritten automatically.
So what we can do now is simply write to the same filename with the new data:
# add the id key-value pair (remember that it already has the "name" key-value)
json_content["id"] = "134"

with open(PATH_TO_JSON, 'w') as jsonfile:
    json.dump(json_content, jsonfile, indent=4)  # you decide the indentation level
And there you go!
data.json should be good to go for a good old POST request.
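And since the original goal was a POST, here is a minimal sketch of sending the file with the third-party requests library; the URL is a hypothetical placeholder, not something from the question:
import json
import requests  # third-party: pip install requests

with open('data.json') as jsonfile:
    payload = json.load(jsonfile)

# https://example.com/api/items is a placeholder endpoint
response = requests.post('https://example.com/api/items', json=payload)
print(response.status_code)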
Try this script:
import json

with open("data.json") as f:
    data = json.load(f)
data["id"] = 134
json.dump(data, open("data.json", "w"), indent=4)
The result is:
{
    "name": "myname",
    "id": 134
}
If only the arrangement is different, you can solve the problem by converting "data" to a list, arranging it as you wish, then converting it back and saving the file, like this:
import json

index_add = 0
with open("data.json") as f:
    data = json.load(f)
data_li = [[k, v] for k, v in data.items()]
data_li.insert(index_add, ["id", 134])
data = {data_li[i][0]: data_li[i][1] for i in range(0, len(data_li))}
json.dump(data, open("data.json", "w"), indent=4)
The result is:
{
    "id": 134,
    "name": "myname"
}
You can add an if condition so the key is not repeated but just changed if it already exists, like this:
import json

index_add = 0
n_k = "id"
n_v = 134
with open("data.json") as f:
    data = json.load(f)
if n_k in data:
    data[n_k] = n_v
else:
    data_li = [[k, v] for k, v in data.items()]
    data_li.insert(index_add, [n_k, n_v])
    data = {data_li[i][0]: data_li[i][1] for i in range(0, len(data_li))}
json.dump(data, open("data.json", "w"), indent=4)
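As a side note, since Python 3.7 plain dicts preserve insertion order, so the same front-insertion can be written without the intermediate list — a sketch:
import json

with open("data.json") as f:
    data = json.load(f)
data.pop("id", None)        # drop any existing "id" so the new value wins
data = {"id": 134, **data}  # insertion order is preserved, so "id" ends up first
with open("data.json", "w") as f:
    json.dump(data, f, indent=4)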
This implementation should suffice:
import json

jsonfile = 'data.json'
with open(jsonfile, 'r') as file:
    data = json.load(file)
data['id'] = 134
with open(jsonfile, 'w') as file:
    json.dump(data, file)
This uses a context manager for opening the jsonfile. data holds the updated object, which is dumped into the overwritten jsonfile in 'w' mode.
Not exactly your solution, but it might help some people solving this issue with keys.
I have a list of files in a folder, and I need to make JSON out of it with keys.
After many hours of trying, the solution is simple.
Solution:
import os

async def return_file_names():
    dir_list = os.listdir("./tmp/")
    json_dict = {"responseObj": [{"Key": dir_list.index(value), "Value": value} for value in dir_list]}
    print(json_dict)
    return json_dict
The response looks like this:
{
    "responseObj": [
        {
            "Key": 0,
            "Value": "bottom_mask.GBS"
        },
        {
            "Key": 1,
            "Value": "bottom_copper.GBL"
        },
        {
            "Key": 2,
            "Value": "copper.GTL"
        },
        {
            "Key": 3,
            "Value": "soldermask.GTS"
        },
        {
            "Key": 4,
            "Value": "ncdrill.DRD"
        },
        {
            "Key": 5,
            "Value": "silkscreen.GTO"
        }
    ]
}
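The same mapping could also be built with enumerate, which avoids the repeated list.index lookups — a sketch, written as a plain function:
import os

def return_file_names():
    dir_list = os.listdir("./tmp/")
    # enumerate hands each file its position directly, no index() search needed
    return {"responseObj": [{"Key": i, "Value": value}
                            for i, value in enumerate(dir_list)]}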