I have a JSON file in which I had to change every instance of the string [1] to _e. To do this I saved the file as a text file and then altered it with this Python code.
#!/usr/bin/env python3
import fileinput
import json

with fileinput.FileInput('reactions.json', inplace=True, backup='.bak') as file:
    for line in file:
        # chain both replacements so each line is printed exactly once
        print(line.replace('[0]', '_c').replace('[1]', '_e'), end='')

with open('reactions.json') as data_file:
    data_reactions = json.load(data_file)
This worked like a charm, but once I renamed the file extension to .txt, the file could no longer be saved as JSON and read properly. Is there a way to convert it back? Saving it as a .txt file seems to have removed the line-break delimiters... I think.
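For what it's worth, the substitution logic can be checked on its own with a small string before touching the real file; the sample below is illustrative, not taken from the actual reactions.json:

```python
import json

def replace_compartments(text):
    # Chain both substitutions so each piece of text is rewritten in one pass.
    return text.replace('[0]', '_c').replace('[1]', '_e')

# Illustrative sample; the real data would come from open('reactions.json').read()
raw = '{"metabolite": "glc[0]", "exchange": "glc[1]"}'
fixed = replace_compartments(raw)
data = json.loads(fixed)  # the result parses as ordinary JSON again
```

As long as the replaced strings only appear inside JSON string values, the output stays valid JSON and round-trips through json.loads.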
I wrote a program to convert KML to GeoJSON. But, when I look at the output files, they are written without whitespace, making them very hard to read.
I tried to use the json module like so:
file = json.load("<filename>")
But it returned the following:
File "/usr/lib/python3.6/json/__init__.py", line 296, in load
return loads(fp.read())
AttributeError: 'str' object has no attribute 'read'
load takes a file object, not a file name.
with open("filename") as fh:
    d = json.load(fh)
Once you've parsed it, you can dump it again, but formatted a bit more nicely:
with open("formatted-filename.json", "w") as fh:
    json.dump(d, fh, indent=4)
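For example, with a made-up minimal GeoJSON-shaped dictionary, indent=4 turns the compact form into one key per line:

```python
import json

d = {"type": "FeatureCollection", "features": []}
pretty = json.dumps(d, indent=4)
print(pretty)
```

json.dumps returns the formatted string directly, which is handy for eyeballing the effect before writing it to a file with json.dump.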
I wrote Python code to search for a pattern in a Tcl file and replace it with a string. It prints the output, but the change is not saved in the Tcl file.
import re
import fileinput

filename = open("Fdrc.tcl", "r+")
for i in filename:
    if i.find("set qa_label") != -1:
        print(i)
        a = re.sub(r'REL.*', 'harsh', i)
        print(a)
filename.close()
Actual result:
set qa_label
REL_ts07n0g42p22sadsl01msaA04_2018-09-11-11-01
set qa_label harsh
The expected result is that my file should reflect the same change as printed above, but it does not.
You need to actually write your changes back to disk if you want to see them there. As @ImperishableNight says, you don't want to do this by writing to a file you're also reading from; you want to write to a new file. Here's an expanded version of your code that does that:
import re

fin = open("/tmp/Fdrc.tcl")
fout = open("/tmp/FdrcNew.tcl", "w")

for i in fin:
    if i.find("set qa_label") != -1:
        print(i)
        a = re.sub(r'REL.*', 'harsh', i)
        print(a)
        fout.write(a)
    else:
        fout.write(i)

fin.close()
fout.close()
Input and output file contents:
> cat /tmp/Fdrc.tcl
set qa_label REL_ts07n0g42p22sadsl01msaA04_2018-09-11-11-01
> cat /tmp/FdrcNew.tcl
set qa_label harsh
If you wanted to overwrite the original file, then you would want to read the entire file into memory and close the input file stream, then open the file again for writing, and write modified content to the same file.
Here's a cleaner version of your code that does this: it produces an in-memory result and then writes it out using a new file handle. I am still writing to a different file here, because that's usually what you want to do, at least while you're testing your code. Simply change the name of the second file to match the first and this code will overwrite the original file with the modified content:
import re

lines = []
with open("/tmp/Fdrc.tcl") as fin:
    for i in fin:
        if i.find("set qa_label") != -1:
            print(i)
            i = re.sub(r'REL.*', 'harsh', i)
            print(i)
        lines.append(i)

with open("/tmp/FdrcNew.tcl", "w") as fout:
    fout.writelines(lines)
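If you do want to overwrite the original in place, the same pattern works with a single path once the whole file is in memory. A self-contained sketch (it creates its own sample file; the path is just for the demo, not the real Fdrc.tcl):

```python
import re

path = "Fdrc_demo.tcl"  # demo path, not the real file

# Create a sample input so the sketch runs on its own.
with open(path, "w") as f:
    f.write("set qa_label REL_ts07n0g42p22sadsl01msaA04_2018-09-11-11-01\n")

# Read everything into memory first; the input handle is closed
# before the same path is reopened for writing.
with open(path) as fin:
    lines = [re.sub(r'REL.*', 'harsh', line) if 'set qa_label' in line else line
             for line in fin]

with open(path, "w") as fout:
    fout.writelines(lines)
```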
Open a tempfile, write the updated contents to it, and then write it back over the original file:
import re
from tempfile import TemporaryFile

with TemporaryFile() as t:
    with open("Fdrc.tcl", "r") as file_reader:
        for line in file_reader:
            if line.find("set qa_label") != -1:
                t.write(str.encode(re.sub(r'REL.*', 'harsh', line)))
            else:
                t.write(str.encode(line))
    t.seek(0)
    with open("Fdrc.tcl", "wb") as file_writer:
        file_writer.writelines(t)
I have an ASCII file with a .dat extension. The file contains data as shown below:
MPOL3_VPROFILE
{
ID="mpvp_1" Cycle="(720)[deg]" Lift="(9)[mm]" Period="(240)[deg]"
Phase="(0)[deg]" TimingHeight="(1.0)[mm]" RampTypeO="Const Velo"
RampHO="(0.3)[mm]" RampVO="(0.00625)[mm/deg]" RampTypeC="auto"
RampHC="(auto)[mm]" RampVC="(auto)[mm/deg]" bO="0.7" cO="0.6" dO="1.0"
eO="1.5" bC="auto" cC="auto" dC="auto" eC="auto" th1O="(14)[deg]"
Now I would like to read this file in Python, change the value of RampHO="(0.3)[mm]" to, say, RampHO="(0.2)[mm]", and save it as a new .dat file. How can I do this?
Currently I am able to read the file and find the line successfully using the code below:
rampOpen = 'RampHO='

file = open('flatFollower_GenCam.dat', 'r')
#data = file.readlines()
#print (data)
for line in file:
    line.strip().split('\n')
    if rampOpen in line:
        print(line[4:22])
But I am now stuck on how to change the float value and save the file under a different name.
First up, you should post your code as text inside your question and not as separate images. Just indent each line with four spaces to format it as code.
You can simply read the file line by line, change the lines you want to change, and then write the output.
with open(infile, 'r') as f_in, open(outfile, 'w') as f_out:
    for line in f_in:
        output_line = edit_line(line)
        f_out.write(output_line)
Then you just have to write a function that does the string replacement.
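A possible edit_line for the RampHO case could look like the sketch below; the pattern assumes the value always sits between double quotes after the key, as in the sample data, and the default new value "(0.2)[mm]" is just the example from the question:

```python
import re

def edit_line(line, key='RampHO', new_value='(0.2)[mm]'):
    # Replace whatever is between the quotes after key= with new_value;
    # lines that don't contain the key pass through unchanged.
    return re.sub(rf'({key}=")[^"]*(")', rf'\g<1>{new_value}\g<2>', line)
```

Using \g<1> and \g<2> keeps the key and the closing quote intact while swapping only the quoted value.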
I'm quite new to programming and I don't understand this message: "file was loaded in the wrong encoding: utf-8". It's not really an error message in the code; I get it on the new .txt file where I write all the found keywords. The .txt file gets up to 4000+ rows of information that I sort into Excel in another program and later send to Access. What does the message mean, and is there a way to fix it? Thanks.
I'm using PyCharm with Anaconda 3.6.
import glob

def LogFile(filename, tester):
    data = []
    with open(filename) as filesearch:  # open search file
        filesearch = filesearch.readlines()  # read file
    file = filename[37:]
    for line in filesearch:
        if tester in line:  # extract "Create Time"
            short = line[30:]
            data.append(short)  # store all found words in the list
    print(file)
    with open('Msg.txt', 'a') as handler:  # create .txt file
        for i in range(len(data)):
            handler.write(f"{file}|{data[i]}")

# open with 'w' to "reset" the file.
with open('LogFile.txt', 'w') as file_handler:
    pass

# ---------------------------------------------------------------------------------
for filename in glob.glob(r'C:\Users\Documents\Access\\GTX797\*.log'):
    LogFile(filename, 'Sending Request: Tester')
I just had the same error in PyCharm and fixed it by specifying UTF-8 when creating the file. You will need to import codecs to do this.
import codecs

with codecs.open('name.txt', 'a', 'utf-8-sig') as f:
    ...  # write to f as before
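On Python 3 you can get the same effect without codecs, since the built-in open takes an encoding argument. A small sketch (the filename is just an example, not the one from the question):

```python
# 'utf-8-sig' writes a BOM at the start of the file, which is what
# editors like PyCharm use to detect the encoding.
with open('Msg_demo.txt', 'w', encoding='utf-8-sig') as f:
    f.write('example line\n')
```

Reading the file back with encoding='utf-8-sig' strips the BOM again, so round-trips are transparent.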
I want to read a JSON file in which each line contains a separate JSON object.
The file looks like this:
{'P':'a1','D':'b1','T':'c1'}
{'P':'a2','D':'b2','T':'c2'}
{'P':'a3','D':'b3','T':'c3'}
{'P':'a4','D':'b4','T':'c4'}
I'm trying to read this file like so:
print pd.read_json("sample.json", lines = True)
I get the following exception:
ValueError: Expected object or value
This sample.json file is about 240 MB, and its format is exactly as shown: each line contains one JSON object. I want to read the file using Python and pandas.
As others have said in the comments, it's not really JSON. You can use ast.literal_eval():
import pandas as pd
import ast

with open('sample.json') as f:
    content = f.readlines()

pd.DataFrame([ast.literal_eval(line) for line in content])
Or replace the single quotes with doubles:
import pandas as pd
import json

with open('sample.json') as f:
    content = f.readlines()

pd.DataFrame([json.loads(line.replace("'", '"')) for line in content])
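A quick self-contained check of the quote-swapping approach on a couple of inline sample lines, so you don't need the 240 MB file to try it (note the naive replace would break if any value itself contained an apostrophe):

```python
import json

lines = [
    "{'P':'a1','D':'b1','T':'c1'}",
    "{'P':'a2','D':'b2','T':'c2'}",
]

# Swap single quotes for double quotes, then parse each line as JSON;
# the resulting list of dicts can be passed straight to pd.DataFrame().
records = [json.loads(line.replace("'", '"')) for line in lines]
```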